Shortening the release cycle as a goal for a 50+ person organization

How does one measure the progress of a technology team? How do we tell whether we are delivering good, valuable software better than we were last sprint or last year?

There are plenty of KPIs and OKRs going around in the world of tech, but I wanted to share my experience of coming up with an overarching goal for a group of around 90 people.

Joining an established team with history, we wanted to come up with something that would contribute to quality (customers said production stability was the main problem) and also make us go faster (budgets were inevitably shrinking, with plenty left to do). The group already had established communities of practice which synchronized Analysts, Developers and Testers; they worked in self-organizing teams with their own goals and continuous improvement targets. But how do we align an entire group of 10 teams so that their efforts maximize returns?

One of the most successful initiatives, which brought so many people together, was a goal to shorten the release cycle: from absurd quarterly releases, to monthly ones, and then to on-demand releases, when ready.

The goal was readily accepted by everyone; it turns out it is really easy to convince anyone that releasing more often is a good thing. The benefits differ between groups, but there is something valuable in it for every stakeholder.

  • Clients get more frequent updates and a shorter lead time for changes
  • Developers enjoy fewer last-minute changes, as the next release is just around the corner
  • Managers get more opportunities to release and don’t have to scramble to fit demand delivery into stretched release dates
  • DevOps folks get a more stable environment and less firefighting
  • Testers will inevitably have to focus on automating as much as possible, but will in turn shed manual, repetitive tasks

Once the goal kicks in, you realize there is a ton of work needed to make it happen, and most of it is automation. Doubt kicks in, and in some organizations there may be external pressure not to release often (as it generates overhead or creates risk). In fact, the opposite is true: releasing every 3 months carries the terrible risk of one big change going live all at once. Along the way we watched our defect leakage to production and our incident count plummet, and our production uptime thrive.

Each community of practice and team chipped in, and it was really easy for them to do so. The majority of improvements can contribute to shortening the release cycle: anything that automates testing (manual regression testing especially), simplifies the release process, splits a monolith into smaller, more manageable components, or streamlines demand and backlog so it can be easily translated into specific release trains, etc.

The group overachieved, starting to release monthly after just 6 months of improvements rather than the projected 12. They then moved on to ‘what if we release more often?’ and even ‘what if we aim for continuous delivery?’. This led to even more impressive KPI results, and customer feedback improved tremendously.

The simple goal led to a fantastic self-improvement journey, transforming a legacy app team into a modern software delivery house.


Colors of personality

Apart from Computer Science, I also graduated in Management at the University of Economics. During my eye-opening tenure there I attended quite a lot of fancy-named ‘psychology’ and ‘human resources’ courses. Quite a lot of them had various tests and models in the curriculum – in effect I learned over 30 ways to categorize people into different boxes.

Most of those I instantly rejected as not useful (I’m not going to quiz you with 100 questions before I even speak to you, just so I can know your mental model!). Others either categorized people in a very blurry way or, when they seemed OK, offered no clear benefit. When something came close to useful, it was usually intended for factory workers. Overall – psychology tests in IT teams – I wasn’t a fan at all!

Then, a couple of years back, while in Switzerland, I had an opportunity to listen to a German profiler who works with special forces. My very naïve explanation of a profiler: a person who builds a mental model of someone in order to understand and predict their actions. The model she presented was so straightforward and convincing it immediately had me thinking – about myself, my colleagues, my ex-classmates... and it finally seemed useful!

After a year or so of mulling this over, I crafted a speed talk which I presented at one of my Agile trainings in Krakow, to a group of 30 people. The feedback I got was quite encouraging and made me evolve the talk a bit further.

The model assumes people can have 4 different colours. You usually don’t need a test to tell which colour you are dealing with at the moment; observing body language and cues in the way someone talks or writes is enough. While some people sit at the far end of the spectrum of one colour, others are a mix of two, three or even four. Sometimes it is impossible to say, but hey, no one says this model is perfect. It’s just the best (or the only good-enough one) of those I learnt.

Here we go.


Breaking changes in AppStore / Google Play releases

So we have this app going and it’s my first on iOS. Coming from the web/backend development world, I’m used to continuous delivery, where I push my code through continuous integration and, once it passes all the tests, it gets easily and automatically delivered to production in a nice server-by-server manner. Usually there is no downtime involved, and it’s all done in minutes if not seconds. This gives me a nice stream of new features, experiments and fixes flowing to end users.

Now, how do I do the same for apps that sit on our users’ phones? What if I want to make a breaking API change – how do I ensure my users always get a compatible version? How do I time my backend/Android/iOS releases?
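One common way to handle the backend side of this (a sketch with made-up version values and response shapes – not taken from our actual app) is to have the API refuse clients below a minimum supported version, so a breaking change can only ever be hit by compatible builds:

```python
def parse_version(v):
    """Turn '2.1.3' into (2, 1, 3) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

MIN_SUPPORTED = parse_version("2.0.0")  # bump this when the API breaks

def check_client(app_version_header):
    """Return (http_status, body) for a client announcing its app version."""
    if parse_version(app_version_header) < MIN_SUPPORTED:
        # 426 Upgrade Required: the client must update before using the API
        return 426, {"error": "please update the app"}
    return 200, {"ok": True}

print(check_client("1.9.5"))  # (426, {'error': 'please update the app'})
print(check_client("2.1.0"))  # (200, {'ok': True})
```

With a gate like this, the mobile releases only need to ship (and reach enough users) before the backend change that raises `MIN_SUPPORTED`, rather than all three releases being timed to the minute.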




Generation gap in Software Development

Running an “Agile Experience” training for university students at AGH in Krakow made me realize that young engineers see the world differently. I always knew I would be working with up-and-coming professionals and was prepared for it, but what does it mean for the Agile team?


One of the rules during our Scrum Simulation Game is that we limit work in progress to just one item: the team can work on one item at a time and one item only. While this rule is usually easily accepted by thirty-something and forty-something corporate professionals, it was a huge struggle for the 22-year-old engineers-to-be! The intent of pushing this Kanban rule to the extreme is to show that limiting concurrent tasks within the team brings faster cycle time, fewer unfinished stories and greater teamwork. The new generation, however, prioritized the team’s throughput over those things and was really ferocious when raising the issue during the game’s retrospectives, arguing that the team could be much more effective if only we allowed more concurrent tasks.

This repeated for 6 groups (18 Scrum teams) over and over again, leading me to further discussions. The conclusion I quickly drafted was that the new generation prefers isolated work and prioritizes short-term wins over the harder-to-see, sustainable effects of knowledge sharing and collaboration. Talking about it with Remik Dudek, I also realized that the younger the professionals, the more open they are to forming quick task-force teams rather than long-standing, established product teams.

While I and the people I work with daily usually take a ‘my team – my castle’ approach, with the strong belief that the longer we work together the more sustainably good results we can deliver, this university group showed they are much more open to so-called ‘tent teams’: we form, we solve the problem and we move on to the next challenge. This was visible even when I tried to differentiate by the colours of personality – even the more stability- and security-seeking personalities showed they are up for this type of work.

These observations will definitely change our way of working in the future, and I am excited to see how the Rules of the Game change for the industry.


How we changed the university studies in Poland

In 2009, when I was still studying computer science in Krakow, Polish universities were very different from each other, and the ways students gathered and exchanged their knowledge and learning resources differed as well. However, they all had one thing in common: despite our broad access to the Internet, those resources were kept in closed circles, difficult to locate, uncatalogued, and usually thrown away after a year, when no longer needed by a given group.

So year after year, students kept starting from scratch: gathering knowledge, mailing links, PDFs and Word documents, writing study notes, copying them, distributing them over MMS and closed forums, and then throwing them away when they were no longer needed. It hit us as well; as computer science students we had our own shiny forum. As a group we figured out there was no point in wasting the effort of a hundred people to gather and catalogue knowledge, so we simply invited our younger friends to join the forum – and in doing so they inherited our archives and efforts.


Obviously this worked fine for any student of our computer science major, but not outside of it. At that time Marcin and I also studied Marketing at the Krakow University of Economics, and everything we needed to learn was usually… emailed between students and lecturers. In fact, email was so deeply rooted in people’s minds that they didn’t even think about moving to a forum (which at that time required some non-basic computer skills to set up correctly). Of course there were online places where students could upload their notes, but they provided no additional motivation to use them, apart from the fact that you were helping someone build their knowledge base out of your sources.

Here came the idea: give people an email-like, tailored experience to collaborate, talk and organize events, but also let them keep this knowledge for their future, younger friends – and then maybe let them share knowledge between courses, majors or even universities.

When MailGrupowy was founded in 2011, we quickly gained a lot of support and users willing to jump on board. They still had their email, but they also got forum-like categorization and the ability to tag, sort and search their knowledge base efficiently – all set up for them at no charge, in seconds, just like email.

Four years later, the portal has over 300,000 registered users in Poland and has gathered over 1.2 million community-reviewed study materials, which are shared and accessible to everyone. They’ve been found by more than 5M people in Poland. According to SimilarWeb, this knowledge archive ranks around #900 among sites in the country, which is really nice for a higher-education portal.


But the most satisfying part is that when you start attending a university course, you are no longer left with the long, arduous task of finding your way around. Whether your group joins our communication platform or not, you can still benefit from browsing the archives. And these are not just random notes written by one person; they are the accumulated archives of consecutive years of students figuring out how to learn efficiently. What is more, we have built an engine that suggests study notes based on the courses and teachers/professors you have picked, so you don’t even have to search.

[Image: mailgrupowy preview]

We leave managing this knowledge pool to our community, whose moderators also make sure all copyright claims are handled properly and in a timely manner, while we now concentrate mainly on good infrastructure and ways to provide better suggestions. The archive keeps growing and is kept up to date every day, so we expect it to be even better in the future.

This model has worked well in Poland, where the portal was tailored very strictly to the needs of students, and we now see some successes on other markets such as India and Turkey. There is an excellent tool like ours, ‘Passei Direto’, available in Brazil, another one called Koofers in the US, and other local tools on other foreign markets.

While historically universities sought to ensure equal access to study materials by placing them on reserve in libraries or by posting them on their own websites, times have changed. These documents are no longer difficult to locate, poorly catalogued, or unavailable when needed. I am really happy that a side project of four software ninjas has grown to have the privilege of helping students achieve their academic objectives – by letting them spend less time searching and rewriting, so they have more time for things that matter.


Code reviews in multiple, globally dispersed teams

Do you believe in code reviews?

I’ve seen many review ‘processes’ fail during my career: code reviews that tended to turn into abominable 2-hour meetings, reviews done only on ‘good commits’ while all the bad ones passed unnoticed, and so on. I have also seen two or three that gradually evolved into something that actually worked – without impairing the team’s effectiveness, while helping to share system knowledge and programming practices and to catch possible bugs.

Small startup review

Can code review be afforded in a dynamic startup that wants to grow? In a group of 4 remotely working individuals, I reviewed code post-commit (trunk/master development), mainly to acquire knowledge. We placed all comments in emails or in Assembla’s code review tool. It was very lightweight and ad hoc. One may argue that everyone reviewing everyone else’s code is overkill, but it worked brilliantly for us. Not only did we manage to keep consistency across the system even when it grew past 100k lines of code, but we were also able to easily cover for each other’s tasks when someone was swamped.


Surefire, Jenkins and NoClassDefFoundError – why not to use whitespace in Jenkins job names

Well, Jenkins errors can be mysterious at times...

message : Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) 
on project XXX: ExecutionException; nested exception is 
java.util.concurrent.ExecutionException: java.lang.RuntimeException: 
The forked VM terminated without saying properly goodbye. VM crash or System.exit called ?

No, we don’t call System.exit() from our tests :-) Why would Surefire be so rude as to make the VM not say goodbye properly?

Maven’s -e option comes to the rescue. Analyzing the entire error and the command line invoked by Surefire, we can see that Jenkins attempts to fork in a directory whose path contains spaces, which later fails on Linux when the JVM starts. I haven’t found any immediate solution other than renaming the job to something safe.

Forking command line: /bin/sh -c cd "/etc/web/jenkins/workspace/commit full build/xx/yy/zzz" 
&& /etc/web/jenkins/tools/hudson.model.JDK/jdk1.6.0_45-jaxwsEndorsed/jre/bin/java 
-Xms512m -Xmx1024m -XX:MaxPermSize=256m full build/xxx/yyy/zzzz/../../src/test/resources/java.policy 
-jar '/etc/web/jenkins-home/workspace/commit full build/tesscoll/pu/customer/target/surefire/surefirebooter2206155077895318006.jar' 
Exception in thread "main" java.lang.NoClassDefFoundError: full 
Caused by: java.lang.ClassNotFoundException: 
full at$ 
at Method) 
at java.lang.ClassLoader.loadClass( 
at sun.misc.Launcher$AppClassLoader.loadClass( 
at java.lang.ClassLoader.loadClass( 
Could not find the main class: full. Program will exit.
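The mechanics are easy to reproduce outside Jenkins. In the forked command above, the policy-file argument built from the ‘commit full build’ workspace path is unquoted, so the shell splits it on spaces and the JVM takes ‘full’ for the main class – exactly the NoClassDefFoundError: full we see. A minimal illustration of the splitting, in Python (the command line here is a shortened, made-up version of the real one):

```python
import shlex

# The -jar path is quoted in Jenkins' forked command, but the policy-file
# argument built from the workspace path is not:
cmd = "java -Xms512m full build/xxx/src/test/resources/java.policy -jar app.jar"
argv = shlex.split(cmd)
print(argv)
# 'full' becomes a standalone argument - the first non-option token,
# which the JVM then tries to load as the main class
print("full" in argv)  # True
```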

Rapid prototyping in a fast environment – part 1

At the beginning, when my friends and I started working on our startup, we tried to design, develop and ship a complete product. At some point we decided we were running out of time and pushed out what we had – a crooked, sloppy version of what we wanted to ship. Even though a product that was supposed to be a mailbox was failing when sending emails, it turned out to be a very good decision.

At that time mail4group was only available in Polish and its main target was Polish students. The academic year of 2011 was about to start. We pushed the incomplete product – some features working, some not, some breaking in the middle – and emailed it to some of our friends. We quickly got our first ~1000 users, who were quite happy with the basic functionality but often encountered an error page. In fact, it was so common that we even gave it a fancier design, with a ninja running across the screen. The ninja soon became the official front-page mascot of the portal.

Each error page would generate an email and send it to all founders. Sometimes we were flooded with ~1000 error emails a day, but this was an amount of testing we would never have achieved ourselves. What is more, we could see which paths users followed, so we were able to quickly remove completely unused features and enhance those that were used. Google Analytics was, and still is, the most important part of knowing which features to develop.
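The mechanism behind those error mails can be very small. A sketch of the idea in Python (the subject prefix and the ‘/upload’ path are made up for illustration – this is not our actual code): a global error handler formats the exception into a report that would then be mailed to the founders.

```python
import traceback

def format_error_mail(exc, path):
    """Build the subject and body of an error report for a failed request."""
    subject = f"[ninja] {type(exc).__name__} on {path}"
    body = "".join(
        traceback.format_exception(type(exc), exc, exc.__traceback__)
    )
    return subject, body

try:
    1 / 0  # a failing request handler would raise here
except ZeroDivisionError as e:
    subject, body = format_error_mail(e, "/upload")
    # a real handler would now send (subject, body) to the founders,
    # e.g. via smtplib
    print(subject)
```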

A year after that we met with our friends from another startup, and the main point of discussion was that we didn’t measure enough. Even though we attempted to measure each of our main features and analyze the outcomes, in their eyes we were still novices at measuring what we do. At the time, attaching events or A/B experiments to every single change sounded extreme and up in the clouds, but as we moved forward we began to appreciate structured metrics.


Code standards

Are there general principles that should work for every project? I think so...

Code conventions should follow market conventions

Picking code conventions that are widely known and used greatly increases the chance of hiring the right people, who can hit the ground running when joining the project. Most of the cost of software is usually associated with its maintenance, and hardly any software is maintained for its whole life by its original creator. Widely used conventions improve readability, allowing engineers to understand new code more quickly and thoroughly.

Some widely used conventions to follow in Java would be:

Oracle Java conventions

Google styleguide

Maven Standard Project Layout

Another useful thing to decide upfront is whether to prefer 4 spaces or tabs. The choice of tabs versus spaces is arbitrary, and in the absence of a strong opinion, 4 spaces seems more common.
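One low-ceremony way to make whichever choice stick is to commit it alongside the code, for example as an EditorConfig file (a sketch – adjust the globs and values to your conventions):

```ini
# .editorconfig - editors that support it pick this up automatically
root = true

[*.java]
indent_style = space
indent_size = 4
```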

Self Documenting Code

Code comments are not part of the compilation process, nor are they statically typed or linked. As such, comments are ignored by the compiler and, more importantly, by the testing process. There is nothing to assert that the comments reflect what the code does or should do. It is therefore unreasonable to assume that, as the code organically grows, the comments will stay in line with the code they narrate. What was once a salient point will, over time, become irrelevant or downright misleading. Code should therefore be expressive and document its behavior in preference to comments.
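To illustrate (the point is language-agnostic; sketched here in Python with an invented example), compare a comment-dependent check with a self-documenting one. In the second version the intent lives in names that the compiler and the tests actually see, so it cannot silently drift away from its ‘documentation’:

```python
ADULT_AGE = 18

# Comment-dependent: the comment, not the code, carries the meaning,
# and nothing fails if the rule changes but the comment does not.
def check(u):
    # user can buy alcohol if over 18 and verified
    return u["age"] >= 18 and u["v"]

# Self-documenting: the same rule, expressed through names.
def is_adult(user):
    return user["age"] >= ADULT_AGE

def can_buy_alcohol(user):
    return is_adult(user) and user["identity_verified"]

user = {"age": 21, "v": True, "identity_verified": True}
print(check(user), can_buy_alcohol(user))  # True True
```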

Continue Reading

MySQL table corruption recovery – how to

It happened to me that, on a big database, one or more tables got corrupted. Basically, the DB would hang when querying specific rows in a specific table, or the application log would show something like:

OperationalError: (OperationalError) (2002, "Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111)") None None

Then investigate further in mysql.log and look for traces like this one:

InnoDB: Error: tried to read 16384 bytes at offset 0 5914624.
InnoDB: Was only able to read 12288.

This indicates a likelihood that one of the tables is corrupted. The first step now should be a backup/dump of everything. If mysqldump/mysqlcheck disconnects at a concrete table, try using --ignore-table.

You can attempt to fix a concrete table by copying all of its content to a temporary new table with INSERT INTO … SELECT. If it fails on the entire table, you can do this part by part using the LIMIT and OFFSET keywords. If this doesn’t work either, try running MySQL with innodb_force_recovery = 1 in the [mysqld] section of the config – this makes MySQL tolerate the corruption it finds.
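The part-by-part copy is easy to script so that a corrupted region only aborts one chunk instead of the whole copy. A sketch in Python that just generates the statements (the table names and chunk size are made-up examples; you would feed these to the mysql client one at a time and skip the chunks that fail):

```python
def chunked_copy_statements(src, dst, total_rows, chunk=1000):
    """Yield INSERT ... SELECT statements that copy `src` into `dst`
    in LIMIT/OFFSET chunks, so a bad region aborts only one chunk."""
    for offset in range(0, total_rows, chunk):
        yield (f"INSERT INTO {dst} SELECT * FROM {src} "
               f"LIMIT {chunk} OFFSET {offset};")

for stmt in chunked_copy_statements("orders", "orders_tmp", 2500, chunk=1000):
    print(stmt)
```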

If everything else fails and you can’t recover any data, you can turn on innodb_force_recovery = 6, but keep in mind that it works destructively and may even deepen the corruption of your database. The last time I tried it, after dumping everything with innodb_force_recovery = 6 the database didn’t run AT ALL without this option.