Code reviews in multiple, globally dispersed teams

Do you believe in code reviews?

I’ve seen many review ‘processes’ fail during my career: code reviews that tended to turn into abominable two-hour meetings, reviews that were done only on ‘good commits’ while all the bad ones passed unnoticed, and so on. I have also found two or three that gradually evolved into something that actually worked, without impairing the team’s effectiveness, while helping to share system knowledge and programming practices and to find possible bugs.

Small startup review

Is code review something a dynamic startup that wants to grow can afford? In a group of four remotely working individuals I reviewed code post-commit (trunk/master development), mainly to acquire knowledge. We placed all our comments in emails or in Assembla’s code review tool. This was very lightweight and very ad hoc. One may argue that everyone reviewing everyone else’s code is overkill, but this worked brilliantly for us. Not only did we manage to keep consistency across the system even when it was 100k+ lines of code, but we were also able to easily cover for a colleague’s tasks when they were swamped.

Lessons learned

– get email updates about your colleagues’ commits (one way to set this up is sketched below)
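If you host your own central repository, a post-receive hook is one way to get those emails. Below is a minimal Python sketch of such a hook; the addresses and the SMTP relay on localhost are all assumptions, and in practice you would more likely reach for Git’s bundled git-multimail script or your hosting service’s built-in notifications.

    #!/usr/bin/env python3
    # post-receive hook sketch: email a short summary of every push.
    # Assumes a bare central repository and an SMTP relay on localhost;
    # all addresses are hypothetical.
    import smtplib
    import subprocess
    import sys
    from email.message import EmailMessage

    TEAM = "team@example.com"  # hypothetical team mailing list

    def commit_log(old, new):
        # One line per pushed commit, oldest first.
        return subprocess.run(
            ["git", "log", "--oneline", "--reverse", f"{old}..{new}"],
            capture_output=True, text=True, check=True,
        ).stdout

    # git feeds the hook one "old new ref" line per updated ref.
    for line in sys.stdin:
        old, new, ref = line.split()
        if set(old) == {"0"} or set(new) == {"0"}:
            continue  # ref creation/deletion skipped for brevity
        msg = EmailMessage()
        msg["Subject"] = f"New commits on {ref}"
        msg["From"] = "git@example.com"  # hypothetical sender
        msg["To"] = TEAM
        msg.set_content(commit_log(old, new))
        with smtplib.SMTP("localhost") as smtp:
            smtp.send_message(msg)

The point is not the tooling but the habit: every commit lands in everyone’s inbox, so anyone can glance at a diff and reply with a question or a comment.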

80-person team reviews

Well, when you have 80+ people committing, you kind of need some principles about who reviews the code and when, or this is going to get messy. I still believe in lightweight, informal reviews being the key to a consistent team and architecture, though it may pay off to write down some ground rules to follow.

My ground rules

Peer review

All code should be reviewed on commit by peers. The reviewer is usually a colleague from the same team, but there is nothing wrong with having this lightweight review done by someone else who more or less understands that part of the system. Other valid forms of peer code review include (in order of importance and value): pair programming, face-to-face review where someone just visits your desk, or a screen-sharing remote session. I’ve seen successful reviews done over Skype and Lync. The last, and in my opinion least effective, form of peer review is the static one, using tooling such as Crucible or GitHub. This kind of review promotes negativity and criticism instead of dialogue and learning.

What is more, lightweight peer review quickly promotes good practices such as refactoring and unit testing.

Lessons learned:

  • don’t review each commit in GitHub/Crucible
  • find a quick and lightweight process to raise issues before commit
  • mention the reviewer in the commit message to keep evidence and a second person to contact (see the example after this list)
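One lightweight convention for that last point is a trailer line at the end of the commit message, in the spirit of the Reviewed-by: trailer used by the Git and Linux kernel projects. The commit text and reviewer below are made up for illustration:

    Fix race condition in order status polling

    Poll with a per-request token so concurrent updates
    no longer overwrite each other.

    Reviewed-by: Jane Doe <jane.doe@example.com>

A trailer like this is trivially greppable, so finding the second person to contact for any change is one git log command away.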

Other team / component owner review

To achieve a level of objectivity and consistency in the changing codebase, it is always advisable to have pull requests reviewed by a team other than the one that did the actual work, or by a system expert. It also helps to have the reviewers globally distributed, so as to provide a uniformly acceptable turnaround on pull requests for all teams. I would expect a good working environment to deal with pull requests within 24 hours.

I would encourage both face-to-face and GitHub reviews here. Sometimes tooling is a needed help on bigger pull requests, and a checklist of what to do may be helpful. A review request can contain things such as the outcome of the peer review, the result of the CI builds (which can be posted automatically), the acceptance criteria of the change and the change request itself; a template along the lines of the one below keeps this consistent.
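As a sketch, a pull request description template might look like this; the section names are illustrative, not prescribed by any tool. On GitHub, putting such a file at .github/PULL_REQUEST_TEMPLATE.md pre-fills it into every new pull request:

    ## Summary
    What changed and why.

    ## Peer review
    Reviewed-by: <colleague who did the lightweight on-commit review>

    ## CI
    Build result: <link to the CI run; a bot can post this automatically>

    ## Acceptance criteria
    - [ ] criterion 1
    - [ ] criterion 2

Keeping the checklist in the request itself means the other team can verify the groundwork (peer review, green build) before spending any time reading the diff.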

Other-team review leads to greater confidence that the right thing is being built for the problem at hand, and increases the diversity of knowledge in terms of approach, quality and design.

Objectivity of code reviews

An important thing to note, again, is that subjective assessments of anything are whimsical. To avoid criticism and negativity among the teams, one should only push back on a pull request when it stands against defined principles, not on a whim. There will be grey areas where stylistic interpretations are debatable, but through a structured walkthrough and the avoidance of static code reviews a consensus can be formed.

piotr
