Department of Things you don’t have a clue about — How to do code review and work with coders
A good book for those working with coders, and on how they approach software development: Working with Coders: A Guide to Software Development for the Perplexed Non-Techie by Patrick Gleeson.
Different organizations have different approaches to code reviews. Some consider them optional, and only use them when the original author is unsure of themselves and wants a second opinion. Some require every piece of code to have been reviewed before it can be released. Some forgo code reviews entirely, while some insist that at least two developers review every piece of code. The process for review can be very informal (“Come and look at my screen and tell me what you think”), or can be formalized and enforced by software that prevents code from being committed into source control until it has been reviewed.
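The "enforced by software" end of that spectrum can be sketched in a few lines. The following is a hypothetical illustration, not a tool the passage describes: a server-side hook could refuse any commit whose message lacks a reviewer trailer. The `Reviewed-by:` trailer name and the regular expression are my assumptions.

```python
import re

# Hypothetical check a commit hook might run: require that every commit
# message records at least one reviewer in a "Reviewed-by:" trailer.
# (Trailer name and format are assumptions for this sketch.)
REVIEWED_BY = re.compile(r"^Reviewed-by: .+ <.+@.+>", re.MULTILINE)

def has_review_trailer(commit_message: str) -> bool:
    """Return True if the commit message names at least one reviewer."""
    return bool(REVIEWED_BY.search(commit_message))

msg_ok = "Fix pagination bug\n\nReviewed-by: Alice Example <alice@example.com>"
msg_bad = "Fix pagination bug"
print(has_review_trailer(msg_ok))   # True
print(has_review_trailer(msg_bad))  # False
```

Wired into a server-side Git hook, a check like this would reject unreviewed pushes outright, which is exactly the kind of formal enforcement the stricter organizations use.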
The difficulty with code reviews is that they take time, and they break up the flow of work. For me to get my code reviewed, I need to stop another developer from doing what they’re doing to look at what I’ve written. If it’s a big chunk of code that I’ve written, it’ll take them a while (maybe an hour or so) to review it, which means that I’ll have to wait for them to find a free hour. Until they’ve reviewed the code, I shouldn’t really write any more code that relies on the first chunk, because if their review throws up major problems that require a significant rewrite, that could invalidate any code I’ve written that relies on the original draft. So unless there’s a completely unrelated task I can be getting on with, I might be stuck twiddling my thumbs for a while. Even if I find something else to do, by the time I get review feedback on the code, I might have forgotten the nuances of why I wrote the first draft the way I did, which means I might forget to include some key element when I rewrite it, introducing a new bug in response to the original code review.
What I want is faster feedback on my code. One way to get it is to write my code in smaller chunks, so that each chunk is easier to review. If my colleague and I each spend a few hours writing a small chunk, then half an hour reviewing each other’s code, and repeat, we spend less time away from the code we’re writing, and with it fresh in our minds we can incorporate review feedback more effectively. Of course, in practice this doesn’t work as elegantly as I’ve described, because two developers will never really be able to sync their work so as to get chunks into a reviewable state at the same time. But in theory, smaller chunks can keep things moving faster.
So: The next time you’re at loggerheads with a non-coder about project planning and delivery, remind them:
- Software is about designing human processes, not static constructs
- Software is about solving lots of new small problems, not repeating understood processes
- Software is about building a complex conceptual model, not performing a series of independent tasks
The biggest lesson is: You know your model is not perfect, which means you know there are ways in which the answers it generates are wrong. You need to know what those ways are, and see whether the results are good enough for your purposes. If they aren’t, you need to modify the model. My initial mindset was more like “this is the model that was used in the past, let me check a few things and see if it makes sense”, and that’s completely wrong. My goal should have been to try to demonstrate that the model was inadequate, not to try to check whether it’s adequate.
The second lesson is: I really do need to code better. I did not find any really major errors in the work that I had done to generate the first set of numbers, but I could have; indeed, it seemed so plausible that I had made a major mistake that I assumed that must be the problem. Right now I am very sure that my analysis, and the code that implements it, does not have any mistakes of practical significance, but that was definitely not true at the time we submitted our first results, and it should have been. This is partly a matter of simply allowing more time, but there’s a big component of better practices: create modules that can be tested independently, and then create tests for them. Those are two of the big things I didn’t do initially.
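What "modules that can be tested independently" looks like in practice is small, pure functions with their checks sitting right next to them. This is an invented example, not the author's actual analysis code:

```python
# A calculation pulled out into a standalone function with no hidden
# state, so it can be exercised on its own before any results ship.
def weighted_mean(values, weights):
    """Weighted mean; rejects mismatched, empty, or zero-weight inputs."""
    if len(values) != len(weights) or not values:
        raise ValueError("values and weights must be non-empty and equal length")
    total = sum(weights)
    if total == 0:
        raise ValueError("weights must not sum to zero")
    return sum(v * w for v, w in zip(values, weights)) / total

# The matching tests are then a few lines each, runnable at any time.
assert weighted_mean([1.0, 3.0], [1.0, 1.0]) == 2.0
assert weighted_mean([1.0, 3.0], [3.0, 1.0]) == 1.5
```

The payoff is that when a headline number looks wrong, you can rule out (or pin down) each small piece in isolation instead of re-auditing the whole pipeline.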
Finally — and this is a minor one compared to the others — I should have been a lot less trusting of the previous work. In fact there were some major red flags. I already mentioned the use of two highly correlated variables in a regression, with no attempt to ensure that they could reasonably be jointly used to extrapolate beyond the range of the data, nor even any discussion of the issue. Also (perhaps related to that one) I had of course noticed that the previous analysts had not tabulated the regression coefficients of their model. I assumed that was just because the intended audience didn’t care about, or wouldn’t understand, such a table…but shouldn’t they have put it in anyway? (In fact, the regulators require it, as anyone who checked would have found.) In essence, I had made the mistake of judging the book by its cover: I assumed that since the report appeared to have been done reasonably well, it actually had been.
I may not be able to eliminate these mistakes completely — especially the coding one — but even half-measures there are better than no measures at all.
I would say that if you have any responsibility for software projects but no technical knowledge, and know even less than you would ever be prepared to admit, this book would be well worth your while. Given that one of the main reasons big software projects go wrong is that the executives or managers in charge have so little understanding, the cover price is a very reasonable investment in chipping away at the information asymmetry — alongside, of course, the classic (1975) The Mythical Man-Month: Essays on Software Engineering by Frederick Brooks (“Adding manpower to a late software project makes it later.”).
Gleeson: I think the most potent one is the love of the new. Software development is about constantly trying new things — because the stuff that’s repetitive can almost always be automated away — and this can lead to a tendency to fall in love with a tool, technology or technique simply because it is new. This can create a bias, and occasionally a total blind spot, that can cause developers to advocate bad decisions. Managers should always be on the lookout for situations where a preoccupation with the new is skewing a developer’s judgement.
Almost as powerful is hate. For a few different reasons (often to do with insecurity), developers can find themselves really investing in loudly disliking a particular language or tool, sometimes unreasonably, and occasionally quite counter-productively. In the worst-case scenario it can create a toxic environment where other developers get belittled simply for the tech they have worked with in the past, creating tensions in a team. If a developer gets themselves into a state where they’re constantly railing against a particular piece of tech, it’s important for a manager to stop it becoming a problem, because it’s not productive and not healthy.