Show Notes
In this engaging episode, Mon-Chaio and Andy dive deep into the topic of peer code reviews. They begin by providing historical context, tracing back code review practices to IBM’s research in the 70s and 80s, and examine the efficacy of modern-day peer reviews. The hosts debate the true benefits of code reviews, discussing whether they genuinely enhance quality or merely serve as a process ritual. They also explore alternative methods like pair programming, and propose innovative ideas for improving peer review processes, such as detailed walkthroughs and code review checklists.
Listeners are encouraged to experiment with different tactics and to consider feedback on their own peer review approaches. Join Mon-Chaio and Andy as they navigate the intricacies of peer reviews and share actionable insights for refining this critical practice in software engineering organizations.
References
- Investigating the Effectiveness of Peer Code Review in Distributed Software Development Based on Objective and Subjective Data
- Advances in Software Inspections
- The Impact of Design and Code Reviews on Software Quality: An Empirical Study Based on PSP Data
- An Empirical Study of the Impact of Modern Code Review Practices on Software Quality
- Evaluating Pair Programming with Respect to System Complexity and Programmer Expertise
- Are Two Heads Better than One? On the Effectiveness of Pair Programming
- From Async Code Reviews to Co-Creation Patterns
Transcript
Mon-Chaio: Hello everyone! Glad you can join Andy and me again for our wandering conversations on tactics and technical leadership. And where are we going to wander off to today, Mr. Parker?
Andy: Well, I think we both started from the words “peer review”, and then we took that in very different directions. So we had a fork in the forest road. You took one path. I took the other.
Mon-Chaio: Exactly! So what we had talked about was we wanted to explore whether peer reviews mattered. That was I think the title in our guiding document for this episode.
Andy: Something like that, yeah.
Mon-Chaio: And I believe you, Andy, put the link in, and the paper was actually discussing code reviews, right?
Andy: Mm hmm.
Mon-Chaio: And whether having peer reviews on your code commits mattered in terms of various factors. I, on the other hand, said, meh, Andy’s going to take care of this. He posted this paper about peer reviews and I’m sure he’s going to study everything to do with code reviews. I can’t imagine the research is that big.
So I’m going to go the other route. What else would technical leaders be interested in, in the concept of peer reviews? And so I went down the rabbit hole of performance reviews. Do peer reviews matter when you’re doing performance reviews? And the intention was that we would arrive at some sort of framework around how your peers give you feedback in relevant or non-relevant ways.
That was the intention. Did we get there? Well, I think everybody that’s listening can judge for themselves as we go into this discussion.
Andy: I think we can try to pull it together. I think there’s overlaps. I think we can do this.
Mon-Chaio: All right, Andy, why don’t we start with the more structured part of what we were going to talk about, the code review part. Do peer reviews matter when you’re doing code review?
Andy: I would say it really depends on what you want. So I’ll put it this way. Yes, they absolutely can, but it really varies based on what you think you’re doing as a peer review.
Mon-Chaio: Uh huh!
Andy: So I think we were all raised on the story of if you peer review your code, you’ll have higher quality.
Mon-Chaio: Mm hmm.
Andy: So any engineers starting now, or almost anyone working now, your day is probably spent either on GitHub or GitLab or something like that, looking at pull requests or merge requests – I think that’s what they’re called on Bitbucket, is merge requests – and doing a review of whatever’s in there. Now, the thing is, we often do it because we’re like, oh, this is how you catch bugs. You might ask other people why it’s done, and you might hear other answers about, well, it’s to keep in touch with how the code is changing. Or it’s to give more people than just the author an overview of what’s happening, so that other people are kept apprised of the changes going on in the code base.
Now, when I was starting out, it was all about quality. It was all about finding bugs, that’s why you did reviews.
Mon-Chaio: Mm hmm.
Andy: And the research from the 70s and 80s was all about quality. And so the peer review process, this idea that you kind of sit down and look over the code and make comments on it and say, oh, here’s a problem, you need to fix that or you need to change that, that actually came from IBM in the 1970s. And then they had a bunch of numbers – IBM was very much a numbers-oriented company – and they had a ton of numbers to back up that by doing this process, adding this to their software development, they reduced quality problems substantially.
But what they did as peer review is not at all what we know of as peer review today. So, it kind of gets down to, it depends on what you want to get out of it. Are you going to do something like what they actually did and what they researched? Or is it what things have turned into?
Mon-Chaio: That’s interesting. Well, let’s start maybe with what people might want to get out of it. You mentioned a few things I heard. One was quality, generally.
Andy: Mm hmm.
Mon-Chaio: Then there’s socialization of the code base, I think was another area that you mentioned. Anything else?
Andy: There’s socialization. There’s familiarity. So this would be trying to reduce the bus number or the lottery number on the code base, so that more people are familiar with the changes going in.
Mon-Chaio: I don’t know that I think that socialization and familiarity are different, necessarily. I think they’re probably the same. Maybe we could group them into the same bucket. One other one that I think was missing, that we haven’t touched on, is to satisfy regulatory concerns or requirements.
Andy: Yeah, the idea that no change can go in without another person signing off on it.
Mon-Chaio: Exactly. And sometimes another person from a different area of the company, depending on whether you’re under Sarbanes-Oxley or some other type of regulatory framework.
Andy: So in those terms, whether or not your peer reviews do anything at all is really going to come down to what it is that you think they’re going to do for you. And if it’s the SOX compliance where you say, yes, we’ve had someone else – and they have to match these particular criteria – look at it, and they’ve signed off on it, then a pull request, code review type thing will satisfy that, and satisfy it in a way where you can see exactly who signed off on it. It gives you that traceability.
Mon-Chaio: Mmm hmm.
Even there, I think if we focus on the quality aspect of does the product do what it’s intended to do, and even if we narrow it down to just logical errors, things that logically are wrong with the product, I think that generally satisfies the regulatory compliance aspect, insofar as the spirit of the regulatory context is to do these code reviews so that people can find these things that put the product at risk, right?
And so if we think about the spirit of the law, I think we can bucket regulatory compliance, mostly under logical defects of the product.
Andy: Okay.
I hope that that’s what the regulatory compliance is actually about.
Mon-Chaio: Although oftentimes I think the compliance mechanisms themselves, for real or for imagined reasons, are so onerous that they can’t meet those requirements. But I think at the core of why the regulators came up with those requirements, I think these defect reasons are probably the right thing to focus on.
And, I don’t know if you have numbers from your studies, but my guess would be that the majority of companies, when they talk about code review, that is the first thing that they will bring up. And that is probably the highest concern for most companies. And then the other stuff, I think, kind of falls in behind, you know, this and that.
Andy: Yeah, all of those other things came from surveys of developers. I would imagine that the main thing would have been, well, it’s to find bugs. Now, the thing is, if it’s to find bugs, how do you find bugs? And I’m going to posit, I’m going to just put my opinion in here right now, you actually don’t find bugs through the peer review process that I see at almost every company I’ve ever seen this being done at. You actually find the bugs through a much more directed and facilitated inspection of the code.
And I think to drive that home, IBM in the 80s, when they studied it, and then the Software Engineering Institute, when they studied it in the 2000s, they found that to get something that’s reasonably likely to find bugs, to find the majority of bugs I should say, you can’t inspect more than 200 lines of code per hour. And they found that at 200 lines of code per hour, the reviews were picking up around 55 to 65 percent of the bugs. And if you went up to say 800 lines of code per hour or more, that dropped down to 30%.
So if your thing is I want quality, I want to find these issues, I want to get a better understanding of the quality of my software, then you limit how much you review in a given amount of time.
Mon-Chaio: That ends up being around 30 lines of code every 10 minutes.
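(A quick check on that arithmetic, and the review-time budget the ceiling implies for different change sizes – a minimal sketch; only the 200-lines-per-hour figure comes from the research discussed here, the code itself is illustrative:)

```python
# Review-time budget implied by the ~200 lines-per-hour inspection ceiling.
# Only the 200 lines/hour figure comes from the research cited in this
# episode; the function is an illustrative sketch, not from any paper.

def minimum_review_minutes(lines_changed: int, max_lines_per_hour: int = 200) -> float:
    """Minimum review time needed to stay at or under the rate ceiling."""
    return lines_changed / max_lines_per_hour * 60

for size in (20, 100, 200, 800):
    print(f"{size:>4} changed lines -> at least {minimum_review_minutes(size):.0f} minutes of review")
# 20 -> 6 min, 100 -> 30 min, 200 -> 60 min, 800 -> 240 min
```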
Andy: That’s a lot! If I’m trying to understand it to the level of I’m finding all of the bugs in it, if you think about it, I can sit there and easily look at a 10 line function sometimes for half an hour before I kind of figure out, okay, I see how this is fitting together now.
Mon-Chaio: It may be true that it’s a lot. I would say, naively, a listener listening to this might say 30 lines every 10 minutes. I mean, I review five PRs right before lunch, right? I stop coding at 11:50am, I take five PRs, I look at them and I sign off on them, and then I go to lunch at noon.
Andy: And that absolutely happens. And what they find, if you look at those kinds of PRs and that engagement, is that you’ll get fewer comments, you’ll get less meaningful comments, but it will take you even longer to get it through. So for that person who’s like, I just reviewed five PRs and then went to lunch, the overall effect is that a very small pull request takes a not insubstantial amount of time to get through, gets more comments, gets more attention, than a much larger pull request that takes even longer to get through but gets, per line, less attention.
And so, to get a good review at 200 lines per hour, you’d expect the comments, the feedback, to at the very least grow linearly with the size of the pull request – but they actually drop, roughly linearly, with the size of the pull request.
Mon-Chaio: I don’t think I’m surprised by it. And I don’t think that most of our listeners would be surprised at that. I think most people understand that concept: that as the pull request size grows, the number of comments, and the quality of your reviews, is going to shrink.
Andy: So I think people understand it. I don’t think people actually do it all that often. I don’t think we’re always very clear on what small is. From the data I found, small is less than 20 lines. But now you have the next problem, which is that a 20-line pull request will take almost as much time as, like, a 100-line pull request.
Mon-Chaio: Right.
Andy: Now, you don’t want to spend all of that time sitting on that 20 line pull request and waiting for it to go through. So you’re much more likely to do the 100 line pull request. But by going to 100 lines, you’ve dropped the quality of the review you’re going to get substantially.
Mon-Chaio: Right. Yeah. There’s a queuing problem here where you can divide a hundred line pull request into five small 20 line pull requests. But why would you do that?
Andy: If the processing time is only slightly larger for 100 lines.
Mon-Chaio: Right. Or, I mean, in most organizations, having five pull requests out for review actually takes quite a bit longer, right? Because they maybe go out to different people, they come back at different rates. And so …
Andy: And so now you’re context switching between them and …
Mon-Chaio: And also the thing I hear a lot is, well, why wouldn’t I combine them? It’s just all logically one thing.
Andy: Which gets to the next thing my predilections took me to, which is: does pair programming solve all of this? And the answer is, well, not really. I mean, it solves some of it, but it doesn’t solve all the other parts. So if you actually want the really high quality, there’s very little evidence that pair programming radically changes that.
There’s some evidence that it gets it a little bit better, but of course the effort that goes into every change is quite a bit higher. But what it does do is it gets rid of all of that waiting time on those pull requests. So you get all of the review without having all that waiting and all of that context switching.
Mon-Chaio: That’s very interesting. And what about the time it takes to generate that pull request? Does that dwarf the savings in review time?
Andy: Oh, I can’t imagine it would. This now gets to something where I’d have to go off of a bit more anecdotal evidence. There is one paper that I have found – actually not a paper, an InfoQ article – that was based off of some data. But it kind of talked about the thing that I’ve watched, which is a pull request will very easily sit there. It will take a couple hours to write up the change, and then it will sit there for a review for another few hours, and then the next day you see the review and you make your changes, and then you need to get it reviewed again to get those approved. And so you’ve got another few hours of waiting.
So you might have that 20 line change go in after two days. And that actually fits the data that I found. Like a 20 line change, around two to five days is about what they were seeing. A pair programmed one, it will take that initial two hours and then it’s done. So your latency on change will be radically reduced. The time to make any individual change reduces … maybe slightly. Quality increases, maybe slightly. So you basically get all of your code changes in much more quickly.
Mon-Chaio: So if we’re to believe the research, then, it seems to be telling us that code reviews, the way most people do them, have no benefit.
Andy: Yeah, for me personally, they don’t seem to benefit the things that I would care about in a project team. Now, what I found interesting, going back to the one that improves quality, the formal inspections: IBM, in the paper that they put out in the eighties – I think it was the early eighties – telling the world much more about what it is that they do, actually created a kind of fun little fishbone diagram saying, well, this is how you get quality of inspections. And it’s kind of like this causal tree. You kind of say, well, to get quality inspections, you need this, and to get that, you need this, and to get that, you need this.
On almost every branch of that, you needed management interest. You had to have it that the management structure of the organization was interested in that thing happening. Not just that they allowed it to happen, is my interpretation of this, but they were actually interested in it happening. So things like preparation rate or concern for quality, management had to be showing an interest in that to promote that that is something that you want to do. So just saying you’re going to do a pull request process isn’t actually enough to get effective peer reviews.
Mon-Chaio: And that’s really interesting because I think for the most part, I wouldn’t know of any leaders that say, hey, I’m not interested in product quality. I mean, usually that’s why they’ve instituted a pull request process in the first place.
Andy: Mmm hmm.
Mon-Chaio: But going back to our concepts of espoused theory versus theory-in-use, it’s not just your support of PRs that indicate your support of quality. When an engineer comes up to you and says something to the effect of, “oh, we have a lot of technical debt that we need to work on because that reduces quality” or anything else: “we need to do a redesign of the entire system”, “we need to stop taking product features for a while”, “we need to X, Y, and Z”, anything quality related. If you start to not engage with those, it doesn’t matter, I don’t think, how much you support pull requests in that process because it shows that you don’t really care about quality. You care about pull requests.
Andy: Get the review in, get the pull request in, just get the thing through. But if you care about quality, you’ll start caring about how do you do the pull request? What do you look at? What kind of feedback are you giving on it? When someone brings up “we have all this technical debt”, you start asking, why do we have it? What is happening? What is preventing us from working on it right now? Are there approaches that we can be using that will start addressing it? Like, how can we incorporate that into our reviews? You’d start addressing it from the whole quality standpoint, rather than just the, how do I get my pull request in?
Mon-Chaio: And this isn’t to say that everything somebody brings up is correct, that we need to pause product features in order to deal with technical debt, or we need to completely re-architect. I think it’s about how you engage with that process.
Andy: Yeah. Yeah. Whenever a developer comes to me and is like, no, we need to just rewrite this whole thing right now, I’m like, mmm, mmm, you’re losing my trust by doing that.
Mon-Chaio: Absolutely. But when you say you’re losing my trust by doing that, no, that’s a different feeling that you give the developer around how much you care about quality than if you say something like “tell me more” or “interesting, do you have data on how that affects quality” or something else around the fact that I really do care about your core concern about quality. I don’t think that this thing is it, but, like, you have some concern, let’s dig into that.
Andy: Or, I do want to address that thing. Stopping product development in order to do that is very dangerous, from my experience. What would be much better is if we drove improvements to it through the work that we’re doing on the product features. So let’s talk about how we can incorporate improvements to this as part of that work.
And that may not be everything, but by that point, we’ll have a better understanding of what else we can improve that’s not directly driven by a feature. So let’s work on that.
Mon-Chaio: Absolutely. I agree, I agree. I think, also, people listening to this episode, and people not listening to this episode, will come out of it saying, well, I’m not going to do pair programming. Because whoever would, right? I don’t think we are going to convince any technical leaders to do pair programming that aren’t already convinced. They’re not going to listen to this podcast and say, oh my goodness, you’re reducing queue time and you get the same quality. I am now all in because that was my main concern. What they are going to do though, is they’re going to listen and they’re going to say, oh, well, all right. So there are some problems with my pull request review process.
Andy: Mm hmm.
Mon-Chaio: So let me figure out how you can iterate on that process in order to improve things.
Andy: All right. All right. A bit of improvement kata type thing.
Mon-Chaio: Yeah, improvement kata type thing. And so I thought we might spend a little bit of time talking about how we’ve seen people try to improve the edges of this process and whether we think that works. So, of course, I have an example that I would like to start with, which is why I brought this up.
Andy: Mm hmm.
Mon-Chaio: At Meta, they do this pull request review process. I think every company does it. They do it. And they want to be structured about it, and they want it to be good.
Andy: Mmm hmm.
Mon-Chaio: So in the year-end performance reviews – or however often they conducted performance reviews; sometimes it was every half year, sometimes it was quarterly, depending on how their process was at the time – they would have an engineering excellence section of their rubric. And that engineering excellence section for ICs would often talk about code reviews. And most of the time the stat in there was the number of PRs reviewed, or whatnot. Which we know is terrible. It’s not only terrible, it’s useless.
Andy: It seems to incentivize incorrect behaviors.
Mon-Chaio: Right. It’s worse than useless. It actually wastes time, because I have to read that number and then I have to talk about a useless number, whatnot. But I did have an org leader who said, “hey, that number is useless, I don’t want to look at that number”, which, great, great! What that person did is, he had a script he’d run across the code repository. And what he would do is pull out all the comments that people left on the reviews.
Andy: Oh.
Mon-Chaio: And his script did some very, very basic stuff – I mean, he wasn’t a data scientist – like: what is the average number of characters per comment left by this person? How many code reviews did they leave comments on? And so on. And this purportedly was to increase the quality of the code reviews, and make sure we were rewarding people that were actually giving meaningful code reviews …
Andy: Mmm hmm.
Mon-Chaio: … and meaningful discussions. So what do we think about that, Andy?
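(The script itself wasn’t shared on the show. A hedged reconstruction of the idea might look like the sketch below – the original ran against Meta’s internal tooling, so this version assumes GitHub’s review-comments API instead, and it computes only the stats Mon-Chaio mentions:)

```python
# Sketch of the comment-stats idea described above: pull every review
# comment in a repository and compute per-reviewer counts and average
# comment length. GitHub's REST API stands in for whatever internal
# tooling the original script used.

import collections
import statistics

import requests  # third-party: pip install requests


def reviewer_comment_stats(owner: str, repo: str, token: str) -> dict:
    comment_lengths = collections.defaultdict(list)  # reviewer login -> [comment lengths]
    url = f"https://api.github.com/repos/{owner}/{repo}/pulls/comments"
    headers = {"Authorization": f"Bearer {token}"}
    params = {"per_page": 100}
    while url:
        resp = requests.get(url, headers=headers, params=params)
        resp.raise_for_status()
        for comment in resp.json():
            comment_lengths[comment["user"]["login"]].append(len(comment["body"]))
        url = resp.links.get("next", {}).get("url")  # follow Link-header pagination
        params = {}  # the "next" URL already carries its query string
    return {
        reviewer: {"comments": len(lengths), "avg_chars": statistics.mean(lengths)}
        for reviewer, lengths in comment_lengths.items()
    }
```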
Andy: I think that is a useful improvement on the just-counting-pull-requests-reviewed stat. Uh, it gets closer to what a bunch of these papers were studying, which was comment density and participation. Because I imagine it was like, well, there were 5,000 pull requests and you commented on 200 of them, something like that.
So I think that is a useful thing to kind of start moving things toward a more beneficial interaction through the peer review process. And it also adds a little bit more useful data to be talking about in that yearly review. I would say, just like with the pull request though, if you’re waiting an entire year to get to the point where you talk to someone about this, you’ve kind of got a long-duration, long-cycle-time thing going on there.
Mon-Chaio: Yeah, and I think Meta has, you know, waffled between how often they do these reviews, right? And so my opinion is very similar. I think it’s better than the previous. But how far does it really move the needle?
Andy: Um, that’s hard to say. I would imagine it would noticeably change things. It would change it from purely a performative act to something a little bit more meaningful. Now, it’s just as easily gamed, because someone can just write screeds about things where they’re like, “oh, I hate it when people call ‘save’ here, you need to call ‘save!’” and whatever else. So, it could help, I think it would change things.
One thing that I’ve often been interested in is more of measuring the duration of changes. So the time from which you started working on something to the time at which it got merged. And I wouldn’t do this at an individual level. I would do this more at a team level. You want to start looking at the duration that things take. So that would be one aspect of it, but that’s only one aspect. It’s like, okay, then people game it by just going and getting stuff through as fast as possible.
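(A minimal sketch of that team-level duration metric – the input shape, records carrying a team label plus opened/merged timestamps, is assumed for illustration rather than taken from any particular tool:)

```python
# Team-level change duration: median hours from a change being opened to
# being merged, grouped by team rather than by individual.

from datetime import datetime
from statistics import median


def team_cycle_times(pull_requests: list[dict]) -> dict[str, float]:
    durations: dict[str, list[float]] = {}
    for pr in pull_requests:
        if pr.get("merged_at") is None:
            continue  # ignore work that never merged
        opened = datetime.fromisoformat(pr["opened_at"])
        merged = datetime.fromisoformat(pr["merged_at"])
        hours = (merged - opened).total_seconds() / 3600
        durations.setdefault(pr["team"], []).append(hours)
    return {team: median(hours) for team, hours in durations.items()}


# e.g. team_cycle_times([{"team": "payments", "opened_at": "2024-05-01T09:00:00",
#                         "merged_at": "2024-05-03T11:30:00"}]) -> {"payments": 50.5}
```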
Mon-Chaio: Yeah, in my previous example, it wasn’t even about gaming. I think ostensibly the outer behavior improved, in that the density of comments increased. But in my more detailed review – because I was interested in this when I was there – you have these code reviews, pull requests, coming across your desk, and you’re incentivized on your individual behavior. So you’re working, working, working, gosh, it’s time to review my pull requests. “Okay. Let me pull these up. Oh my goodness. I better have meaningful things to say in all of these lest I come to … but there’s so many, and some of them are large, and …”
So what ends up happening?
Andy: You have people saying like, “uh, that’s a long if condition, just invert your if else.”
Mon-Chaio: Or, oh, did you know that the last time this file was touched, we had a bug? Can you look at the bug thing and make sure that this …
Andy: Make sure this isn’t going to trigger it again?
Mon-Chaio: Something like that, right? Again, these are all better, but do they make meaningful improvements? And I don’t actually think so. I think they’re better around the edges, but even implementing this process, I don’t think meaningfully improved the amount of defects that you’re going to find, or the quality of code that’s going to get produced.
Andy: Now, something I’ve often proposed as another possible improvement is code review checklists. So saying things like: have you thought about the security of what’s happening here? Have you thought about the interaction with the database – is this going to be creating more database load? And you update that depending on what you’ve learned from bugs that have come in, and that kind of thing.
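(As a concrete illustration, such a checklist might live in a pull request template so it appears on every review. The file path and items below are hypothetical, seeded from the examples Andy gives:)

```markdown
<!-- Hypothetical .github/PULL_REQUEST_TEMPLATE.md -->
## Reviewer checklist
- [ ] Security: does this change touch untrusted input, authentication, or secrets?
- [ ] Database: does it add queries or load to a hot path?
- [ ] Failure modes: what happens if a dependency this code calls is down?
- [ ] (Revise this list as incidents and escaped bugs reveal new categories.)
```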
What are your thoughts on checklists as a way of improving a peer review process?
Mon-Chaio: I think better than nothing. Just like the previous thing, it’s better than nothing. I still don’t think it makes meaningful improvements relative to the time it takes to create and maintain the checklist, and the time it takes to read the checklist, because I think it’s about the way that we produce product.
And when we produce product by saying, I have done the difficult work and I have distilled it down … It’s funny, Andy, we were talking about this video that you were watching about the engineering method or whatever, right? And, whether we agree with it or not, the method, poorly stated, is something like: we use rules of thumb to solve difficult problems that have unknown solutions, or something to that effect. And so if you think about that from an individual engineer’s perspective, this engineer has taken all this data and distilled it down into a solution. They’ve done all of the difficult work. You as the reviewer, you’re supposed to review the final product, not the chain. And when you’re reviewing the final product, even if you have a checklist, it’s very difficult to go back in the chain, right?
So you can look at a final product and say, ooh, this might increase the database load. And you’re going to put a comment on that. And what is the person going to say to that? Oh, you know, I considered three different design proposals. Um …
Andy: I’ve actually measured that already. It’s …
Mon-Chaio: Right. I’ve actually measured that already. Or even if I haven’t measured that already, it’s, “I thought about it, but I thought this was the quickest way to get this out, and then we can measure it in production. And really, if we wanted to have less database load, this requires us to use a Redis cache, and we don’t have that installed in production, and it would take this, and then we’d have to go to IT to see if we can requisition some clusters in order to run this.” So then what happens? Then it’s too late, right?
So there are a lot of things, I think, where by the time it’s time to do PRs, it’s already too late. And so I think the checklist is fine, but then you get into that, like, well, what do you want me to do, redesign the whole thing? I spent three weeks working on this before you saw this PR.
Andy: Yep. I’ve seen that many times. What do you think of this? You mentioned in order to review it, I kind of almost need to recreate the thinking that went into what I’m looking at. And often I’m trying to do that on my own.
And so I end up having to create all sorts of assumptions about what the thinking was that went into this and why they made that choice or this other choice. What about if I just called up the other person and said, hey, take me through this.
Mon-Chaio: Yeah. Well, let’s do an analog of that first. I like where you’re going, but I’m going to do an analog of this first, because in the year of our Lord 2024 – almost five – calling up another person is blasphemy. We don’t call them. Maybe we Slack them. We probably don’t Slack them either. What do we say? We say, please write a wiki doc or put something in your commit notes …
Andy: Mmm hmm.
Mon-Chaio: … about how you arrived at this change and then I can read it asynchronously, right? Because who wants synchronous work these days? Nobody.
Andy: I will say, I am a big fan of really descriptive commit messages.
Mon-Chaio: Sure, as am I. I don’t think this level of detail belongs in a commit message. I don’t even know if commit messages have a maximum character limit.
Andy: No, I’m pretty sure they don’t, given the length of commit messages I’ve sometimes written.
Mon-Chaio: Um, so let’s start there, Andy.
Andy: Mm hmm.
Mon-Chaio: A document that’s written in addition to your commit or in your commit message that talks about how you got to that commit. I think, again, better than nothing, but probably poor use of time.
Andy: I wouldn’t say poor use of time. I would say it doesn’t address this problem, because those commit messages, as you said, unless you spend a huge amount of time on them, are probably not going to contain quite enough information to really explain absolutely everything. Plus, at the time you wrote it, you didn’t know what questions someone else would come up with about what you’d done.
So the best you can do is kind of describe a bit of your own reasoning. It’s a bit of a halfway-there kind of message. I don’t think about it as a thing that’s there entirely to support a reviewer. I think about it as a thing that’s there to support me, or another developer weeks or years down the line, who’s looking at this and trying to figure out, like, why is it written this way?
And then they can pull up the commit message and get a clue as to what I was thinking at the time, but it’s not enough of a clue to have like fully reviewed it at the time.
Mon-Chaio: And I think we’re in agreement here, Andy. I think what I’m saying is the amount of time you would have to put into writing those messages, to writing those screeds …
Andy: Yeah.
Mon-Chaio: … to get the value that you’re looking for, I don’t think you get the value in terms of time spent. And so I would not advocate that. I mean, again, it’s better than nothing, but for the goal that we were talking about, as you mentioned, I don’t think it would meet it.
So now let’s jump to your proposal, which was: call up the developer and tell them, walk me through it.
Andy: Which, hey, one advantage that you get out of that is you’re now not getting those long wait times, where it’s like, okay, I finished and now I need to wait eight hours before you finish up your task and get around to it. I’ll just interrupt you and get you onto this, because partially completed work needs to be finished before we start new work. Now, even that – if we’re going for the, like, how much better can you get? – even that, I’ll take a quote from that IBM paper. It says: “in contrast to inspections, walkthroughs, which can range anywhere from cursory peer reviews to inspections, do not usually practice a process that is repeatable or collect data, as with inspections. And hence, this process cannot be reasonably studied and improved. Consequently, their defect detection efficiencies are usually quite variable, and when studied, were found to be much lower than those of inspections.”
So, doing a walkthrough, which is what this would be, it would be “walk me through this”, better, faster than not doing it. So, I think that would be an improvement. What do you think?
Mon-Chaio: I do think so. And I think this is probably the best thing that I’ve heard – better than the “let me see how often you engage” commit-stats strategy, better than pulling up the length of commit messages, better than the “write a screed in your commit message”. I think this is the best thing I’ve heard.
I think this actually has the potential to actually make movement that’s reasonable. Only if, though, both parties are equally concerned about the same outcome.
Andy: Yes.
Mon-Chaio: And this isn’t a knock on an engineer at a company. Everybody knows, like, you’re here for the company. So Andy, when you and I work together, if you want to see whether your code’s performing well, I care, right? Because it gets released to production and maybe I’m on the on call loop. I do care about that.
But there are different levels of caring. And I’ve often seen this, where the biggest problem is not being on the same page about the goal. So as you’re walking me through, I need to be able to ask you questions, dig into context, and that may require you to now go to the PM to get the original requirements document. That may require you to go to the original place where the customer ask was documented. That may require us pausing and going off on another tangent to ask a different platform group: why did you design your platform code in this way? But we have to be on the same page about how deep we’re going to go into this, and why.
And I think most of the time we’re not. Especially as more and more organizations work to these individual ownership type of workflows, right? This is your code, Andy. I’m here to help you, but ultimately it’s your code. And when I say, uh, are you sure this is what the customer is asking for? And you say, yeah, yeah, yeah, this is what the customer is asking for. I stop because it’s not my code, you’re the one who’s responsible for it. And so I think with that sort of working style, even this has much more limited utility.
Andy: Correct! Yeah, yeah. So you could do the walkthrough, but people have to be bought in that your goal, as the person getting the walkthrough and giving the feedback, is: I am creating this code just as much as the person who initially wrote it. You have to go into it with that mindset. I am just as much a creator of this change as anyone else who’s interacting with it.
Mon-Chaio: Now, I just thought of this, Andy. Maybe the only thing you get credit for in your year end performance review is the quality of the code you reviewed versus the quality of the code you produced.
Andy: Ooh!
Mon-Chaio: And so we no longer look at code you wrote, we only look at code that you reviewed and the bugs in that code and the product discrepancies in that code. What do you think about that?
Andy: I think that could be very interesting. I’d want to see it done on a small scale and a bit of an experiment before trying to roll it out.
Mon-Chaio: Absolutely!
Andy: ‘Cause I immediately have kind of like, ooh, what’s going to happen with that? It might lead to exactly what you’re interested in, which is like, people are like, no, I am interested in the outcome here.
Mon-Chaio: Well, and I think that this gets closer to something that we espouse all the time, which is measuring these types of things on a team level. Right? It’s not the individuals that you’re concerned about and how many code reviews they did and whether they did their share and what they produced. But that requires this concept of team ownership and people are like, eh, well I can’t judge the individual developers. Okay. But this process gets closer to that while still allowing this concept of individual ownership.
Andy: Yeah. Yeah. Yeah, it’s interesting.
Mon-Chaio: So for those that are loath to give up individual ownership, they can still reap some benefits.
Andy: I find this very interesting. I like where we’ve gotten to.
Mon-Chaio: And since this seems so profound, I think we should just leave it here for everybody to think about. Any, uh, last thoughts, last words, last insights that are just as profound?
Andy: No, I don’t think I can top that.
Mon-Chaio: All right, well, thanks for exploring this topic around peer code reviews with us today. We’ll link all of the research in the show notes, and hopefully this has given you some sense as to the efficacy of your peer review process. Now, if you’re interested in improving the efficacy of your peer review process, we have definitely given you a few tactics in this episode, but those aren’t the end-all, be-all.
And if you’d like help thinking more through the individual challenges you or your company are facing, and perhaps tactics that would work for you specifically instead of more generally, let us know. We love to help individuals. We love to help companies. Our goal is to make the practice of leading a software engineering organization better.
And so we love to hear from practitioners about their challenges and how we can help them. Write to us at hosts@thettlpodcast.com. Also write to us and let us know what you think about the podcast. Did you like it? Did you not like it? Did you completely disagree with us? That last one is often the type of note we most like to receive, because it gets us thinking, it gets the discussion going, and it really helps us pin down where the errors in our thinking are, or where our thinking should change. Again, that helps make everyone better leaders. If you have those notes, write to us as well.
Lastly, think about whether there are other people in your life who would benefit from a little more TTL, and recommend the podcast to them – help us grow our listenership and maybe help them improve their technical leadership style.
All right, listeners, until next time, be kind and stay curious.