Show Notes
Can you measure developer productivity? Prompted by an article by McKinsey and already reacted to by many, we ask if McKinsey is telling us to measure something that is at all useful, how to think about the problem McKinsey raises, and what you, as a technical leader, can do to address that problem better than what McKinsey suggests.
Opening quote from The Psychology of Computer Programming by Gerald M. Weinberg.
References:
- McKinsey article – https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/yes-you-can-measure-software-developer-productivity
- Kent Beck’s response part 1 – https://tidyfirst.substack.com/p/measuring-developer-productivity
- Kent Beck’s response part 2 – https://tidyfirst.substack.com/p/measuring-developer-productivity-440
- Dave Farley’s response – https://www.youtube.com/watch?v=yuUBZ1pByzM
- Dan North’s response – published after we recorded – https://dannorth.net/mckinsey-review/
- Untangling the metrics request – https://podcasters.spotify.com/pod/show/tactics-tech-leadership/episodes/Untangling-the-Metrics-Request-e27906b
Transcript
Andy: The remoteness of the project leadership from the workers is the source of many social problems in large projects. The first level manager maintains at least some contact with the actual work being done. And the nth level manager only sees the work indirectly through other managers. Whenever a supervisor is responsible for work he does not understand, he begins to reward workers, not for work, but for the appearance of work.
Mon-Chaio: Thanks for joining us for another episode of the TTL podcast. Normally on this podcast, we like to consider a problem, think about the research behind it, and, we think, make sort of measured suggestions around tactics that you can employ at various levels of your organization in order to solve said problem. We tend to not be very reactive. We’re not issue-of-the-day often, right?
We’re not saying, this is what social media is talking about. Let’s jump on the bandwagon. I think 15 or so episodes in though, we’re going to do one, which is a little bit more reactive. At least that’s the way I see it.
Andy: This is our reaction video.
Mon-Chaio: What we’re talking about is the infamous McKinsey article titled, “Yes, You Can Measure Software Developer Productivity.”
Andy: I think the way that we should approach this, Mon-Chaio, is we should talk a little bit about what the McKinsey article says. We should talk a bit about what their central thesis is.
Mon-Chaio: Mm hmm.
Andy: I think we need to talk as well about why would they have written this.
To me, that’s important, to understand why they might have written this.
Mon-Chaio: Agreed.
Andy: Then I think we should go into a little bit of an understanding of what different people have said about it.
What do you think about that, Mon-Chaio?
Mon-Chaio: Yeah, I think that’s a great way to go about it. And maybe, maybe I’ll start. I think if I were to summarize the McKinsey article, and because it’s me, it’ll be sort of a long summary, but I think they start out by saying that software is so ubiquitous now that it’s not just Silicon Valley companies that have software. And we should be treating software much like other disciplines within a company, whether that be sales or recruiting or what have you. And we cannot continue to say that software is this opaque box, which is only understandable by super technical people, and that we can’t make meaning of it unless we’re these technical people. They then go on to say…
There have been a lot of metrics and ways to introspect your software development process and the success of your software development organization. They posit that most of these fall into an outcomes focus or an optimization focus metric. So we could talk about what they are later. I don’t think it’s super important though. And they say what’s missing is this concept of opportunity focus. Again, not really important because as you’ll hear us state, we don’t really think that’s a thing necessarily, but they say that they are going to deliver some insight into how using this opportunity focus and their new set of metrics can help make the opaque software box more transparent, especially to people who may not have specialization in software, right?
So you might think about your investors, or you might think about your non technical CEO, those sorts of people. Their additive metrics fall, I think, into three buckets, or I would put them into three buckets. One of them they say is called the developer velocity index, which is an index they’ve developed where you can take a survey and it will tell you how well you’re doing on certain practices. Another is what they call contribution analysis. They posit that it’s important to track every individual’s contribution to the various metrics that you’re measuring. And they have this concept of how you can do that.
Andy: Mm, not to the metrics. The contribution analysis is by individuals to the, to a team’s backlog.
Mon-Chaio: Sure, to the, yes, to the team’s backlog, but they also talk about putting metrics measurement on top of the backlog.
Andy: Yes. Yes. So
Mon-Chaio: And so then you would say,
Andy: I think the thing I was thinking about was that the contribution in this case is contribution to that backlog or that work, not contribution in terms of how much this developer is contributing to the various metrics.
Mon-Chaio: fair, absolutely. And they use examples like, you know, does a QA tester have enough to do? Or are people spending enough time writing code versus you know, doing deploys or whatnot, which they think is less valuable,
that sort of a thing. And
Andy: How do we analyze the contributions of individuals on a team?
Mon-Chaio: Right. And then their last bucket is this concept of a talent capability score. So how do you think about whether your team is rightly constructed in terms of junior and senior talent? How do you assess what is junior and senior talent? And so developer velocity index, contribution analysis, talent capability score, they think are their unique add-ons into this space. And they think that by adding these on to the metrics like DORA and SPACE, which already exist, that allows the opaque software engineering organization box to become much more transparent and helps investors and CEOs and those types to really understand what’s going on in their software development organizations.
Andy: Mm
Mon-Chaio: So that would be my overview of what McKinsey is trying to say.
Andy: I’m going to say the thing to keep in mind with McKinsey’s article and this whole thing is that the article, I believe, we should read as more of a marketing piece than a scientific research piece.
The article is from a consulting company that wants to sell you services. They want to sell specifically those CEOs and executive suite people who are having these questions services to help them out. And I think that’s something to keep in mind is that they’re not only, they’re selling a service, but it’s a service that they want to believe in, that they want to help people out.
I don’t believe that they’re out there just to make people’s lives worse. I think they do want to help. And I will take a stance here: I don’t agree with them that this is the right way of doing it. That said, I think there are aspects of what they have which are useful and important.
And one aspect of that is, as much as I said that this is marketing, it does point out a very real problem that we deal with in technical leadership, which is that they said in the very first paragraph of their article, “regardless of industry, leaders need to know that they’re deploying their most valuable talent as successfully as possible.”
Which sounds really familiar, Mon-Chaio, doesn’t it? It’s really familiar from our metrics episode. We had this exact same question and we discussed it.
Mon-Chaio: That’s right, Andy, a while ago, we did an episode called Untangling The Metrics Request. And so if you haven’t listened to that after this episode, go listen to it.
There will be a little bit of crossover just because, I don’t know, maybe we’re prescient, Andy, and we knew that McKinsey was going to come out with this. And so, you know, we did the metrics episode first. But yes, it does sound very familiar. This concept of what is the problem they’re trying to solve, right? It’s a really important question to ask, because oftentimes we can think of these companies as maybe preying on people’s fears. Oh gosh, I’m a CEO and I have this fear about how I should spend my developer money. These companies come in and say, well, you have this fear.
It may or may not be real, but I’m happy to sell you a solution for this fear that you have. I don’t think this is really the case. And I do think there’s a real problem. And I also think that oftentimes, as we mentioned in the metrics episode, a big problem is when the CEOs start talking to the technical people.
The technical people don’t really drill in to the real problem that CEO is facing. And so there’s a communications disconnect, right? And Andy, in the Untangling the Metrics Request episode, you offered TDD for Humans as a way to draw out that understanding of what is the real problem that you’re facing.
Andy: Yep. Absolutely. And I think that is something that I did get from this article. It was kind of buried in it a little bit, but it was there: this idea that you need to have that conversation. They kind of get at it a little bit, which is that for the C-suite leaders who don’t understand how that technology group works, they recommend they’re going to need to learn at least some basics of the software development process.
Because it’s going to be very difficult to have that conversation if there’s not at least that ability to talk about the discipline that we’re working in. And I think that’s actually a very good thing to raise. And it’s actually great to have McKinsey standing behind that. That if you are going to be managing a team, a company that does a lot of this, and it’s really important, there’s going to be some need to understand what they do.
And I think that’s very useful.
Mon-Chaio: I agree.
Andy: Sorry, but that take was kind of a little bit from the other direction. That was the CEO understanding the technology. In our last episode, we did talk about how the technology side needs to understand the CEO.
Mon-Chaio: I think that understanding both ways is really important. There are a lot of problems with the McKinsey article. We haven’t said them all, but you will have caught the undertones just from the way we’ve introduced it and the way we’re approaching this episode. If we come at the article, or if a reader, an engineer, comes at the article and says that the entire article is bunk, that we don’t need to make software engineering organizations more transparent, or that the only way you could understand them is if you’ve been an engineer for 20 years, deep in the weeds, I think we start to lose our credibility there, right? We start to lose empathy towards the people that are asking these questions, and that’s not a great way to go.
Andy: I want to bring up this idea of why, why could this end up happening? I think it’s very easy sitting inside the engineering side to think that this is happening purely out of malice or laziness, or just no desire to engage with the complexities of this. But I think there’s another way of thinking about it, which is that it is a fundamental issue of delegation.
A fundamental problem of the structures that we have. Not to say that there’s another structure that resolves this.
I don’t think there is one that we know of. But to acknowledge what that problem is. And that’s in a book that I have called Corporate Theory. It’s something called the principal-agent relationship. And I’m going to very quickly read through a short quote. But I think it gives a flavor as to the problem that we’re facing here.
And it says, “Insiders”, in this case, insiders are people within the technology organization that understand software, “may have private information about the firm’s technology or environment, or about the firm’s realized income. Alternatively, outsiders,” the people outside of that group, so the C suite that doesn’t understand the engineering,
“cannot observe the insider’s carefulness in selecting projects, the riskiness of investments, or the effort they exert to make the firm profitable.
Informational asymmetries may prevent outsiders from hindering insider behavior that jeopardizes their investment.” And that right there is the crux of the issue. Just structurally, they are missing information. In this case, it’s worded as hindering insider behavior.
But another way of thinking about it would probably be a bit more beneficial in terms of a collaborative relationship is aligning insider behavior .
These informational asymmetries may prevent outsiders from aligning insider behavior so that it doesn’t jeopardize their investment. It’s about finding that understanding. And helping to guide things into the correct direction. And I think that’s, that’s kind of the crux of what all of these things are looking for.
Now that’s what the McKinsey article is about. This brings us to the responses by Kent Beck, who identified that metrics can actually serve other purposes as well.
One is this attempt at clarifying what’s going on. And he kind of says that is to compare two investment opportunities.
So this is, I want to make sure that you’re working on this and not on that, or I want to compare the effort we’re putting into these. Another one that Kent Beck brought up was that a CTO wants to identify which engineers to fire, or the CEO wants to identify which engineers to fire. This is the individual performance leveling that might be desired.
And the third is they want to manage performance. They want to improve their developer productivity, and so they want to manage that performance level and move it along. Or they want to help an individual software engineer grow.
That might be what metrics down to the individual level are about. And I should say that was something we kind of skimmed over, which was that the McKinsey article talked about system-level measures, team-level measures, and individual-level measures, and that
an important aspect of something like the metrics for a sales organization is that they’re team-wide but also individual, and so they give insight at both levels. So what they’re aiming for is: can we get something that looks at the high level of the whole system of software development, as well as something all the way down to the individual developers?
When things get down to the level of individual developers is when I start to cringe. I start to worry that we start getting these bad incentives that get built in. Because once something starts getting measured, it will eventually, it will, it has to, if anyone knows that that’s what’s being measured, it will start changing behavior.
Mon-Chaio: Mm hmm.
Andy: And at some point it will probably get turned into a target. And that’s the point at which I think everyone starts getting afraid.
Mon-Chaio: I like the fact that Kent Beck et al. had a big section in their response around where the need to measure productivity comes from, and Andy talked about the four questions that the CEO, or somebody, might want to answer. I do think that there’s a fifth that they missed out on.
I think we touched on this in our Untangling the Metrics Request episode, and it’s a much higher-level one. It’s this concept of: how do you figure out whether you’re maximizing your engineering dollars spent? I think that’s a difficult question to answer, and so people look for ways to measure, and often they can’t, and so then they go down into the individual levels; they kind of dig deeper and deeper and deeper through the systems. And it’s interesting because, consider if you’re a CEO of a company, a question you might ask is: I want to do X dollars in sales in these territories. How many salespeople do I need to accomplish that? And I’m not in sales, so this is my guess, and salespeople may come tell me I’m wrong and leave comments and whatnot. But my guess is, generally, if you took two or three sales executives, they would pretty much agree. You give them the context around what you’re trying to sell and your product, and they would talk through it. Oh, well, it’s high touch, low touch, et cetera. But then they would say, okay, well, you’re going to need three in LATAM.
You’re going to need six in the U. S. whatnot. I think it’s very similar with other disciplines like recruiting. I want to bring in X number of people. I want to fill this many headcount in the next year. How many recruiters do I need? I think that’s a very easy thing to answer. I think you could talk to other companies who are saying, oh, well, I have six recruiters and they’re staffed in this way.
And this is how I get headcount into the system. And so for a CEO, I think you’re really confident in how you spend your recruiting money or in how you spend your sales money. But when it comes to engineering, hey, I want to create an app that does this. How many engineers do I need? I don’t think that question is as easily answered, and critically, I don’t think it can be answered by saying,
“I’m going to talk to my pal over here, who’s also building an app, solving a different problem. And he will tell me that he’s spending 6 million on engineers a year, and that’s the right amount to spend.”
Andy: And I don’t think that their metrics, or any of the metrics that they talk about, because they also talk about the DORA metrics and the SPACE metrics, answer that question. Because it’s not a question of measurement of an existing system. It’s a question of kind of hypotheticals and what-ifs and alternative realities.
Mon-Chaio: I, for one, though, think that that is actually the most important question to answer. I’m not saying it’s answerable, but if you can give the CEO confidence, or if you can give whatever investor confidence, that the money they’re spending is maximally utilized, whatever that means, I think you get around all of this other stuff around, well, are they performing?
Or I need to lay people off. Who do I fire? At least that’s my opinion. What do you think, Andy?
Andy: I would imagine so. If you can give a good account of how things are progressing, why they’re progressing in that way, what alternatives you have, and that kind of thing, I think that you now have the ability to ask that question, to think about it, and to come up with something that isn’t necessarily, like, the quantifiably correct answer, but is something you can all believe in.
Now, I think there is a question of the optimization that you’re bringing up, which is, like, could I spend less?
Mon-Chaio: Mm
Andy: And in most cases, the only way to find out is to try, but that has a lot of consequences along with it that you need to talk about. Like, why do we want to try?
Mon-Chaio: And I think trying, like, trying is that next step, right? The first step is confidence in the fact that we need to even try.
Andy: Yeah.
Mon-Chaio: I’ve seen a lot of software organizations that I’ve worked in do very well in peeling back the layers and giving transparency. This is what we’re working on, here’s what’s coming up next, here’s how often we ship, here’s what customers are saying. I think that there’s often a fundamental, I don’t want to use the word distrust, I think that there’s often a fundamental fear that you could be getting more.
Andy: And this, this is that question of from system to individual
Mon-Chaio: Mm
Andy: because it’s, you could be getting more by having the individuals do something more.
Mon-Chaio: hmm,
Andy: I find it interesting. I think it’s useful. In your example right there, you said something about how often we’re deploying and what our customers are saying.
You basically grabbed the DORA metrics.
Mon-Chaio: mm hmm, mm hmm,
Andy: Which are metrics that I think a lot of people, for a while now, have started to get behind and say, actually, those are useful metrics. Those are useful
things to think about and aim for. And I think they’re useful. And Kent Beck et al. produced a useful model. I actually love this model, because it’s useful for so many things, which is their effort, output, outcome, impact
way of thinking about things. And that kind of optimization, could we do this with fewer developers, is asking: can we put in less effort to get the same or better impact?
Mon-Chaio: hmm.
Andy: is a valuable question to ask. And that’s something that we should be willing to discuss. And that’s
Mon-Chaio: Right. Or,
Andy: the development organization, we should be thinking about all the time. And those Dora metrics, the way that Kent Beck brought this together, and I thought this was a really interesting way of thinking about it, was that those are all about outcome and impact.
Those are all about the whole, like, how the system as a whole works together. And they’re the things that really matter. If we change those, we do something for the business, for the users.
Mon-Chaio: right.
Andy: Whereas if you change effort, it’s so far removed from that, but so close to the individuals, that you don’t have any control over the impact you’re going to get. But people do have complete control over, like, the amount of effort that they put in, and that’s where you get those very strange behaviors, like, I’m going to code myself a new minivan. Because if you measure the effort, people can game that very easily, and that is actually something I think the McKinsey article correctly talked about.
They said don’t take simplistic measures that can easily be turned into bad results. I think they fell apart, though, because I think they very quickly got to simplistic measures that can get them bad results.
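[Aside for readers following along: the DORA metrics discussed above are outcome-level measures of the delivery system as a whole, not of individual effort. As a rough illustration of what that means in practice, here is a minimal sketch of how two of them, deployment frequency and lead time for changes, might be computed from deployment records. The data and field names are hypothetical.]

```python
from datetime import datetime

# Hypothetical deployment records: when each change was committed and deployed.
deploys = [
    {"committed": datetime(2023, 9, 1, 9), "deployed": datetime(2023, 9, 1, 15)},
    {"committed": datetime(2023, 9, 2, 10), "deployed": datetime(2023, 9, 4, 11)},
    {"committed": datetime(2023, 9, 4, 8), "deployed": datetime(2023, 9, 4, 17)},
]

# Deployment frequency: deploys per day over the observed window.
window = max(d["deployed"] for d in deploys) - min(d["deployed"] for d in deploys)
window_days = window.days or 1  # avoid dividing by zero within a single-day window
deploy_frequency = len(deploys) / window_days

# Lead time for changes: average commit-to-deploy delay, in hours.
lead_times = [(d["deployed"] - d["committed"]).total_seconds() / 3600 for d in deploys]
avg_lead_time_hours = sum(lead_times) / len(lead_times)

print(f"{deploy_frequency:.1f} deploys/day, {avg_lead_time_hours:.1f}h avg lead time")
```

[Note that nothing in the computation refers to any one developer, which is exactly Andy's point: these describe what the system delivers, not who typed what.]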
Mon-Chaio: Right. I agree the article definitely has some really good points. You’ve pointed out a few already. The last one you said was around simplistic measures, and they talked about things like lines of code. I’m so glad that they said those are simplistic measures that can get bad results. A big problem in my mind is they took slightly less simplistic measures, what they might call nuanced measures, and they’re advocating that you use those to make meaning. And I think what we would agree on is that it’s a little more complex than that,
right?
Andy: Like, I think one of the worst things that they did, and I saw it in the notes that you wrote about the article, and I was thinking the same thing. One of the worst things was saying that all of their things seem to be based on the idea that there’s this outer loop of development and this inner loop of development.
The inner loop of development is your code-build-test cycle, or test-code-build cycle. And your outer loop of development is deployment, security and compliance, integration, and meetings. Just meetings, any kind of meeting, apparently, is in your outer loop and something you shouldn’t be doing.
They did give the little caveat that the activities listed are non-exhaustive. But only on the outer loop, I noticed that. But that’s absurd. It creates that whole thing of saying, like, coders, you are most valuable when you’re typing lines of code. And that’s just so fundamentally flawed and wrong.
Mon-Chaio: I’m surprised we’re having this conversation in the year of our Lord 2023. There’s definitely some aspect of it which makes sense. Again, if you think about the nuance, I’m sure we’ve all been in places before where we see maybe our strongest coding engineer, the person that does the best in decomposing classes and getting code down and is really fast and understands how to put abstract problems into code, spending a ton of time talking to customers or doing design documents.
Maybe because somebody told them, or maybe because they themselves believe that’s what you should do as you grow, right? You should become more of an architect, or removed from code. Not to say that those things aren’t valuable, but it is interesting to say, hey, by the way, you’re really good at writing code and now you’re not only not doing it, but not mentoring people
who are doing it. And so please do that more. Absolutely. Very valuable. We should do that and continue to look at the places where we can get that done. But to say, overall in a system, that your engineers are only valuable when they’re doing the inner-loop stuff, and I don’t even know if McKinsey would say that explicitly, but their whole article talks about not doing, or automating, the outer-loop stuff as much as possible. That’s just a non-starter for me in terms of even starting a conversation. How do you even start when you’re so far apart on the basics?
Andy: Yeah. Yeah. So, they got that wrong. What else do they have?
Mon-Chaio: Let’s talk about a bugaboo of mine. A big thing that they say is that we need to reduce the opacity of the software engineering organization, specifically because there are these people that aren’t engineers and they need to be able to understand it. Is that true? Do we believe that? Do we believe that we should put a bunch of effort into allowing people who don’t understand tech to then be able to measure tech without understanding it?
Andy: I, I think that’s a leading question.
Mon-Chaio: I,
Andy: Hmm.
Mon-Chaio: I don’t, I’m not sure that there’s only one. Like, I can answer for me: I don’t believe that’s true, but I can certainly imagine an argument where that is true. And let me give you the straw man argument. The straw man argument might start to say, look, the skills needed to run a company are different than the skills needed to build a product. And in fact, we might even take that so far as to say managers don’t necessarily have to be technical. There’s this whole set of management skills; you just have to be good at organizing people, mentoring them, that sort of a thing. They don’t have to be technical. They don’t need to understand the details. And if you take that straw man argument up a notch, then you say, well, these CEOs, or other people, even CTOs, that are great at talking to investors, managing expectations, dollars and cents, they still need to know what’s going on within the organization.
And there must be a way for them to see and optimize even if they don’t understand the details. So that would be my strawman argument to say, yes, you should be able to have people who don’t understand tech turn levers and have metrics to turn those levers to optimize technology. I don’t think that’s a far fetched or way out there argument to have.
Andy: No, I think that’s actually a very common way of thinking about it. And so I think it’s critical here that we get to the kernel of what’s wrong with that way of thinking about it. And I’m going to take a stab at it. I’m going to say that the problem with that way of thinking about it is that you have to present essentially, like, this control panel of levers,
and little dials and readouts showing the health of everything. And they can turn this knob and code commit frequency goes up. And then they turn that knob and code commit frequency goes down a little bit, but release frequency goes up. And then they kind of switch this one lever from position one to position five.
And product market fit now starts growing on the, on this chart. And that, that’s what they want to be able to do.
I think the crux of the problem with that way of thinking about it is that mechanistic belief that this entire organization will operate as a machine. That the interactions between these parts are controlled, controllable, and somewhat known, or maybe even completely known. And that it will behave like a series of interconnects and gears and levers and rods.
Mon-Chaio: It’s a discrete state
Andy: It’s a discrete, it’s a discrete machine. It’s something that you can control. And with enough time flipping those things around and turning those knobs, you’ll start to figure out how it all plays together. And then a good manager will be able to be a symphony conductor. And they’ll just do what you say as you flip the levers around.
And I think that is just a completely flawed way of understanding the way a company built of humans interacting will work. And so we’re back to that distinction, I don’t remember which episode we talked about this, between the socio-technical systems versus the
Mon-Chaio: Mm
Andy: mechanistic systems. And that theory of you can have those levers is the Taylorist mechanistic system, whereas the other theory is the socio-technical system, where you can set constraints and you can set requirements, but it’s going to be a very complicated interaction, best formed by the people working within it, to come up with the approach that will maximize those outputs. Once it’s all understood through the human system, once it’s understood through that social system, what it is that we’re trying to maximize and what it is we’re trying to perform and do. And so that would be my take on why. And that straw man, it’s not so much a straw man.
I actually think it’s a hard one to knock down.
Like, a straw man argument is one that you stand up and knock down really easily. But this one is actually very hard to knock down, because it’s a fundamentally different way of thinking about it. You have to switch your way of thinking about the way human systems work.
Mon-Chaio: I agree. I think the other part that’s very difficult is you mentioned setting constraints and then letting the people close to the work figure out how to meet those constraints in the most effective way possible. I think for software engineering, setting constraints itself is difficult. That you have to have some base level of understanding to even be able to set the right constraints.
Andy: Yeah.
Mon-Chaio: And for me, that base level of understanding is not an eight hour crash course. In how software engineering works. Now, do I think you have to be, have 20 years of software engineering in order to be able to set those constraints? No, I don’t think you have to have built 20 years of production software, but it’s somewhere between that it’s somewhere between your eight hour crash course and 20 years of software development. And I think that too often we, or, or the folks that think about this problem, really think about software engineering is much easier than it actually is. And they have that Dunning Kruger effect where they think they know what’s going on and they think they’re quote unquote experts, if not at engineering software, at least at managing software engineering organizations.
But managing those organizations is much more difficult than I think a lot of people give it credit for.
Andy: Yeah. Oh yeah. And I think it is. I always tell people that, as software engineers, we have produced the most complicated things that humans have ever created. So complicated, in most cases, that there’s no person who understands how it all works. So the system of people that has produced that is also going to be very complicated.
Mon-Chaio: So I think that means that it’s quite complex and one of the problems with the McKinsey article for me is that they’re trying to make it less complex. And
Andy: Yeah.
Mon-Chaio: you want to be able to do that, but it's so nuanced that I don't think their mechanisms work. Like, if you look at their DVI metric and click into it, it's a bunch of measures and recommendations.
Like it almost looks like to me, like a dashboard, right? Like a control panel to your point where you look at it, you’re like, Oh, that’s red. And that’s red. So let me go down to my managers and say, Oh, code reviews are red and CI is red. Do something about it, but that’s not the right way to build constraints.
Right. And it’s funny to me. I spend a lot of time talking about not managing by dashboard. And this isn’t just for non technical people. I think for a lot of technical people, as they start to support larger and larger organizations, they end up managing by dashboard. They start their day by pulling up all these dashboards because they feel like, again, that they should have these knobs to turn.
And the dashboards are going to give them the knobs to turn, versus just a smell, right? An indicator of where perhaps they might want to do a closer gemba walk, or what to dive into for the week. Things like that. And so I think that what McKinsey has given us is just a more advanced dashboard that they purport you can manage your engineering org by.
Andy: Right. So what? What do you think people should do instead? If we, if we shouldn’t have these dashboards, what, what should that CEO ask for, or what should the CTO provide that would be a better alternative here?
Mon-Chaio: Let me start off by saying, I don’t think you should not have dashboards.
Andy: Okay.
Mon-Chaio: I think dashboards are really, really important.
Andy: True.
Mon-Chaio: I think managing by dashboard is bad. But I want to make this a really important point, because I'm sure a bunch of people know it, but maybe I didn't state it well enough and I don't want it to get misunderstood. I really do think about dashboards as canaries or as smells. They're signals and triggers for you to dig deeper. And if you're not willing to spend the time to dig deeper, then you should not have that on your dashboard. Whatever it is that you're not willing to dig deeper on, if you're like, look, I really don't want to know about the various causes of outages, and I don't really want to dig in and sit through a retrospective every time there's a sev zero or higher, you should just remove outages from your dashboard, man. It's just going to be noise,
Andy: Then you don't care about outages.
Mon-Chaio: Right? I only care about them enough to not dig into them, right? Because I don't have the time. I'm meeting with an investor today and, goodness, I've spent five hours meeting with investors. I can't take one of those hours to sit through a retrospective. That's your job, right?
So just take it off your dashboard at that point. So, having said that, getting back to your question, Andy, about what should they do, I don’t know that I have great answers. I have some answers or some suggestions, some tactics.
Andy: I think that’s perfect. That fits our show well.
Mon-Chaio: I think the first thing is, if you are non-technical, you have two choices. To do things right, in my opinion, you either get technical, and I know that we haven't said where on the spectrum between an eight-hour crash course and 20 years of software development "rightly technical" sits, we could debate that maybe on another episode, but you have to get technical.
So that’s one thing you can do. Commit to getting technical and understanding the systems. That might be: hey, what I’m going to do is learn some coding off to the side. It might be, in addition to that: I’m going to spend some time with the engineers. I’m going to sit with them and watch them work. I’m going to sit in their sprint meetings as sort of a fly on the wall, to look at what issues keep coming up that are preventing us from shipping. But get technical. So either that, or partner with someone: your CTO, your VP of engineering, some external advisor that you have come in three times a month or whatever.
Partner with someone that is technical, that you have supreme trust in, and that you feel you could rely on without really understanding their entire chain of thinking, without having to go back to first principles on everything they talk to you about.
Andy: Someone who you would trust as an advisor and explainer. Where you could go to them with the question of, “Hey, so, my CTO was saying this to me. I didn’t quite get it. Can you explain it? And tell me a bit of how I should think about this?” Someone, someone who you could see yourself asking that question to.
Mon-Chaio: Absolutely. And again, that someone, I’m sure everybody knows this, but it’s worth stating explicitly: that someone can’t just be you in another form, right? It can’t be someone that comes back and is like, yeah, your CEO is full of S. It needs to be someone who has the trust of the engineers, who, again, has that technical background and understands the technical systems, perhaps because they were an engineer.
For 15 or 20 years. It has to be someone like that, someone you can counsel with or who can counsel you.
Andy: I would say, I think that’s true. I think if they can get that, that’s very valuable. There is a danger in it, though: knowing how contextual the workings of a software team are, that other person may not have the information to advise well if they don’t have contact with the team or with the department.
So if at all possible, if that can be a technical person within your organization, that would be excellent because then you get a view of how things there are working and, and you trust their judgment and they can explain it to you.
Mon-Chaio: Additionally, I do like what McKinsey said about, what did they call it, system, team, individual? I think that’s a very reasonable breakdown of the boundaries, possibly between context states or between lens magnifications. And so I will touch on how I think about tactics for that a little bit. So let’s start at the individual level. At the individual level, you have to let the leader of that AAA team determine that in and of themselves. They need to have a hundred percent accountability to determine the individual-level stuff.
They know those people. Okay, so however they want to track metrics is up to them, but it is irresponsible to roll up a bunch of individual-level metrics into some dashboard. Now, if that AAA team leader wants his or her own dashboards, whatever, right? They’re allowed to do whatever they want within that context.
Andy: They know the full context of how it’s being gathered, what it was being done for, why they’re doing it, why they want to know this, and what they’re going to do with it.
Mon-Chaio: Right. So let them do their thing, but any individual decisions belong to them and them alone. Of course you can set constraints. You can say, hey, I need to fire 20 percent of the engineers. And actually it’s not 20 percent of the engineers, it’s 20 percent of the budgeted headcount.
Andy: Mmm.
Mon-Chaio: so senior people cost too,
Andy: for them to work with.
Mon-Chaio: right?
Senior people cost twice as much as junior people, right? Whatever. So give them the right context, give them the right... what did you call it? Give them the right
Andy: Constraint, uh,
Mon-Chaio: give them the right constraints. But they are a hundred percent responsible and trust the decision. So that’s what I would say simplistically for the individual level.
At the team level, I think those DORA and SPACE metrics work really, really well. Right? They’re about: what are the processes and flow of what you’re creating, and can we continue to improve them? Where are the bottlenecks? I think that’s perfect. And so have metrics around that sort of stuff and use them as continuous-improvement levers.
Right. This is, I think, where those dashboards fit in really well. Look. My flow dashboard shows me that there’s a red flow in between requirements and development, and that number has been going up and up and up for months, the time between requirements and writing code. You still have to dig into it. Yes, but go dig into it, right?
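The requirements-to-code signal Mon-Chaio describes can be computed mechanically; the point is that a rising number is only a trigger to dig in, not a verdict. A minimal sketch, assuming you can extract a "requirement ready" date and a "first commit" date per work item (the date format and the three-month trend rule are our illustrative choices, not from the episode):

```python
from datetime import datetime

def lead_time_days(requirement_ready: str, first_commit: str) -> int:
    """Days between a requirement being ready and the first commit against it."""
    fmt = "%Y-%m-%d"
    return (datetime.strptime(first_commit, fmt)
            - datetime.strptime(requirement_ready, fmt)).days

def trend_is_rising(monthly_averages: list[float], months: int = 3) -> bool:
    """Crude smell detector: each of the last `months` averages exceeds the one before."""
    recent = monthly_averages[-months:]
    return all(later > earlier for earlier, later in zip(recent, recent[1:]))

# Made-up monthly averages of requirement-to-code lead time, in days.
history = [4.2, 4.5, 5.1, 6.0, 7.3]
print(trend_is_rising(history))  # True: a smell worth a closer look, nothing more
```

If this flips to True, the dashboard has done its whole job; the digging still happens in person.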
You have these systems that you trust to produce software; optimize them, and use the metrics to figure out how to optimize them, how to get them better at the team level. Then when I think about the systems level, I think that is the difficult question to answer. So at the individual level: are the individuals doing everything they can? You say, okay, my AAA manager handles that. At the team level: are my process and team optimized? Am I optimizing my process, my flow, my value chain? Okay, that’s great. The systems question, though, becomes: am I optimizing my organization? Which gets back to the question we touched on almost at the very beginning of this episode.
I’m spending six million dollars a month on my engineering organization. Can I get more from them? Or are they, compared to their peers in industry, producing much less than they could, even as I optimize everything else? Because, let’s say, for example, you have a very archaic software development methodology like waterfall.
You can optimize Waterfall as much as you want, but there’s a certain limit on that process and flow. So how do you know when you’re in the wrong process? How do you know when you’re in the wrong flow? That’s that systems level optimization.
Andy: Mmm.
Mon-Chaio: That’s just a much trickier question to answer.
Andy: I think actually there’s a key to this, and we talked about this a little bit when we were preparing, which you were just touching on: at the system level, you are within a paradigm of working. You’re within a model of working. Metrics are designed to measure parts of a model that is currently in place.
The metrics can’t tell you what model to shift to to get completely better. They can tell you a bit of where there’s bottlenecks and whether or not things are matching up. That’s about it. So if you’re actually trying to fully optimize, there’s not a consistent set of drop in metrics that are just going to tell you the answer.
Because that is impossible. The metrics can only work from a particular model. Likewise, if you take a whole bunch of Agile metrics, and you just apply them to a waterfall model organization, they’re gonna tell you absurd things about what to do. And they probably won’t work too well. Some of them might work just fine.
Some of, some of those practices, some of those changes will just make things better, but they’re going to tell you all sorts of stuff that your organizational structure, your system will just fight against because they’re telling you stuff about a different system that doesn’t exist.
Mon-Chaio: I love that insight, and it’s rare that I hear something that makes me really dig in and think about something for the first time. And Andy, this was one of them. I really like that point: yes, metrics measure what’s inside the model. They can’t tell you to move to a different model.
Andy: I’m glad that I finally got that across, because we had an argument about this, and I was just like, Oh, he has to understand this somehow, and I’m glad that I finally found a way of saying it, that it was like, Oh, yeah, I got it now. Okay.
Mon-Chaio: And live, too, right? And live. The other important part that you said was: if you take metrics from one model and apply them to a different model, you’re necessarily going to get conflict. In the best of worlds, it just fits, right? But how often does that actually happen? I think pretty rarely. One good thing about this McKinsey article, and it’s not even the article, one good thing about the DVI stuff we chatted about, is that it does give you some sort of drop-in model. It’s a bunch of metrics that assume a model that you’re using, right? And it is a model that’s worked, generally speaking, for a lot of software companies. And so, if you don’t know where to start, I would say you could start in much worse places than the model that the DVI metrics are measuring.
Andy: I think if you go into it eyes open, saying, I’m gonna try it, it may not fit, it may be nonsensical, it may fit in some spots. Go into it. Use it as a way of kind of exploring what you have.
But don’t go into it thinking that it will just automatically tell you all of the answers.
But that said, you might wonder: well, why can we say DORA works? Why can we say fairly clearly that DORA often just fits?
And I would say there are probably organizations and approaches where it doesn’t fit and they still do fine. There are going to be outliers. But I think the reason that DORA fits, and the reason that I really like the model of effort, output, outcome, impact, which is another model you could actually start building metrics around, the reason I like them is because the model they assume is very simple.
DORA basically assumes that you’ll be releasing software. That’s about it. That’s about the only assumption it makes: that you’ll be releasing some software out into the world, and you’ll have to release it again at some point.
Mon-Chaio: mm hmm,
Andy: That’s basically the assumption they make about the model. The effort, output, outcome, impact model assumes a little bit more; it assumes this kind of sequential thing.
But there’s not much to it. It’s very light: there’s nothing about teams, nothing about how teams interact. There are no Scrum ceremonies. There’s no DevOps. There’s no compliance department. It makes no assumptions about any of that. It doesn’t assume that you do pull requests versus no code review versus pair programming.
It doesn’t make any assumptions about this. All it says is that there’s effort that produces outputs that result in outcomes that have an impact. So, with these very, very simple models: the simpler the model, the fewer assumptions it makes about how everything else might work that’s not directly part of it, and the more widely applicable it’s going to be.
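Andy's point that DORA assumes only "you release software repeatedly" can be made concrete: deployment frequency falls straight out of a release log, with no assumptions about teams, ceremonies, or review style. A rough sketch (the dates and the date format are made up for illustration):

```python
from datetime import datetime

def deploys_per_week(release_dates: list[str]) -> float:
    """Deployment frequency: releases per week across the observed window."""
    fmt = "%Y-%m-%d"
    dates = sorted(datetime.strptime(d, fmt) for d in release_dates)
    span_days = (dates[-1] - dates[0]).days or 1  # guard a single-day window
    return len(dates) / (span_days / 7)

# Made-up release log: five releases over fourteen days.
releases = ["2024-09-02", "2024-09-04", "2024-09-09", "2024-09-11", "2024-09-16"]
print(deploys_per_week(releases))  # 2.5
```

Nothing here knows or cares how the releases were produced, which is exactly why the metric transplants so widely.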
Mon-Chaio: And that also means you have to fill in the details yourself.
Andy: Yes.
Mon-Chaio: And so that means understanding the context, digging into the details, which I think is very important.
Andy: I wanted to give one or two tactics as well. And I’m going to derive them from the Kent Beck article. So, basically, I think I’m a Kent Beck fanboy. I’ve been doing XP since I was in my 20s. And now I’m finding his articles again, and I’m just like, oh, this is great.
But that said, Kent Beck and Gergely, in one of their articles, recommend that you could use this one metric, just this simple. It’s more of a command than a metric. It’s more of a "do this and let’s learn why we can’t do it."
And they say it’s: please the customer at least once per team per week. So they’re like, hey, look, think about the impact you want to have and try to do that. Now, that’s kind of like a metric. That’s a thing that you could measure. It’s a binary yes or no: were they able to do it? But there’s an interesting part that I don’t think they got into, and I’m going to add on to it. They did say: don’t be afraid of accountability.
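That binary yes/no is simple enough to track mechanically. A hypothetical sketch, where the record shape and field names are ours for illustration, not something the article prescribes:

```python
from dataclasses import dataclass

@dataclass
class CustomerTouch:
    """One customer interaction. Illustrative fields, not from the article."""
    team: str
    iso_week: int   # which ISO week the interaction happened in
    pleased: bool   # did the customer tell us this actually helped them?

def teams_that_missed(touches: list[CustomerTouch],
                      iso_week: int,
                      all_teams: list[str]) -> list[str]:
    """The binary check: which teams did NOT please a customer this week?"""
    pleased = {t.team for t in touches if t.iso_week == iso_week and t.pleased}
    return [team for team in all_teams if team not in pleased]
```

The interesting conversation starts when a team shows up in that list: not to punish, but to learn why pleasing a customer wasn't possible that week.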
And I’m going to say that’s what this is entirely about. And to use their model again: from the engineering, the technical organization, to the non-technical organization, that accountability is going to be nonsensical if it comes from descriptions of effort or outputs.
If the metric you want to be reporting on is pleasing the customer at least once per team per week, and you report that this team closed 15 pull requests and released the software 25 times, but they have the feature hidden behind a feature flag, so they haven’t pleased a customer, you’ve provided no information that someone who’s non-technical can use to better understand what’s going on.
All that they can hear is: they didn’t please the customer. So what you want to do is talk at the level of the impact and the outcome. So you might say: we’ve put together the new feature. It’s hidden from most customers at the moment, but we’ve exposed it to two. When we exposed it to those two customers, we got them on the phone, and we talked to them, and they told us that it didn’t work. So we immediately turned it off for the two of them, and now we’re investigating to understand what happened. You’ve immediately given much more information, none of it technical in the slightest, and it actually shows that you’re interested in your customers and in understanding why it didn’t work.
And you don’t need to mention pull requests or lead time on tickets or anything like that. But you can talk about the impact and the outcomes that you had. So, that’s accountability. Providing accountability for what you’ve achieved, what you’ve attempted, and what you’ve gotten out of it, I think, is the key.
It’s not about hiding the technical details. Absolutely offer them if it gets to that level; if people start asking, you can just start providing them. But it’s not where you start.
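The reporting stance Andy describes, lead with outcome and impact while keeping effort and output available on request, can be sketched as a tiny report shape. The stage names come from the effort/output/outcome/impact model discussed earlier; the field contents are invented examples:

```python
from dataclasses import dataclass

@dataclass
class WeeklyReport:
    """One team-week, described at all four levels. Example contents are invented."""
    effort: str    # e.g. "two engineers, four days"
    output: str    # e.g. "feature behind a flag, 25 releases"
    outcome: str   # e.g. "exposed to two customers; it failed for both"
    impact: str    # e.g. "no customer value yet; cause under investigation"

def summary(r: WeeklyReport) -> str:
    """Lead with impact and outcome; effort and output stay on hand for follow-ups."""
    return f"Impact: {r.impact}. Outcome: {r.outcome}."
```

Note that pull-request counts and release tallies never appear in the summary; they exist in the record only for when someone asks.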
Mon-Chaio: All right. I think we said: look, there are good things about the McKinsey article. They present some good nuggets. We’re happy they wrote down some of the things that maybe people are going to get wrong, things like using too simplistic a metric for the wrong purpose. But overall, they’re not trying to solve the right problem, and their solution to the problem doesn’t get many people closer to an ideal state. Would you agree with that?
Andy: I would say I think that they’re aiming for the right problem.
I don’t think they actually end up solving it.
And, and I, think they’re going to just create more problems.
Mon-Chaio: All right. Well, if you haven’t already. Give the McKinsey article a read. Give all of the show notes a read. We’ll link to other people’s responses to it. Let us know what you think. Rate us on your favorite podcast platforms. And yeah, even if you don’t, please keep listening. I think keep listening is like the best thing that we could hope for for all you listeners.
Andy: All right, so long for now. Till next time, be kind and stay curious.