Show Notes
In this episode of The Tactics for Tech Leadership Podcast, the hosts discuss personal computer issues (and a missing episode!) before diving into the main topic: the continued search for a system. They review the DevOps Research and Assessment (DORA) model, its methodology, and its continuous improvement efforts. Comparing it to other models like CMM and TSP, they conclude that DORA provides a more comprehensive and scientifically-backed system for understanding and improving organizational performance. The hosts also explore DORA’s latest insights, including the impact of AI on software delivery, emphasizing the importance of continuous improvement and mindful interventions based on DORA’s findings. The episode provides actionable advice on leveraging DORA’s research for tech leadership and maintaining organizational health.
References
- The Role of Continuous Delivery in IT and Organizational Performance – https://www.researchgate.net/publication/302567338_THE_ROLE_OF_CONTINUOUS_DELIVERY_IN_IT_AND_ORGANIZATIONAL_PERFORMANCE
- DORA Research publications – https://dora.dev/research/
Transcript
Andy: Welcome back for another episode of The Tactics for Tech Leadership Podcast. Mon-Chaio, how has your day been? What have you been up to?
Mon-Chaio: Oh, it’s been okay. A little challenging. My laptop died, uh, and well, it didn’t really die. What happened is it started moving really slowly, and I thought, well, let me try rolling back to previous update checkpoints. That didn’t fix it, and then I did an entire reset and that didn’t fix it either.
My guess is maybe the SSD is dying and that slows everything down. Um, maybe there’s memory chips that are bad or something. Uh, but, uh. It’s really frustrating resetting a laptop that’s already slow. I think the reset itself takes like 18 hours.
Andy: Oh yeah. A few weeks ago, my laptop died and I had to rush to Manchester to get it to the Apple store to get it fixed. Uh, yeah, that was unpleasant. But it’s mostly fixed. They fixed the thing that was obviously broken, and now it just randomly reboots sometimes. So
Mon-Chaio: Uh, it just, you know, it just tells you that it needs some time.
Andy: it needs some time to itself. Thankfully, it’s usually when I’m not using it; I don’t think it’s happened yet while I was using it. It’s just, when you put it to sleep, it decides, I’m not gonna sleep. I’m going to crash.
Mon-Chaio: Yeah, in some ways I wish I had a good computer fixing solution. I’m on Windows and, you know, there are no Microsoft stores really anymore, I don’t think. Not even in the Pacific Northwest. So I don’t have any place to bring it, so to speak.
Andy: That’s a nice segue to our discussion today, how to get your, your computer stuff repaired.
Mon-Chaio: Mm. Mm-hmm. Yep. That’s right. That’s right. Well, how to know whether you need it repaired,
Andy: Yeah. Yeah. So if your organization crashes, that’s probably a good bet. Or if you’re going really slow, that might be a good indication. And so, Mon-Chaio, over the past few episodes… and did we miss an episode? I feel like we might have missed one.
Mon-Chaio: Yeah. Uh, we did miss an episode,
Andy: Yeah, that was. Somehow we lost it and yeah.
Mon-Chaio: Uh, that was actually technical difficulties also.
Andy: so, uh, yeah, we’ve been talking through different ways of diagnosing organizations and we’ve looked at CMM, we’ve looked at TSP, we’ve looked at, uh, what else have we gone through?
Mon-Chaio: Um, the spheres of knowledge.
Andy: Spheres of knowledge. That’s what really got us on this path of, like, is there something that fulfills those spheres of knowledge? This time we’re gonna go to the, I was gonna say the new hotness, but actually it’s not, it’s been around for at least a decade, which is DORA, uh, the DevOps, DevOps
Mon-Chaio: Research and assessment.
Andy: Assessment. Oh, I was like, what does it stand for? I’ve just been reading the papers and I never looked up what it actually stood for. And does it, as a system that has been scientifically built up through surveys and through methods of finding causation, provide us a model for understanding what you were saying, Mon-Chaio:
what’s the why? Why we have these numbers, or why this is a healthy organization?
Mon-Chaio: Mm-hmm.
Andy: Does it provide us something? At least getting closer to answering that question. Something you can say, well, it’s not just “do this or do that”; it’s giving me much more of how things connect together.
Maybe we can use it. Does DORA provide us that?
Mon-Chaio: Yeah, I mean, I think that’s the reason we thought DORA might be an interesting discussion, but trying to connect it to see: is it a system? Right? Remember, we have systems, symptoms, standards, and solutions, right? So does it provide us with a system, and can we think about DORA as a system to help us understand diagnosis and getting better?
Andy: Yep. So drum roll. Does it? Let’s find out. Okay. So I’m gonna argue, and you might change my mind on this, Mon-Chaio, I think you have already once, I think it does actually. I’m gonna put that as a firm: I think it does. The reason being that it actually provides what I was saying last time we needed. CMM provided a whole bunch of solutions,
but nothing that connected them together into: if I did a change here, I should expect a change somewhere else to also happen.
Mon-Chaio: Mm-hmm.
Andy: It didn’t provide that, and what I was hoping for was we would find something like the kind of cause and effect diagrams that we find in research papers, where they say, well, we’ve got this concept and it is positively correlated with this other concept.
Mon-Chaio: Yep.
Andy: Meaning that if you increase concept A, you should expect to get more of concept B, maybe not a hundred percent. And then you might have mediators and moderators in there telling you why it might not happen in your particular situation. And I thought DORA would have that, and DORA does have that. They make the pictures. Usually they make the pictures a bit more colorful than you’d find in the research papers, but they are there; I can see that underlying them, that is what they’re presenting to us.
Mon-Chaio: Mm-hmm.
Andy: And it’s built up over more than a decade. So the research, the DORA, uh, the DevOps assessment that happens,
all of this is based on surveys they change every year to investigate different questions, and over that time, they’ve kind of built up a pretty big model. I don’t know if they’ve ever put the whole thing together. They must have. They probably have somewhere; I just never found it. They’ve built up a pretty big model of what all these elements are and how they interconnect, and sometimes you’ll even find them having written a report
that gives you the actual R values or R-squared values for the connection between these concepts.
Mon-Chaio: Yeah, I mean, I think DORA gets a lot closer, right, than some of the other things that we looked at, like TSP for example, which are simply practices. They certainly do have a core model, right? That they subscribe to and that they tinker with, and they iterate and improve on every year.
Now I’m not gonna pretend that I’ve looked at the research archives for the last 10 years. They, they do write a lot,
Andy: Yeah, there’s a lot. I’ve read through, mostly read through, I should say, just for this: 2019, 2018, one of their probably original ones from 2016, which was actually published more as a full-on research paper, and the most recent one from 2024. That one I was interested in because I think they looked at AI in that one, and I wanted to see how it affects things.
It’s kind of interesting what it did.
Mon-Chaio: Yeah, I saw that; I didn’t have a chance to read through it. I’m actually really glad they did this, um, because I think it’s gonna be an interesting read when I finally do read through it. But their core model, they have a lot of research behind it. And most of the research, I would say, is just industry interviews, as far as I can tell.
Andy: So they do some qualitative interviews, but the majority of their data collection is a survey. They’ve built up, over the years, survey questions that ask into these different things, and I think they go through the full process: before they send it out to everyone, if there are any new questions, they test them with a smaller group, look for statistical anomalies and things that just don’t seem to be matching up,
tweak that, try again. And then eventually they send it out to the full group.
Mon-Chaio: Mm-hmm.
Andy: And, and I should say that full group is, in one of their older reports, they, they said it’s a snowball sample. So a snowball sample is where you say, we don’t have, as researchers, we don’t have access to a population. We have access to a small number of people.
And in order to get enough of a response to be able to do anything, you have to snowball. Meaning that you start with some people and then you ask them, who else should we talk to? Or you ask them, send this on to others. And so what they do is they do an online survey and then they make it available to a lot of people.
Mon-Chaio: Mm-hmm.
Andy: I imagine by now, over the past 10 years, they’ve probably built up a pretty large database of people who they can reach out to. So yeah.
Mon-Chaio: Right. And that part of it, normally, I tend to eschew sort of, um, qualitative research like this at scale. You might recall that was one of my criticisms against a lot of the research in our remote work series: it’s relying on people to assess their own productivity, remotely versus not remotely.
Um, and when we found quantitative research, it was often at odds with people’s self-assessments, right? And researchers looking at quantitative versus qualitative found that, hey, they were often at odds. But for DORA, I feel decent about their sample size and their methodology. Now, I think it’s still a little bit difficult from a systems level to say, well, why does fast feedback lead to, um,
you know, lead to better performance. And, you know, if I do fast feedback, then it should move one of the four key software metrics, increase or decrease them depending, right? So fast feedback leads to higher frequency of deployments, for example. I don’t know that it does a great job of talking about why; it surmises why at times. This is actually why I like to bring in social science research, and I think we both like to bring in social science research to say, okay, can we figure out from psychology or other things why these behaviors might trigger these other behaviors. Um, and so normally, again, that would be my major criticism, but I think DORA has run for so long, and it’s had such a big sample size by now,
and they’ve refined their thinking over the years, and I agree with their methodology, that I feel pretty decent about it.
Andy: And what they do is they pay attention to what are the patterns or the clusters that they find in the data. Each year they try to come up with a different hypothesis to test, because what these big diagrams are is the underlying theory that they’re working from,
and the hypotheses are there to test different connections: whether or not a connection exists, and is it a positive or a negative connection. And as they’re doing this, they notice different kinds of clusters, or covariance, where one number changes together with another number, and they want to be able to explain it, Mon-Chaio, as you’re saying.
It’s like, if they can’t explain why these things are happening, they’re not all that useful to us. It’s just like, well, it happens, but we don’t know why. So what they do is each year they try to come up with: what’s a new thing that we can test that gives us more clarity on what’s going on? And that’s exactly what we want out of a system like this: we want constantly growing understanding of what should be there and what is there.
And actually this 2024 report I thought was really interesting because you know, there’s these four DORA metrics. I think for most people, DORA is just these four metrics. It’s like, oh, they do all this research and then they come up with, there’s these four metrics and you should just judge yourself on those four metrics.
So the four are change lead time, deployment frequency, failed deployment recovery time, and change failure rate. And those have been around, I think, since DORA really came out, probably almost 10 years ago. But they said that those four had always kind of varied together, which meant that there should be some underlying concept that holds it all together, but they could never find it.
Their statistical methods would not let them put it all together, which meant that there was something they were missing. And so this time they hypothesized what could be this missing thing. The change failure rate was always a bit different from the other three, so they were like, something is going on there. And so they hypothesized that change failure rate is actually connected to another thing that they called rework rate. And once they did that, and they put it in the survey, collected the information, and did their analysis, they came up with, yeah,
it splits this way. If you split it into those three, lead time, deployment frequency, failed deployment recovery time, and then change failure rate and rework rate, those two groups fit together. And now they’ve got this new thing: that rework rate is actually one of the drivers in what’s going on.
And it makes sense because, well, you rework more if you get change failures. And so that’s what they do through each iteration of this: they come up with where the oddity is and how they can explain it, either to say that maybe we measured it wrong and it wasn’t what we thought it was, or we are measuring it right and we just don’t have the explanatory variable. Like, what’s going on? What’s the causal system going on here? And so they try to build that out. And so out of their system, what you start ending up with is things like, I think it’s pretty well known that DORA has also said psychological safety and good teamwork are a big aspect. But they’ve also investigated, which we’ve talked about, Mon-Chaio, the Westrum cultures.
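[To make the four measures Andy just listed concrete, here is a minimal sketch in Python of how they, plus the rework rate discussed above, could be computed from a hypothetical log of deployments. The record fields and the event-based framing are illustrative assumptions; as noted later in the episode, DORA itself gathers these measures through its survey rather than from instrumentation.]

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from statistics import median

# Hypothetical record of one production deployment; field names are illustrative.
@dataclass
class Deployment:
    committed_at: datetime         # when the change was committed
    deployed_at: datetime          # when it reached production
    failed: bool                   # did it cause a failure in production?
    recovered_at: datetime | None  # when service was restored, if it failed
    rework: bool                   # was this change redoing or fixing a recent change?

def dora_metrics(deployments: list[Deployment], window_days: int) -> dict:
    """Compute the four DORA measures plus rework rate over a reporting window."""
    if not deployments:
        raise ValueError("need at least one deployment in the window")
    n = len(deployments)
    failures = [d for d in deployments if d.failed]
    recoveries = [d.recovered_at - d.deployed_at
                  for d in failures if d.recovered_at is not None]
    return {
        # Change lead time: commit to running in production (median).
        "change_lead_time": median(d.deployed_at - d.committed_at for d in deployments),
        # Deployment frequency: deployments per day over the window.
        "deployment_frequency": n / window_days,
        # Failed deployment recovery time: failure to restoration (median).
        "failed_deployment_recovery_time": median(recoveries) if recoveries else timedelta(0),
        # Change failure rate: share of deployments that caused a failure.
        "change_failure_rate": len(failures) / n,
        # Rework rate: share of deployments that were redoing earlier work.
        "rework_rate": sum(d.rework for d in deployments) / n,
    }
```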
Mon-Chaio: Yep.
Andy: They’ve investigated that as well. So they’ve found that the Westrum cultures are connected to software delivery and operational performance, which is connected to organizational performance.
And trust is a positive influence on the Westrum culture, and so are voice and autonomy. And so they found all these things. And what it starts giving us as leaders in these organizations is those independent variables that you can tweak; you can start saying what might happen here. Not just “everyone, you’re going to write a unit test,” which, hey, is an independent variable.
You can tweak that, but now you start having a model that tells you more. Let me see if I can find that one. Deployment automation, continuous integration. Actually, here’s one of my bugbears: if you are aiming for lower delivery pain, decreased delivery pain, you should be aiming for continuous delivery and small batch sizes. That’s what they found again and again. And if you want small batch sizes, you actually want trunk-based development. So their independent variable was trunk-based development.
Mon-Chaio: Mm-hmm.
Andy: Their dependent variable was continuous delivery, and then their further dependent variable, further along that chain, was deployment pain and burnout. So increasing continuous delivery decreases burnout.
Mon-Chaio: Yeah, and I like that their outcomes include sort of the standard performance outcomes, which, if you look at some of the other models, is what it’s all about, right? It’s around how much code did you deliver, how much money can your company make, how fast was the project done, was it done on time and on schedule?
Right? I believe they would call that commercial performance; I think that is their term for that sort of stuff. But their outcomes, their correlated outcomes, also include things, Andy, like you mentioned: burnout, job satisfaction, and things like reduced rework or productivity.
Some of these I think anybody would care about. And some of these, maybe the most cynical would say, well, I don’t really care about job satisfaction as long as I get my commercial performance. But what’s really interesting is they weave that, I think from their original surveys, into the surveys, and they were able to find correlations not just in terms of the commercial performance but the wellbeing, and how those were pretty tightly tied together at times,
which I found interesting.
Andy: Yeah. So where they came from, a lot of it ends up centering on continuous delivery. I mean, one of the original people involved in this was Jez Humble, who I think wrote the book Continuous Delivery. So it’s kind of understandable that a lot of their stuff goes through that, but they have branched out a lot.
Some of these models that they’ve come up with don’t have continuous delivery at all. So for instance, another one was outsourcing: what is the impact of outsourcing on organizational performance? And the answer concerns functional outsourcing. This is from their research.
If you do functional outsourcing, meaning, like, if you say, oh, our developers are in Lebanon,
Mon-Chaio: Mm-hmm.
Andy: but we are gonna do all the project management in-house, you have much worse software delivery performance, which means you have worse organizational performance. And I think they even found, they said the factor was three point something percent, or three point something times, worse if you outsource than if you don’t.
Mon-Chaio: Mm-hmm.
Andy: So they go through all sorts of things. It’s not just continuous delivery; it goes into everything. And one of the things, kind of a slight tangent, that I thought was interesting: in this 2024 report, they looked at AI and the impact of AI. And one of their findings was that AI increased flow, increased engagement, and decreased burnout.
But surprisingly, even though all of those things are normally positively correlated with performance, it decreased overall software delivery performance.
Mon-Chaio: Interesting.
Andy: Yeah. Yeah. So, so they’re kind of like, well, something’s happening here.
Mon-Chaio: Yeah, that is super interesting. Um, like I said, I haven’t read their AI research. Did they have a hypothesis?
Andy: Uh, they did. Um, so I’ll read it to you. So it says, “we hypothesize that the fundamental paradigm shift that AI has produced in terms of respondent productivity and code generation speed may have caused the field to forget one of DORA’s most basic principles, the importance of small batch sizes.
That is since AI allows respondents to produce a much greater amount of code in the same amount of time. It is possible, even likely that change lists are growing in size. DORA has consistently shown that larger changes are slower and more prone to creating instability.”
Mon-Chaio: Interesting.
Andy: And I can believe that, from watching what I’ve seen people doing. And I can also understand how those change set sizes would increase in the flow that I see people get into with AI. It’s just like, 20 seconds and I just produced three methods, and I’m in a flow.
And so now I just kind of keep going. I don’t have as much of that mental overhead that might cause me to step back and change it and say, like, let me push out this small thing. You just kind of keep going.
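[One rough way to check Andy’s intuition about growing change sets on your own repository, not a method from the DORA reports themselves, is to compare per-commit change sizes across two periods, say before and after the team started leaning on AI assistance. The date ranges and repository path below are placeholders.]

```python
import re
import subprocess
from statistics import median

def commit_sizes(since: str, until: str, repo: str = ".") -> list[int]:
    """Lines added plus deleted per commit in a date range, parsed from `git log --shortstat`."""
    out = subprocess.run(
        ["git", "-C", repo, "log", f"--since={since}", f"--until={until}",
         "--shortstat", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout
    sizes = []
    for line in out.splitlines():
        ins = re.search(r"(\d+) insertion", line)
        dels = re.search(r"(\d+) deletion", line)
        if ins or dels:  # only the shortstat summary lines; blank commit lines are skipped
            sizes.append((int(ins.group(1)) if ins else 0) +
                         (int(dels.group(1)) if dels else 0))
    return sizes

# Placeholder windows: adjust to before/after AI-assisted work became common on the team.
before = commit_sizes("2023-01-01", "2023-12-31")
after = commit_sizes("2024-01-01", "2024-12-31")
if before and after:
    print(f"median change size before: {median(before)} lines")
    print(f"median change size after:  {median(after)} lines")
```

[Commit size is only a proxy for batch size, since one batch may span several commits or be squashed into one, so treat the comparison as a conversation starter rather than a DORA measure.]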
Mon-Chaio: Right. Yeah. I mean, I think all of the stuff that talks about why large change sets are poorly correlated with software delivery performance applies here. Right. This idea that it’s difficult to review, it’s difficult to break apart and understand the components and how they interact with each other.
All of that sort of stuff I think applies. And I think this dovetails well with what we talked about with AI last time, which, Andy, was your point around where do you find the times or the areas to inspect or to change, right? Like, you have to understand what are the critical decisions that you have to make at each step.
And with a smaller change set size, you can read into it in more detail. You think about PRs, right? Like when you get a big change set versus a small change set. You’re essentially doing PR review of AI-generated change sets.
Andy: Yeah.
Mon-Chaio: or you’d better be,
Andy: Yeah, you hope you are.
Mon-Chaio: um.
Andy: I would hope people are. I’ve seen a lot of review of AI-generated stuff be just like, read it and say, yeah, I understand what it’s doing. But then I see an anchoring effect start happening where people get anchored to the idea that what it’s doing is the right thing, rather than their critical faculties going in and saying, alright, I see what it’s doing.
Is that what I want to do? I don’t see that as often.
Mon-Chaio: Well, and I think it’s especially dangerous because, if AI came about at a different point in the software maturation cycle, we might be having a different discussion. Engineering as a discipline, as of, let’s call it, three years ago, didn’t really do code review well. I think we’ve had episodes about code review and, like, does code review matter? Actually, I don’t think we had an episode on that. I think we,
Andy: No, we did. We talked about: does code review matter?
Mon-Chaio: Okay. Okay. So I am remembering correctly. Um, and, you know, we already, as a group, don’t do code review well. And now, with AI coming along, we’re essentially saying we have to do code review at scale, when we couldn’t do it when it wasn’t at scale.
Andy: Yeah.
Mon-Chaio: and I think that is, uh, that leads to a lot of this dangerous stuff and I can see, um, how that would flow through. And DORA might find signal around that.
Andy: Yeah. Yeah. Alright, so DORA produces models. We’ve been looking for models that help us describe why things are happening. Does DORA help us with that? Are DORA’s models going to be enough for us to do diagnosis? It’s almost: can we work forwards and can we work backwards?
Mon-Chaio: I mean, I think we can clearly work forwards.
Andy: yeah.
Mon-Chaio: I think that’s the easy part for me to answer. Look, if we look at a specific capability around deployment automation or loosely coupled teams, I think DORA does a great job of tracing that all the way forwards to an area of outcome or improvement.
Right. Um, I think working backwards is a little more difficult in my mind. So if, for example, you don’t have high job satisfaction, yes, you can kind of trace back to all those other branches,
Andy: the things that could be impacting it.
Mon-Chaio: That’s right. And you can look at, like, you know, the R factors in the reports, or the causation, or whatever.
But where do you start? That, I think, is much more difficult. And again, it’s because, while I do think it is a system, I think it is a system built out of observations and correlation, versus strong causation.
Andy: I think they would say that they have identified causal relationships. Like, that’s what all of their stuff is about: finding causation. In fact, I unfortunately did not have the Accelerate book; I used it and don’t seem to have it anymore.
But I’m pretty certain that there’s a chapter in there talking about how most people will attack their research as “this is purely correlation, it’s not causation.”
Mon-Chaio: Hmm. Okay.
Andy: And they say that that is a concern you should have. But the mathematical methods that they are using and the survey methods that they are using let them tease apart correlation and causation.
And so when they say that they have found a causal relationship, they are fairly certain that they have found a causal relationship.
Mon-Chaio: And the mathematical part of me can understand and appreciate that, even if I might not be able to criticize them by saying, oh, your mathematical model says this is causal, but I still think it’s non-causal or whatever. I don’t have that sort of math depth to be able to dive in there, but
It still is a little, um, a little,
Andy: not a hundred percent. And I think that’s the thing you’re not gonna get from them, and you’ll almost never get from something like this: you’ll never get that if you do this, this will happen. Because there are so many other factors, and their model has not yet been able to express how they all interact,
Mon-Chaio: Right.
Andy: and so they will have things missing.
Now, they can say that there is a causal relationship, but they have to admit that there are many other factors in play. And so it’s not a hundred percent that if you do this, that will happen. It’s: if you do this, that could happen, and it will have happened because of something you did.
Mon-Chaio: Right, and I think the challenging part there is I did this and this didn’t happen. Now what?
Andy: And I think they actually even have a way of thinking about that. And CMM even gave us the way as well. So as I was reading through, I wrote in my notes “echoes of CMM.” DORA has always split the population. They’ve always said there are the low performers, the medium performers, the high performers, and was there a fourth? I can’t remember.
And they said “we made a decision to call the faster teams, high performers, and the slower, but more stable teams, medium performers. This decision highlights one of the potential pitfalls of using these performance levels. Improving should be more important to a team than reaching a particular performance level.
The best teams are those that achieve elite improvement, not necessarily elite performance.” And I think what they would say is, well, our model tells you that these things are connected, so we want you to be an improver. And as an improver, what you do is you come up with a hypothesis. You make a change, you make an intervention, you see what the impact is.
You analyze that, you learn from it, and then you decide on what’s the next thing you try. So in that case, you would say, okay, the DORA models tell us that there is a causal connection between these things.
Mon-Chaio: Mm-hmm.
Andy: So what we’re gonna do is this particular thing to improve psychological safety, and we improve psychological safety. Psychological safety increased; we measured that, and we can use the whole inventory for doing that. But, sorry, I’m now scrolling down to see what things are connected to psychological safety. But software delivery and operational performance hasn’t really changed. Okay, so that didn’t seem to change it. Maybe one hypothesis is we’ve hit the limit of what psychological safety can do on that particular thing.
Mon-Chaio: Mm-hmm.
Andy: Maybe we were already high on it.
Mon-Chaio: That’s right. Yeah.
Andy: So that could be one hypothesis. Another one is that there’s a mediating factor in there that DORA has not yet identified, that we’re falling victim to.
Mon-Chaio: Mm-hmm.
Andy: Okay. So that’s another idea.
What can we try again? What can we try instead? Psychological safety didn’t seem to do it. We could try different things around it. We could try shooting in the dark for that mediating factor. Or we can try decreasing how heavyweight our process is, another thing that they say is directly connected.
Mon-Chaio: Right.
Andy: Okay, well, let’s try that. And so the idea is you get into this practice of mindfully improving yourself, but doing it in this kind of guided method, where each one of these is a reasonable bet, but it’s not a hundred percent.
Mon-Chaio: Right, and I would say it’s just a lot more of a reasonable bet than CMM, but at its core it’s the same practice: CMM level five would say continuous improvement is to go back and take one of these things that we’ve identified and change it. And one of our criticisms there was, it’s kind of like shooting in the dark, like you were saying around the mediating factors. Why don’t I change this? Why don’t I just change that?
I think the DORA model gives us a better inventory of: hey, if I change this, this is what I can expect to change, and this is what I expect probably not to change; they don’t seem to be correlated. Right. So I think it’s a lot better than CMM. But, like, you know, in some ways it’s still unsatisfying, because it still seems like there is a lot more to learn. Now, that’s not a criticism of the model or the organization. I think DORA is out to learn; they’re improving themselves every year and they’re reassessing their thinking every year, which I really like.
And it may be that this sociotechnical model, the sociotechnical system, is just too complex for us to figure out in a 10 or 20 or 30 year span. And, you know, these folks aren’t doing research day in and day out, I don’t think, and so maybe it requires more effort, funding, whatever, into this group to be able to do that.
But it may be that it’s just so complex that we can only get close to understanding the system, or we can only get 70% of the way there. But, honestly,
Andy: that’s what I’d expect, but it’s still guided. It still tells me what CMM never gave me: if you’re seeing this issue, these five practices would have nothing to do with that.
Mon-Chaio: right. Exactly
Andy: Or, even worse, this practice, which seems like it would be connected to improving it, actually would make it worse.
Mon-Chaio: right.
Andy: And that’s what they can tell us. So, for instance, you might say, you know what, my delivery team, my programmers, are terrible. There’s this other group, they’re gonna cost 50% as much, and I’ve heard from someone else that they’re great. I’m just gonna outsource everyone, and I’m gonna get better.
I’m gonna get higher performance. They would say that’s a pretty bad bet.
Mon-Chaio: Yep. Yep. No, I agree. And so I think, while a 60 or 70% model still feels a little squishy to me anyway, it is, I think, the best, or one of the best, that I have seen, and it is actually useful in practice. Now, can we say that CMM and CMMI and TSP are not useful in practice? I don’t think so.
I think there are tons of consultants that go out and use them and find success with them. But I would say that I have a better belief in the DORA model, um, and I
Andy: I, I would say
Mon-Chaio: guides us.
Andy: I would say it’s because the DORA model is doing that continuous improvement itself.
Mon-Chaio: Yes, absolutely.
Andy: If they had released the 2016, I think it was, paper and left it at that, it would be like, well, this is interesting, and we probably never would’ve looked at it again. We’d say, well, that’s cool. But the fact that they keep revisiting it and keep looking for other ways of explaining what’s happening, and asking and answering questions as our industry changes, I think makes it
really valuable.
Mon-Chaio: Right, and things like offshoring, things like AI, are stressors to the model in some ways. And you can say, well, now let me see: if I throw this stressor at the model, does it enhance my model, does it disprove my model? That, I think, is something that gives me a lot of confidence in the interventions that they suggest and in the relationships that they posit.
Andy: Yeah. Yeah. The AI one I think is gonna be really interesting, especially seeing how it modifies things over time, because it’s a stressor to the entire model of how we develop software, at least with what people are doing. And so it will probably prompt them to start finding some of these mediating and moderating factors that have kind of sat there unattended to, because they’re just kind of noise in the system.
Mon-Chaio: Right.
Andy: And AI might accentuate some of them and cause them to come to the fore, and so we might get an even richer model that says, ah, this thing mediates that connection.
Mon-Chaio: Mm-hmm. Absolutely.
Andy: Alright. So, Mon-Chaio, I think we might leave it there. The tactics for this are: people, please read up on the DORA system, and go far beyond, go all the way into it, is what I would say. Don’t just read up on the DORA metrics. Read up on the DORA model,
Mon-Chaio: Right.
Andy: what it is that they’re proposing and why all this stuff fits together.
Because that will tell you that to improve your change lead time doesn’t mean just releasing more often. In fact, I think they even say in there, you have to do other things to get to that point.
Mon-Chaio: Right. And yeah, tactically that’s absolutely correct, Andy. The stuff that you’ll find most of is the four measures. That’s what you will find, and there are a lot of reasons for that: it’s easy and succinct and catchy. It’s also easy to build into tooling. So, you know, Jira or whatever will have those in there, or Atlassian will have those in there, and other companies will say, look, we have a way of measuring these types of things.
Um, but those in some ways are the least interesting, right? Those are the most CMM-like: just change
Andy: yeah, yeah. Well, those metrics, those metrics are the measure of health.
Mon-Chaio: Mm-hmm.
Andy: That’s it. That’s what they are. And I will say, the way that they measure it is purely through a survey. So if you want these measures, just have your developers fill in the survey every once in a while.
Mon-Chaio: Uhhuh. And I think
Andy: Don’t, don’t pay Atlassian 500 a month or whatever for something.
Mon-Chaio: I think their thing is called a health check or something, that you could just click on, and it’s an instrument that’s just the survey. Right. But tactically, I would echo Andy’s suggestion: go to dora.dev. There is a ton of research there. And the research itself isn’t, I wouldn’t say, super dense.
So if you don’t like to read research because it’s all dense concepts around mediating factors and, you know, big math equations or whatever, I haven’t found that the research is full of that, so it’s pretty consumable in my mind. There is a lot of it, though, so I don’t know that I can say, look, start in the 2014 archives and work your way up.
Um, so I don’t really have a tactic on how you should consume this. But there is a lot, and I would say even just reading a year’s worth anywhere will kind of give you a sense, and then you can dig back and forward from there. Yes, but don’t just look at the four metrics.
Andy: Yeah. I would suggest you don’t need to go too much into their explanations of how they calculate their systems of equations or anything like that, ’cause they do provide some of that information. But track down the survey questions,
Mon-Chaio: Mm. Okay.
Andy: and understand what they’re actually measuring. Because it changes things, I think; it really helps you understand. It’s called the operationalization of the concept. It really helps you understand what they mean by that. It’s not just the association you might have; it comes down to what are the questions that they’re actually asking people. And it might reduce the quality of the research in your mind, but at least now you understand what it is that they’re actually talking about, rather than what you might associate it with without having studied what it is that they’re talking about.
Mon-Chaio: Mm-hmm. That makes sense. I actually found their quick check also. It’s right at the top of their website and it is four questions. It takes you 60 seconds if that.
Andy: Yeah. Yeah, I think you could just do that. ’Cause then you’ll have, in fact, I would say that would be even better, because now you’re the most comparable to what the research is actually measuring.
Mon-Chaio: Mm-hmm.
Andy: Alright, Mon-Chaio, I think we have beaten this dead horse enough
Mon-Chaio: To a bloody pulp.
Andy: to a bloody pulp. We finally landed on a possible model, a possible system that can be used for diagnosing. We’ll have to figure out where we go next in diagnosis after this. I don’t think we have an exact plan at the moment.
But, uh, we’ll probably continue down this route, and, as I said in my vacation cast, we’ll start experimenting with different formats for this in different ways. We’ll be doing the podcast as I step back and as you make it your own, without my meddling fingers. You’ll start figuring out how you want to turn this into the Mon-Chaio show. So if you are interested in giving us suggestions about how this could continue, how Mon-Chaio should be doing things, or must be doing things, if you wanna give him orders and demands, send us an email at hosts@thettlpodcast.com. Hosts with an s, it’s very hard to say, at thettlpodcast.com.
And until next time. Be kind and stay curious.