S2E42 – Does Your Software Methodology Matter?

Show Notes

Andy and Mon-Chaio question the true impact of various software development methodologies on team performance. They look into whether the research gives any clues about how methodology choice affects team happiness, quality, and speed. The hosts critique the state of software engineering research, discuss effective team dynamics, and highlight the importance of adaptability and interdisciplinary research. Listeners will gain insights into the limitations of plan-driven approaches, the nuanced realities of how software teams work, and why understanding the ‘why’ behind practices is crucial. This episode offers an overview of the research into the effectiveness of different methodologies and practices.

References

Transcript

Andy: Welcome back, everyone, to yet another episode of the TTL podcast. Today I want to delve into something that’s a little bit different from what we’ve gone into before. I wanna find out: is there an answer to this question? Does the software development methodology that you choose to run on your team, does it matter at all? Is there, is there any real difference that we can find in what happens? And this is a big, oh, this is like a big question, isn’t it Mon-Chaio? Like, is this about how people are happy working on your team? Is it about the quality of what you produce? Is it about the speed? Is it, like, all of these different aspects that could matter here.

Mon-Chaio: It really is. And it’s something that I’ve seen throughout my career. People spend a lot of time on this. So I do think we should talk about it, even if it is a bit nerdy, even if it is a little bit I don’t know, not really leadership per se. Engineering leaders for some reason spend a lot of time on it.

How many companies have you been in, Andy, where they say, Oh, well we need to go through an agile transformation of some sort?

Andy: Thankfully, none.

Mon-Chaio: Oh, man. How many have I been through? One, two, three, four, I think, four maybe? And even people that haven’t been through agile transformations, perhaps they were waterfall or something else, have decided, look, we’re going to introduce this thing called Scrum.

Nowadays, everybody is already doing Scrum, supposedly.

Andy: Every single due diligence I’ve ever done. I’m like, so how would you characterize your software development methodology? I think. I think 99 percent of them, it’s Scrum, and my very first rant on this is then when you look at what they’re actually doing, it’s like, this isn’t Scrum.

Mon-Chaio: Mm hmm.

Andy: Which gets into the whole question of what does it even mean to be doing one of these named methods?

Mon-Chaio: Right. And, honestly, Andy, when we do these due diligences, how many times do we say, well, your methodology is holding you back? And maybe that’s something, as we explore this topic today, we’ll say, yeah, maybe we should stop with that. Although, you know, of course, biased, I think we do a pretty good job of couching that and not, and it’s not saying, hey, you should move to RUP or you should move to Kanban or something like that.

Um, so we’ll get into that, but leaders, I feel like, do spend a lot of time thinking about this. Is my Scrum running well? Should I go from three-week iterations to two-week iterations? How many retrospectives should I have? All of that sort of stuff,

Andy: And I think

Mon-Chaio: but it doesn’t

Andy: that, that debate and the way that that debate is handled, I think is something that has changed over my career. Very early in my career, it was the early days of Agile. And at that time, everyone was talking about are you doing Waterfall or RUP or Scrum or, or Extreme Programming or Crystal or I want to say DFDS, but that’s a ferry company.

There’s

Mon-Chaio: Okay.

Andy: There’s something like that that kind of was around for a while and then disappeared. But all of these different methods that were named and kind of like there was a book about them and then you did it and we even said, are you doing it by the book?

Mon-Chaio: Mm hmm.

Andy: But anymore, I don’t, I don’t hear people talking about that.

Usually they just say Scrum or Agile. And, and then much more of the discussion ends up being about how often do you do retrospectives? Or what do you, how do you structure your standup meeting? How many planning meetings are you going to have? That kind of

Mon-Chaio: How do you do estimation? Do you use story points or real days? That sort of a thing. Does any of that matter? So, so you’re doing RUP, so you’re doing Waterfall, so you’re doing Crystal. Or you’re doing fake Scrum, or you’re doing real Scrum.

Andy: you doing ScrumBan?

Mon-Chaio: Mm hmm. Does it matter? Why do we focus so much of our attention on it? We being the global engineering leaders,

Andy: Mm hmm,

Mon-Chaio: and should we, should we just forget about it and say, you know what? It just doesn’t matter. Do whatever you want.

Andy: which I really hope is not the answer, because then, kind of like, well then what are we doing? Like, what is it that we’re asking people to do if any given thing doesn’t seem to matter?

Mon-Chaio: Uh, I mean, we could posit an answer to that. I could give a straw-man answer, which is: as long as software is shipped, who cares what they’re doing, and leaders and engineers have better things to spend their time on than figuring out, are we going to meet this two-week Scrum release train? Do I need to schedule my next after-action review? I don’t think that’s the case, and we’ll get into what the research may or may not say, but I think that’s a perfectly valid argument to present, and I could certainly see that there could be a world where, yeah, that is the case, and we should just forget about all of this and burn these books.

they’re not useful.

Andy: So, where should we start in this, Mon Chaio? Do you have, I mean, I can see in our notes, I can see so, so many different links to different papers. Where do you want to start?

Mon-Chaio: I will say for me, I started by trying to research this exact question. Is there a correlation? Is there research that posits a correlation between your software development methodology and some sort of measurable performance? Whatever that

Andy: Mm hmm.

Mon-Chaio: Or measurable effect, I should say, whatever that is. Now the effect I was more interested in were things like, did your software ship more often?

Were your customers happier? Was it of higher quality? And less of: how happy were your engineers, how happy were your agile consultants? Now, not to say that those latter things aren’t important, but I kind of steer towards: if you’re an executive, and you want to try to sell a methodology,

I think the profitability of your company, the happiness of your customers probably is paramount on your mind.

Andy: Yeah, I, I can see that.

Mon-Chaio: so that’s where I started, and I ran into, I think, something that you knew, Andy, because you have been doing this for a while. I ran into a set of research that differs so greatly from what we’ve been reading. We’ve been doing this for over a year

Andy: Mm hmm.

Mon-Chaio: The set of research that we’ve been reading for this last year, it really differs greatly. And I actually, in these notes that I’m reading from right now, where I summarized what I’ve read, I listed a bunch of them under “not as useful studies.” There’s a heading saying “not as useful studies,” and I have a bunch of links to things that I found. Maybe I should give some examples. So I found a, I found a paper that surveyed teams that are already using agile practices about how great agile practices were.

Andy: Oh, so just a, a, a qualitative research question, open ended interviewing of them, tell us why this is great, type thing.

Mon-Chaio: And it was sort of confirmation type things, right? Where they would say at the beginning, well, agile practices bring shorter time cycles and more, you know, more frequent customer review. Now let’s interview these teams and hear that they’re doing shorter development cycles and getting more frequent customer review and then some sort of like long expository about, well, you know, these things are good in general for software development.

Andy: hmm.

Mon-Chaio: Okay. Alright. So that was one. There was another one that seemed good. They attempted to measure how many software development practices you were using, or sorry, let me step back here: they were trying to measure how many agile practices you were using.

So things like, were you doing retrospectives? Were you doing, uh, planning poker, that sort of a thing. And then measure that against their self-reported difficulties in things like team communication, customer communication, that sort of a

Andy: Mm hmm.

Mon-Chaio: Now, already we are seeing that they didn’t come at this from “these are things that are important in software delivery.”

They sort of came at this as: team communication and customer communication are strengths of agile development, and so let’s see if more agile development brings more strengths. It’s a little bit better, because I think we can all agree that team communication and customer communication are important. So it’s not like they just took that out of nowhere. And then they said, well, does the number of agile practices that teams practice correlate with fewer issues in team communication, customer communication, that sort of a thing. And they seemed to find some correlation.

But if you look at their numbers, and I’m not a statistician here, so somebody definitely read it and tell me if I’m wrong, all their R values were under 0.3.

Andy: So an R of under 0.3. If you don’t know, the R value is a measure of the correlation. And an interpretation that is often provided on R values is what’s called R squared: you take that R value and you square it. There are multiple different ways people try to interpret it, but the most common one I’ve ever seen is that R squared is how much of the dependent variable is explained by the independent variable. So in this case, it’s how much of that team communication score is explained by this agile practice count.

And in this case, an R of 0.3 squares to 0.09, so 9%.

Mon-Chaio: hmm.

Andy: It explains some, but not really that much.
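A tiny Python sketch of the r-to-r-squared arithmetic Andy walks through here; the 0.3 is the figure from the paper Mon-Chaio mentions, and the 0.7 is just an illustrative contrast, not from the paper:

```python
# r is the correlation coefficient; r squared is the share of the
# dependent variable's variance "explained" by the independent one.

def variance_explained(r: float) -> float:
    """Fraction of variance explained, given a Pearson correlation r."""
    return r ** 2

print(round(variance_explained(0.3), 2))  # 0.09 -> only 9% explained
print(round(variance_explained(0.7), 2))  # 0.49 -> about half, a far stronger link
```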

Mon-Chaio: And the funny thing is, they even mentioned that some things were, they said, highly correlated versus lower correlation. But I could not find why that was in terms of R values and R squared values; the values all seem pretty much the same. So, um, that was one paper. There was another paper, a literature review of studies of software development methodologies, showing that Waterfall, Agile, and Scrum each have their own pros and cons.

Makes a lot of

Andy: Mm hmm.

Mon-Chaio: And then they concluded that, while in the modern world adaptability is the most important thing for good software, therefore Agile is of course best. So things like that. There are more in here. Those aren’t super useful, I don’t feel

Andy: Yeah, so you, you mentioned it. For a while in London, I ran a little tiny study group of software engineering research papers, because I was interested in the question of, like, can we answer these questions? Is there research out there that tells us, like, test-first versus test-last will get you better outcomes, or pair programming versus after-the-fact code review will get you better outcomes, or just no code review, feature branching versus trunk-based development? I wanted to find out, was there something out there that could guide my thinking on this? And after several months of doing this, I can’t remember how many, I think I did it for about a year, I came to the conclusion that software engineering academics are well intending, but they’re not trained well enough in the methods needed to answer these questions.

And the, the research that they come up with is, in the end, pretty much just inane. Sometimes the experimental setup is such that you look at it and you’re like, well, that is immediately just uninteresting to anyone practicing this as a professional. Because almost always it’s: we took our first-year programming students and asked them to do Technique X after two hours of lecture.

Andy: And then we found no correlation between it and improved outcomes. And you’re like, and how is that supposed to help me? I’ve got a team that after two days of doing this, will have more experience in this technique than, than what you’re basing this off of. So I, Yeah, I, I encountered that many times.

I also encountered just questionable statistics. I’m, I’m not, uh, well versed in it, but I have taken a couple courses in, like, statistical analysis for social sciences. And I would look at their analysis sometimes and just go, like, that’s, that’s fishy. Like, you haven’t, you haven’t properly, um, adjusted for multiple hypothesis tests in here.

And so I was like, yeah, so you’re reporting something that’s statistically significant, and it very likely isn’t. Um, because, for our listeners, something like a p-value, what it’s measuring is: how likely is it that what you saw is just noise? How likely is it that this outcome would have just happened randomly?

And so a p-value of less than 0.05 says we think there is less than a 5 percent chance that this is just a random outcome. If you look across, like, 20 different variables applying the same test, well, you would expect something in there to come out below that level just randomly.

So you have to adjust for that, and if you don’t adjust for it, basically your conclusions are meaningless. So I found this again and again. Eventually I came to the conclusion what software engineering researchers for the most part need to do is just get out of their seats, walk down the hallway, find their, their local psychology and sociology professor, statistician, and sit down with them and say, I’d like to investigate this idea.

Help me do that.
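A small Python sketch of the multiple-comparisons arithmetic Andy describes; the 20 variables and the 0.05 threshold come from the discussion, and the Bonferroni correction shown is one standard fix, not necessarily what the papers he reviewed used:

```python
# If you run many independent tests on pure noise, the chance that at
# least one comes out "significant" at alpha grows quickly -- the
# multiple-comparisons problem.

def chance_of_false_positive(alpha: float, num_tests: int) -> float:
    """P(at least one p-value < alpha across num_tests independent null tests)."""
    return 1 - (1 - alpha) ** num_tests

# 20 variables, each tested at the usual alpha = 0.05:
print(round(chance_of_false_positive(0.05, 20), 2))   # ~0.64

# Bonferroni correction: shrink the per-test threshold to alpha / num_tests,
# which brings the family-wise error rate back under 0.05.
print(round(chance_of_false_positive(0.05 / 20, 20), 3))  # ~0.049
```

So with 20 uncorrected tests you have roughly a two-in-three chance of at least one spurious “significant” result, which is exactly why unadjusted conclusions are meaningless.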

Mon-Chaio: Mm hmm. Mm

Andy: Because, because, uh, my, my biggest criteria, the, the most, the most profound, and I didn’t do it during this research, I should have, but I, I didn’t have time. But the most profound thing that I found in assessing the quality of software engineering research is in looking at the list of authors. And is there one of them who is not from the computer science department?

Mon-Chaio: Oh, interesting.

Andy: If you get someone who’s not from the computer science department, you’ve almost certainly got some okay research.

Mon-Chaio: Interesting. Huh. I’m going to have to look that up. I’ve been doing research on a completely different thing and finding some papers, and I’m going to use that metric, that heuristic. Mm

Andy: And, to this point, one of the best papers, it was a qualitative research paper, I think I might’ve mentioned it before, it’s called Reconciling Perspectives. It was a grounded theory, qualitative research paper, and the non-computer-science, non-software-engineering person on it was from the nursing school, and it was good. And I’ve talked to the primary author, and he was like, oh yeah, she told us how to do this, and it was good.

Mon-Chaio: So you have a lot of experience here. Unfortunately, I mostly read social science papers. Even when I was doing my own reading groups, it was very much about things like trust building or feedback giving, that sort of a thing. Didn’t dive into the, uh, software engineering, computer science groups, writing research papers much on this type of thing. Did you, Andy, find anything better in your research than I found in mine? I found a couple things that were so so, but maybe you found things that were even better than

Andy: No, I found basically so-so stuff. I spent most of my time looking through, because I thought it sounded promising, the Journal of Empirical Software Engineering, trying to see what kinds of things about the software engineering discipline they would have in there.

And it was all, from my perspective, fairly so-so. There were a lot of things that hint in a direction but don’t give clarity.

Mon-Chaio: mm hmm,

Andy: And it’s the same things we see. It’s like, there are a lot of hints in the direction that Agile practices help, but it’s not just Agile. Like, I think that’s one of the things I always found the most frustrating about so much of this research is they’re like, do Agile practices help?

And it’s like, well, this is a whole universe of practices and philosophy and ways of doing things that you can’t just say, Oh, do you do Agile and know what they do?

Mon-Chaio: mm hmm, right?

Andy: And, and so, what I started finding was,

let’s see, how do I say this? So I found one paper, for instance, that says there was a small effect size, but statistically significant, meaning that it seems to be there, it’s not random chance, but it has a small effect. And they’re not sure if it’s even big enough to be practically relevant, as they say. That how you scale your Agile teams, whether you’re using LeSS or SAFe or one of these other scaling approaches, has any kind of impact.

Mon-Chaio: Mm hmm,

Andy: So basically, and, and this, this matches with my understanding of what like practitioners would say, which is however you decide to scale, it’s all going to go poorly.

So, just stop trying to scale and find ways instead to keep small. The other thing I found was, I tried looking, ’cause I, I love TDD, I tried looking for, is there anything about TDD? And it’s all mixed. Everyone finding mixed results. But on the other hand, it’s like psychology of the 1960s: everything is being tested on, uh, the Western Anglo-Saxon university student.

Mon-Chaio: hmm, mm hmm, mm hmm, mm

Andy: and, and then, uh, one, though, had the insight. They tried to do a look at TDD versus iterative test-last, ITL I think is what they called it, is there a difference there? And they actually were finding that TDD was less effective than test-last. And they said, but this flies in the face of other research that says that TDD is better.

And so they had to come up with a hypothesis, and their hypothesis was, well, they found that it was slightly better for the students and slightly worse for the professionals, because they did a study of professionals and students. And their hypothesis there is that the professionals are not in a mindset to do something different.

And so when they were asked to change what they were doing, their heart wasn’t in it. And I’ve seen that in many other papers, is that when you go into an industry setting and you’re like, we’re going to do this. And you, I think naively try to control the situation. People push back, don’t actually do it or do something different.

I mean, it’s what, it’s what we experienced in our own lives. Isn’t it Mon Chaio is when you go in, you’re working with a team and you’re like, Hey, I want you to do this. And then you start having the people saying, but I don’t think that’s effective. I’m not going to do I don’t, and then you end up with it not actually happening. So,

Mon-Chaio: It gets into what we’ve talked about in the past with things like performance reviews. I can’t remember the most recent topic that we talked about this with, but it feeds into a lot of topics: this enumeration of expectations, processes, practices that are supposed to make things better. And remember, whichever episode we were talking about it in, what the research showed is that so much of it comes from outside the process work, right? It’s somebody who’s not following the process. Maybe this was that chaos episode that we were talking about.

Andy: yeah.

Mon-Chaio: somebody that understands the intent of the practice but is not exactly following the practice that ends up making all these things work around the edges.

And that is the big problem with putting change in place where people’s hearts aren’t in it. Because what they can say is, for my performance review, so I don’t get fired, I did each of these 10 things that you asked me to. When I did test driven development with my pair, and I saw that they were not looking for the next test that we were supposed to write, I didn’t stop them. Because my job was simply to write this test before. Or when I noticed as I was writing this test that this module was very janky and difficult to test, instead of stopping and having a conversation, I just wrote a janky test. Because what am I supposed to do? I was supposed to test first. So it’s things like that that prevent these processes from really taking shape when you don’t have buy in.

Andy: And, and I think there, there was one paper that I found, what was the title of this? It was a teamwork effectiveness model for agile software development. And what they wanted to come up with was these kinds of things. What are those aspects of a team working well together.

Their approach was focus groups and literature reviews. So, we sometimes harp on that as a terrible thing, but it’s great for theory building, for putting together ideas of what things are, not for coming to an answer about what things are. And their theory, and I think this matches with a lot of what we end up talking about, is that effective teamwork is: shared leadership; peer feedback; redundancy, that it’s not just a single person who can do each thing; adaptability, that you will change based on the situation; and team orientation, that you’re there to help the others rather than just help yourself.

And then that there’s communication, mutual trust and shared mental models that all underlie that and hold it all together. That kind of research in software engineering, I actually find quite useful. Because what they’re saying is that there are these commonalities out there. What I would really love to see in software engineering research is then people trying to think of like, how would we check to see how these things play out,

Mon-Chaio: Right.

Andy: and whether a particular technique and in a, in a natural experiment does one thing or does another.

Mon-Chaio: Mm hmm.

Andy: Um, and I’ve, I’ve almost never found that kind of research.

Mon-Chaio: Right. And even when we were digging in this time, I was unable to find that research. It was either that, or, here’s my mental model: there was definitely one paper that said, I took a bunch of research around the way teams perform, and there’s this well-studied mental model around collaboration, and I merged the two, and here’s my hypothetical model of a great way to perform software engineering.

Andy: Right. Well, I, I, I’ve looked at all these other methodologies and here’s a new methodology.

Mon-Chaio: Right. And, and that’s useful, but it only gets us halfway there. Not even halfway there, right now. The question is, okay, can I take, say, RUP and Kanban, and can I measure which one produces better shared mental models? Or which one leads to more adaptability, or which one leads to more interdependence between groups, things like that.

Andy: Answering those kinds of things: which one of these allows for teams to be more autonomous? Now, I am going to say, I think there is one thing that the research does fairly consistently find, but it’s not going to be a surprise

at all. And I’ll use this one bit of research, which is the “effect of moving from a plan driven to an incremental software development approach with Agile practices.” And what they found was,

“the study came to the surprising result that plan driven development is not suitable for a large scale organization as it produces too much waste in development.” They call it a surprising result. I don’t think it’s all that surprising. Um, But, but that’s the consistent thing that is found, which is that a set, and here you have to, I think we have to get into a little bit of like, what is the terminology used in this discipline for this stuff?

Plan-driven development is what the rest of us would call waterfall, where you decide up front, before you’ve done anything, what your timeline is, what your scope is, and how much effort you really want to put into it. And then you kind of piecewise refine everything to the level of code, and then you test that everything is what you want.

And the idea, the way that everyone seems to think about it is, well, you just move from one to the next and it just works because that’s easy to plan. It’s only easy to plan if you only think that everything’s going to work right.

Mon-Chaio: It’s the happy path, just like how a lot of people write automated tests.

Andy: Uh, and the result is, fairly consistently in almost all of the research, really, like, it doesn’t work. It leads to pretty terrible outcomes in all except the most simple of cases. Uh,

Mon-Chaio: I agree. And this gets into what we’ve talked about before, this complex domain, whether you want to use any sort of framework, whether it’s Cynefin or something else, where you can’t predict what’s going to happen with any sort of regularity. So except in the most meaningless of cases, the smallest case studies, this is not really going to work.

And the funny thing then is, a lot of people who purport to be doing Agile practices, whether that’s Scrum or whatnot, still have plan-driven development inside of their Agile practices.

Andy: or, or outside of it, where it’s like, oh, an entire plan has already been devised, and now you’re gonna do scrum to deliver it.

Mon-Chaio: Right, you’re feeding it into the meat grinder, right? Make your sausage, but, like, everything else is defined. And I think we see that a lot. That’s not great. I think we also see inside of Scrum a lot of, well, now I’ve defined my two-week iteration, now that plan is set, now anything that needs to enter that plan needs to go through a review.

And if you didn’t execute that plan, we need to do a retrospective on why that plan wasn’t executed exactly as I planned it. All of that is plan-driven, right? Now, I will pause here and say, um, maybe this is something we want to get into next: it’s not that you shouldn’t have any plan. I think that’s also pretty terrible, but.

A lot of people who think they’re doing Scrum or Agile are following plan driven development more than they are not, and I would say should not be surprised if they’re not getting maximum value out of their development methodology.

Andy: I, I’ve, um, I’ve been working with a client that tried to tell me that they do Scrum because they have sprints. And then, as I learned more and more about how they do this and how they interact and that kind of thing, I finally just said, no, you guys keep saying that you use Scrum, but you don’t. You’re actually just waterfall. Um,

Mon-Chaio: Right. The term I think I’ve heard before is mini-waterfall.

Andy: yeah, yeah,

Mon-Chaio: Right, where you just waterfall every two weeks, which is actually probably worse.

Andy: actually, yeah, it might be worse. It’s definitely even more stressful. But, um, yeah, so I think that is the most conclusive thing. Basically, it just says the Project Management Triangle is real. You don’t get to get away from it. So that, that shows up, I think, quite a bit in the literature that I found.

Mon-Chaio: This is a good segue for us to talk about what is next. So we have gone through a bunch of the research around software development methodologies and tried to find research where they relate to some sort of positive outcome, value of some sort: profits, S&P 500 stock movement, customer satisfaction.

And found it to be fairly lacking. So, does that mean that we’re on the other end of the spectrum now? Where we say, well, why does it even matter what you do?

Andy: I feel like a naive reading of what we found is it doesn’t matter.

Mon-Chaio: Mm hmm. Mm hmm.

Andy: I’m trying to come up with a less naive reading, and I think it’s that what you’re doing is so often context dependent that we can’t rely on the researchers in a lot of cases, because they’re unable to create scenarios that are controlled and yet nuanced enough to reflect what it is that we’re trying to answer. And so we almost need to take on personally the mindset that we want those researchers to have. And I think we’ve talked about this before: we need to take on the, like, okay, what is the research question?

What is the goal? What is the thing that I need to understand? Now, how can I experiment my way to understanding that and being able to do it? So I think it’s not that it doesn’t matter. It’s that it’s so complicated that it’s almost too hard for the researchers to come up with an answer.

Mon-Chaio: Right. I mean, a lot of these things are difficult to measure. That’s part of the problem. I’ll give an example: some of the remote-work studies try to measure effectiveness or whatnot by whether a company’s stock price moved. And so they’ll say, well, there were these return-to-work initiatives, which were purportedly to make the company more effective, and their stock price didn’t move. So therefore no correlation, so therefore not correlated to more effectiveness. Which is one way to think about it, if you think that stock price is a strong measure of the effectiveness of a company. But I think a lot of these things are difficult to measure, and so in the research world it is really difficult.

Andy: And, and I’ll also say it’s so difficult to measure. If it wasn’t difficult to measure and they didn’t matter, there would be a very strong signal that it doesn’t matter, but since they’re difficult to measure and they seem to matter, what we’re getting is a lot of very confusing signals about which one matters.

Mon-Chaio: right. I like the way that you put it there, and I’m going to repeat it because I think it’s really important for our listeners to hear: if the naive reading were true, and it really didn’t matter, we would see a much stronger signal that it didn’t matter. We wouldn’t get all of these weird, oh, it matters in this case, or there’s this weird thing where it mattered a little bit and then it didn’t later, that sort of a thing. So I think you’re right, that reading of the research shows that it does matter, but we just don’t know how. We don’t know how it matters, and at least we haven’t been able to study how it matters as well as we would have liked. So that’s problematic for practitioners who aren’t researchers, right?

We can’t sit here in our companies and say, okay, let’s now do what these researchers have not been able to do. And we’re going to spend a six month cycle and we’re going to have a control group and we’re going to have, you know, an experimental group and one is going to practice not talking to the customers ever.

And we just can’t do that. So,

Andy: for Hawthorne effect and, and, uh, reactivity and, and all these other things.

Mon-Chaio: so what do we do? What do leaders of for-profit software organizations do?

Andy: Uh, you quit your job, you move to the coast, you find a cave, uh, you start a fire and you just sit there and stare at it while the water rolls in.

Mon-Chaio: Very, uh, I would definitely

Andy: Okay, see? See? I got some good advice.

Mon-Chaio: But no, right, you know, I mean, what do you do? I think it comes back, in some ways, to that cultural perspective again. And you mentioned it: what do you think is important? So I think about, as an engineering leader, whether you're a CTO or whether you're the head of a triple-A organization: what is important for you?

Do you feel like more customer interaction is important to your product? Do you feel like higher collaboration between engineering silos is more important to your product? And really be honest with yourself, because, you know, it's really easy to deceive yourself and be like, yeah, yeah, yeah.

Wave your hands, everything's important, right? Like, of course customer feedback is important. Of course, uh, you know, collaboration is important. But really it's not, because you're in, I don't know, some sort of research field which takes ten years, so customer feedback is not as important as the scientific community. Or you're really set on individual measurement of engineer productivity, so collaboration isn't that important. Or whatever the case may be for you. Be honest with yourself about what is important, and make that as small a number as possible, right? Kind of like the culture thing: the more you have, the more difficult it is to reconcile.

And I think once you arrive at that point, here's what I'll ask people to do. Find a study, or, I wouldn't say study. Find a process that has been documented and tested that matches with what you care about, and start from that as your template.

Don't just invent your own process, because none of them match perfectly, right? And then really try to get into that process and ask questions and be curious. Oh, that didn't work for me. Retrospectives didn't work for me. Why not? Okay, how many times am I going to try it? What are the tweaks I'm going to make the next time I do it to make sure it works for me? And then over time, you will of course evolve that to your specific context, which is exactly what you should do. But don't start by evolving it too early and saying, well, that didn't work, none of this works, I'm just going to do this one thing because my company is a snowflake and I'm so unique.

Andy: I like that advice. I really like that. Understand what it is that you want. What is the group that you want to be putting together? I would add one thing, which is, and I think you were hinting at it, understand why. Not just, "Oh, I want customer interaction," but why do you want this? Kind of try the five whys on yourself as you're trying to figure out, like, why is this important?

And then, yeah, pick one of the existing methods that's out there. There's so many of them. Even though they're not coming out of the woodwork anymore like they were in the early two thousands, those are all still around. There's ones from the eighties. There's ones from the sixties.

Just pick up one of these methods that people use and really get into it, because it worked well for them. And I think that's always the key: these things were published because they worked well for somebody. And now you want to get to the point where you understand what it is, why it worked, and try to figure out, does it work for me?

If it matches my way of thinking, then maybe I can get it to work for me.

Mon-Chaio: Mm hmm. Yeah, agreed. Agreed. And this all really, Andy, comes back to that learning mindset, that curiosity perspective. And that also has to extend down to your group. Going back to something we said way at the beginning of the episode: if your team does not buy in, it doesn't matter what method you choose, you're gonna get but a shell of its effectiveness. And so this again goes back to culture building, it goes back to storytelling, it goes back to all of these things we talk about in all of our episodes. How do you communicate to your team so that they can be bought in from the bottom up? So that you all can have the same shared mental model of what you're trying to achieve and why it's important.

Andy: I want to leave our listeners with a few of the things that we did look into that I think they should think about, but this is my own bias and my way of thinking about it. Was pair programming better than solo programming? Slightly better seems to be the result. Is TDD better than test-after?

Maybe just slightly. It's hard to say, but maybe just slightly is the hint. Are increments and adapting to change better than working to a plan? That one seems to be very clear: yeah, you should do incremental or iterative and accept change. Does it matter what way you try to make a giant team?

Not really. It’s just, it’s just gonna be terrible no matter what you do if you’re trying to get a giant team.

Mon-Chaio: And maybe we’re naive here. I mean, I’ve worked at some pretty large companies. I don’t feel like I’m naive, but maybe we’re naive and you actually have to be able to somehow roll up all of these teams and they have to operate at the same increments and produce the same assets so that you can somehow aggregate them at the CTO level or whatnot.

They all have to be in JIRA so you can see all of their boards, you know, your 13 teams laid out in a line and you can see lines moving, but I don’t, I don’t think so.

Andy: There is that research that said it doesn't seem the scaling approach matters. The thing that they did say seems to matter a bit: "among the control variables, a team's experience with Agile emerged as more influential." So my reading of that is, if you can get your small teams working really well, that will have a much larger impact on how the scaled, larger group is able to operate than whether you choose LeSS, or SAFe, or Scrum of Scrums, or whatever.

Mon-Chaio: Of course. This gets back again to so many models, right? Whether it's Explore, Expand, Extract or whatever, the context that you're in, especially at a larger company, you're going to have many different contexts, hopefully. And as a leader, you're going to need to still be able to keep those different contexts in your mind, even if it's not a JIRA roll-up.

Now, we’re not going to talk about that today, but we do talk about it a lot and counsel leaders a lot in that sort of work, don’t we, Andy?

Andy: We do. And to wrap this up now, so that we stop our rambling on terrible software engineering research: if you would like us to help you think about these things, and what kind of team you want to be putting together, we are very happy to consult with you, to work with you, to, uh, help figure out how you can bring this to your teams.

So if you'd like any of that, or if you have any questions for Mon-Chaio or myself, or if you have any topics you'd like us to talk about on the show, get in contact with us at hosts@thettlpodcast.com. And LinkedIn, or various other places you might be able to find us online.

Most likely email will get to us better than anything else. It's been another fun, exciting, invigorating, and, um, somewhat depressing state-of-the-world-reading episode, Mon-Chaio. So until next time, be kind and stay curious.
