S3E10 – SWIFTly Understanding Failure Modes

Show Notes

In this Tactics for Tech Leadership podcast episode, Andy and Mon-Chaio explore SWIFT, the Structured What If Technique. While traditionally seen as a technical tool for failure analysis, the hosts consider its potential applications in leadership and organizational contexts. Listeners will learn how SWIFT can help anticipate system failures before they occur, from technical systems like Redis caches to socio-technical systems like performance reviews and hiring processes. By the end, you'll understand how to adapt this structured method for diagnosing issues and improving both technical and organizational systems.

Transcript

Andy: Welcome back for yet another episode of the Tactics for Tech Leadership podcast.

So Mon-Chaio, we’ve been talking through all these different ways that you can examine an organization. You can kind of look at things, you can question, and we usually stay away from the kind of technical tactics.

Mon-Chaio: Mm hmm.

Andy: But today we’re going to talk about something that I know as being more of a technical tactic.

And I’d like to explore if we can use it in more of a leadership or organizational way.

Mon-Chaio: Okay.

Andy: And that tactic is called SWIFT, the Structured What If Technique. Today we're gonna be Swifties.

Mon-Chaio: Wow, alright.

Andy: I’ll need to find some Swift music and see if I can

Mon-Chaio: Hehe.

Andy: with that.

So, the Structured What If Technique. I know this, and I'll give a little bit of a story; I don't think I even told you how I know this. I know it because, at a place that I worked, we did something that we called failure analysis. For those who don't do this, I would say that failure analysis is a very useful technical technique to use.

So a failure analysis is basically you say: how could my system fail? Before it fails, you do an analysis of how it could all fail. And ideally, you do this analysis before you've built the whole thing, so that you have the opportunity to say, okay, well, let's change our design to address those failure modes and do something different.

You end up with a system that fails better. So it could be that you do things to mitigate certain failures. You do things so that you can learn that the failure has happened. Sometimes you do these analyses and you start discovering, oh, we would never know that that happened. And sometimes you change it so that failure mode just doesn't exist anymore.

So, the way that I originally learned this was that we would look at the diagram of the system, just a logical diagram of the system, and we'd go through it, and for every arrow between the boxes, we would just ask the question: how could that fail?

How would we know, and what would we do? That was the technique. And it worked pretty well, but then a colleague of mine said, maybe we could do this better. We're not the first people to come up with this. Maybe we could do this better. And we started researching a whole bunch of different analysis techniques, and there are many of these.

There's a really common one that's been around for decades and decades called Failure Mode and Effects Analysis. It's really big, really heavy, really hard to do. Takes a long time. There are other ones called HAZOP; there's, uh, Failure Mode and Effect something, the one with a C in it. Can't remember the exact name.

And there are others, one called Fault Tree Analysis, Event Tree Analysis, all these different techniques. And we came across SWIFT, and as we did it, we were like, this is very similar to what we already do, but with a few tweaks that actually make it a little more interesting. So that's kind of how I got into this and how I started learning about it.

And it's actually a really useful technique when I work with teams: taking them through a structured way of evaluating their systems and figuring out, how could it fail, and what could we do about this? And I find when I do this, one of the big things I need to coach people on is to not just disregard it: oh, that failure mode doesn't matter.

Oh, no, it can’t fail. You’re like, no, be creative. Everything can fail. The most common one that I hear is like, but it’s AWS. It’s going to be fine. Like, no, no, no. They’ll shut off your system at some point.

Mon-Chaio: Yeah, I mean, AWS has outages. We had some Azure outages recently. They fail.

Andy: There's the famous bash bug that caused AWS to have to do a rolling restart of every single one of their systems.

Mon-Chaio: Mm hmm.

Andy: And so for reasons completely out of your control, they’ll reboot your system. What happens? So, Mon-Chaio, I kind of threw this technique at you, and I said, Hey, let’s talk about this next. What are your thoughts on it? Like, having just read the documentation that I sent to you, and maybe some other stuff, what are your thoughts on what it is and how it works?

Mon-Chaio: Well, like a lot of these things, it's very structured. I mean, it's SWIFT, right? The S stands for structured. What I did like about it was that it was very consumable. There were areas where, as I was reading through it, your eyes kind of glaze over a little bit. But for the most part, I would say you could sit down and read it in probably half an hour.

Probably even less if you kind of skimmed it more and basically just got the gist of it.

Andy: Yeah, and the gist of it is most of it. Yeah.

Mon-Chaio: Right, and the gist of it is most of it. And unlike some of the other techniques that you might find that are more heavyweight, the gist of it gives you enough. How do I say this? Missing the nuance doesn't make you fail at using the technique. Whereas I think with a lot of other techniques, the nuance is the important part.

And you really have to dig in and say, Oh, actually that little 20 minutes that they’re recommending there, I can’t skip over because it’s like a core thing to drive understanding these different areas.

Andy: But I would say the nuance of it is critical, and the nuance is its lack of formality. The way you go wrong with it is if you hold yourself so tightly to its process that you don't let the brainstorming and exploration happen from the experts that you've put together.

Mon-Chaio: There's so much of the brainstorming, and they touch on this a lot. You get people together and you brainstorm, and it's a very collaborative technique. Which, you know, this is the type of stuff that we like, so I'm not surprised that you were drawn to it and you suggested that we talk about it.

I think well, why don’t we actually

Andy: Just say what it is and go through it, yeah.

Mon-Chaio: because I think we’re five plus minutes in and we still don’t know what it is yet.

Andy: Yeah, so let’s just walk through it really quickly. What would it look like to apply this technique? All right, so I’m gonna keep this fairly technical. I’ll use a specific example of one that I went through fairly recently. I did it very loosely, but I’ll give the additions to make it the full Swift technique.

So you start out by identifying what is the part of the system that you are interested in analyzing. In this case, it was a way of caching a certificate for authenticating authorization tokens. It was a change to the system that was happening. The system had a particular setup. A new setup was being proposed. And that was going to be our focus of attention. Once you've understood what it is, you work out who are the experts. Who are the people who know about this? And you make sure that they're available. And then you tell them how the SWIFT technique works.

So you give them some upfront preparation if they’ve never gone through this before. The next thing you do, you’ve got your team, you’ve got it all worked out. You make sure that it’s clear what kind of aspects of the system you’re interested in. And in this case, it was what ways of failing does this add to the application that it’s being put into?

So there are new failure modes that are immediately obvious from the design that was proposed. The question was, are these new failure modes acceptable? What would happen with them? What else can we do? Now, if you're bootstrapping this whole thing, one of the things you need to do is come up with question categories, or a checklist. Each area that SWIFT gets used in starts out with something like, what is the minimum set of prompts that you should be using to guide your analysis?

The very original SWIFT came from, I think, material science. And so they said, well, you have material problems, or you have external events, or operating errors, analytical or sampling errors. So you've got those kinds of things. And what they're going to do is use this list to guide the brainstorming.

When it was brought into healthcare, they said, those aren't quite what we're interested in. The prompts there are going to be: wrong person or people, wrong place or location, wrong things, wrong ideas or information or understanding. These are the things that they're going to talk about in a healthcare setting.

When you do it in software, we had a list of things like data leak, or data loss, or network connection. Things like that. So we had that initial list. Now you get the whole team together. You have your list, and you start out by just brainstorming through those various categories of things.

Coming up with questions. What if something happened? What if this occurred? So you could say, well, what if we had a flaw in this library that exposed something in some way? In this case there was going to be a Redis cache involved. What if Redis was down? What if Redis black-holed our network connections?

What if the cached certificate was incorrect, the actual certificate had changed, and the cached one was now the wrong one? So things like that. "What if the data was wrong" would be that category. And you try to brainstorm a bunch of things, and then you try to answer the questions.

This is why you have the experts here. So you want to brainstorm the stuff first, get a whole bunch of ideas out there, and then you go through and you try to answer the what ifs. From the answers, you come up with what should you do. What do you do about this? And this is where, this is the structure.

But the unstructured part is that this is very much about getting knowledgeable people together and guiding them through, thinking about their system, and taking their advice. And the facilitator’s role is to show a lot of judgment in what’s the most valuable course of action. So the facilitator isn’t someone completely unknowledgeable.

The facilitator is someone who’s kind of like, Ah, that, that area, that’s got my spidey sense going. That’s where we’re gonna start. This, this other one that was brought up, I doubt that that’s all that interesting. So if we never get to it, that’s probably okay. And that’s basically it.
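As a rough illustration, here is a minimal sketch, in Python, of the bookkeeping a recorder might keep through those two phases: brainstormed what-if questions grouped under prompt categories, with answers and loose recommended actions added afterward. The categories and questions are taken from the examples in this conversation; the structure itself is just one possible way to capture a session, not an official SWIFT artifact.

```python
# A minimal sketch of a recorder's notes for a SWIFT session. Purely illustrative.
from dataclasses import dataclass, field


@dataclass
class WhatIf:
    category: str                 # e.g. "data loss", "network connection"
    question: str                 # brainstormed what-if question
    answer: str = ""              # filled in by the experts after brainstorming
    actions: list[str] = field(default_factory=list)  # loose recommendations
    priority: int = 0             # facilitator's judgment on where to dig first


# The software-flavoured prompt categories mentioned above; adapt per domain.
CATEGORIES = ["data leak", "data loss", "network connection",
              "external events", "operating errors"]

session: list[WhatIf] = []

# Phase 1: walk the categories and brainstorm questions only; no answering yet.
session.append(WhatIf("network connection",
                      "What if Redis black-holes our connections?"))
session.append(WhatIf("data loss",
                      "What if the cached certificate is stale after rotation?"))

# Phase 2: the facilitator orders the list by judgment, then the group answers
# each question and records loose recommendations rather than fixed designs.
for item in sorted(session, key=lambda w: w.priority, reverse=True):
    print(f"[{item.category}] {item.question}")
```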

Mon-Chaio: Yeah, so I think there are a few things, sort of high level, that I'll add to that from my reading of it. I think they talked about bringing in data from previous failures, or any other data that you need to be able to understand the situation. And so, in your example, you might say, look, I'm going to bring up the last 15 failures in this subcomponent, or in each subcomponent that this new component touches.

And we'll use that to inform our thinking. You mentioned the categories. Interestingly enough, I think the one that I was reading was influenced by the maritime industry, boats and stuff. But I felt like a lot of them were very interesting, and it got me thinking as well.

Like, they had external factors or influences. For them, they had things like strong wind or vandalism or bomb threat. But this is the thing that you were mentioning around AWS failures. Sometimes you look at your system and you forget about the external part of it. What would happen if these APIs went down, or they didn't return the right data, or they actively returned the wrong data, or there was a man-in-the-middle attack? Those types of things, right?

Operating errors and other human errors. They said maintenance, and to them maintenance was stuff like materials problems, or procedures and permits, or scaffolds and crane lifts. But we could think about maintenance as a lot of different things: as tech debt, as maintenance downtime, as

Andy: Yeah, maintenance, in this case, of the Redis instance. An update, an upgrade of the Redis system, and it now behaves a little differently. What would happen? Now, we might say, well, we have no idea, we don't know how it would behave. But you can still ask: what can the system do in the case where unintended responses are coming back from Redis?
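As a rough sketch of where answers to those Redis what-ifs might lead, the snippet below treats the cache as optional: short socket timeouts so a black-holed connection fails fast, and a fall-through to the authoritative source when Redis is unavailable. It uses the redis-py client; the key name and the fetch_cert_from_issuer function are hypothetical stand-ins, not the system discussed here.

```python
# One possible answer to "What if Redis is down or black-holes connections?":
# degrade to a slower path instead of an outage.
import redis

CACHE_KEY = "auth:signing-cert"
CACHE_TTL_SECONDS = 300  # bounds how long a rotated (stale) cert can be served

# Short socket timeouts so a black-holed connection fails fast instead of hanging.
cache = redis.Redis(host="localhost", port=6379,
                    socket_connect_timeout=0.2, socket_timeout=0.2)


def fetch_cert_from_issuer() -> bytes:
    """Hypothetical slow-but-authoritative fetch of the current certificate."""
    raise NotImplementedError


def get_signing_cert() -> bytes:
    try:
        cached = cache.get(CACHE_KEY)
        if cached is not None:
            return cached
    except redis.RedisError:
        pass  # "What if Redis is down?" Fall through to the issuer.
    cert = fetch_cert_from_issuer()
    try:
        cache.set(CACHE_KEY, cert, ex=CACHE_TTL_SECONDS)
    except redis.RedisError:
        pass  # A cache write failure must not break the request path.
    return cert
```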

Mon-Chaio: And so I think even taking the categories from other industries helps you think about how they apply in my industry, in software, in the technical details, or in more organizational or leadership challenges. So I like the categories you talked about.

This process obviously is really simple. You take each of these categories and you brainstorm what-if questions. That's why it's called SWIFT: structured what-if, right? And then they say to ask all the questions, to brainstorm all the questions, before diving into answers, which I think is a common brainstorming technique, right?

Once you start solutioning, the part of your brain you're using sort of changes, and then you sort of lose the brainstorming. So you want to get all your brainstorming out first.

Andy: Their thinking on that as well is that, if you stop people, it might be that the first things that come to mind create a whole bunch of discussion, but they're not the most important ones. You want to keep it going, and then pick up the stuff where your judgment tells you, that's actually an interesting one.

Mon-Chaio: Okay. They say you need a recorder, which, of course, you've got to have to take notes. So there are two roles, again very lightweight, a facilitator and a recorder, and then everybody else participates, including the facilitator, in the discussion. And then I think the last thing that caught my eye was something that we talk about a lot with Triple-A teams.

They say, when you make recommendations, try to make them the loosest guidelines possible, so that the implementing teams have a lot of leeway in figuring out how to go about making that a real thing, instead of saying specifically what exactly has to happen and constraining them.

Andy: And I think in a lot of cases in software, the experts that you are bringing in are probably the engineers who are working on the system. Now, you might have other experts, so you might have a team that is an expert in a particular application, but you might have your platform team representative coming in to think about how things could go wrong at the infrastructure level, in the networking, in the certificate handling, and that kind of stuff.

Mon-Chaio: I would say even then it's still pretty important to leave the solution a little bit more wide open. The Agile idea of the last responsible moment for decision making applies here. Even if it's the same people, there's a temporal aspect to it. The solutions they come up with today, because of the context and the environment, may be different than the solutions they come up with tomorrow, because they're more focused on a specific problem at that time.

Andy: One thing I do quite often in these, 'cause I take SWIFT more as an inspiration than as a direct process that I actually execute: one thing I really like doing is, when we come up with what the failure mode is or how it could play out, I think it also offers a good training opportunity to do a little bit of design right there, to say, well, what would be a different design that wouldn't have this kind of issue? Because then you can immediately go into SWIFT on it.

Mon-Chaio: Mm

Andy: So someone proposes, oh, it could work this way. And you're like, okay, how could that fail? What are its failure modes now? Let's start going through this. Would that lose the data? Would that do this? And you can go through that. And what I find is quite often people are like, oh, we'll solve it this way.

And then you’re like, you go through it and you’re like, that has even worse failure modes. They’re like, oh damn. Okay, let’s try again.

Mon-Chaio: What’s interesting here though, is just three or four weeks ago, I led a process, which was a little bit similar to Swift, but a little bit different. The intention was we have a system running to support a business. What are the critical areas of the system that we must know when they fail? Even if we can’t solve it right away, we need to know or have signals that it’s starting to fail.

And the first thing we did was brainstorming and took the whole system. What are the critical parts of the system? Some of the experts included the business process experts to say, Oh, these are the things we see failing on the system all the time, or these are the tickets that we tend to have to file.

We brought in a bunch of our tickets, right? To say, what are the historical failures that we’ve seen? And then after the brainstorming, we had certain categories. Things like, oh, there’s the front end failures where the front end can’t communicate with the middle layer. Here’s some queuing areas where like the queue can back up, et cetera, et cetera.

And then we're in the process of going through each of these buckets. 45 minutes, every two or three weeks, we go into one of these buckets and we say, okay, now what are the things that can fail here, and what do we want to do about it? So a little bit less SWIFT, 'cause we're not really asking what if. We're asking that a little bit. And the other thing is, it doesn't have to be what if; they're just questions, right?

I think they

Andy: How could, is it possible, those were the examples of other kinds of questions that people could ask about it.

Mon-Chaio: Right, and so there was some of that: what if this happened, is it possible that this could fail? You know, what happens when the queue lengths get long, that sort of thing. And then we tried to identify what existed, and then what needed to be built, so we could get visibility into these systems.

Andy: Yeah,

Mon-Chaio: And you had mentioned that even before you found SWIFT, you were doing a very similar process, but then you and your colleagues said, hey, is there something documented? Right?

Andy: And I should say, we didn't switch fully to SWIFT. What we did was we actually took from it the list of prompts, and we started producing our own set of prompts. And that's, I think, one of the key parts of the SWIFT technique: it learns. So that set of prompts isn't fixed.

As you do analyses, or as you have incidents occur, you can go back to that list and you can say, were we missing a prompt that would have gotten us to think about the way that this just happened?

Mon-Chaio: Mm

Andy: Maybe we would have come up with what just occurred, but you might look at it and say, oh, we have a blind spot.

Let’s change our list of prompts. And now you have a thing written down, and once you have stuff written down, you can start modifying it, and that’s how an organization can encode its learning.
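One way to make that written-down, modifiable prompt list concrete is to keep it as a small versioned file and append to it whenever a post-incident review exposes a blind spot. A minimal sketch, assuming a JSON file and an invented incident reference:

```python
# A living prompt list: load it before each analysis, and append to it when a
# post-incident review finds a prompt that was missing. Names are illustrative.
import json
from pathlib import Path

PROMPTS_FILE = Path("swift_prompts.json")

DEFAULT_PROMPTS = [
    "What if an upstream API returns wrong data?",
    "What if a dependency is upgraded and behaves differently?",
]


def load_prompts() -> list[str]:
    if PROMPTS_FILE.exists():
        return json.loads(PROMPTS_FILE.read_text())
    return list(DEFAULT_PROMPTS)


def add_prompt_from_incident(prompt: str, incident_ref: str) -> None:
    """Encode a lesson from an incident so the next analysis covers it."""
    prompts = load_prompts()
    if prompt not in prompts:
        prompts.append(prompt)
        PROMPTS_FILE.write_text(json.dumps(prompts, indent=2))
        print(f"Added prompt (from {incident_ref}): {prompt}")


add_prompt_from_incident(
    "What if maintenance on a managed service changes its behaviour?",
    incident_ref="INC-1234 (hypothetical)")
```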

Mon-Chaio: Yeah, absolutely. They do. The authors of this SWIFT toolkit do caution against using it for after-the-fact auditing procedures, because it's not robust enough for that.

Andy: You'd use different stuff for that. But you might say, let's look at the outcomes of a SWIFT analysis that we did on this system and see, oh yeah, the way it failed wasn't in any of the prompts, and we can see why they never came up with it. Let's add that as a prompt for next time.

Mon-Chaio: Yeah, absolutely. So that's one of the things I think may be a takeaway, because what I was about to ask is, for people that are doing essentially SWIFT, but not exactly SWIFT, or something that looks like SWIFT that they didn't even know about: what does learning about SWIFT give them? How is it different?

What should they take away from this?

Andy: It's the same thing we learned from Future Search. Do you remember that? Get the whole system in the room. This technique is doing the same thing. It's saying, get the whole system in the room. And in this case, it's: get the whole knowledge of whatever you're analyzing together.

Because other techniques are like, oh, let them go and think about it independently. Each one of them can just think about it. SWIFT, Future Search, what they're saying is, no, the conversation between those different expert domains is the important thing.

Mon-Chaio: I like that. Yep.

Andy: And I think the second one is, give yourself this list of prompts to guide what you’re going to investigate based on prior learnings. So once you get that feedback loop, I think then you have a really powerful system.

Mon-Chaio: I think, adding on to the list of prompts, for me: they talk about preparation. There's a preparation stage, and then there's a brainstorming and solutioning stage.

Andy: Hmm.

Mon-Chaio: I'm really big on their preparation stage. That's where you figure out what the prompts are, right? The prompts aren't developed during the brainstorming stage.

To your point, you let the experts know how the process is going to go before you sit down in that meeting. And that includes the prompts. And that also includes any sort of supporting documentation: past experiences, past errors or bugs or outages, in order to get them prepared to have that conversation.

Andy: Yeah, I think that preparation matters. 'Cause if you just get everyone into a room and immediately say, okay, we're going to talk about this bit of the system, and you don't have any diagrams to help you visualize it, you don't have any description of it, everyone has a different understanding of what part of the system they're talking about, then you're not going to have a useful analysis. So that preparation is to make sure everyone understands what the focus is, and that they have that supporting material: operating manuals, or references to prior outages, or whatever.

Mon-Chaio: Mm hmm. Okay. Well, that's SWIFT.

Andy: That's SWIFT. So, I guess the next thing is, all right, that sounds great for these technical domains. Like, I was talking about it for failure of certificate handling. I've used it on all sorts of other things: for database failures, for analyzing a whole new system, for analyzing an entire Prometheus and Kubernetes setup to see how that could fail.

But that’s all very technical, and I said, I think it could be used for more leadership type things.

Mon-Chaio: Mm hmm. Mm hmm.

Andy: Now, the reason I say this is because one of the ways I think we both look at the world is that the way the group is operating is a system. There are things happening, and we can even take this in terms of stuff like: maybe you've got a hiring process. That is a particular system. You've got the way your group works for sprints, or you've got the way your OKR process works. I hypothesize that you can do the exact same thing with those systems, those designs, and go through it and say, what would be an undesirable outcome here? And now let's do a what-if on it.

Mon-Chaio: Yeah.

Andy: So like on a hiring process, you might say, well, what if we give a pass on this stage of the process to someone who would be a terrible hire?

Mon-Chaio: Mm hmm.

Andy: Okay, well, or what if we wrote down all of these OKRs? We’d say our process is we’ll write them down and we’ll send them out. What if we write them all down and everyone ends up with a different understanding?

Mon-Chaio: Yeah.

Andy: Or what if someone’s on holiday and they never see the stuff when it gets published? So they never actually see the OKRs based on our setup.
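To make the organizational version tangible, here is a small sketch of the same question-first structure pointed at a hiring process, with category names borrowed loosely from the healthcare and maritime checklists mentioned earlier. The questions are invented examples, not a list from the episode.

```python
# Hypothetical organizational what-if checklist for a hiring process.
HIRING_WHAT_IFS = {
    "wrong person, ideas, or information": [
        "What if a stage passes someone who would be a terrible hire?",
        "What if interviewers are scoring against different criteria?",
    ],
    "external events and influences": [
        "What if the market rate for this role jumps mid-process?",
        "What if a key interviewer is on holiday for three weeks?",
    ],
    "operating and human errors": [
        "What if feedback is written up days after the interview?",
    ],
}

for category, questions in HIRING_WHAT_IFS.items():
    print(category.upper())
    for question in questions:
        # The follow-ups mirror the technical version: how would we know,
        # and what would we do?
        print(f"  - {question} How would we know? What would we do?")
```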

Mon-Chaio: The interesting example I was thinking of was compensation strategy. It's been on my mind, which might be why I've been thinking about it. Interestingly enough, I'll give a shout out to my chief people officer at the current place where I'm consulting. I've never really understood how pegging compensation to market rates worked. Because I always felt like, well, if the market goes up, then you change your internal equity and you raise everybody's salary. But if the market goes down, you don't decrease everyone's salary. So how could that possibly work? And she had a very, very nuanced view, and I very much appreciated that. So Bridget, thank you. You're the first person I've ever talked to who was able to give me a structured view that made sense to me about that. You can say, we're going to change our compensation strategy to align toward performance.

And the way that we’re going to do that is we’re going to do performance reviews based on a certain set of criteria, and then we’re going to score, and then the compensation strategy is going to be based on which bucket you land in based on your score. So that can be what you’re investigating with your structured what if. Even in these sort of maritime buckets, you can start to think about things like, well, measurement error.

Andy: hmm.

Mon-Chaio: What happens if you score somebody high, but they’re actually not valuable? How would that happen? Could that happen? How likely is that to happen?

Andy: What if it turns out that all of the managers filling out these review forms are succumbing to social desirability bias and inflating all of the scores?

Mon-Chaio: Mm hmm. I like that one, because there's the loss of integrity part, which again in the boat world is like the hole in the superstructure. But you can think about human integrity, bias, as you were mentioning. What happens if a manager just really wants to retain someone? How do we protect against this internal bias if managers themselves are solely responsible for how we rate?

We can talk about, then, do you cross-rate, or, you know, do we have unbiased parties sitting in, right? We can talk about emergency operations: a very valuable person is about to quit. What if? Right. So I think

Andy: What if we're a team spread around the world? What if one of the countries that we have people in goes to war?

Mon-Chaio: Mm hmm. And I was just thinking about external factors, but not in that way.

Andy: Yeah. I was looking at external events and influences and I thought, maybe, maybe something like that.

Mon-Chaio: Yeah, and that happens, right? It's happening today. It happened a few years ago for a lot of companies. So I do think that there is a lot that you can use SWIFT for that isn't technical. And to your point, Andy, we talk about socio-technical systems, right? Like, these things that we're building are systems.

Now, they're not as structured, I guess, or, nah, that's not the right word. They're not as,

Andy: They’re not as automated.

Mon-Chaio: they’re not as automated. It’s more difficult to trace through the if then else.

Right, because they’re not, there’s a word in software for when things just like flow, based on if then else versus,

Andy: Oh, control flow versus data flow.

Mon-Chaio: No, I’m thinking it’s like discrete or something descriptive or, or some way of describing programming languages and algorithms.

There’s like an algorithm which always returns the same value, which is an, versus an, what’s that?

Andy: Idempotent?

Mon-Chaio: Versus an algorithm that won't, right, which will return different values even given the same inputs. Okay. Yeah, sure. That's right. Yep.

Andy: Stochastic systems, where it's like, it will vary, like LLMs, or stochastic systems where, mm hmm.

Mon-Chaio: So, human systems, socio-technical systems, are much more stochastic than technical systems, generally speaking, the way we understand them. But they're still systems. And so systems analysis techniques can still be brought to bear, and I think they're still very valuable.

Andy: And one of the reasons these techniques have to be done, even on these systems that are supposedly understandable, is because there is still randomness. What I find goes wrong for most engineers when they try to do this is that they don’t think that their systems are random.

Mon-Chaio: Mm hmm. Mm hmm.

Andy: And I think there's a clear distinction here, which is that randomness is anything I can't explain.

Mon-Chaio: Okay.

Andy: And so, things like: I took no action, but AWS cut off the connection to our Redis instance. That's random. Something like that can happen.

We maybe don’t know how, we don’t know when it will happen. It’s not part of our system, but such a thing is possible.

Mon-Chaio: Yeah, no, I agree. So, I mean, we have a couple of examples. We have the OKR example. We have the compensation strategy example, tied to performance reviews, perhaps. And so, yeah, I do think technical leadership can absolutely use techniques like these on socio-technical systems, not just technical systems.

And I think, more broadly speaking, it's not just SWIFT. There's nothing fancy or specific about the Structured What If Technique that makes it more applicable. I do think that its nature of being a little bit more flexible allows you to adapt it, versus the 200-page technique for analyzing a nuclear reactor.

Andy: Yeah, something like fault tree analysis you probably could apply to social systems, but I think it would be so difficult, because one of the things it really struggles with, at least from my understanding of it, is that uncertainty,

Mon-Chaio: mhm.

Andy: Fault tree analysis really is about, like, this could happen, which could cascade to this, which could cascade to this, which could cascade to this.

And it’s a useful mode of thinking, but if I was going to try to write that out for how a performance review process would go, I’d probably miss out on a huge number of things that I just can’t figure out how it would get cascaded to, but it could happen.

Mon-Chaio: Your tree, in this case, in human systems, has so many branches.

Andy: Yeah.

Mon-Chaio: I'm not even really sure. And then branches loop back into other branches, right? So, yeah, I can see how that would be much less useful. I think the insight in my mind still stands, which is that a lot of these things we apply to technical systems can, with some modification, be extremely useful in looking at socio-technical systems.

And so we don’t have to confine them to the technical analysis.

Andy: Yeah, and so bringing this back to kind of our theme around diagnosis now: you pointed out that they warn against using this in an after-the-fact, retrospective way, but you can still use it on a thing that is running and that you think is having problems. You can say, what could be going on here?

Why might things be happening? What are things that could be happening that we're not even noticing?

Mon-Chaio: Oh, absolutely.

Andy: And so I think, applying that, you can now see it like: okay, you come into an organization, you're like, there's this lethargy, nothing seems to be going well. Okay, let's put that to the side.

And let's just kind of take one part of the way we're operating this company. And as a focus word, let's use morale. And as another focus word, let's use commitment. And think about some things like, I'm just coming up with these on the fly, and you start analyzing the system, and you start saying, well, what if this occurred? And you can start understanding, like, oh, maybe that's what's going on here.

Mon-Chaio: Mm hmm.

Andy: Now you can start coming up with ideas about other ways to operate. And so in the diagnosis, you start coming up with, well, not even really a diagnosis, because you haven't proven anything. You've done the hypothetical: something could be going on.

But now that you have a better understanding of how it might fit into the whole system, you might now have something where you can say, let’s modify the system so that now we can get a signal.
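A rough sketch of that diagnostic twist: pair each hypothesized failure mode with the observable signal you would expect if it were already happening, then go look for the signal. The hypotheses and signals below are invented illustrations, not the hosts' diagnosis.

```python
# Hypothesis-to-signal mapping for the "diagnosis" use of what-if thinking.
HYPOTHESES = [
    ("OKRs are published but each team reads them differently",
     "quarterly plans reference different success metrics"),
    ("Managers inflate review scores to retain people",
     "score distribution is heavily skewed toward the top bucket"),
    ("On-call load is eroding morale",
     "pages per engineer per week trending up across two quarters"),
]

for hypothesis, expected_signal in HYPOTHESES:
    print(f"What if: {hypothesis}")
    print(f"  If so, we'd expect to see: {expected_signal}")
    print("  Check the data; if the signal is absent, deprioritize this hypothesis.")
```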

Mon-Chaio: That's an interesting point of view, Andy, because SWIFT as it is written is about finding failure cases, finding possible failure cases we might have overlooked. But what you're suggesting is using it to ask which ones are already happening. Diagnosing: I'm not trying to find failure cases here. I'm hypothesizing possible failure cases and asking, is this already happening in the world?

Andy: And you can use that to say, to kind of go along the lines of, does this failure, the way that we think it would play out, sound like what we're actually seeing? Okay, now we have a thought of what could be going on here. You're almost using it as a debugging mechanism.

Just like, as a programmer, something will go wrong, a test will fail. And now you’re like, okay, well, what just went wrong? What’s not happening the way I think it is? And what you start doing is you start going through a little bit of a what if technique. Well, what if it’s actually that object has the wrong value in that instance variable?

Mon-Chaio: Mm hmm.

Andy: Okay. Yeah, if that was wrong, then that could turn into this. I could see that could turn into this. All right, let me go and put a printf statement right there. Nope. Nope. It wasn't wrong. So maybe that's not what's happening here. And it gives you a way of kind of diagnosing your system.

I'm not saying that that's what they propose you use SWIFT for, but I'm saying that that mode of thinking, you can still apply it to diagnosing what's going on in your organization.

Mon-Chaio: I wanted to connect it back to the beginning, Andy. I think this is modifiable. And if someone modified it to be a little bit more diagnosis driven, not just in socio-technical systems but even in technical systems, you could use it to say what might be going on. You could call it the Structured What If Checklist Technique for Diagnosis, SWIFT-Di. Hahaha.

Andy: Well done

Mon-Chaio: I look forward to your white paper, Andy.

Andy: Alright. I feel like we’ve kind of beaten this horse enough. It was a very swift discussion.

Mon-Chaio: Wow, we're really reaching here. Yeah.

Andy: The SWIFT technique: we've taken it through what it is, how you apply it to your technical systems, and then not only how you can apply it to your kind of organizational systems, but also how you can use it, probably, for diagnosis, by applying the same mentality of: what if? How would I know?

And guidance through a number of focus words or focus questions, concepts that would get you to think about different things that maybe you wouldn't have thought of if you didn't have that little prompt there.

And so these techniques, these are things that Mon-Chaio and I use; maybe we don't apply this one by name, but it is the thing that we do when we look at organizations, when we're working with groups. It is the subconscious process that we're going through, and it's the same thing that we train people to do as they're coming up in the leadership ranks, as they're trying to understand how their team is working, how their department is working, so that they have these very fundamental mental processes to understand the world around them.

If that sounds intriguing to you, or if you have questions about it, or if you have comments, or if you've even tried this and you're like, it was great, or you tried it and it was terrible, we'd love to hear from you. We are available by email at hosts@thettlpodcast.com. Send us an email, let us know, and share this around, get it out to more people, if you found this valuable. So, Mon-Chaio, until next time: be kind and stay curious.

