Show Notes
A vacation cast where Andy ruminates on AI coding assistants. He draws on an article he wrote in 2007 on autonomic computing. Emphasis is placed on the essential roles and responsibilities of software engineers, differentiating them from AI systems. Andy shares personal experiences and observations from using AI coding assistants, highlighting potential pitfalls and benefits. He concludes with some thoughts on a possibly useful pattern to follow when working with the assistants, one that keeps the software engineer responsible for the engineering.
References
- Autonomic Computing – https://zaphod42.livejournal.com/45717.html
Transcript
Welcome back for another Tactics for Tech Leadership podcast. This one’s a vacation cast with just me. What I wanna talk about is all these AI tools. If you’re on LinkedIn, I’m sorry, but if you are on LinkedIn and you pay attention to what’s in your feed, it’s almost certainly exactly like what I have now, which is that almost everything is either people saying that AI is the greatest thing, or AI is the worst thing, or AI doesn’t change anything, or it’s all theft, or who knows what. But it’s all about AI.
What do we mean by AI? I don’t know. I’m getting more and more confused by that every day. Sometimes it’s just large language models. Sometimes it’s AI doing protein folding, which I’m pretty sure is not a large language model. I’m kind of losing track of that. But I’m gonna focus on all the AI coding assistants.
The ones where people say that software engineers aren’t really needed anymore, or that very few software engineers will be needed. So let’s talk about that. I’ve got kind of three things to go through here, maybe some tangents along the way, but we’ll see where this goes. So the first place I wanna stop is about 20 years ago.
A few days ago I got an email from LiveJournal, if anyone remembers LiveJournal, saying that I’d had an account with them for 25 years now, which was interesting. I kind of remembered that I had it, but I’d mostly forgotten, and I didn’t know what I’d written on it.
The last entry was from 2007, and surprisingly that last entry was actually about something that I think is relevant. Some friends read it, and they said that if I just changed a couple of the words, I could post it on LinkedIn as one of these AI posts that everyone’s putting up. But I wanna take a look at it.
And what it’s about is autonomic computing. Autonomic computing was this idea that computer systems could take actions on their own. Self-healing systems, if you’ve ever heard of those, were part of autonomic computing.
But I left it at this. I said, “It doesn’t seem so hard until you try to work through it for a specific system. Then you start running into all sorts of problems. What aspects of the system are going to be in your model? What aspects are not relevant? How is the system going to figure out what it can do? How do you describe the changes to the system or model that the actions will make? Is it even deterministic what they will do? Who is responsible if the system doesn’t do the right thing? How are you going to represent the model? And the list goes on.”
And I think that’s highly relevant, because it centers the thought not on the actions the system will take, but on how you go about deciding what those actions are, and then, in the end, who’s accountable or responsible for having taken that action. That’s actually a thing that comes up in the autonomous car questions. Another thing I’m interested in is urbanism and transport and transit, and there’s a big argument there about autonomous vehicles not really solving any problems. Sorry, slight tangent.
I think there are a few things here, so we’re gonna go down that path in a little bit. But first I also want to add one other little tangent onto this, a tangent from my tangent, which is that I see another argument: that AI kind of stops people from understanding, or stops them from learning.
I don’t really buy that one. I can get behind it at times, but in the end I don’t think it’s really all that big of a deal, because I remember when IntelliSense came out. IntelliSense is this thing where, if you’re typing along and you hit dot, it gives you a list of all of the methods you can call.
It’s just normal now. It’s what we expect. When it’s not there I get really frustrated. But when it was first coming out, there was a big hullabaloo about, oh, no one’s gonna remember APIs. How will anyone learn how to use the APIs anymore, ’cause the system just tells them what they are? I don’t think that was ever a problem. I think it’s just a tool that helps us explore.
So you’ve got these things about AI and what you do with them, and I think it also connects back to the many past cycles where we said we don’t need software engineers anymore. There were the 4GL languages. I mean, COBOL was an attempt to get rid of software engineers before there were even software engineers. And in the end, all of them mistake what a software engineer is. And this, I think, is career advice as well as an analysis of AI.
So here’s the thing that I think these arguments get wrong about software engineering. They’re saying that you can use AI because you can describe to it the thing you want it to do, and it’ll produce code. The thing is, it’s about getting to that description.
So if you say, okay, I take a description and I produce some code out of it, well, if you’re doing something like that, it has to be an unambiguous description, right? At that point, that’s a compiler. If you take a complete, unambiguous description of what it needs to do and produce code, then you are a compiler.
And a software engineer is not a compiler. A compiler is a compiler. A C compiler takes a string of text that has a completely unambiguous meaning, and it produces machine code. So what is it that a software engineer does? Well, a software engineer takes all of that ambiguous, self-contradictory information and comes up with that unambiguous specification.
And you might say, well, isn’t that what business analysts do? No. Business analysts, product managers, they don’t come up with an unambiguous specification. They come up with an ambiguous specification. And then the role of the software engineer is to take all of those contradictory, ambiguous, unclear, incomplete things and produce that very clear, unambiguous specification.
And it isn’t so much about writing the code as it is about those interactions. That back and forth: is this what you mean? Is this the way we’re thinking about this? How can I come up with, from all the things that I’ve heard, the thing that pulls it all together and holds it all together? And then working with others to make sure that we’re all understanding it in the same way.
All right, so that’s all kind of philosophical, high-in-the-stratosphere thinking. Let’s come down to what I’ve seen with people using these tools. I was working with a team and they were using Copilot.
They were using it fairly extensively. I hadn’t enabled it in my IDE yet. On a former business card I marked myself as a Luddite, so I kind of held off on it for a little while. But I was watching other people using it and I was kinda like, oh, that’s interesting.
That’s cool. And what I noticed was that, at the time, the primary use I saw was the statement completion, where it would give people suggestions for what line of code to write next. Most of the time, people would ignore what it suggested. But a fair amount of the time, people would accept it, not fully understand what they’d just accepted, and keep moving on. So basically producing bad code, in my assessment. Then a few years later, I started doing interviews for senior engineer and lead engineer positions, and people were using it there too. And that’s fine. I don’t care if you use it or not. In fact, if it’s part of your workflow, if it’s part of the way you’d actually do this, then it’s what I’d want to see during an interview. I wanna understand how you’d think about this, how you’d do it.
And what I started to notice was that there are other interaction patterns too. You can have it complete the entire file or the entire method. You can have the little chat window open on the side. You can write comments and then let it fill in the code that would come beneath the comment. I’ve also seen people with a ChatGPT browser window open, chatting with it and going in cycles with it.
It would propose code and they’d say, no, change it like this and change it like that. And each one of these patterns, I think, hit all the same problems and had all the same advantages. One of the first things I started to notice, and I mentioned this a little bit already, was that what it produced would anchor the person.
Now, that’s not necessarily bad, but you just have to be aware that it will happen. Anchoring is that once you’ve seen something, you’re probably gonna do something that’s similar to it. It’s also what these LLMs do: they anchor themselves. They start from some seed of text and then they work from there.
But if you’re supposed to be evaluating whether it’s the right thing, you gotta be aware of this. The other thing I started to notice was that quite often people would start taking stuff that they didn’t understand. Now, if you’re anchored and you’re taking things you don’t understand, you might start hitting problems. But it also had some big advantages.
One is that it removes what I’d call the blank page problem, where you’re sitting there and you just can’t think of how to start. And so you might say, hey, Copilot, give me a start for a function that’s going to load this data from the database, and it’ll give you something.
Alright, cool. Now I’ve got somewhere to start.
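For illustration, here’s roughly the kind of scaffold such a prompt might hand back. This is a sketch, not output from any particular tool: the `db` client, its `query` method, and the `users` table are all hypothetical names.

```javascript
// A hypothetical scaffold for "give me a start for a function
// that loads this data from the database". The db client and
// schema are placeholders, not a real library's API.
async function getUserById(db, userId) {
  // Parameterized query, so the id isn't spliced into the SQL.
  const rows = await db.query(
    'SELECT id, name, email FROM users WHERE id = $1',
    [userId]
  );
  // One matching row, or null if the user doesn't exist.
  return rows.length > 0 ? rows[0] : null;
}
```

Even if none of those details survive review, it’s a starting point instead of an empty file.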
Or the other way that I think it can be really useful, and I’ve seen it happen, is this: maybe because of the IntelliSense world, we don’t know our APIs intimately anymore, and some of them are so big that you just can’t keep them in your head. It can remove a lot of the floundering around about how should I actually do this.
And so you can give it a little bit of a prompt: I think I need something that does this and does that. And it gives you something.
So what’s the next step on this? What happens once it’s given you something?
All right, so a few days ago I was working on a website for a local group, and I wanted to experiment with some new tech that I hadn’t used before. It was Firebase. I’d used small parts of it, I think, but I wanted to use parts of it that I wasn’t familiar with, and I hadn’t done anything like it before.
I also have access to Gemini. So I used Gemini, and I was like, okay, help me with this, help me with that, and I would ask it questions. For instance, well, actually, my very first question to it had nothing to do with Firebase. I was trying to write some tests, and I needed to create a location object for what I was gonna do.
And I said, how do I create a location object in JavaScript for testing? And it gave me all sorts of options, and that was very useful. It got me unstuck. Even if it wasn’t the right thing, it worked and it got me moving. So it was a little bit like having a navigator pairing partner.
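For what it’s worth, here’s one shape such an answer might take: a plain-object test double mimicking the browser’s GeolocationPosition structure. The helper name and the coordinates are my own invention for the sketch.

```javascript
// A plain-object stand-in for a GeolocationPosition, good enough
// for tests that only read coords and timestamp.
function makeFakePosition({ latitude = 51.5074, longitude = -0.1278 } = {}) {
  return {
    coords: {
      latitude,
      longitude,
      accuracy: 10,           // meters
      altitude: null,
      altitudeAccuracy: null,
      heading: null,
      speed: null,
    },
    timestamp: Date.now(),
  };
}

// In a test: const position = makeFakePosition({ latitude: 40.0 });
```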
I could ask it, what do you think about this? Or, where should I take that? Now, the thing is, it’s a very passive and compliant navigator. These AI systems don’t do very well at arguing with you. So I am still responsible for keeping the entire approach, the code base, what I’m doing, aligned and on track and at the appropriate quality.
The other thing that I noticed was that when it gave me code to do things, it gave me code that did way more than I needed. Way more. And it was really up to me to keep an eye on the YAGNI principle: ya ain’t gonna need it. Because if I just took what it gave me, well, I might have all sorts of things in the system that either aren’t used or don’t do what I really needed at the time.
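To make that concrete, here’s an invented before-and-after in that spirit. The assistant version is hypothetical, but it’s representative of the extras you tend to get when all you asked for was a display name:

```javascript
// The kind of thing an assistant might hand back: caching and
// formatting options nobody asked for yet. (Note the cache even
// ignores the options, which is exactly the kind of thing you
// have to catch when you review generated code.)
const nameCache = new Map();
function formatUserName(user, { uppercase = false, withEmail = false } = {}) {
  if (nameCache.has(user.id)) return nameCache.get(user.id);
  let name = `${user.firstName ?? ''} ${user.lastName ?? ''}`.trim() || 'Anonymous';
  if (uppercase) name = name.toUpperCase();
  if (withEmail) name = `${name} <${user.email}>`;
  nameCache.set(user.id, name);
  return name;
}

// What YAGNI says I actually need today:
function displayName(user) {
  return `${user.firstName} ${user.lastName}`;
}
```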
So I really needed to pay attention to that, and I think even more than in the past. I remember being taught this rule: you don’t copy code from online if you don’t understand it. So if you find something on Stack Overflow, don’t copy it if you don’t understand it. You can use it to try to understand it.
You can do some more investigation, asking, well, why is it that I need to do that? But here, I think the rule applies just as well. I shouldn’t take code from these systems if I don’t understand what it’s doing. And this gets to the question of what the software engineering role is in all this.
Because in the end, I’m still responsible. I’m still accountable and responsible for that system working. And if it’s doing things that I don’t understand, then I’m abdicating that responsibility and I’m not acting as a software engineer. Because, once again, back to what a software engineer is: the software engineer is the one taking all of that ambiguity, all of that inconsistency, and turning it into consistency and certainty.
And if I don’t know what I’m turning it into, I can’t be doing that. I can’t be doing my job as a software engineer.
So I don’t think that it’s really all that reasonable at the moment, maybe it’ll change, but I don’t think it’s reasonable at the moment to delegate the engineering to these systems. I think they’re incredibly useful for guiding API usage, being like interactive documentation for these things, for brainstorming different approaches that could be used, and for debugging.
Finding error messages and explaining what they could mean and what you might do about them, and doing some boilerplate work. Because in the end, boilerplate work is following a pattern, and what are these systems? They’re pattern generation systems. Now, I say not too much boilerplate work, though, because once again, you’re software engineers.
You don’t want to do too much boilerplate work. In fact, if you find yourself doing lots of boilerplate work, you should think about what that’s telling you about the system you should have here, and how you can minimize that boilerplate.
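As a sketch of what minimizing it can look like, assuming Express-style handlers and hypothetical names throughout: instead of letting an assistant stamp out the same try/catch-and-log wrapper around every handler, fold the pattern into one helper.

```javascript
// The repeated per-handler boilerplate, extracted once:
function withErrorHandling(handler) {
  return async (req, res) => {
    try {
      await handler(req, res);
    } catch (err) {
      console.error(err);
      res.status(500).end();
    }
  };
}

// Each handler now carries only its own logic:
const handleCreate = withErrorHandling(async (req, res) => {
  // ...create the thing...
  res.status(201).end();
});
```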
So in doing this, I thought of an interesting experiment, and I haven’t done it yet, so if anyone has done this, please let me know. I think it would be a really interesting thing to try with one of these systems. Now, I’ll step back a second. Another thing I’ve seen people do with them that I think is actually completely wrong is to say, oh, give me all the tests for this piece of code.
Now, using it as an aid for getting some of them, sure. But you still need to work out: are these the right tests? Do they cover everything? Do they explain what’s going on properly? All of that. Because once again, your tests are not really about the testing. They’re an executable specification.
We’re back to that specification. Your role as an engineer is about specification. So the thing I think would be really interesting is, rather than saying, AI, write the tests for me, I write the tests and the AI writes the code that makes the tests pass. And you do this in that baby-steps TDD fashion, in what I call adversarial TDD, where it’s like: alright, I’m gonna write this test, and you’re gonna write the smallest thing that could make it pass.
I’m gonna come up with more and more test cases that force the code towards the full behavior that I need, the full specification of what’s needed. But try to keep that code as minimal as possible, as simple as you can. You’re constantly going back to: what’s the simplest thing that could possibly work here?
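Here’s a sketch of one turn of that loop, using Jest-style tests. The FizzBuzz example is mine, chosen only because it’s small:

```javascript
// Step 1: I write the specification as a test.
test('1 maps to "1"', () => {
  expect(fizzbuzz(1)).toBe('1');
});

// Step 2: the AI's job is the smallest thing that could possibly pass.
function fizzbuzz(n) {
  return String(n);
}

// Step 3: I write the next test, forcing more of the behavior...
test('3 maps to "Fizz"', () => {
  expect(fizzbuzz(3)).toBe('Fizz');
});

// ...and the AI again extends the code as little as it can:
//   function fizzbuzz(n) { return n % 3 === 0 ? 'Fizz' : String(n); }
```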
Now you let the AI write that step, the next bit of code to get the test to pass. And then you take on refactoring. And real refactoring. A bugbear of mine is that I hear a lot of people say, oh, I’m gonna refactor this code, when what they mean is they’re gonna rewrite it, or they’re gonna implement functionality.
It seems like for most people the word refactoring has lost all meaning. But here I mean actual refactoring: making changes to the structure of the code that do not change the behavior. So what that means is that you are responsible for the specification of what the code should do, the tests. And you are responsible for the structure that the code has: creating that design, those affordances, that correctness of design.
That means the code expresses what the specification demands. And the AI bot is there to know the APIs and all of these other things, so it can just fill those in. But it doesn’t do refactoring, because it doesn’t know that context. It doesn’t do specification, because it doesn’t know that context.
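To be concrete about that refactoring step, a sketch continuing the FizzBuzz example: the behavior is identical under the tests, only the structure changes.

```javascript
// Before: behavior accreted case by case through the TDD loop.
function fizzbuzz(n) {
  if (n % 15 === 0) return 'FizzBuzz';
  if (n % 3 === 0) return 'Fizz';
  if (n % 5 === 0) return 'Buzz';
  return String(n);
}

// After: the same observable behavior, restructured so the
// concept has a name. The tests still pass unchanged.
function divisibleBy(n, d) {
  return n % d === 0;
}

function fizzbuzzRefactored(n) {
  const word = (divisibleBy(n, 3) ? 'Fizz' : '') +
               (divisibleBy(n, 5) ? 'Buzz' : '');
  return word || String(n);
}
```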
And so I think that would be an interesting approach. And I’ve heard from some people that a lot of seemingly extreme programming practices are actually showing up again, because people are finding that they need them in order to keep the AI agents under control. Which I find fascinating. So it’d be like, yeah, just go down that path then.
So this was a bit more of a rambling vacation cast than I normally do, and much longer. But I hope you found it interesting. Thank you for listening. And yeah, until next time, be kind and stay curious.