S2E28 – Making better hiring decisions with Edward Morgan

Show Notes

 In this episode of the TTL podcast, the hosts welcome Ed Morgan, founder of Gordian Knot, to discuss enhancing the hiring process for software engineers. Ed highlights the concept of ‘sensemaking’ and its impact on interview outcomes, stressing the importance of structured versus unstructured interviews. The trio touches on creating effective rubrics, the surprising resistance to change in hiring practices despite evidence, and the role of psychometrics. Listeners will learn practical steps to improve hiring accuracy and discover why structured interviews are statistically superior.

Edward Morgan is the founder of Gordian Knot, a consulting firm that helps companies build high performance teams by improving their hiring processes and developing custom technical assessments. Visit www.gordianknot.company to learn more.

References:

Transcript

Andy: Welcome, fine listeners, to yet another episode of the TTL podcast. And as we’ve done twice so far, I think this is our third time, Mon-Chaio, we have a guest. So everyone welcome Ed Morgan, who’s the founder of Gordian Knot, which is a software hiring process consultancy. I think I got that right.

So Ed, would you like to give us a quick introduction to yourself and then we’ll introduce the topic that we’re going to be talking about this time with you?

Edward: Sure. Yeah. So I’ve been an engineering leader at various companies over the past almost 25 years at this point. And about a year ago I founded this process consultancy to help people get better at hiring software engineers. There are a lot of issues with the hiring process. Candidates are unhappy.

The hiring managers are unhappy. Recruiters are unhappy. And so I try to step in and bridge the gap between those three different groups, help them figure out what works for them, and come up with a smoother process that allows them to hire better. And today, we’re going to talk a little bit about sensemaking and about how to use structured versus unstructured interviews.

Andy: Absolutely. Yeah. Sensemaking and how that fits into the hiring process, or doesn’t fit into the hiring process. When we first spoke to you about whether or not we would do this interview and what we would cover in it, you brought up this whole idea that just the conception that most of us have about how to get information during an interview is not based on the evidence. And it’s this idea of sensemaking that is kind of where it starts going wrong, I think. Can you just describe a little bit, what is sensemaking in this case? Because I don’t think we’re talking about the Cynefin framework here.

Edward: no

Andy: heh heh.

Edward: Now, sensemaking is this phenomenon that helps people literally make sense out of information that they have, whether it’s structured or unstructured, whether it makes sense or not. The human brain is wired to see patterns in things. So when you get a bunch of information that’s all connected either tightly or loosely, your brain wants to make a story out of it.

It wants to understand where that information comes from and why that information has come into your head.

Andy: Right, so this would be me in the interview context for hiring. This would be me, say, doing a whiteboard interview, and I’m asking the candidate to design something. Say that we’re gonna design an online poker game, and they’re taking me through the structures and what they think that they would have.

And I’m sitting there thinking, why have they not mentioned authentication? This person is not concerned with security at all. They’re not mentioning authentication. They’re not mentioning any access controls. This person isn’t interested in security. And that’s kind of me sensemaking without checking my sense.

Is that the idea?

Edward: Yeah, that’s the whole idea. So there can be a lot of different ways that you go about sensemaking, but that’s certainly one of them. You’ve asked a question, you’ve asked somebody to tell you a story and now you need to make sense out of all the information that you’re getting. So it could be that somebody had an approach that you weren’t necessarily expecting.

And you think, well, that’s the wrong approach, you know, and here’s why this is the wrong approach. Even if they’re getting to the same results, you’re trying to make sense out of: what is this information that I got? And why did this person do this in a particular way? But the authentication example is great.

You know, you asked a particular question that had nothing to do with authentication, and you’re expecting something. When you don’t get that, you try to make sense out of why you didn’t get that. And that leads to all sorts of bias and weird results when you are interviewing engineers.

Mon-Chaio: And so if we’re saying that sensemaking is problematic in interviews, it almost sounds like we just shouldn’t interview. Now, before we get too far down this path, I would love to hear, Ed, in your experience, in your research, and in your consulting, are there sort of the top three, or whatever, big red flags around how sensemaking is problematic in technical interviews?

Edward: Yeah. So this really comes into play when you are asking people unstructured questions, right? So broad, open-ended questions are really where sensemaking becomes an issue. You can mitigate against that in a bunch of different ways, but sensemaking is less of an issue when you ask a direct question.

So if you say to somebody, tell me about a time you were able to lead a project to completion, this is a bit of an open-ended question. And if you’re not careful, you can start making sense of this stuff, right? You start to see yourself in this answer and you think to yourself, oh, that’s how I would have run this project, right?

This must be a good candidate because he runs projects the same way I would run them. So what you do to mitigate some of this stuff is you come up with a rubric. So, you know, if you’re asking about running a project, you can say, right: was the project successful? If the project wasn’t successful, what other criteria do you want to look at in terms of whether or not this is a good answer?

Did they encounter problems? How did they deal with problems? So that takes all of your innate sensemaking out of the equation, and it gives you a bit more of a rigid structure to actually evaluate these answers. So that, theoretically, I can write this question, I can write this rubric, and I can give it to either one of you guys.

We can all interview the same candidate and we’re all going to come to the same conclusion, assuming the candidate gives us the same answer all three times.
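To make the rubric idea concrete, here is a minimal sketch of what an anchored rubric and scoring function for that project-leadership question might look like. The criteria, anchor wording, and weights are hypothetical, invented for illustration rather than taken from Gordian Knot’s materials.

```python
# A minimal, hypothetical rubric for "Tell me about a time you led a project
# to completion." Criteria, anchor wording, and weights are illustrative only.
RUBRIC = {
    "outcome": {
        "weight": 2,
        "anchors": {
            0: "cannot describe a concrete outcome",
            1: "project finished but the outcome is vague",
            2: "clear outcome tied to a goal or metric",
        },
    },
    "handling_problems": {
        "weight": 3,
        "anchors": {
            0: "no problems mentioned, or problems blamed on others",
            1: "problems mentioned but the resolution is unclear",
            2: "specific problems and how they were addressed",
        },
    },
}

def score_answer(ratings):
    """Combine per-criterion anchor levels into one normalized score.

    `ratings` maps a criterion to the anchor level the interviewer picked,
    e.g. {"outcome": 2, "handling_problems": 1}.
    """
    total = sum(RUBRIC[c]["weight"] * level for c, level in ratings.items())
    best = sum(c["weight"] * max(c["anchors"]) for c in RUBRIC.values())
    return total / best

print(score_answer({"outcome": 2, "handling_problems": 1}))  # 0.7
```

The point of the anchors is that every interviewer is scoring the same observable things, so two people hearing the same answer should land on roughly the same number.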

Mon-Chaio: What you said just now around, theoretically, the three of us could give the same interview and we would all come to the same conclusion: do you know if there’s been anybody that’s done that study, essentially recorded someone, with a successful rubric like this, and replayed it back to interviewers, versus unstructured interviews?

Edward: So when it comes to personnel selection, which is the area that we are really interested in, I don’t know of anything offhand. A lot of what the personnel selection branch of interviewing does, the science of it, is based off of other types of interviewing. So, when social scientists go out and they interview individuals to try to get information out of them, they’ve been able to determine that using unstructured interview questions,

like saying, tell me about your experience with this particular thing, doesn’t lead them to have reproducible results. So they’ve been able to do studies saying if you ask questions in an unstructured manner, you will not have a reliable set of information upon which to base your decisions.

You need to be really structured in how you ask these questions. Otherwise you get really variable results. And the social sciences are variable enough in and of themselves that you don’t want to introduce any more variability into them.

Andy: And actually that’s a really useful thing to bring up, to talk a little bit about what these different formats of interviews are, because as you were talking about, in the social sciences you have different, I guess you could call them different levels of structure. The taxonomy that I learned was basically: you have conversations, which are completely open.

You just meet someone and you just start talking about whatever. You don’t even have a particular topic. You have unstructured, which is normally you set the topic, and then how you’re going to explore that topic is completely open to what happens during the interview. You’ve got semi-structured, which is you absolutely have a set topic.

And some part of how you’re going to handle that is structured, but the rest of it is pretty open. And then you’ve got structured, which is that you know the topic, you know the questions, you know the order you’re going to do the questions in, and you know exactly how you’re going to behave. Is that how rigorous we are in talking about hiring interviews, of how structured we should be trying to get?

Edward: Yeah, I think, you know, the exactly-how-you-behave part is maybe a bridge too far.

Andy: We’re not going for science-level reproducibility here.

Edward: Right. I mean, because ultimately you have to work with this person, right? So you don’t want to alienate them as part of this process, but you do want to have the questions prepared in advance and you want to have a rubric for how to evaluate those questions ready to go.

Andy: And you do want to train your interviewers, probably, to do things like: do not scoff when they say their answer.

Edward: Ideally, yes.

Please don’t tell them they’re wrong in the middle of the answer. There’s lots of things that you definitely don’t want interviewers doing; that’s a whole other topic. There’s a workshop that I do on training interviewers: how to do structured interviews, why it’s important to do structured interviews, how to use rubrics, that sort of thing.

So if you ever want to have me back on, we can chat about that too. But yeah, you’re basically right when it comes to the taxonomy. There are really just open conversations where there’s no topic at all. And I’ve certainly been part of some of those interviews, which are nice. They’re fun.

Especially if you end up getting the job, it’s great, right? There’s no real stress to it. It’s just an enjoyable conversation. And you know, then you get a great job and everything’s great. It’s those poor folks who didn’t have a good conversation that end up losing out on that stuff.

Andy: Well, but also from the side of the person doing the hiring, you just had a great conversation, but, and I think this is what the research was that we were finding, and you pointed us at some as well, you had a great conversation, but it wasn’t a conversation that actually produced reliable information.

Edward: Yeah,

Andy: But I’ll believe that I was getting reliable information out of it.

Edward: Yeah. You’ll go back, you’ll look at that interview. So when somebody says to you, why did you make the decision that you made, part of you is going to be just like, well, because it was a great interview. They answered the questions the way that I thought they should. They gave me this information.

They gave me enough information for me to say they have the skills needed to do this job. But if you actually look at the interview, you find that there was no information that was really gained. You just had a nice conversation. You found out that you think the same way as another person thinks, which doesn’t really give you anything to go on one way or the other.

Andy: Well, you think the same way as they were able to present themselves as thinking during that interview.

Edward: Yeah. Accurate. That was in that one particular instance, right?

Mon-Chaio: It goes back to that old sales thing, right? Of personality mirroring. Right, whatever

Andy: Oh, yeah, yeah.

Mon-Chaio: your client does, you do the same thing, right? If they slouch in their chair, you slouch in your chair, right? If they’re upright and gregarious, you’re upright and gregarious. But I think it’s interesting, because earlier on, right before we were recording, we were talking about you saying you go to a lot of your clients and you show them the data that structured interviews produce better signal than unstructured interviews. And yet, and Andy and I see this in a lot of topics we cover as well, right, people will say, yeah, yeah, that makes sense. But then you ask them, okay, so what would you change in your behavior in interviews? And they say nothing. So how do we make sense, sensemaking, how do we make sense of all of this? The fact that the research clearly shows structured interviews are better, but people still have this affinity for unstructured interviews.

Edward: Yeah. Well, I mean, it’s part of the sensemaking process itself. If you’ve been doing this throughout your entire career, and not only from an interviewer standpoint but as an interviewee, you’ve been involved in lots of unstructured interviews. To you, this is all you know, and this is all that makes sense.

And you’ve told yourself the story over and over again that this is what makes sense. You know that somebody is a good candidate because you can talk to them, right? You understand what’s important. You can get the information out of them. Even if all the data tells you otherwise, your lived experience tells you: no, I am a good interviewer, I know what makes for a good interview, and I can tell when somebody is good for a job.

But there’s no real connection back to how did that person perform in that role. Because of the long lead time between these things, it’s really difficult to connect them back together. And there’s so many other variables that are at play. So, you know, I might hire somebody, they work out well for the first three months because they’re training.

Then you start to see some gaps in their skills and you think, well, maybe they’re having a hard time. And then they’re really not performing well and you think, well, they just need some more training. And then eventually at some point you say, well, they’re not really working out. Most people don’t connect this all the way back to, I hired the wrong person.

So they don’t get that feedback loop to say, I’m not doing a good job. I’m not really being able to predict whether or not somebody is going to be successful in a role. So they have this whole narrative that they’ve told themselves that they’re good at this stuff. They understand what makes for a good candidate and what doesn’t.

And then their feedback loop is so long that they can’t actually use that to call into question any of the things that they’ve done in their career when it comes to selecting candidates.

Mon-Chaio: And I think, problematically, when they look back on it, they have their own biases around, well, you know, this was an exception case, because, as Ed, you were saying, they were great for the first three months, and that showed that I interviewed the right person, but then something else happened.

But luckily for us, I think, Andy, there has been research, right, done on unbiased connections between interviews and job performance. I think that’s something that you found.

Andy: Yeah, so, to your point, Ed, about it being hard to make this connection, I don’t know how much research there actually is on measuring job performance versus the way someone was interviewed. I didn’t find much on that, but I did find at least some thinking along these lines from the researchers in this field.

And one of the things that they brought up in a paper that I have, called Stubborn Reliance on Intuition and Subjectivity in Employee Selection, is that we kind of work with this belief that 20 percent of your ability to do the job is technical competence, and 80 percent is psychological factors. And so in some ways, kind of like, you test this technical competence, and then you get at a lot of these psychological factors with this soft interviewing technique, this very unstructured interviewing, to see, do I get along with this person? And I would have to say, until I started reading this stuff, I would have been like, well, yeah, that sounds about right. But what they said is, based on research that’s been done around this, without citing any particular finding as far as I could tell, it’s more like 20 percent technical competence, 10 percent psychological factors, and 70 percent randomness. Randomness being, in scientific speak, we don’t know how to explain it. It’s not like someone’s rolling the dice. It’s purely just, we don’t know how to explain this.

Mon-Chaio: Oh man, and if those people hated being told that they didn’t know or were unreliable interviewers in the first place, imagine telling them that they’re not only unreliable, but they can only account for 30 percent of the reliability even if they were completely reliable.

Edward: Yeah. I mean, I don’t know about the percentages, but I’m just going to go ahead and agree with you anyway, because I’m trying to make sense out of this. So sure, it feels right to me. I think that there’s just something about how human beings relate to one another that becomes really important when you’re having a conversation.

So, you know, 70 percent randomness seems like a fair enough estimate,

Andy: Yeah.

Edward: yeah, and some of that I think is valuable. Some of that is really about, do I personally want to work with this person? Am I going to relate to them well? And I think that in certain circumstances, it makes a ton of sense that you want to do that.

You know, you don’t necessarily want to hire somebody that has all the skills that you need to do a job, but that is going to make you miserable.

Andy: That in the first five minutes of meeting them, you knew that you were just going to be fighting with this person every single day.

Edward: Yes. Yeah.

Andy: So I think what we’re saying is that it’s not that you have to completely ignore those kinds of unstructured things, but that they shouldn’t be all of the weight. They shouldn’t be everything that’s determining what you’re doing here.

Is that right?

Edward: Yeah, that’s exactly right. Because when you’re creating a team of people that’s going to interact with one another, you have to have some of those soft skills, right? And you know, if you want to count likability as a soft skill, which I think is fair, then yeah, that’s super important. You can get a little more structured about gauging likability.

If

Andy: Perfect. I was going to ask you, how do you get structured about those kinds of things?

Edward: So, likability is a tough one. I mean, there are some others, like agreeableness, which is sort of related to likability. You can ask some questions that are kind of targeted towards that, you know, conscientiousness. So there are soft skills that you can get more structured about.

But it’s, it’s squishier,

Andy: Yeah. Would you use something like the Myers-Briggs inventory or the Big Five? Would you go down that path? How would you do that kind of thing?

Edward: I mean, for me personally, I think the Big Five has the most data behind it. Things like DISC or Myers-Briggs have been shown to be really random. You know, they’re highly dependent upon your mood, like the day you are taking the test. So,

Andy: And probably reactivity, too. You kind of know what the purpose of this is as well.

Edward: Right. So if you’re taking a Myers-Briggs test and you’re taking it just for your own amusement, right, you want to know what sort of Myers-Briggs personality type you are, you’re not going into it with any preconceived notion of, I need to get this right. But if you’re taking this knowing my job depends on this, you are going to be the most agreeable, the most outgoing person you could possibly imagine.

Because you know, right, that your livelihood potentially depends on this. So, yeah, it’s really hard to gauge these things. Even when you’re more structured about these things, it’s tough. You want to avoid leading questions, right?

You don’t want to prompt people to the correct answer. But when it comes to things like likability and agreeableness, nobody’s going to think, oh, I should show up as disagreeable.

Andy: Yeah. And this gets to something about how you design these kinds of interviews. And this is a big topic in the social sciences: how do you design these things to take into account that social desirability bias, where people know that you’re not supposed to go into a job interview and have everyone think that you’re a dick.

Edward: exactly.

Mon-Chaio: Well, and even Andy, we just did an episode that says extroverts make better employees, right? So, if you’re doing an interview,

Andy: Well, better leaders.

Mon-Chaio: Some of the research also shows it correlates with employee performance reviews outside of leadership as well. So yeah, either way, leadership or not, if you’re interviewing to be a leader, you want to try and be as extroverted as possible for those eight hours, right?

But Ed, it sounds like you’re saying that you feel like psychometrics have a space in the interview, what do we call it, panel.

Edward: Yeah, definitely. You’re not going to be able to... I mean, one of the biggest predictors of a high-performing team, I don’t know if you guys have gone over this or not, but it was Project Aristotle, that Google did. And they found that psychological safety is the biggest predictor of a team being high performing.

Andy: Sadly, most of the information about Project Aristotle seems to have disappeared from the internet.

Mon-Chaio: Really?

Andy: The only stuff left is the Japanese page. And the hypothesizing, the theorizing I saw about why that is, is that Google is so fractured that just the Japanese team has happened to keep it running and no one else cares.

Edward: Yeah, right. I mean, that’s a great example of sensemaking right there,

Andy: Yeah.

Edward: because, because we don’t know why

Andy: No one knows why, but the Japanese page is still up. That’s all we know.

Edward: Yeah, I do not have the ability to read Japanese. So, I have, and actually, I’ve never even come across that. I’ve looked for some of the data behind Project Aristotle, never found it. The only

Andy: They never published it, which is sad, because so many of us reference it, and it sounds like such a good outcome.

Edward: Right. The best information I found on it was a New York Times article that was published shortly after it was released. And I think that’s kind of how most people know about it. And so we’re just sort of going on the author’s conclusions, without any data to be able to reproduce this.

And then the interpretations of the author’s conclusions that appeared in this New York Times article. So it’s the most tenuous plank you can possibly stand on when it comes to,

Andy: But we’re going to stand on it because

Edward: I’m going to stand on it anyway. Because I like the idea, right. Like my own personal experience, I think, has

Andy: And it does fit with other research. There is other research in other disciplines that psychological safety is very important. Now, also, Aristotle had other findings, but unfortunately, this is the one that stuck around. It was, I think, the most important, but they had other things as well that were also important.

Edward: Yeah, and we have no idea the weight of these things, or the correlations. Like, it might have been the single biggest predictor, but, you know, it was barely more predictive than anything else.

Andy: Yeah.

Edward: So we really just don’t know. But I’m gonna hang my hat on that anyway.

Because I need something to go on when I’m trying to build a high-performing team. And you know, in my experience, having done this for lots and lots of years, teams that do feel psychologically safe, and by that I mean teams that I see openly questioning one another, having open debates, feeling like they can get things wrong

and go back and fix them, tend to perform better than teams that don’t.

Andy: Yeah. And in reading the research on this and asking that question, why structured interviews? What is it about structured interviews? It was a couple of things, and I’m going to add one that I think is in there that they never added.

So one was that they said standardized questions “should therefore result in more consistent measurement of applicant qualifications across multiple interviews than should unstandardized questions”. So if you’re doing these interviews, you can say, oh, we’re getting this consistent read of what they’re doing.

And “it should provide a more job relevant sample of applicant performance that interviewers can evaluate the information of”. So you can say, okay, we’re interviewing for a software engineer in this team that needs these kinds of skills. Let’s check that. But on to your learning thing, and I think this is part of the pushback that I feel when I first start hearing, oh, standardize all these questions: but they won’t capture everything. And the answer is, yes, absolutely, they won’t, especially at the beginning. But by having them structured, I can now go back and look at them and say, how can I improve these? Whereas if I’m working with this unstructured setup, where every interviewer goes in and does a different thing, asks a different question, I have no control over how I improve this process.

Edward: Yeah, exactly. When you start asking structured questions, you can start calibrating those questions. Right. So if you have a question bank, you can say, this question is designed to elicit a response that’s going to tell me something about this person’s character in this way, or their skills in this way.

If you’re finding that you’re not actually getting a good signal out of that, you can go back and change that question. Right. So this is one of the things that I counsel my clients to do: create a continuous improvement process. So every three months or so, however often you’re doing hiring. You know, for some folks, if they’re hiring two, three people a year, every three months doesn’t really make sense.

But if you’re a large organization, you’re hiring lots and lots of people, and you’re hiring them constantly, go back every three months and look at the performance of people and say, how well did they do in this particular area? And if they weren’t really doing as well as we predicted they were going to do based on their interview results, then modify that interview question.

You know, go back and look, think, are we really collecting the relevant information for this? If we are collecting relevant information, if the question is okay, is it the rubric that’s wrong? There are ways that you can go back and modify these things to get better at this stuff over time.
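As a rough sketch of that calibration loop, the snippet below compares each interview question’s rubric scores against a later performance rating and flags low-signal questions for review. The data shape, field names, and the 0.2 cutoff are assumptions made for illustration, not a prescription from the episode.

```python
# Hypothetical calibration check: did each interview question actually predict
# later performance? Field names, data, and the 0.2 cutoff are illustrative.
from statistics import correlation  # available in Python 3.10+

# One record per hire: normalized rubric score per question, plus a
# performance rating collected later (say, at the six-month mark).
hires = [
    {"q_project_leadership": 0.9, "q_system_design": 0.6, "performance": 4.1},
    {"q_project_leadership": 0.4, "q_system_design": 0.8, "performance": 3.9},
    {"q_project_leadership": 0.7, "q_system_design": 0.3, "performance": 2.8},
    {"q_project_leadership": 0.5, "q_system_design": 0.9, "performance": 4.5},
]

for q in [k for k in hires[0] if k.startswith("q_")]:
    r = correlation([h[q] for h in hires], [h["performance"] for h in hires])
    verdict = "revisit question or rubric" if r < 0.2 else "keep"
    print(f"{q}: r={r:+.2f} -> {verdict}")
```

In practice you would want far more than a handful of hires before trusting any correlation, which is why Ed suggests this mainly for organizations hiring constantly.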

Mon-Chaio: Well, and I want to go back to the fact that we’re getting 30 percent signal and 70 percent randomness, right? I think that’s important not to lose track of. Because, yes, you want to improve that 30 percent. But saying, well, I can’t cover the other 70 percent, therefore I should try to do all of these other things which are not indicated as being good or relevant, I think is problematic.

Right? And so, yeah, you get a very narrow 30 percent of signal, but you should try to get the best signal that you can out of that.

Andy: Another side of that as well, I think, going to the people who want to keep doing these unstructured interviews, feeling like, but maybe I can cover that 70 percent through an unstructured interview. Now, there was a really fascinating article that you sent through to us that had them experimenting with the different ways in which people get information and their ability to predict something.

In this case, it was predicting a student’s grade point average. And the most fascinating thing from it, for me, was that people were more willing to take random information, literally random information, and use it to make this prediction, than no information.

Edward: Even when the results were roughly the same or even slightly worse, you know, they still preferred that unstructured information. And for anybody who’s listening who’s wondering just how random it was, I think it was, they took the last letter

Andy: The first letter of the last two words.

Edward: Of the last two words, and would answer one way or the other based on that. So there was sort of an even distribution of randomness.

Andy: And to be clear, for people listening, because I reread this just to understand what it was that they were doing: the setup was that you had an interviewer and an interviewee, and the interviewer needed to make a prediction about the GPA of a particular quarter for the interviewee. And they could ask yes-no questions. Now, in one setup, they were supposed to answer everything truthfully, yes or no, or this-or-that questions. Are you in computer science or are you in IT? Oh, I’m in computer science. Then they had another setup where they had to answer the first 10 minutes of questioning truthfully, and then after a break, the second 10 minutes of questions they had to answer randomly, using the scheme of: take the question, take the first letter of the last two words of the question, and then they had this way of selecting do you answer yes or do you answer no. And they showed in there that, yes, this creates a distribution that is approximately equal yes and no. And then they ran this with a bunch of people. And then the third condition was they just gave them essentially some demographic information about this person and a prior GPA. And they said, predict the GPA of this student for this quarter. And here’s the syllabus of what they were doing, or here’s the curriculum of what they were doing.

Here’s of a couple of things. And yeah, and they tried this in, I think they did three different experimental setups to kind of take the reason you do this in these studies is you want to take here’s the result of one, but this could have this as an explanation. So we’re going to check it this way that would get rid of that as an explanation.

And so they did three different studies to cover a couple of different explanations. And basically, in the end, it was that people were happier to get completely wrong, random information than no information, even though with no information other than that demographic stuff, they could predict best.

I think it was. I think that one allowed them to predict best.

Edward: yeah

Mon-Chaio: So,

Edward: random.
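For anyone curious about the mechanics, here is a small sketch of a random-answering scheme in the spirit of the one described above: derive a yes or no deterministically from the first letters of the question’s last two words. The episode doesn’t spell out the exact mapping rule the study used, so the rule below is an assumption for illustration.

```python
# Illustrative "random answer" scheme: derive yes/no from the first letters of
# the last two words of the question. The exact mapping rule is assumed here.
def random_style_answer(question: str) -> str:
    words = question.rstrip("?.!").split()
    last_two = words[-2:]
    # Count how many of the two words start with a letter in A-M.
    early = sum(1 for w in last_two if w[0].upper() <= "M")
    return "yes" if early % 2 == 0 else "no"

print(random_style_answer("Are you majoring in computer science?"))  # no
print(random_style_answer("Do you live near the main library?"))     # yes
```

Because the answer depends only on arbitrary features of the question wording, it carries no information about the student, which is what made the interviewers’ continued confidence in it so striking.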

Mon-Chaio: so hire based on resume reviews, right, I think is where we’re leaning toward. But no,

Andy: really, because a resume is not a GPA, it’s not a syllabus.

Mon-Chaio: But I think people fall in love with what they know, right? So a piece of research that I found was around employee performance. This was actually pulled from something I read when we were talking about perf reviews back in the day. And they said that some of the stuff that correlates highest with employee performance across a broad set of industries is stuff like altruism, conscientiousness, sportsmanship, courtesy, civic virtue, right?

And so, you know, when you talk about, well, this is 20 percent technical, 10 percent psychological, 70 percent random, telling people that they should be measuring for altruism instead of an unstructured, what did you do in your last project, I don’t think it computes for most people.

Edward: Yeah, no, it definitely does not. When I go in and I start talking to clients about this stuff, their first reaction is just sheer disbelief. It’s so alien to everything that they’re used to that it’s just hard for them to grasp onto this stuff. And this goes back to the whole idea of sensemaking: the stories that not only have we told ourselves about the interviewees, but the stories we’ve told ourselves about the interview process, right.

It feels true, and so therefore to them it is true. And when you confront them with this data, they don’t change their minds. I will say that is no longer my approach. I do not go in and confront people with data, because it doesn’t get us to where we want to be. So telling them stories and sort of leading them along this path, helping them to do a little bit of exploration on their own, that gets them a little bit further down the path. There’s also a bit of trust that they have to extend to me, to say, hey, you’ve done this a whole lot. You’ve done a lot of research into this. I’m going to trust that you’re going to help us get this stuff better. But yeah, it’s really hard to get people to give up on some of this stuff.

Some folks will just say if you can reverse a linked list, you must be able to do this job.

Andy: in constant space,

Edward: in constant

Andy: only if you can get that algorithm.

Edward: Right.

Mon-Chaio: And no hints, by the way.

Edward: There are some reasons behind that, right? When you’re conducting a structured interview, you don’t want to be giving hints to some people and not to others.

Andy: Absolutely. Yeah.

Edward: So that’s going to skew your results. So giving hints, like, the absence of giving hints makes a lot of sense, and I think that’s probably where some of this stuff came from. But if you are, you know, asking somebody a question about, you know, can you design an online poker game for me, but you’re evaluating them on whether or not they included authentication, it’s going to be really hard without giving them some sort of hint.

Mon-Chaio: And it’s even a little more insidious than that, as I’ve seen. I’ve seen rubrics where it says, look, this is what you ask, and then if they haven’t gotten this in 10 minutes you can give a hint here, and then in 20 minutes you give a hint here. That seems fairly structured, but wouldn’t you know it, when you go into review?

It would be, oh, well this candidate got here without needing a hint. So, you know, A plus, right? But I got to 10 minutes and I had to give this candidate a hint, so A minus.

Edward: And the thing that’s really insidious about this whole thing is that there’s an anchoring effect. So when you have this thing that feels like a piece of hard data, that’s going to skew all of the other, softer data. So once you think, well, this person scored an A minus, they got an 82 on this particular thing.

And our cutoff score is only 60, so this person is great for this role. So we’re going to ignore all of the other information that we have about whether or not this candidate is good, because we got this one score that says this candidate is good. That’s how heavily we weight things that seem, you know, structured and objective.

Even if they are truly not objective at all.

Andy: Yeah. And that goes to the stuff that I was finding, which is that you don’t get to stop at structured interviews. You don’t get to stop at just structuring and standardizing your interview. You actually have to continue on. You have to standardize your evaluation.

Well, this person I had to prompt at this point, so they get nine points rather than 10. Something like that.
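A brief sketch of what that standardized evaluation and mechanical combination of scores might look like: each interviewer records per-question rubric points, and the decision comes from averaging those numbers rather than from an overall impression in the debrief. The interviewer names, point values, and cutoff are hypothetical.

```python
# Hypothetical mechanical combination: average standardized rubric points
# across interviewers and questions instead of debating overall impressions.
interviews = {
    "interviewer_a": {"design_question": 9, "project_question": 7},
    "interviewer_b": {"design_question": 8, "project_question": 6},
}
MAX_POINTS = 10
HIRE_BAR = 0.70  # assumed cutoff, ideally set per role from past calibration

scores = [points / MAX_POINTS
          for per_question in interviews.values()
          for points in per_question.values()]
overall = sum(scores) / len(scores)
print(f"overall={overall:.2f} -> {'advance' if overall >= HIRE_BAR else 'do not advance'}")
```

The details matter less than the principle: the numbers are combined by a rule agreed on in advance, which is what keeps the anchoring and sensemaking out of the final call.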

Mon-Chaio: What do we take away from all of this?

Edward: There’s perfect signal and then there’s good signal, right. And we’re never going to get the perfect signal, right? There’s just way too many variables. You know, we’re looking for slightly better correlation than a 50-50 chance.

Right. And this is a hard thing to sell to folks, but it does ultimately turn into real dollars, right? One of the biggest selling points that I have is that a single bad hire can cost an organization literally hundreds of thousands of dollars. When you think about the amount of money you spent on their salary, the time you spent training them, the, you know, increased unplanned downtime that you’re going to have if they’re a particularly poor performer, there’s a lot of hard costs that go into this. If you avoid even one of those, it makes a ton of sense for you to just slightly improve your interviewing techniques.

Andy: And it needs to be tied to the job that they’re going to be doing. One of the

Edward: Ideally,

Andy: Yeah, one of the things that is in this, the practical implications of this analysis around interviewing, was that they had you standardize your questions and do what they called mechanical combination. But they also said job analysis: analyze the job you’re going to be asking them to do and make sure the interview is connected to that.

Mon-Chaio: It feels like that doesn’t have to be said. It’s funny that they have to say it.

Andy: Oh, I think it has to be said.

Edward: It has to be said, 100 percent has to be said. I think none of the clients that I have worked with so far have come to me and said, we know exactly what this role is and what this role will need to do, not only now but in the future. Zero percent of them have actually gone through that exercise.

Andy: Just the number of places that I’ve seen, usually pretty young startups, where they want to hire software engineers and the initial manager or CTO is like, I just need a 20-minute conversation with them and I know if I want to hire them. I don’t need them to write any code. It’s like, so their job doesn’t include writing any code?

Edward: Yeah. I mean, if their job was just having 20-minute conversations with the CTO, that’s great, right? That’s about as close to perfect signal as you can get. But yes, so many people do this. So many people ask questions, going back to the sort of LeetCode question, right?

Andy: I was going to ask you about LeetCode. Where does LeetCode fit into this?

Edward: It doesn’t. Yeah, I think that LeetCode is one of these things that feels structured, it feels objective.

Andy: Yeah, it absolutely does. Because there’s a yes or no answer in the end: my code ran, and it ran within the bounds of the

Edward: Right. But there’s this notion of relevance, right? So you can ask an objective question, you can get an objective answer, but if it’s not relevant to the job at hand, it’s pretty useless for you. Right, like, I can ask all sorts of things. I can say, you know, please solve the Riemann hypothesis. And if they do, that’s great.

But you

Andy: you get a paper out of it.

Edward: You get a whole paper out of that. You’re probably going to get a Nobel prize in mathematics for that. But does it mean that you’re going to be good at the job you’re interviewing for? No, it doesn’t. You know, you’re a smart guy, you can do things, but,

Mon-Chaio: So I guess this is the death knell of that classic, it may even be an urban legend, the classic Microsoft question of how many windows are there in New York City?

Edward: Yeah, I don’t know that that’s been definitively proven to be a poor predictor, but I would say that’s classic sensemaking. You ask somebody this question, you’re expecting a certain class of answer, right? Like, how do they go about estimating these things? Do they ask things like, you know, does that include car windows?

Do you have to account for the average number of cars, right? So they’re looking for all of these things, and they’re trying to make sense out of that stuff, right? Like, did they ask all these questions, but really it’s more about familiarity with the question.

I can ask all these things because I’m familiar with the questions. You know, I sat down and I thought about it. I was like, oh yeah, how would I estimate these things? Right. And I delved pretty deep into it, but if somebody sprung that on me for the very first time, I’d be like, I don’t know, lots, a great many windows.

Mon-Chaio: And the last thing on my mind, Andy, I don’t know if there’s other things that you were interested in talking about. The last thing on my mind is kind of provocative, getting back to what I sort of alluded to at the beginning: should we be interviewing at all?

Andy: just hire and see how it goes.

Mon-Chaio: Yeah, or should every job really just be a three-month contract and see how it goes?

Like, is that the best signal we’re going to get?

Edward: Yeah, so work tryouts are actually really great predictors of performance. But you know, these are still human beings and their lives we’re talking about. So hiring them and being like, well, it didn’t work out, you’re fired after three months, is, one, pretty bad for the human being that has to go through this.

But it’s also a super expensive way to hire people. You get great signal out of it, but unless they’re working for you for three months for free, you’ve spent a lot of time and energy and money in working with this person and really training them for three months, you know, to then just let them go.

Andy: And the impact on the team around them. You’ve brought them in, you’ve incorporated them. They’ve started structuring their processes around this person. And then three months later, it turns out, ah, it’s not quite working the way we wanted and they’re gone. It’ll happen. You should accept that that’ll happen.

But if you build your entire process around it, it’s happening constantly.

Edward: Yeah, you’re never going to get a high-performance team. But to answer your question about, should we bother interviewing at all, the answer is yes, right. Because you can increase your likelihood. You can’t get to a point where you’re like, we know definitively this person is going to be able to do this job.

But if you can go from, you know, a 10 percent chance of having a good hire to a 20 percent chance of having a good hire, even something that small makes a huge difference, especially over time. So yes, interviewing definitely makes sense. But you really need to figure out what’s your ROI on your interview process, right?

Because you can get to something that’s more and more and more predictive, right? You can put somebody through, you know, eight hours of interviews. You can put them through three months of interviews, right? And you’re going to get closer to being really predictive of whether or not they’ll be successful.

But you spent an inordinate amount of time and energy figuring this out. So like what I do is I really try to help people find the balance between how predictive they need to be and how much time and energy they want to spend on being that predictive.
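As a back-of-the-envelope illustration of that ROI argument, using the figures mentioned in the conversation (a bad hire costing on the order of hundreds of thousands of dollars, and a good-hire rate moving from 10 to 20 percent); the hire count and the exact dollar figure below are hypothetical.

```python
# Rough expected-value sketch; cost per bad hire and hire count are hypothetical.
cost_of_bad_hire = 200_000        # dollars, "hundreds of thousands" per the episode
hires_per_year = 10
p_good_before, p_good_after = 0.10, 0.20

bad_before = hires_per_year * (1 - p_good_before)   # expected bad hires today
bad_after = hires_per_year * (1 - p_good_after)     # after improving the process
savings = (bad_before - bad_after) * cost_of_bad_hire
print(f"expected annual savings: ${savings:,.0f}")  # $200,000
```

Weighing a number like that against the extra interviewer hours a more structured process costs is the balance Ed describes helping clients find.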

Andy: Perfect. And I think at this point we’ll try to wrap up. So you can find Ed at gordianknot.company, or we’ll also put a link in our show notes along with links to all sorts of papers that we referenced. And just to try to wrap up the tactics at the end, let me see if I can get this, and you guys can agree or disagree with me. So it’s: structure your interviews. Structuring will lead to better results. Figure out why you’re interviewing and what you’re interviewing for, and then reduce the bias going into the evaluation of those structured interviews by having a rubric, having a way of analyzing it, as well as combining those analyses together. There’s probably a lot more that we had in there. Anything I’m missing?

Edward: There’s a lot more that you can do when it comes to hiring, but I think for this particular topic of sensemaking and boosting the statistics of hiring well, that’s pretty spot on. You really want to do those three things, and that’s just going to increase the likelihood.

And even if these are the only three things that you do, they’re probably going to be the three most impactful things you do to get better signal out of your hiring process.

Andy: All right.

Thank you for a great conversation, Ed. It was really enjoyable. I’d love to get you back on sometime. And Mon-Chaio, I hope that you’re feeling a bit better from COVID.

Mon-Chaio: Mm, mm

Andy: Hopefully by next time we record. And until next time, for you all, be kind and stay curious.

