Thoughts on Interviewing (and AI, Inevitably)
Defining what you're looking for; the importance of rubrics; a brief review of Talent; and what's wrong with candidates using AI?
I’ve been interviewed at many companies, and have interviewed dozens of candidates myself. The discrepancy in different companies’ ability to screen candidates is immense, which is surprising: you’d think such a common and critical activity would be well-understood, thought out, and fairly uniform.
It’s not. Companies that are good at it know exactly what they are looking for, and how to identify it in candidates. They run very structured recruitment processes, with rubrics that help interviewers rate candidates’ responses. Companies that are bad at it like to say things like ‘interviewing is more of an art than a science’, run chaotic processes, and have no formalised approach to rating candidates.
In this post, I’ll go through interviewing best practices I’ve observed.
You should know what you’re looking for
Phrased like this, this advice is obvious — of course you should know what you’re looking for, otherwise how will you recognise it when you see it? Yet, I’ve seen…
The good: companies that have a very clear idea of the qualities they want in their hires, and they specifically and methodically test for each one in the interview process.
The bad: companies that do test for specific qualities in their hires, but haven’t really bothered to think whether these are the main or only qualities they need in their workforce. For instance, some companies only test for raw intelligence, and completely overlook leadership potential or ability to collaborate — and are then surprised when they end up with a group of intellectual superstars who can’t organize themselves.
The ugly: and of course, companies that have no idea what to look for, and have different interviewers ask random questions (‘if you were an animal, what animal would you be?’)
It’s very hard to determine the absolute optimal list of candidate qualities that will help a company succeed — it will depend on the company’s culture, its industry, maturity, country, and many other factors. But companies should at least try!
Rubrics
Once you know what qualities you’re looking for, find questions that will help you gauge whether a candidate has those qualities — and, importantly, help your interviewers assess answers by designing rubrics. A rubric is a set of rules that helps interviewers rate candidates’ responses. Good rubrics have the following three characteristics:
First, they break down a high-level quality into easily testable components. For instance, ‘leadership’ is a good quality to look for, but too broad — a good rubric identifies specific elements that define leadership, such as the ability to set a vision and to assign roles and responsibilities to team members, or attributes such as self-awareness, authenticity, etc.
Then, they spell out what good looks like. For instance, a rubric might give guidance for a rating on self-awareness as follows:
Finally (and this is the part many people hate), a good rubric is formulaic — it spits out a score that determines the hiring decision. I remember at one of the companies I worked for, I sat through a two-hour training where the instructor kept talking about the importance of rubrics, how using them reduces bias, etc. He then walked us through the company’s rubric and showed us how to score candidates.
At the end, I asked, how does the score translate to a hiring decision? ‘Oh, it doesn’t’, was the response. ‘We want interviewers to have the flexibility to use their judgment — so they can make a hire / no-hire decision irrespective of the score’. 🤦‍♂️ IF YOU CAN DISREGARD THE SCORE BECAUSE YOU DIDN’T LIKE THE CANDIDATE’S ACCENT, HOW HAS THE RUBRIC REDUCED BIAS?
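To make the ‘formulaic’ point concrete, here is a minimal sketch of what a score-driven rubric might look like. The components, weights, scale, and threshold are all hypothetical, invented purely for illustration — the point is only that the score, not the interviewer’s mood, produces the decision:

```python
# Hypothetical rubric for 'leadership': weighted component ratings on a 1-5
# scale, combined into a single score, with a fixed hiring threshold.
# Every component name, weight, and the threshold are illustrative, not any
# real company's rubric.
WEIGHTS = {
    "sets_vision": 0.3,
    "assigns_roles": 0.2,
    "self_awareness": 0.3,
    "authenticity": 0.2,
}
HIRE_THRESHOLD = 3.5  # minimum weighted score for a 'hire' recommendation

def rubric_decision(ratings):
    """Combine per-component ratings (1-5) into a weighted score and a
    hire / no-hire decision. The decision follows mechanically from the
    score -- interviewers rate components, they don't override the outcome."""
    score = sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS)
    return score, score >= HIRE_THRESHOLD

score, hire = rubric_decision(
    {"sets_vision": 4, "assigns_roles": 3, "self_awareness": 5, "authenticity": 4}
)
# score = 0.3*4 + 0.2*3 + 0.3*5 + 0.2*4 = 4.1, which clears the threshold
```

The exact formula matters less than the commitment: if the score can be overridden whenever an interviewer ‘has a feeling’, the rubric isn’t doing the bias-reduction work it’s advertised to do.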
(By the way, this is why I disliked Cowen’s Talent, despite its rave reviews (which I regret to say, because I really like Tyler Cowen): the book gives a lot of suggestions on what kinds of questions to ask to gauge candidates’ creativity, but provides 0 guidance on how to evaluate answers — it suggests this is something some people are naturally good at, and others can develop with practice. Yes, but people who are naturally good at gauging creativity don’t need a book to tell them how to do it; and those who aren’t naturally good at it will be at a loss.
For instance, Cowen liked to ask ‘what’s your most absurd belief?’. Apparently, his favorite answer to this question is, ‘I believe if you go to the beach, but you don’t give the ocean a chance to taste you, she will come to take her taste when she chooses’. Like, what am I meant to do with that?? Why is this a good answer? Cowen doesn’t explain. And the problem with that is that some idiot will read Talent, will start asking these kinds of questions without knowing how to evaluate the answers, and will end up hiring people based on ‘gut feel’, whatever that is.
This is annoying because by all accounts, Tyler Cowen is good at evaluating creativity. So I’d have loved to see him break down his thought process and evaluation mechanisms in more detail — including thinking more about what ‘creativity’ is and what are good proxies for it.)
It’s fine if candidates know the questions in advance
Some colleagues and I were discussing these matters a while back; one of them said a problem with many interviews is that candidates know what to expect, and can rehearse their answers to perfection.
I don’t think it’s a problem if candidates know what questions to expect. The answers to good questions can’t be faked (I’m talking about competency-based questions here, not maths problems whose solutions can be memorized): if you’re trying to find out whether a candidate is self-aware, and they’re not, no matter how much they rehearse they’ll never pass the interview. The map is the territory: someone who gives a good answer almost definitely has the quality being tested (nothing signals high-brow more than a reference to Slate Star Codex and Infinite Jest).
Consider this question: ‘if you were to ask three groups of people — those you’ve managed directly, your own direct managers, and your peers — to rate you as a colleague, which group would rate you highest, which group would rate you lowest, and why?’.
There is no right answer here. In asking this question, I want to understand whether the candidate knows how they come across. Does their behavior change depending on the nature of their relationship with their colleagues? How and why might different groups of people rate them differently? Are they happy with the order of ratings they think they’d get?
Knowing this question in advance will help candidates get their thoughts in order, and that’s a good thing! If someone can reach self-awareness through more reflection, why shouldn’t I hire them? If they can reason about this question when they take their time, why should I not let them take their time?
A counter-argument here is that being able to structure your thoughts and give an articulate answer on the spur of the moment is in itself a skill that interviewers should be testing. Perhaps this is true for some very specific roles — but in general, most of us have plenty of time to plan our work. Advantaging charismatic people who can speak well off the cuff doesn’t seem very clever to me (see Boris Johnson).
I’ll grant you that there is a little nuance when it comes to problem-solving. On one hand, there’s nothing wrong with learning how to solve a particular class of problems: for instance, I don’t think it’s an issue that candidates for consulting firms learn how to solve case studies, since that’s what they’ll be doing for work anyway. On the other hand, it can be an issue if what you’re looking for is versatility in your employees: the ability to solve different kinds of problems. In this case, you probably want to be able to introduce entirely novel challenges to your candidates to see how they respond to them (although if what you’re testing is the ability to innovate, competency-based questions (‘tell me about a time you came up with an innovative solution to a difficult problem’) might serve you better — they will help you probe how the candidate thinks; and I’ll reiterate that if the candidate can’t think clearly, no matter how much they rehearse, they won’t give a convincing answer).
It’s fine if candidates use ChatGPT
Some employers are annoyed that candidates are turning to ChatGPT to prepare for the interview process, especially when candidates use it to solve take-home tasks.
I think this is an absurd objection. Look, the interview process is meant to test a candidate’s ability to do their job. If they can use AI in their work, why shouldn’t they use it in the interview? That’s like suggesting that analysts should use pen and paper instead of Excel, or that coders should use punch cards or assembly.
(By the way, this assumes that AI can complement the person doing the job; if AI can solve the take-home task entirely by itself, and the take-home task is a comprehensive test of the role’s requirements, then the company shouldn’t fill the role at all: it should just use AI instead!)
In other words: if the underlying skills required to pass the interview or solve the take-home task can be entirely substituted by AI, then humans don’t need these skills in the role — and so, the interview process needs to be reconfigured to test specifically for skills that AI cannot replicate. If that’s done properly, then it won’t matter if candidates use AI.
I’d love to hear counter-arguments to the above! You can email me at acatsambas@gmail.com, comment below, or find me on Twitter — and of course, you can subscribe to receive new stories.