How to Make Tech Interviews Suck Less
Interviews for tech jobs can be performative at best and exclusionary at worst. Candidates often report they are left feeling anxious, on display, judged — and set up to fail. Exercises meant to gauge a developer’s technical prowess can seem arbitrary and irrelevant to the role being filled.
Hiring processes often accentuate the worst sides of the tech industry. A bias toward pedigree — four-year computer science degrees, work experience at a FAANG — excludes candidates from non-traditional backgrounds. Whiteboard tests, a mainstay of developer interviews, favor the most charismatic, not the best person for the job. Making candidates jump through lots of hoops — all while unpaid — can push those under-represented in tech to drop out mid-process, as many do at alarming rates.
The tech job market struggles with massive talent gaps, and with diversity, equity and inclusion. So it seems odd that recruitment processes work so hard to strip the power and control from candidates.
Post-pandemic, lots of developers will be leaving their jobs and seeking new ones: 48% of technologists surveyed in April by Dice, the technology careers website, said they expect to change employers this year.
As your company seeks to fill new or vacant jobs, how it conducts those interviews matters. Not only to that single recruitment process but to the next ones as well. Because people talk. And they post their thoughts on Glassdoor. So it’s important to try to hire the best person for the role and your team, but it’s also important to make sure those you don’t hire have a positive experience.
Just like everything else in the tech industry, interviews are ripe for continuous improvement, measurement, and quality control. Let’s look at some ways you can make talent screening safer and more successful for all involved.
Are Whiteboard Tests Necessary?
You’re interviewing for your dream job. Suddenly, you find yourself standing at a whiteboard, in front of people you’ve maybe never even met before. Maybe you didn’t understand the question. Maybe you don’t have the answer — or you do, but right now you’re too sweaty and shaky to remember it.
As Linux programmer Craig Maloney has written, tech interviews — and especially whiteboard tests — can make a job candidate feel like they’re being hazed.
“In many ways, technical interviews are nothing more than ways to put a candidate out of their comfort zone and see how they perform under pressure,” Maloney wrote. “Which is great if your job demands it. However, I would argue that timed tests, whiteboard challenges, and logic problems don’t actually test for how good a programmer is; they only test for how well a programmer has practiced those sorts of challenges.”
Whiteboard tests generally involve solving a short algorithmic problem, like creating a linked list or a bubble sort, Crispin Read, CEO of The Coders Guild, a UK tech apprenticeship program, told The New Stack. This comes with its own inherent problems, he said, like asking for something that isn’t related to the actual job being filled.
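For context, the exercise Read describes usually looks something like the following minimal bubble sort, a sketch of a typical textbook problem, not any specific company's actual interview question:

```python
def bubble_sort(items):
    """Classic whiteboard exercise: repeatedly swap adjacent
    out-of-order elements until the list is sorted."""
    items = list(items)  # work on a copy; don't mutate the caller's list
    for end in range(len(items) - 1, 0, -1):
        swapped = False
        for i in range(end):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
        if not swapped:  # no swaps means the list is sorted; stop early
            break
    return items

print(bubble_sort([5, 1, 4, 2, 8]))  # → [1, 2, 4, 5, 8]
```

Memorizing this sort of routine is exactly the kind of rehearsed recall Read argues tells you little about day-to-day engineering ability.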
“We are never going to need to write code on a whiteboard other than during an interview. The environment and pressures that you are testing are nothing like the work environment, so cannot be a good measure,” he said.
Besides that, he added, whiteboard tests don’t screen for core problem-solving or communication skills, nor do they bring out candidates’ necessary curiosity or ability to learn. “You’re not looking at someone’s capability as a problem solver. You are looking at how well they can remember syntax and research interview problems.”
Finally, Read pointed out that whiteboard tests come with cognitive and pedigree bias baked right in: “This method is the absolute enemy of achieving cognitive diversity in problem-solving. It strongly favors people with formal academic computer science backgrounds. If the tests that you are doing favor a particular group of people, you’re just going to hire more of those sort of people, and your problem-solving capabilities are reduced.”
On the other hand, whiteboard tests — especially when done remotely on a tool like Miro — are often used in backend engineering, site reliability engineering, and chaos engineering, according to Gremlin’s Tammy Bryant Bütow. SREs often use them for chaos game days, she said, drawing systems diagrams to talk through different failure modes and hypothesize about weaknesses.
“I think it is a good skill to help people explain their ideas if they are visual learners. Some people struggle to verbalize their answers during an interview,” Bryant Bütow said.
Of course, she makes sure candidates understand how to whiteboard before testing them on it, running her own whiteboard prep sessions when needed before interviews. She’s also a fan of the interview prep sessions run by Women Who Code.
“I always want to set everyone up for success,” Bryant Bütow said. “If I’m going to ask them to draw an architecture diagram on a whiteboard for an interview, I’d like them to know I expect that in advance.”
If you ask a job candidate to write code, at least use what tools they’re comfortable with, suggested Angelique Weger, a frontend lead and manager, in a blog post. “If you ask a candidate to write code, they should be able to do so in a familiar code editor with the tooling they’re accustomed to.”
So logically any interview process — especially nowadays when many new remote tools are involved — should begin with asking what candidates may need to succeed.
Rather than making candidates sweat it out at the whiteboard, some companies give potential hires a short take-home assignment. Read advocates for this sort of test, which gives people the opportunity to complete it in their own time, in the comfort of their own homes, without the pressure of a timed interview. Then, the discussion during the interview should revolve around the candidate’s problem-solving process, not the code itself.
Another popular alternative to whiteboard tests is simply pair programming. It shows how a candidate works with a more senior member of your team. Then, the job seeker can explain their decision-making and coding processes in a later interview.
Questions must be based on scenarios the team has actually encountered, Read said. He describes these as high-level questions that exhibit an understanding of the frameworks or tools being used, in order to assess how a candidate would act as a communicator, problem-solver and team member.
“This opens up a two-way conversation, which gives you a good understanding of how a candidate is approaching a scenario, and whether they can solve the problem being presented,” he said.
Of course, again this approach has to be examined for bias because someone self-taught may not have been exposed to a certain framework. This is why giving questions or at least topic areas to candidates ahead of time to help them prepare is an inclusive recruitment best practice.
Organizations should check the rubrics they use to evaluate candidates for concepts that signal pedigree, warned Shannon Hogue Brown, head of solutions engineering at Karat, which conducts technical interviews for enterprises seeking to hire tech talent.
“Don’t have people go through the process and ask them a bunch of questions about concepts they may never have been exposed to. Ensure that you have the right structure in place. Define your competencies clearly,” Hogue Brown said during a panel this month hosted by LeadDev on how to fix so-called neutral hiring practices.
One of the biggest mistakes she sees is the hiring manager not getting buy-in from the potential teammates. Clarify with teammates ahead of time about the competencies that are actually needed for the vacant role, versus nice-to-haves. And have those same teammates answer the questions so the interviewer has an idea of how to score them.
For example, “If you’re looking for someone that should be optimizing code, make sure that you let them know and let the interview folks know that that’s what you are looking for,” she said during the panel.
Make this clear in the interview process, and spell out which skills indicate beginner, intermediate, advanced, and expert levels. Such clear expectations will give each team member what she calls “a defensible position” when the team gets ready to discuss the candidates, post-interviews.
You have to establish a hiring threshold, Hogue Brown advised. This helps take the natural human bias out of hiring decisions while also weeding out what she says are often “sub-par” internal referrals.
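As a rough illustration of what a competency rubric plus a hiring threshold might look like in practice, here is a hypothetical sketch; the competency names, levels, and threshold value are invented for illustration, not Karat's actual scoring system:

```python
# Map the skill levels spelled out in the rubric to numeric scores.
LEVELS = {"beginner": 1, "intermediate": 2, "advanced": 3, "expert": 4}

# Competencies agreed on with the team before interviews begin,
# each with the minimum level the vacant role actually requires.
REQUIRED = {
    "problem_solving": "intermediate",
    "communication": "intermediate",
    "code_optimization": "advanced",
}

HIRING_THRESHOLD = 2.5  # minimum average level across all competencies


def evaluate(scores):
    """Return True only if the candidate meets every per-competency
    minimum AND the overall hiring threshold."""
    meets_minimums = all(
        LEVELS[scores[c]] >= LEVELS[minimum] for c, minimum in REQUIRED.items()
    )
    average = sum(LEVELS[scores[c]] for c in REQUIRED) / len(REQUIRED)
    return meets_minimums and average >= HIRING_THRESHOLD
```

Because every interviewer scores against the same defined levels, each team member has the "defensible position" Hogue Brown describes when the post-interview discussion starts.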
The folks at Karat practice what is often called data-driven recruitment — which, in their case, includes 70 different scores for an interviewer to note. Then someone else on the Karat team watches a video of each interview to quality-check that scoring.
Structured Interviews: Predicting Behavior
Container Solutions, a cloud native consulting company, follows a very science-oriented, competency-based approach to hiring. Andrea Dobson, a psychologist and the company’s head of people, leads a recruitment team whose approach, she said, includes “evidence-based processes that have proven to be accurate and proven to be the best way of hiring for the job. It’s not about looking for the person that fits your team. It’s about hiring for the work you want them to do.”
The team’s confidence in its process is so high that it doesn’t do much CV or resume screening ahead of time, except for prerequisites for specific roles, like time zones. The Container Solutions team aims to eliminate pedigree bias by simply not allowing training, schooling or even — in many cases — job history to factor in.
First, the hiring manager identifies the core skills required. Then the hiring process begins with an aptitude or ability test, then a take-home API technical test “that actually really reflects the work that our engineers are working with our customers,” Dobson said. This is all followed by a personality test — an important gauge, she said, because “behavior is predictable.”
Finally, candidates undergo a structured interview, where every applicant gets the same questions and is scored against behaviorally anchored rating scales (BARS). Interviews don’t start until after the aptitude test round, and all applicants receive a pre-briefing where they are told which competency areas will be covered and which examples they should be prepared to highlight.
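A behaviorally anchored rating scale pairs each numeric score with a concrete example of the behavior it represents, so different interviewers score the same answer the same way. A hypothetical sketch (the anchors, question, and scale below are invented for illustration, not Container Solutions' actual rubric):

```python
# One BARS entry for a single structured-interview question.
# Interviewers pick the anchor that best matches the answer given.
BARS_TEAMWORK = {
    1: "Describes a team conflict but made no attempt to resolve it",
    3: "Resolved a disagreement, with help from a manager",
    5: "Proactively surfaced and resolved a team-level problem",
}


def score_candidate(ratings):
    """Every applicant answers the same questions; the per-question
    anchor scores are averaged into one comparable number."""
    return sum(ratings) / len(ratings)
```

The anchors are what make the scale "behaviorally anchored": they tie each point value to observable behavior rather than an interviewer's gut feel.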
“If someone has the skills and they are able to highlight that in the interview, they will get the job,” Dobson said. “We do the structured interview because they are better at predicting job fit, and we see that this is leading to better results.” While no selection process can be foolproof, she said, the Container Solutions team considers the positive feedback it has received from both successful and unsuccessful candidates an important indicator that it’s working.
No ‘Gotcha’ Questions
As with all things, there is a risk that an overly structured process will exclude people. It’s vital to continue to revisit and reflect on not only your hiring process but how you’re measuring it.
“The number one mistake that concerns me, is, yes, we should have some type of structured rubric. But not having any nuance to it and not ever changing it, I think that’s a mistake because that’s the whole definition of a rubric,” said Preetha Appan, an engineering director at HashiCorp, on the June panel hosted by LeadDev.
“It takes a long time to refine rubrics and then you work on them as you go,” she said. “You also have to adapt them a little bit to the candidate’s background, the specific role.”
For her, that means shying away from a focus on “which maps to which data structure or which architectural design pattern. Make it more about: Did they demonstrate a learning mindset? Were they able to work through a solution? Were they able to talk through ideas about how they would finish it?”
Recruiters and hiring managers shouldn’t pepper candidates with “gotcha questions” or an excessive demand for knowledge, Weger wrote in her blog post on hiring.
“Avoid questions that can and should be resolved with a web search or a visit to MDN. Knowing stuff is definitely important to the job, but outright quizzing candidates isn’t as revealing as one might hope and adds extra stress into an already fraught process.”
Always work to make the candidate feel safe and like you actually want them to succeed.
a variety of interview options is helpful because it will help you find candidates who are awesome in/comfy in a variety of situations.
i suggest approaching technical interviews as though you and the interviewee are united to discover if you'll be happy and productive on a team together.
you want them to succeed! unless they wouldn't be happy here, in which case you want to help them find that out too!
— Charity Majors (@mipsytipsy) February 15, 2021
Don’t Expect Free — or Fast — Labor
Generally speaking, I find it unbelievable that companies have the audacity to expect us to dedicate hours of our time to their hiring process – to the point of asking us to submit a 500 word essay that will be then used to judge us. The solution is simple – simply reimburse me!
— Nabeel Khalid (@nubeals) June 14, 2021
This is a big one. If you are asking people to write code or strategy that can benefit your business, they should be paid for that intellectual property contribution. So consider compensating candidates for work done during the hiring process.
And if you are asking people to write code or strategy that isn’t directly related to the role you’re hiring for, then why are you even bothering?
“We don’t want to be asking folks to do free work. If you are giving an assignment, make sure it’s equitable, that it’s a fair understanding of whether or not that person could be working on your team,” Hogue Brown said during the LeadDev panel.
A candidate may have other things to do — whether it’s their current job or caring for children or other family members, a burden that disproportionately falls on those marginalized in tech — than handle an arbitrary take-home test that doesn’t reflect the actual job they’re applying to do. Don’t waste their time.
In Karat’s ongoing research, the company found that up to 70% of candidates from groups underrepresented in tech — including women — tend to drop out of hiring funnels once they get a take-home test.
It’s also important to give candidates a reasonable amount of time to complete their take-home tests, including at least one weekend. And always schedule a follow-up interview before giving them the take-home — they need reassurance that you will actually be considering their work.
“Just some random take-home test to see if somebody can code probably won’t give you what you want,” said Hogue Brown. “If it’s going to take more than an hour, you are filtering out candidates.”