Hiring Interviews are not Data Collection

I'll discuss a misunderstanding that I have sometimes observed in hiring interviews: the mistaken view that interviews are a form of data collection.

This misunderstanding shows up when interviews have several of these features:

  • Questions that are 100% scripted in advance

  • Interviews that are identical for different candidates

  • Questions that do not reflect a candidate's skills

  • Little or no time for introductory comments

  • Little or no time for candidate questions at the beginning or end

  • Rudimentary answers from the interviewer when a candidate asks questions

  • Interviewers who give most of their attention to typing notes

I don't deny that interviews yield "data" in the rudimentary sense of being information. The problem arises when interviews are treated as a way to collect comparative data, as if they were research observations.

This misunderstanding is well-intentioned. I've heard interviewers say things like, "I want the interviews to be identical so I can compare candidates fairly." It is no surprise that academically trained UX researchers may expect such practices to make interviews fairer or more scientific.

But interviews are not research! Instead, interviews are a way for a candidate and interviewer to engage in mutual learning to determine whether there is a good person/job fit, and they should be tailored for that purpose.

In this post, I'll explain why.

Small print and disclaimers: these are personal reflections after being involved in hundreds of interviews as an interviewer, hiring manager, committee member, and trainer of interviewers in previous jobs. They also reflect my pre-tech experience as a clinical psychologist. They do not represent any company's policy, nor do they reflect my current job, where, so far, I have not been involved in any hiring interviews except my own.


Interviews Involve Mutual Observation

One difference from research is this: a candidate observes an interviewer at least as much as the interviewer observes the candidate. Each of them is attempting to determine whether a particular position is the right fit.

Suppose a Quant UXR candidate's resume says they program in R. They show up to a programming interview and are grilled on SQL because that is part of the interviewer's script and the interviewer wishes to be consistent (or believes that SQL is "required" for data positions, a myth I will set aside for now).

What will the candidate think? They may think that the interviewer didn't read their resume, or wonder what it is that SQL can do that R can't (answer: for most cases, nothing), or wonder why SQL is so important (answer: it probably isn't very important, or they'd be hiring a database engineer).
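
To make that parenthetical concrete, here is a minimal sketch of the kind of aggregation that often appears in SQL screens, done directly in R. The sessions data frame and its columns are invented purely for illustration:

    # Hypothetical example: a typical SQL-screen aggregation, done in R.
    # The `sessions` data frame and its columns are invented for illustration.
    library(dplyr)

    sessions <- data.frame(
      user_id  = c(1, 1, 2, 2, 3),
      platform = c("web", "app", "web", "web", "app"),
      duration = c(12, 7, 30, 5, 18)
    )

    # SQL: SELECT platform, COUNT(*) AS n, AVG(duration) AS avg_duration
    #      FROM sessions GROUP BY platform;
    sessions |>
      group_by(platform) |>
      summarize(n = n(), avg_duration = mean(duration))

(For what it's worth, the dbplyr package can translate a pipeline like this into SQL for a database backend, which is one reason the distinction matters less than some interview scripts assume.)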

In this situation, the misplaced focus causes the interviewer to miss assessing the candidate's more general strengths. And the candidate will draw incorrect inferences about the position.

Even if the candidate "passes" the interview, the inaccurate assessment and inferences can be detrimental. The stronger a candidate is, the more likely they are to be bothered by such misalignment. In the worst cases, it may lead them to decline an offer.


Extreme Scripting is not Fairer or More Scientific

There is a simple reason that adhering to an interview script does not yield particularly fair or scientific information: the script almost certainly has not been conceived, developed, tested, or normed as a scientific assessment.

Here's how I believe interview scripts are typically developed: an interviewer pulls canned questions from a shared list or from their own archives, perhaps adjusts or adds a few, and then uses that script repeatedly in candidate interviews.

What's missing from that? Almost everything that would be done in psychometric validity assessment: definition of the construct being assessed; writing and testing proposed items; assessing those items against other measures (see my post on convergent and discriminant validity); and collecting normative information across the population (see my post about LLM "IQ").
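
To give a flavor of what even one of those steps involves, here is a minimal sketch in R of a convergent validity check, using simulated data (the variables and their scales are invented for illustration):

    # A sketch of one validation step: a convergent validity check on
    # simulated data. Here `established` stands in for a validated measure
    # of the construct, and `proposed` for scores from the new items.
    set.seed(42)
    established <- rnorm(100, mean = 50, sd = 10)
    proposed    <- established + rnorm(100, mean = 0, sd = 6)

    # A strong positive correlation would be evidence of convergent
    # validity; interview scripts rarely undergo even this basic check.
    cor.test(proposed, established)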

Without those things, a script is just that, a script. It reflects one interviewer's bias ... and applying that bias repeatedly to multiple people does not make it scientific.


Personalized Interviews Gather Better "Data"

Let's set job interviews aside for a moment and think about another situation where two strangers meet and assess one another: a blind date.

Imagine if you showed up to a date and the other person:

  • Asked only scripted questions

  • Asked questions that had little to do with you

  • Launched into those questions without small talk

  • Gave you almost no chance to ask anything

  • Barely answered when you did ask

Sound horrifying? It is!

Sound familiar? I hope not! And yet I made that list by copying the list at the beginning of this post, changing only the situation from a job interview to a date. So you've already read a longer version of it. Doesn't it seem more awful now than it did in the "interview" context above?

Think about what you would hope to see on a date: human engagement. The answer is the same in a job interview: human engagement!

Put differently, candidates are not question-answering robots to be probed with semi-automated testing. They are people: not only are they assessing you (as I noted above), but they will also respond better and more accurately when they are treated as people.

In clinical psychology, that is called "establishing rapport." A client and a therapist need to form a working relationship with mutual respect and engagement (or, in clinical jargon, a "therapeutic alliance"). Key parts of that include honesty, transparency, a reduction of anxiety, basic human warmth, and a focus on the needs of the person in front of you.

It is the same in interviews. They should be conducted to minimize anxiety, be as honest and transparent as possible, and focus on the parties' shared goal: establishing and understanding the fit between a candidate and a job.

Those goals are addressed when an interview opens with warmth and general discussion, and only then moves into scripted aspects. The scripted aspects should either be tailored to the candidate in advance (for instance, using R instead of SQL) or should be adapted on the fly to reflect the candidate's situation, skills, and interests ... and not a preconceived list.

When interviews are conducted that way, they will collect better and more ecologically valid "data" ... and they will be a better experience for the candidate. And that makes it more likely that they will say "yes!" when they get an offer.


Does That Mean "No Script"?

Not at all! It means that interviewers should adapt their scripts. And that means a bit of preparation in advance.

In particular, I do this:

  1. Start with a previous interview guide or suggested questions

  2. Adapt those to be questions that you find interesting as the interviewer!

  3. For a particular candidate, review their resume, portfolio, and other materials and adjust the questions to fit their qualifications, background, or interests. (BTW, if some skill or statement appears on a resume, it's fair game to be probed!)

  4. Start the interview casually, as noted above, to establish a friendly situation (as best as one can in a stressful activity like interviewing!)

  5. Make sure your script includes branches that can cover higher- and lower-skilled responses. If a candidate is struggling, there is no reason to persist with difficult questions; if they are acing everything, increase the difficulty.

  6. Once you have enough evidence, stop asking scripted questions and move to a more informal discussion. What questions does the candidate have? Give them a more person-to-person sense of what the job would be like.

This accomplishes multiple goals simultaneously: a better assessment of the candidate, higher-quality information, and a more appealing representation of the job and its culture.


For More about Quant UXR Interviews ...

I've written separately in other posts about the skills needed for Quant UX and Quant UXR programming interviews.

Stay tuned ... future posts will share more reflections and advice for Quant UX interviews (and UXR interviews in general).

Beyond that, in our Quant UX book, Kerry Rodden and I discuss interviews from the perspective of both candidates and interviewers ... including an appendix with evaluation rubrics for each of the core areas of Quant UX interviews.

Thanks for reading!