Validity of Interviewing and Selection Methods
by Mathea Kangas, MA in Industrial-Organizational Psychology
and John Vlastelica, CEO, Recruiting Toolbox
We want the best person for the job to get the job. Sounds easy, right? But you know it’s not that simple. We all have biases and assumptions about who will be best. The best person for the job might be someone without a fancy pedigree, or someone from a historically underrepresented group. They might not match the hiring team’s idea of a perfect candidate (which is one reason why the ideal target candidate profile is a myth that needs to end). So, what’s a hiring team that just wants to make the best decision (but is, like all teams, made up of biased humans) to do? To make the most informed hiring decisions and select the best talent, we turn to science.
Why isn’t intuition enough?
Why does science come into this? Well, we tend to think we’re good at knowing intuitively who the best candidate is. It turns out we’re often not great at predicting on-the-job success. We can be blind to our own biases, and that leads to sub-optimal hiring decisions. And that can lead to discrimination (and missing out on good hires), lower-performing new hires, and even – potentially – lawsuits… all bad things.
When hiring decisions are made primarily based on intuition and personal judgment, that leaves room for a whole party of biases and cognitive distortions. Scott Highhouse, an industrial-organizational psychologist, reviewed many studies on the effectiveness of hiring decisions, and found that people tend to overestimate their own ability to make good, intuitive decisions. He called this the “myth of expertise”—the belief that experienced professionals can make subjective hiring decisions that are more effective than those guided by impersonal selection methods. Actually, results from many studies show that more intuitive selection methods, like unstructured interviews, are less effective at informing good hiring decisions than more structured, rigorous methods, like work ability tests. It turns out that we make better hiring decisions when we trust the results of validated, research-supported assessment methods.
What are “valid” selection methods?
So, that prompts a question: what does “validated” even mean? How does a validated selection method provide more useful decision-making evidence than just an unstructured interview? For our answer, we’ve got to go back to stats class – just briefly, I promise!
To be worth using, a selection method needs to be both reliable and valid.
- Reliability refers to whether a selection method is consistent – for instance, if five people are interviewing a candidate using the same questions, would they rate the candidate similarly? If so, that’s a reliable interview. Think of a dartboard – to be a reliable dart thrower, you want to hit the same spot every time.
- Validity refers to whether a selection method actually tells us anything about what we really care about: future job performance. Going back to that dartboard, what we really care about is hitting the bullseye. That’s our target – just like a candidate’s future job performance is the target in the hiring process.
Both reliability and validity are important – if you’re a reliable dart thrower who hits the same spot every time, but that spot is off to the side of the board, you’re still not going to win many games! And, if you think about it, you can’t have validity – hitting the target every time – without being consistent. So, leaving our imaginary dartboard and heading back to the world of recruiting, this all means that a validated selection method is one that researchers have shown to consistently provide useful information about a candidate’s ability to perform in the job they’ve applied for.
Going back to our dartboard, pretend you’re in a team-based dart-throwing contest that you really want to win. You can partner with either an unknown player who says they have a great instinct for throwing darts, or a world-champion player who consistently hits the bullseye. Which would you want on your team? Obviously, you’d choose the champion known to deliver results! (Okay, now we’re putting the dartboard metaphor away, I promise.) The same thing is true in hiring – if you want to consistently hire world-class talent, you need to use selection methods that have been shown to provide reliable, useful information about candidates’ ability to do the job.
Here’s where the numbers come in. We interpret the validity of a selection method with a statistical term: the validity coefficient. The higher the validity coefficient for a selection method, the stronger the method’s ability to predict job performance, and the more useful that method will be for accurately determining meaningful differences between candidates. And since the goal of the hiring process is to find out which candidate is the best fit for the job, using selection methods with high validity coefficients is essential.
“That makes sense,” you might be saying, “but what does that mean for my company’s hiring process?” It means that however your hiring teams screen, assess, and select talent, hiring decisions need to be based on validated selection methods, in order to ensure that you’re hiring the best candidates based on solid evidence, and not letting biases creep in.
Here are some popular methods and assessments used in selection, along with links to the academic research articles that discuss their validity in detail.
|Selection Method|Validity Coefficient|Source|
|---|---|---|
|Work Sample|.54|Schmidt & Hunter, 1998|
|Assessment Center|.37|Schmidt & Hunter, 1998; McDaniel et al., 1994|
|Biodata|.37|Hunter & Hunter, 1984|
|Situational Judgment Test|.28|Christian et al., 2010; Judge et al., 2013|
|Years of Work Experience|.18|Schmidt & Hunter, 1998|
|College GPA|.11|Hunter & Hunter, 1984|
|Years of Education|.10|Schmidt & Hunter, 1998|
|Types of Structured Interview Questions|Past Behavioral: .28|Hartwell et al., 2019|
Looking at the table, you can see that the best predictor of job performance is a work sample test – makes sense, right? The best way to know how well a dart-thrower can throw darts is to watch them throw darts; the best way to know how well a software engineer can write code is to ask them to complete a sample project. We can see that structured interviews are better at predicting job performance than unstructured interviews. And situational and past behavioral interview questions are the best kinds of interview questions. [This is what we’ve been teaching in our interview training since 2005.] On the other hand, grades and years of education don’t tell us much about a candidate’s likely job performance. Combining multiple strong, validated selection methods can help provide a comprehensive picture of candidates’ ability to perform well in the position you’re hiring for.
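One simple way to combine multiple methods is a composite score weighted by each method’s validity coefficient. This is a rough sketch under simplifying assumptions (formal selection batteries derive weights from regression, not raw coefficients), and all candidate scores below are fabricated.

```python
# Hypothetical sketch: combine normalized scores from several validated
# methods into one composite, weighting each by its validity coefficient.
# Weights come from the table above; candidate scores are fabricated.

VALIDITY_WEIGHTS = {
    "work_sample": 0.54,
    "structured_interview": 0.28,   # past-behavioral questions
    "situational_judgment": 0.28,
}

def composite_score(scores):
    """Validity-weighted average of a candidate's 0-100 method scores."""
    total = sum(VALIDITY_WEIGHTS.values())
    return sum(VALIDITY_WEIGHTS[m] * scores[m] for m in VALIDITY_WEIGHTS) / total

candidate = {
    "work_sample": 85,
    "structured_interview": 70,
    "situational_judgment": 90,
}
print(round(composite_score(candidate), 1))  # → 82.5
```

A candidate who excels on the strongest predictor (the work sample) is rewarded more than one who only shines in lower-validity assessments, which matches the intuition behind combining methods.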
Validated methods help us hire the best, most diverse talent
To hire the right person for the job, we can’t rely on intuition (or poorly defined, and often heavily biased, “culture fit” evaluations). When we rely on intuition, we’re really letting our biases and cognitive distortions sneak into the decision-making process. That results in less diverse, less qualified talent, which is the opposite of what we want! Understanding the validity of selection methods and choosing methods that actually predict job performance will give us solid evidence to guide us to a predictably good hire and create a fair, equitable process that helps us achieve our diversity goals.
Now, can we eliminate all risk in hiring? No. You’ll notice that none of the methods in the table above reaches 1.0 in predictive power. Nothing is perfect. But a heavier focus on structured interviews – behavioral questions that dig into the candidate’s past achievements – plus skill demonstrations and work sample reviews is what many of the best companies use in their live interview process.
That’s also what we happen to teach at Recruiting Toolbox. For over 15 years, we’ve built custom interview training for startups and world-class brands, all over the world.
If you’re thinking of building your own in-house interview training, we recommend you incorporate the following elements:
- Teach hiring managers how to translate traditional job description language into performance-oriented skills and abilities.
- Teach interviewers what good looks like – what behaviors does someone who will be successful at our company need to demonstrate, both in their past examples and in their performance during the interview?
- Ensure your team understands the biggest biases that negatively impact candidates – confirmation bias and similarity bias are two of the most common biases to mitigate when interviewing.
- Get prescriptive around your diversity focus – are you looking for “culture add,” where you’re consciously seeking out people who will bring something new to the team?
- Share legal dos and don’ts. Don’t assume everyone knows what’s illegal to ask about, especially if you have friendly interviewers who tend to steer off course into personal questions.
- Teach interviewers how to ask behavioral and situational interviewing questions, and most importantly, how to know what good looks like in answers.
- Create a strong decision-making process, where the input of everyone is incorporated into an evidence-based discussion, as free from bias as possible.
More research is needed into the validity of selection methods in 2020 and beyond. A lot of the most-referenced work was completed in the 1980s with some updates in the 1990s, but so much has changed since then. As a recruiting pro or hiring manager, it’s critical that you bring a little science into your interviewing approach and stay current on the research focused on validity and reliability.
This is not legal advice. You should speak to your company’s attorney before changing or implementing any assessments or changes to your interviewing techniques. Laws also vary by geography.