Belief in the unstructured interview: The persistence of an illusion

Journal Title: Judgment and Decision Making, 2013, Vol. 8, Issue 5

Abstract

Unstructured interviews are a ubiquitous tool for making screening decisions, despite a vast literature suggesting that they have little validity. We sought to establish why people might persist in the illusion that unstructured interviews are valid, and which of their features actually lead to poor predictive accuracy. In three studies, we investigated the propensity for “sensemaking” (interviewers’ ability to make sense of virtually anything the interviewee says) and “dilution” (the tendency for available but non-diagnostic information to weaken the predictive value of quality information). In Study 1, participants predicted two fellow students’ semester GPAs from valid background information, such as prior GPA, and, for one of the two students, an unstructured interview. In one condition, the interview was essentially nonsense: the interviewee was actually answering questions using a random response system. Consistent with sensemaking, participants formed interview impressions just as confidently after receiving random responses as after real ones. Consistent with dilution, interviews actually led participants to make worse predictions. Study 2 showed that watching a random interview, rather than personally conducting it, did little to mitigate sensemaking. Study 3 showed that participants believe unstructured interviews will improve accuracy, so much so that they would rather have random interviews than no interview at all. People form confident impressions even when interviews are, by design, invalid, as with our random interview, and these impressions can interfere with the use of valid information. Our simple recommendation for those making screening decisions is not to use unstructured interviews.

Authors and Affiliations

Jason Dana, Robyn Dawes and Nathanial Peterson



How To Cite

Dana, J., Dawes, R., & Peterson, N. (2013). Belief in the unstructured interview: The persistence of an illusion. Judgment and Decision Making, 8(5), -. https://europub.co.uk./articles/-A-678087