Each year, around 185,000 MBA students graduate in the U.S. alone. Many of these students spend more than 100 hours each preparing for so-called case interviews — the favored evaluation method of elite consulting firms such as McKinsey, in which candidates are presented with a business problem and asked to talk through how they would solve it. This is a colossal waste of time. Case interviews are a terrible evaluation method; it’s time to end their use in hiring.

As former consultants who have both prepped for and administered case-based interviews, we find it painful to admit this truth. Case interviews have long been part of the ritualistic hiring process of elite consulting firms. Stressful and intimidating, they have a patina of rigor and gravitas. If you make it through, you are truly one of the chosen few: Many top-tier consulting firms have acceptance rates lower than Ivy League colleges. As a result, hiring managers at many organizations, from Fortune 500 companies to startups, have also adopted the case method when evaluating new hires, particularly for junior roles, as they try to capture some of that “McKinsey mystique.”

There’s just one problem. Case interviews are not a reliable way of predicting job performance. When ECA was founded 10 years ago, we wanted to take a more data-driven approach to executive search. We therefore reached out to Stockholm-based economist Tino Sanandaji to start hunting for rigorous, research-backed hiring tactics. We assumed that case-based interviews would top the list. After all, they seem to be a smart way to evaluate talent: They are designed to screen for general problem-solving skills, which correlate strongly with long-term job performance. And because the pace of change in the business world keeps accelerating, general problem-solving skills are highly valued by employers.


But we were dismayed to learn that there’s no academically validated support for this assumption. In fact, research suggests the opposite. Case interviews are designed so there is no right or wrong answer. The idea is that the evaluator can pull useful information from the candidate’s case presentation: “The candidate shows creativity,” “The candidate is highly numerate,” and so on. Unfortunately, research has shown that excess information reduces the prediction accuracy of job interviewers. Further, not having a clear and structured way of evaluating candidates makes it more likely that interviewers will be swayed by their biases. And finally, not having a right or wrong answer leads to arbitrary decisions. One candidate who has a deep discussion about a company’s business model might be hired for being “intellectually curious,” while another candidate who has the same discussion might be rejected for being “too theoretical.” Which hiring decision was the correct one?

We’ve recruited for seven of the 10 most prestigious strategy consulting firms. For years, we’ve asked for evidence that case interviews are predictive of work performance. To our disappointment, our clients would typically wave off the request. Perhaps they were reluctant to share sensitive data, or perhaps they were so convinced that their methods worked that they didn’t want to examine data indicating otherwise – what Nobel Laureate Daniel Kahneman calls “the illusion of validity.”

The final nail in the coffin for us was learning that Google had abandoned case-based interviews. In his book Work Rules!, Laszlo Bock, Google’s former SVP of People Operations, described how the company meticulously challenged traditional recruiting wisdom by diligently collecting interview data and then measuring the job performance of tens of thousands of hires, as well as their interviewers. Each interview technique was closely scrutinized. Bock reports in the book that the research project found case-based interviews to be “worthless.” He later tweeted: “I never liked case interviews … they aren’t predictive of candidate performance and serve mainly to make the interviewer feel clever.”

When we talk to proponents of case-based interviews, the defense we often hear is that they are effective at ensuring you only hire competent people. Sure, they may end up missing a competent candidate or two, but because hiring mistakes are very costly, what matters is that they reliably identify people who are sure to excel. First off, there’s no evidence this is true. But even if it were, the scenario of missing out on great hires only makes sense if it’s fairly inexpensive for a company to find more superstars. Since top-tier consulting firms have thousands of candidates to choose from, one might argue it doesn’t cost them much to pass up a few good candidates. But most firms aren’t like McKinsey or BCG. In PwC’s annual survey of global CEOs, “availability of key skills” was listed as the third most important threat to businesses in 2019, up from fifth in 2018. The number of unfilled jobs in the U.S. surpassed 7 million for the first time in August 2018 and remains at that level. In such a tight labor market, you can’t afford to miss talented candidates when they are sitting right in front of you.

Alternative approaches

Fortunately, more than a century of research shows that there are better, more rigorous ways to screen for general problem-solving skills. To start, General Mental Ability (GMA) is by far the best predictor of fluid intelligence, or the ability to solve problems across a variety of topics. GMA has been a well-established and tested construct for decades, with thousands of peer-reviewed papers published on the topic. Using a standardized, decidedly un-sexy GMA test is a far better way to start a candidate’s evaluation than a case-based interview.

Companies can also mitigate some of the known downsides of case-based interviews by standardizing their approach and becoming more specific about which skills are most desirable for a job, how these skills are tested, and what constitutes a stellar versus a poor response. Written assessments can help make skill evaluation more objective. They can also help reduce bias by allowing evaluators to blind the names (and thus the gender, race, and background) of the candidates.

One client we worked with recently replaced case interviews with a standardized written evaluation in which he asked candidates to read through an investment memorandum and write down the main questions they would want to investigate before making an investment decision. Before rating the responses, he and his colleague blinded the candidates’ names. They also agreed on which topics were important and, within those categories, what would constitute an insightful response. The client ended up advancing an ESL candidate who had recently moved to the U.S. to a final round — someone who had struggled with a traditional case interview when speaking with our other clients.

Consulting firms are correct about one thing related to hiring: General problem-solving skills are important in predicting job success. However, the traditional case-based interview has outlived its usefulness in identifying these skills. We think of the tens of thousands of MBA students and job seekers who spend huge amounts of time and money each year prepping for case interviews. It’s an enormous waste of energy and human potential. We are letting these job seekers down, and leaving value on the table in the process. McKinsey, BCG, and your imitators, take note: It’s time to abandon this antiquated, biased approach to identifying top talent.


Sourced from Harvard Business Review - written by Atta Tarki and Tino Sanandaji

