Why you should ignore success stories
In his new book “May Contain Lies,” finance professor Alex Edmans warns about the dangers of studying the success stories of others.
My school gave us a lengthy psychometric test to find out what careers we might be best suited to. We could then work backwards, like a master chess player who sees ten moves ahead. Once we’d singled out our dream career, we’d figure out the degree we needed to study at university and finally pick the A-levels that would unlock the campus gates.
It seemed like a great idea. Naïve and foolhardy at sixteen, we thought we could plan decades into the future. We took the test enthusiastically, eager to see what it predicted. But we were disappointed by many of the questions. One asked us to draw a normal “S,” then a backwards “S,” then a normal “S,” and so on, as fast as we could. When we got the test results, we saw that this exercise aimed to measure our “flexibility.” That sounded like psychobabble—it was ridiculous to base life-changing decisions on how quickly we could alternate “S” shapes. For me at least, the test ended up being a poor predictor. It recommended ten careers, none of which was “professor,” or anything related, such as “teacher,” “author,” or “researcher.” Given my confirmation bias, I’ve concluded the test must have been wrong, rather than that I’ve blundered into a career I’m ill suited to.
As a result of our disappointment, my friends and I took matters into our own hands. We thought there was a much better way to decide our futures: to follow in the footsteps of successful people. We knew better than to read a single biography, so we instead pored over the Sunday Times Rich List, which contained the hundred wealthiest people in the U.K.—being young and foolish, we took wealth as our only measure of success. We probed how they found their fortune, paying careful attention to the stepping stones that led to the bounty. Many struck gold by starting their own businesses, but we were interested in the initial careers that had cultivated these entrepreneurial skills. (As you can guess, none of these hundred tycoons spent any time as an impoverished business professor, my current career.) And we found out the degrees that launched them into those first jobs.
Why look at the Rich List, rather than taking a deep dive into one success story like Apple? Because a single anecdote is prone to the narrative fallacy—our temptation to see two events and believe that one caused the other, even if the true cause was something else entirely, or nothing but luck. Simon Sinek claims that Apple was successful because it “started with why,” but Apple’s success could have been down to myriad other causes. We thought we were being smart by looking at the Rich List because it gave us a hundred datapoints, not just a couple of examples. If the problem with the narrative fallacy is that it’s based on a single story, can’t we solve it by studying a hundred stories? We also had no pet theory of what drove success but started from a blank slate. We didn’t pick one hero, identify how we thought she reached the top, and then cherry-pick ninety-nine others who’d taken the same route. Sinek claims that “starting with why” also propelled the fortunes of the Wright Brothers and Wikipedia, but he never mentions the others who came a cropper. Instead, we were completely open-minded—we had no preconceptions, and would let the data speak.
Yet none of this mattered, because we had a selected sample: our dataset only contained people who ended up extremely rich. Even if we found a common thread, we’d have had to turn it into a hypothesis, such as “studying biochemistry leads to success.” But we had no way to test this hypothesis, since we didn’t have data on the thousands of other biochemistry graduates who were leading normal lives. A hundred datapoints don’t help if they’re all successful people.
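A quick simulation makes the point concrete. The sketch below is a minimal illustration in Python, not anything from the book: it invents a world where wealth is pure luck and degree choice has no effect at all, then looks only at the hundred wealthiest people, the way the Rich List does. The degree names and the luck-driven wealth model are assumptions made up for the example.

```python
import random
from collections import Counter

random.seed(42)

# Invent a world where the degree you study has NO effect on wealth:
# wealth is drawn from the same heavy-tailed distribution for everyone.
N = 100_000
degrees = ["biochemistry", "history", "law", "physics"]
people = [(random.choice(degrees), random.lognormvariate(0, 2)) for _ in range(N)]

# The "Rich List" view: keep only the 100 wealthiest people.
rich_list = sorted(people, key=lambda p: p[1], reverse=True)[:100]
print("Degrees on the rich list:", Counter(d for d, _ in rich_list))

# By chance alone, one degree will often look overrepresented among the
# winners -- a spurious "common thread". The test we actually need compares
# success RATES across ALL graduates of each degree, rich and not rich:
threshold = rich_list[-1][1]
for degree in degrees:
    grads = [w for d, w in people if d == degree]
    rate = sum(w >= threshold for w in grads) / len(grads)
    print(f"{degree}: {rate:.3%} of graduates reach the rich list")
```

On any given run, some degree will look overrepresented among the hundred winners purely by chance, while the final loop, which counts the graduates who didn’t get rich, shows every degree reaching the rich list at essentially the same rate. That denominator is exactly what a selected sample throws away.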
Of course, we were immature. Since much of my current work is on the importance of purpose, not financial motivation, I now cringe at the way I first tried to choose a career. In our defense, it’s not just us. Many gurus purport to base their claims on research, not just case studies and anecdotes, and they equate research with gathering a cornucopia of data. And these gurus include the authors of multimillion-selling books.
Built to Last, by Jim Collins and Jerry Porras, identifies nine principles that seemingly lead to enduringly profitable companies. Chapter 1 ends with over ten pages browbeating the reader with how much information they gathered, to convince you to put your faith in their principles. They proudly proclaim: “we sourced nearly a hundred books and over three thousand individual documents (articles, case studies, archive materials, corporate publications, video footage). As a conservative estimate, we reviewed over sixty thousand pages of material (the actual number is probably closer to a hundred thousand pages). The documents for this project filled three shoulder-height file cabinets, four bookshelves, and twenty megabytes of computer storage space.” On three separate occasions, they stress how all that number-crunching took six years.
The authors dramatize the research process so that the reader thinks she’s learning something astounding. They compare their journey to Charles Darwin’s five-year voyage aboard the Beagle, likening their findings to his discovery of new species in the Galápagos Islands. They’re so taken by their self-comparison to Darwin that, just two pages later, they now say their project took five years. “To ensure systematic and comprehensive data collection,” they use “Organization Stream Analysis,” a technique I’ve never heard of; web searches find no mention of it outside Built to Last. In The Halo Effect, a critique of Built to Last and similar books, business professor Phil Rosenzweig calls this “the delusion of rigorous research.” It doesn’t matter what fancy name you give your techniques or how much data you gather—quantity is no substitute for quality.
What was the problem with the data quality in Built to Last? It’s our old friend (or enemy): selected samples. The authors found 18 companies which they argued were “built to last”—successful over a long period, not just a flash in the pan—and then sought “to identify underlying characteristics that are common to highly visionary companies.” They’d started out with successful companies and then identified a unifying theme (the nine principles): the same methodology my friends and I used.
To the untrained eye, Collins and Porras went one better than us, as they had a control group. The Rich List only studied success stories, but Collins and Porras also considered failures. They compared the 18 winners with similar companies they deemed not so visionary. Their goal was to identify themes that could explain why Hewlett-Packard was outperforming Texas Instruments, Merck was beating Pfizer, and so on. But the pairs were drawn up only after everyone knew which company in each had come out on top, so any trait that separated winner from also-ran may be a halo cast by success rather than its cause: the sample was still selected on the outcome.
This misinference matters. Several of the supposedly visionary companies plummeted shortly after the book’s publication, suggesting the magic formula doesn’t make you built to last at all—yet millions bought the book hoping to emulate them. The same is true for other books in the same genre that take a set of successful firms and identify a shared theme, such as In Search of Excellence and Good to Great. None of them has a control group, and their exemplar companies subsequently nosedived. The secret sauce was a scam.
Reprinted with permission from May Contain Lies: How Stories, Statistics, and Studies Exploit Our Biases—And What We Can Do about It by Alex Edmans, courtesy of the University of California Press. Copyright 2024.
Alex Edmans is professor of finance at London Business School. His TED talk “What to Trust in a Post-Truth World” has been viewed two million times; he has also spoken at the World Economic Forum in Davos and at the U.K. Parliament. In 2013, he was awarded tenure at the Wharton School, and in 2021, he was named MBA Professor of the Year by Poets&Quants. His first book, Grow the Pie, was a Financial Times Book of the Year. He is a Fellow of the Academy of Social Sciences.