Range: Why Generalists Triumph in a Specialized World (by David Epstein, 2019) - audiobook in English
It has long been assumed that a person's path to success begins in early childhood. First, prompted by parents, a child attends various clubs and activities, develops skills, and gradually settles on the one that works out best of all. The making of a successful person then continues through education after school. This theory has become so familiar and so taken for granted that it occurred to no one to look at the making of a genius any other way. David Epstein challenges this narrow specialization as the only route to success in a chosen field. Using the examples of athletes, actors, artists, and other outstanding people, he shows that the summit can also be reached by those with broad, versatile skills. Such people are better than others at building connections between different fields and communication between people; they are more active, more curious, and passionate about life. Try stepping outside the usual boundaries and revisiting your old assumptions along with the author.
For Elizabeth, this one and any other one

And he refused to specialize in anything, preferring to keep an eye on the overall estate rather than any of its parts. . . . And Nikolay’s management produced the most brilliant results.
—Leo Tolstoy, War and Peace

No tool is omnicompetent. There is no such thing as a master-key that will unlock all doors.
—Arnold Toynbee, A Study of History

INTRODUCTION

Roger vs. Tiger

LET’S START WITH a couple of stories from the world of sports. This first one, you probably know.

The boy’s father could tell something was different. At six months old, the boy could balance on his father’s palm as he walked through their home. At seven months, his father gave him a putter to fool around with, and the boy dragged it everywhere he went in his little circular baby walker. At ten months, he climbed down from his high chair, trundled over to a golf club that had been cut down to size for him, and imitated the swing he’d been watching in the garage. Because the father couldn’t yet talk with his son, he drew pictures to show the boy how to place his hands on the club. “It is very difficult to communicate how to putt when the child is too young to talk,” he would later note. At two—an age when the Centers for Disease Control and Prevention list physical developmental milestones like “kicks a ball” and “stands on tiptoe”—he went on national television and used a club tall enough to reach his shoulder to drive a ball past an admiring Bob Hope. That same year, he entered his first tournament, and won the ten-and-under division. There was no time to waste. By three, the boy was learning how to play out of a “sand twap,” and his father was mapping out his destiny. He knew his son had been chosen for this, and that it was his duty to guide him. Think about it: if you felt that certain about the path ahead, maybe you too would start prepping your three-year-old to handle the inevitable and insatiable media that would come.
He quizzed the boy, playing reporter, teaching him how to give curt answers, never to offer more than precisely what was asked. That year, the boy shot 48, eleven over par, for nine holes at a course in California. When the boy was four, his father could drop him off at a golf course at nine in the morning and pick him up eight hours later, sometimes with the money he’d won from those foolish enough to doubt. At eight, the son beat his father for the first time. The father didn’t mind, because he was convinced that his boy was singularly talented, and that he was uniquely equipped to help him. He had been an outstanding athlete himself, and against enormous odds. He played baseball in college when he was the only black player in the entire conference. He understood people, and discipline; a sociology major, he served in Vietnam as a member of the Army’s elite Green Berets, and later taught psychological warfare to future officers. He knew he hadn’t done his best with three kids from a previous marriage, but now he could see that he’d been given a second chance to do the right thing with number four. And it was all going according to plan. The boy was already famous by the time he reached Stanford, and soon his father opened up about his importance. His son would have a larger impact than Nelson Mandela, than Gandhi, than Buddha, he insisted. “He has a larger forum than any of them,” he said. “He’s the bridge between the East and the West. There is no limit because he has the guidance. I don’t know yet exactly what form this will take. But he is the Chosen One.”

—

This second story, you also probably know. You might not recognize it at first. His mom was a coach, but she never coached him. He would kick a ball around with her when he learned to walk. As a boy, he played squash with his father on Sundays. He dabbled in skiing, wrestling, swimming, and skateboarding.
He played basketball, handball, tennis, table tennis, badminton over his neighbor’s fence, and soccer at school. He would later give credit to the wide range of sports he played for helping him develop his athleticism and hand-eye coordination. He found that the sport really didn’t matter much, so long as it included a ball. “I was always very much more interested if a ball was involved,” he would remember. He was a kid who loved to play. His parents had no particular athletic aspirations for him. “We had no plan A, no plan B,” his mother would later say. She and the boy’s father encouraged him to sample a wide array of sports. In fact, it was essential. The boy “became unbearable,” his mother said, if he had to stay still for too long. Though his mother taught tennis, she decided against working with him. “He would have just upset me anyway,” she said. “He tried out every strange stroke and certainly never returned a ball normally. That is simply no fun for a mother.” Rather than pushy, a Sports Illustrated writer would observe that his parents were, if anything, “pully.” Nearing his teens, the boy began to gravitate more toward tennis, and “if they nudged him at all, it was to stop taking tennis so seriously.” When he played matches, his mother often wandered away to chat with friends. His father had only one rule: “Just don’t cheat.” He didn’t, and he started getting really good. As a teenager, he was good enough to warrant an interview with the local newspaper. His mother was appalled to read that, when asked what he would buy with a hypothetical first paycheck from playing tennis, her son answered, “a Mercedes.” She was relieved when the reporter let her listen to a recording of the interview and they realized there’d been a mistake: the boy had said “Mehr CDs,” in Swiss German. He simply wanted “more CDs.” The boy was competitive, no doubt. 
But when his tennis instructors decided to move him up to a group with older players, he asked to move back so he could stay with his friends. After all, part of the fun was hanging around after his lessons to gab about music, or pro wrestling, or soccer. By the time he finally gave up other sports—soccer, most notably—to focus on tennis, other kids had long since been working with strength coaches, sports psychologists, and nutritionists. But it didn’t seem to hamper his development in the long run. In his midthirties, an age by which even legendary tennis players are typically retired, he would still be ranked number one in the world.

—

In 2006, Tiger Woods and Roger Federer met for the first time, when both were at the apex of their powers. Tiger flew in on his private jet to watch the final of the U.S. Open. It made Federer especially nervous, but he still won, for the third year in a row. Woods joined him in the locker room for a champagne celebration. They connected as only they could. “I’ve never spoken with anybody who was so familiar with the feeling of being invincible,” Federer would later describe it. They quickly became friends, as well as focal points of a debate over who was the most dominant athlete in the world. Still, the contrast was not lost on Federer. “His story is completely different from mine,” he told a biographer in 2006. “Even as a kid his goal was to break the record for winning the most majors. I was just dreaming of just once meeting Boris Becker or being able to play at Wimbledon some time.” It seems pretty unusual for a child with “pully” parents, and who first took his sport lightly, to grow into a man who dominates it like no one before him. Unlike in Tiger’s case, thousands of kids, at least, had a head start on Roger. Tiger’s incredible upbringing has been at the heart of a batch of bestselling books on the development of expertise, one of which was a parenting manual written by Tiger’s father, Earl. Tiger was not merely playing golf.
He was engaging in “deliberate practice,” the only kind that counts in the now-ubiquitous ten-thousand-hours rule to expertise. The “rule” represents the idea that the number of accumulated hours of highly specialized training is the sole factor in skill development, no matter the domain. Deliberate practice, according to the study of thirty violinists that spawned the rule, occurs when learners are “given explicit instructions about the best method,” individually supervised by an instructor, supplied with “immediate informative feedback and knowledge of the results of their performance,” and “repeatedly perform the same or similar tasks.” Reams of work on expertise development shows that elite athletes spend more time in highly technical, deliberate practice each week than those who plateau at lower levels: Tiger has come to symbolize the idea that the quantity of deliberate practice determines success—and its corollary, that the practice must start as early as possible. The push to focus early and narrowly extends well beyond sports. We are often taught that the more competitive and complicated the world gets, the more specialized we all must become (and the earlier we must start) to navigate it. Our best-known icons of success are elevated for their precocity and their head starts—Mozart at the keyboard, Facebook CEO Mark Zuckerberg at the other kind of keyboard. The response, in every field, to a ballooning library of human knowledge and an interconnected world has been to exalt increasingly narrow focus. Oncologists no longer specialize in cancer, but rather in cancer related to a single organ, and the trend advances each year. 
Surgeon and writer Atul Gawande pointed out that when doctors joke about left ear surgeons, “we have to check to be sure they don’t exist.” In the ten-thousand-hours-themed bestseller Bounce, British journalist Matthew Syed suggested that the British government was failing for a lack of following the Tiger Woods path of unwavering specialization. Moving high-ranking government officials between departments, he wrote, “is no less absurd than rotating Tiger Woods from golf to baseball to football to hockey.” Except that Great Britain’s massive success at recent Summer Olympics, after decades of middling performances, was bolstered by programs set up specifically to recruit adults to try new sports and to create a pipeline for late developers—“slow bakers,” as one of the officials behind the program described them to me. Apparently the idea of an athlete, even one who wants to become elite, following a Roger path and trying different sports is not so absurd. Elite athletes at the peak of their abilities do spend more time on focused, deliberate practice than their near-elite peers. But when scientists examine the entire developmental path of athletes, from early childhood, it looks like this: Eventual elites typically devote less time early on to deliberate practice in the activity in which they will eventually become experts. Instead, they undergo what researchers call a “sampling period.” They play a variety of sports, usually in an unstructured or lightly structured environment; they gain a range of physical proficiencies from which they can draw; they learn about their own abilities and proclivities; and only later do they focus in and ramp up technical practice in one area. 
The title of one study of athletes in individual sports proclaimed “Late Specialization” as “the Key to Success”; another, “Making It to the Top in Team Sports: Start Later, Intensify, and Be Determined.” When I began to write about these studies, I was met with thoughtful criticism, but also denial. “Maybe in some other sport,” fans often said, “but that’s not true of our sport.” The community of the world’s most popular sport, soccer, was the loudest. And then, as if on cue, in late 2014 a team of German scientists published a study showing that members of their national team, which had just won the World Cup, were typically late specializers who didn’t play more organized soccer than amateur-league players until age twenty-two or later. They spent more of their childhood and adolescence playing nonorganized soccer and other sports. Another soccer study published two years later matched players for skill at age eleven and tracked them for two years. Those who participated in more sports and nonorganized soccer, “but not more organized soccer practice/training,” improved more by age thirteen. Findings like these have now been echoed in a huge array of sports, from hockey to volleyball. The professed necessity of hyperspecialization forms the core of a vast, successful, and sometimes well-meaning marketing machine, in sports and beyond. In reality, the Roger path to sports stardom is far more prevalent than the Tiger path, but those athletes’ stories are much more quietly told, if they are told at all. Some of their names you know, but their backgrounds you probably don’t. I started writing this introduction right after the 2018 Super Bowl, in which a quarterback who had been drafted into professional baseball before football (Tom Brady), faced off against one who participated in football, basketball, baseball, and karate and had chosen between college basketball and football (Nick Foles). Later that very same month, Czech athlete Ester Ledecká
became the first woman ever to win gold in two different sports (skiing and snowboarding) at the same Winter Olympics. When she was younger, Ledecká participated in multiple sports (she still plays beach volleyball and windsurfs), focused on school, and never rushed to be number one in teenage competition categories. The Washington Post article the day after her second gold proclaimed, “In an era of sports specialization, Ledecká has been an evangelist for maintaining variety.” Just after her feat, Ukrainian boxer Vasyl Lomachenko set a record for the fewest fights needed to win world titles in three different weight classes. Lomachenko, who took four years off boxing as a kid to learn traditional Ukrainian dance, reflected, “I was doing so many different sports as a young boy—gymnastics, basketball, football, tennis—and I think, ultimately, everything came together with all those different kinds of sports to enhance my footwork.” Prominent sports scientist Ross Tucker summed up research in the field simply: “We know that early sampling is key, as is diversity.”

• • •

In 2014, I included some of the findings about late specialization in sports in the afterword of my first book, The Sports Gene. The following year, I got an invitation to talk about that research from an unlikely audience—not athletes or coaches, but military veterans. In preparation, I perused scientific journals for work on specialization and career-swerving outside of the sports world. I was struck by what I found. One study showed that early career specializers jumped out to an earnings lead after college, but that later specializers made up for the head start by finding work that better fit their skills and personalities.
I found a raft of studies that showed how technological inventors increased their creative impact by accumulating experience in different domains, compared to peers who drilled more deeply into one; they actually benefited by proactively sacrificing a modicum of depth for breadth as their careers progressed. There was a nearly identical finding in a study of artistic creators. I also began to realize that some of the people whose work I deeply admired from afar—from Duke Ellington (who shunned music lessons to focus on drawing and baseball as a kid) to Maryam Mirzakhani (who dreamed of becoming a novelist and instead became the first woman to win math’s most famous prize, the Fields Medal)—seemed to have more Roger than Tiger in their development stories. I delved further and encountered remarkable individuals who succeeded not in spite of their range of experiences and interests, but because of it: a CEO who took her first job around the time her peers were getting ready to retire; an artist who cycled through five careers before he discovered his vocation and changed the world; an inventor who stuck to a self-made antispecialization philosophy and turned a small company founded in the nineteenth century into one of the most widely resonant names in the world today. I had only dipped my toe into research on specialization in the wider world of work, so in my talk to the small group of military veterans I mostly stuck to sports. I touched on the other findings only briefly, but the audience seized on it. All were late specializers or career changers, and as they filed up one after another to introduce themselves after the talk, I could tell that all were at least moderately concerned, and some were borderline ashamed of it. 
They had been brought together by the Pat Tillman Foundation, which, in the spirit of the late NFL player who left a professional football career to become an Army Ranger, provides scholarships to veterans, active-duty military, and military spouses who are undergoing career changes or going back to school. They were all scholarship recipients, former paratroopers and translators who were becoming teachers, scientists, engineers, and entrepreneurs. They brimmed with enthusiasm, but rippled with an undercurrent of fear. Their LinkedIn profiles didn’t show the linear progression toward a particular career they had been told employers wanted. They were anxious starting grad school alongside younger (sometimes much younger) students, or changing lanes later than their peers, all because they had been busy accumulating inimitable life and leadership experiences. Somehow, a unique advantage had morphed in their heads into a liability. A few days after I spoke to the Tillman Foundation group, a former Navy SEAL who came up after the talk emailed me: “We are all transitioning from one career to another. Several of us got together after you had left and discussed how relieved we were to have heard you speak.” I was slightly bemused to find that a former Navy SEAL with an undergraduate degree in history and geophysics pursuing graduate degrees in business and public administration from Dartmouth and Harvard needed me to affirm his life choices. But like the others in the room, he had been told, both implicitly and explicitly, that changing directions was dangerous. The talk was greeted with so much enthusiasm that the foundation invited me to give a keynote speech at the annual conference in 2016, and then to small group gatherings in different cities. Before each occasion, I read more studies and spoke with more researchers and found more evidence that it takes time—and often forgoing a head start—to develop personal and professional range, but it is worth it. 
I dove into work showing that highly credentialed experts can become so narrow-minded that they actually get worse with experience, even while becoming more confident—a dangerous combination. And I was stunned when cognitive psychologists I spoke with led me to an enormous and too often ignored body of work demonstrating that learning itself is best done slowly to accumulate lasting knowledge, even when that means performing poorly on tests of immediate progress. That is, the most effective learning looks inefficient; it looks like falling behind. Starting something new in middle age might look that way too. Mark Zuckerberg famously noted that “young people are just smarter.” And yet a tech founder who is fifty years old is nearly twice as likely to start a blockbuster company as one who is thirty, and the thirty-year-old has a better shot than a twenty-year-old. Researchers at Northwestern, MIT, and the U.S. Census Bureau studied new tech companies and showed that among the fastest-growing start-ups, the average age of a founder was forty-five when the company was launched. Zuckerberg was twenty-two when he said that. It was in his interest to broadcast that message, just as it is in the interest of people who run youth sports leagues to claim that year-round devotion to one activity is necessary for success, never mind evidence to the contrary. But the drive to specialize goes beyond that. It infects not just individuals, but entire systems, as each specialized group sees a smaller and smaller part of a large puzzle. One revelation in the aftermath of the 2008 global financial crisis was the degree of segregation within big banks. Legions of specialized groups optimizing risk for their own tiny pieces of the big picture created a catastrophic whole. To make matters worse, responses to the crisis betrayed a dizzying degree of specialization-induced perversity. 
A federal program launched in 2009 incentivized banks to lower monthly mortgage payments for homeowners who were struggling but still able to make partial payments. A nice idea, but here’s how it worked out in practice: a bank arm that specialized in mortgage lending started the homeowner on lower payments; an arm of the same bank that specialized in foreclosures then noticed that the homeowner was suddenly paying less, declared them in default, and seized the home. “No one imagined silos like that inside banks,” a government adviser said later. Overspecialization can lead to collective tragedy even when every individual separately takes the most reasonable course of action. Highly specialized health care professionals have developed their own versions of the “if all you have is a hammer, everything looks like a nail” problem. Interventional cardiologists have gotten so used to treating chest pain with stents—metal tubes that pry open blood vessels—that they do so reflexively even in cases where voluminous research has proven that they are inappropriate or dangerous. A recent study found that cardiac patients were actually less likely to die if they were admitted during a national cardiology meeting, when thousands of cardiologists were away; the researchers suggested it could be because common treatments of dubious effect were less likely to be performed. An internationally renowned scientist (whom you will meet toward the end of this book) told me that increasing specialization has created a “system of parallel trenches” in the quest for innovation. Everyone is digging deeper into their own trench and rarely standing up to look in the next trench over, even though the solution to their problem happens to reside there. The scientist is taking it upon himself to attempt to despecialize the training of future researchers; he hopes that eventually it will spread to training in every field. 
He profited immensely from cultivating range in his own life, even as he was pushed to specialize. And now he is broadening his purview again, designing a training program in an attempt to give others a chance to deviate from the Tiger path. “This may be the most important thing I will ever do in my life,” he told me. I hope this book helps you understand why.

• • •

When the Tillman Scholars spoke of feeling unmoored, and worried they were making a mistake, I understood better than I let on. I was working on a scientific research vessel in the Pacific Ocean after college when I decided for sure that I wanted to be a writer, not a scientist. I never expected that my path from science into writing would go through work as the overnight crime reporter at a New York City tabloid, nor that I would shortly thereafter be a senior writer at Sports Illustrated, a job that, to my own surprise, I would soon leave. I began worrying that I was a job-commitment-phobic drifter who must be doing this whole career thing wrong. Learning about the advantages of breadth and delayed specialization has changed the way I see myself and the world. The research pertains to every stage of life, from the development of children in math, music, and sports, to students fresh out of college trying to find their way, to midcareer professionals in need of a change and would-be retirees looking for a new vocation after moving on from their previous one. The challenge we all face is how to maintain the benefits of breadth, diverse experience, interdisciplinary thinking, and delayed concentration in a world that increasingly incentivizes, even demands, hyperspecialization.
While it is undoubtedly true that there are areas that require individuals with Tiger’s precocity and clarity of purpose, as complexity increases—as technology spins the world into vaster webs of interconnected systems in which each individual only sees a small part—we also need more Rogers: people who start broad and embrace diverse experiences and perspectives while they progress. People with range.

CHAPTER 1

The Cult of the Head Start

ONE YEAR AND FOUR DAYS after World War II in Europe ended in unconditional surrender, Laszlo Polgar was born in a small town in Hungary—the seed of a new family. He had no grandmothers, no grandfathers, and no cousins; all had been wiped out in the Holocaust, along with his father’s first wife and five children. Laszlo grew up determined to have a family, and a special one. He prepped for fatherhood in college by poring over biographies of legendary thinkers, from Socrates to Einstein. He decided that traditional education was broken, and that he could make his own children into geniuses, if he just gave them the right head start. By doing so, he would prove something far greater: that any child can be molded for eminence in any discipline. He just needed a wife who would go along with the plan. Laszlo’s mother had a friend, and the friend had a daughter, Klara. In 1965, Klara traveled to Budapest, where she met Laszlo in person. Laszlo didn’t play hard to get; he spent the first visit telling Klara that he planned to have six children and that he would nurture them to brilliance. Klara returned home to her parents with a lukewarm review: she had “met a very interesting person,” but could not imagine marrying him. They continued to exchange letters. They were both teachers and agreed that the school system was frustratingly one-size-fits-all, made for producing “the gray average mass,” as Laszlo put it. A year and a half of letters later, Klara realized she had a very special pen pal.
Laszlo finally wrote a love letter, and proposed at the end. They married, moved to Budapest, and got to work. Susan was born in early 1969, and the experiment was on. For his first genius, Laszlo picked chess. In 1972, the year before Susan started training, American Bobby Fischer defeated Russian Boris Spassky in the “Match of the Century.” It was considered a Cold War proxy in both hemispheres, and chess was suddenly pop culture. Plus, according to Klara, the game had a distinct benefit: “Chess is very objective and easy to measure.” Win, lose, or draw, and a point system measures skill against the rest of the chess world. His daughter, Laszlo decided, would become a chess champion. Laszlo was patient, and meticulous. He started Susan with “pawn wars.” Pawns only, and the first person to advance to the back row wins. Soon, Susan was studying endgames and opening traps. She enjoyed the game and caught on quickly. After eight months of study, Laszlo took her to a smoky chess club in Budapest and challenged grown men to play his four-year-old daughter, whose legs dangled from her chair. Susan won her first game, and the man she beat stormed off. She entered the Budapest girls’ championship and won the under-eleven title. At age four she had not lost a game. By six, Susan could read and write and was years ahead of her grade peers in math. Laszlo and Klara decided they would educate her at home and keep the day open for chess. The Hungarian police threatened to throw Laszlo in jail if he did not send his daughter to the compulsory school system. It took him months of lobbying the Ministry of Education to gain permission. Susan’s new little sister, Sofia, would be homeschooled too, as would Judit, who was coming soon, and whom Laszlo and Klara almost named Zseni, Hungarian for “genius.” All three became part of the grand experiment. On a normal day, the girls were at the gym by 7 a.m. 
playing table tennis with trainers, and then back home at 10:00 for breakfast, before a long day of chess. When Laszlo reached the limit of his expertise, he hired coaches for his three geniuses in training. He spent his extra time cutting two hundred thousand records of game sequences from chess journals—many offering a preview of potential opponents—and filing them in a custom card catalog, the “cartotech.” Before computer chess programs, it gave the Polgars the largest chess database in the world to study outside of—maybe—the Soviet Union’s secret archives. When she was seventeen, Susan became the first woman to qualify for the men’s world championship, although the world chess federation did not allow her to participate. (A rule that would soon be changed, thanks to her accomplishments.) Two years later, in 1988, when Sofia was fourteen and Judit twelve, the girls comprised three of the four Hungarian team members for the women’s Chess Olympiad. They won, and beat the Soviet Union, which had won eleven of the twelve Olympiads since the event began. The Polgar sisters became “national treasures,” as Susan put it. The following year, communism fell, and the girls could compete all over the world. In January 1991, at the age of twenty-one, Susan became the first woman to achieve grandmaster status through tournament play against men. In December, Judit, at fifteen years and five months, became the youngest grandmaster ever, male or female. When Susan was asked on television if she wanted to win the world championship in the men’s or women’s category, she cleverly responded that she wanted to win the “absolute category.” None of the sisters ultimately reached Laszlo’s highest goal of becoming the overall world champion, but all were outstanding. In 1996, Susan participated in the women’s world championship, and won. Sofia peaked at the rank of international master, a level down from grandmaster. 
Judit went furthest, climbing up to eighth in the overall world ranking in 2004. Laszlo’s experiment had worked. It worked so well that in the early 1990s he suggested that if his early specialization approach were applied to a thousand children, humanity could tackle problems like cancer and AIDS. After all, chess was just an arbitrary medium for his universal point. Like the Tiger Woods story, the Polgar story entered an endless pop culture loop in articles, books, TV shows, and talks as an example of the life-hacking power of an early start. An online course called “Bring Up Genius!” advertises lessons in the Polgar method to “build up your own Genius Life Plan.” The bestseller Talent Is Overrated used the Polgar sisters and Tiger Woods as proof that a head start in deliberate practice is the key to success in “virtually any activity that matters to you.” The powerful lesson is that anything in the world can be conquered in the same way. It relies on one very important, and very unspoken, assumption: that chess and golf are representative examples of all the activities that matter to you. • • • Just how much of the world, and how many of the things humans want to learn and do, are really like chess and golf? Psychologist Gary Klein is a pioneer of the “naturalistic decision making” (NDM) model of expertise; NDM researchers observe expert performers in their natural course of work to learn how they make high-stakes decisions under time pressure. Klein has shown that experts in an array of fields are remarkably similar to chess masters in that they instinctively recognize familiar patterns. When I asked Garry Kasparov, perhaps the greatest chess player in history, to explain his decision process for a move, he told me, “I see a move, a combination, almost instantly,” based on patterns he has seen before. Kasparov said he would bet that grandmasters usually make the move that springs to mind in the first few seconds of thought. 
Klein studied firefighting commanders and estimated that around 80 percent of their decisions are also made instinctively and in seconds. After years of firefighting, they recognize repeating patterns in the behavior of flames and of burning buildings on the verge of collapse. When he studied nonwartime naval commanders who were trying to avoid disasters, like mistaking a commercial flight for an enemy and shooting it down, he saw that they very quickly discerned potential threats. Ninety-five percent of the time, the commanders recognized a common pattern and chose a common course of action that was the first to come to mind. One of Klein’s colleagues, psychologist Daniel Kahneman, studied human decision making from the “heuristics and biases” model of human judgment. His findings could hardly have been more different from Klein’s. When Kahneman probed the judgments of highly trained experts, he often found that experience had not helped at all. Even worse, it frequently bred confidence but not skill. Kahneman included himself in that critique. He first began to doubt the link between experience and expertise in 1955, as a young lieutenant in the psychology unit of the Israel Defense Forces. One of his duties was to assess officer candidates through tests adapted from the British army. In one exercise, teams of eight had to get themselves and a length of telephone pole over a six-foot wall without letting the pole touch the ground, and without any of the soldiers or the pole touching the wall.* The differences in individuals’ performances were so stark, with clear leaders, followers, braggarts, and wimps naturally emerging under the stress of the task, that Kahneman and his fellow evaluators grew confident they could analyze the candidates’ leadership qualities and identify how they would perform in officer training and in combat. They were completely mistaken. 
Every few months, they had a “statistics day” where they got feedback on how accurate their predictions had been. Every time, they learned they had done barely better than blind guessing. Every time, they gained experience and gave confident judgments. And every time, they did not improve. Kahneman marveled at the “complete lack of connection between the statistical information and the compelling experience of insight.” Around that same time, an influential book on expert judgment was published that Kahneman told me impressed him “enormously.” It was a wide-ranging review of research that rocked psychology because it showed experience simply did not create skill in a wide range of real-world scenarios, from college administrators assessing student potential to psychiatrists predicting patient performance to human resources professionals deciding who will succeed in job training. In those domains, which involved human behavior and where patterns did not clearly repeat, repetition did not cause learning. Chess, golf, and firefighting are exceptions, not the rule. The difference between what Klein and Kahneman documented in experienced professionals presented a profound conundrum: Do specialists get better with experience, or not? In 2009, Kahneman and Klein took the unusual step of coauthoring a paper in which they laid out their views and sought common ground. And they found it. Whether or not experience inevitably led to expertise, they agreed, depended entirely on the domain in question. Narrow experience made for better chess and poker players and firefighters, but not for better predictors of financial or political trends, or of how employees or patients would perform. The domains Klein studied, in which instinctive pattern recognition worked powerfully, are what psychologist Robin Hogarth termed “kind” learning environments. Patterns repeat over and over, and feedback is extremely accurate and usually very rapid. 
In golf or chess, a ball or piece is moved according to rules and within defined boundaries, a consequence is quickly apparent, and similar challenges occur repeatedly. Drive a golf ball, and it either goes too far or not far enough; it slices, hooks, or flies straight. The player observes what happened, attempts to correct the error, tries again, and repeats for years. That is the very definition of deliberate practice, the type identified with both the ten-thousand-hours rule and the rush to early specialization in technical training. The learning environment is kind because a learner improves simply by engaging in the activity and trying to do better. Kahneman was focused on the flip side of kind learning environments; Hogarth called them “wicked.” In wicked domains, the rules of the game are often unclear or incomplete, there may or may not be repetitive patterns and they may not be obvious, and feedback is often delayed, inaccurate, or both. In the most devilishly wicked learning environments, experience will reinforce the exact wrong lessons. Hogarth noted a famous New York City physician renowned for his skill as a diagnostician. The man’s particular specialty was typhoid fever, and he examined patients for it by feeling around their tongues with his hands. Again and again, his testing yielded a positive diagnosis before the patient displayed a single symptom. And over and over, his diagnosis turned out to be correct. As another physician later pointed out, “He was a more productive carrier, using only his hands, than Typhoid Mary.” Repetitive success, it turned out, taught him the worst possible lesson. Few learning environments are that wicked, but it doesn’t take much to throw experienced pros off course. Expert firefighters, when faced with a new situation, like a fire in a skyscraper, can find themselves suddenly deprived of the intuition formed in years of house fires, and prone to poor decisions. 
With a change of the status quo, chess masters too can find that the skill they took years to build is suddenly obsolete. • • • In a 1997 showdown billed as the final battle for supremacy between natural and artificial intelligence, IBM supercomputer Deep Blue defeated Garry Kasparov. Deep Blue evaluated two hundred million positions per second. That is a tiny fraction of possible chess positions—the number of possible game sequences is more than atoms in the observable universe—but plenty enough to beat the best human. According to Kasparov, “Today the free chess app on your mobile phone is stronger than me.” He is not being rhetorical. “Anything we can do, and we know how to do it, machines will do it better,” he said at a recent lecture. “If we can codify it, and pass it to computers, they will do it better.” Still, losing to Deep Blue gave him an idea. In playing computers, he recognized what artificial intelligence scholars call Moravec’s paradox: machines and humans frequently have opposite strengths and weaknesses. There is a saying that “chess is 99 percent tactics.” Tactics are short combinations of moves that players use to get an immediate advantage on the board. When players study all those patterns, they are mastering tactics. Bigger-picture planning in chess—how to manage the little battles to win the war—is called strategy. As Susan Polgar has written, “you can get a lot further by being very good in tactics”—that is, knowing a lot of patterns—“and have only a basic understanding of strategy.” Thanks to their calculation power, computers are tactically flawless compared to humans. Grandmasters predict the near future, but computers do it better. What if, Kasparov wondered, computer tactical prowess were combined with human big-picture, strategic thinking? In 1998, he helped organize the first “advanced chess” tournament, in which each human player, including Kasparov himself, paired with a computer. Years of pattern study were obviated. 
The machine partner could handle tactics so the human could focus on strategy. It was like Tiger Woods facing off in a golf video game against the best gamers. His years of repetition would be neutralized, and the contest would shift to one of strategy rather than tactical execution. In chess, it changed the pecking order instantly. “Human creativity was even more paramount under these conditions, not less,” according to Kasparov. Kasparov settled for a 3–3 draw with a player he had trounced four games to zero just a month earlier in a traditional match. “My advantage in calculating tactics had been nullified by the machine.” The primary benefit of years of experience with specialized training was outsourced, and in a contest where humans focused on strategy, he suddenly had peers. A few years later, the first “freestyle chess” tournament was held. Teams could be made up of multiple humans and computers. The lifetime-of-specialized-practice advantage that had been diluted in advanced chess was obliterated in freestyle. A duo of amateur players with three normal computers not only destroyed Hydra, the best chess supercomputer, they also crushed teams of grandmasters using computers. Kasparov concluded that the humans on the winning team were the best at “coaching” multiple computers on what to examine, and then synthesizing that information for an overall strategy. Human/Computer combo teams—known as “centaurs”—were playing the highest level of chess ever seen. If Deep Blue’s victory over Kasparov signaled the transfer of chess power from humans to computers, the victory of centaurs over Hydra symbolized something more interesting still: humans empowered to do what they do best without the prerequisite of years of specialized pattern recognition. In 2014, an Abu Dhabi–based chess site put up $20,000 in prize money for freestyle players to compete in a tournament that also included games in which chess programs played without human intervention. 
The winning team comprised four people and several computers. The captain and primary decision maker was Anson Williams, a British engineer with no official chess rating. His teammate, Nelson Hernandez, told me, “What people don’t understand is that freestyle involves an integrated set of skills that in some cases have nothing to do with playing chess.” In traditional chess, Williams was probably at the level of a decent amateur. But he was well versed in computers and adept at integrating streaming information for strategy decisions. As a teenager, he had been outstanding at the video game Command & Conquer, known as a “real time strategy” game because players move simultaneously. In freestyle chess, he had to consider advice from teammates and various chess programs and then very quickly direct the computers to examine particular possibilities in more depth. He was like an executive with a team of mega-grandmaster tactical advisers, deciding whose advice to probe more deeply and ultimately whose to heed. He played each game cautiously, expecting a draw, but trying to set up situations that could lull an opponent into a mistake. In the end, Kasparov did figure out a way to beat the computer: by outsourcing tactics, the part of human expertise that is most easily replaced, the part that he and the Polgar prodigies spent years honing. • • • In 2007, National Geographic TV gave Susan Polgar a test. They sat her at a sidewalk table in the middle of a leafy block of Manhattan’s Greenwich Village, in front of a cleared chessboard. New Yorkers in jeans and fall jackets went about their jaywalking business as a white truck bearing a large diagram of a chessboard with twenty-eight pieces in midgame play took a left turn onto Thompson Street, past the deli, and past Susan Polgar. She glanced at the diagram as the truck drove by, and then perfectly re-created it on the board in front of her. 
The show was reprising a series of famous chess experiments that pulled back the curtain on kind-learning-environment skills. The first took place in the 1940s, when Dutch chess master and psychologist Adriaan de Groot flashed midgame chessboards in front of players of different ability levels, and then asked them to re-create the boards as well as they could. A grandmaster repeatedly re-created the entire board after seeing it for only three seconds. A master-level player managed that half as often as the grandmaster. A lesser, city champion player and an average club player were never able to re-create the board accurately. Just like Susan Polgar, grandmasters seemed to have photographic memories. After Susan succeeded in her first test, National Geographic TV turned the truck around to show the other side, which had a diagram with pieces placed at random. When Susan saw that side, even though there were fewer pieces, she could barely re-create anything at all. That test reenacted an experiment from 1973, in which two Carnegie Mellon University psychologists, William G. Chase and soon-to-be Nobel laureate Herbert A. Simon, repeated the De Groot exercise, but added a wrinkle. This time, the chess players were also given boards with the pieces in an arrangement that would never actually occur in a game. Suddenly, the experts performed just like the lesser players. The grandmasters never had photographic memories after all. Through repetitive study of game patterns, they had learned to do what Chase and Simon called “chunking.” Rather than struggling to remember the location of every individual pawn, bishop, and rook, the brains of elite players grouped pieces into a smaller number of meaningful chunks based on familiar patterns. Those patterns allow expert players to immediately assess the situation based on experience, which is why Garry Kasparov told me that grandmasters usually know their move within seconds. 
For Susan Polgar, when the van drove by the first time, the diagram was not twenty-eight items, but five different meaningful chunks that indicated how the game was progressing. Chunking helps explain instances of apparently miraculous, domain-specific memory, from musicians playing long pieces by heart to quarterbacks recognizing patterns of players in a split second and making a decision to throw. The reason that elite athletes seem to have superhuman reflexes is that they recognize patterns of ball or body movements that tell them what’s coming before it happens. When tested outside of their sport context, their superhuman reactions disappear. We all rely on chunking every day in skills in which we are expert. Take ten seconds and try to memorize as many of these twenty words as you can: Because groups twenty patterns meaningful are words easier into chunk remember really sentence familiar can to you much in a. Okay, now try again: Twenty words are really much easier to remember in a meaningful sentence because you can chunk familiar patterns into groups. Those are the same twenty pieces of information, but over the course of your life, you’ve learned patterns of words that allow you to instantly make sense of the second arrangement, and to remember it much more easily. Your restaurant server doesn’t just happen to have a miraculous memory; like musicians and quarterbacks, they’ve learned to group recurring information into chunks. Studying an enormous number of repetitive patterns is so important in chess that early specialization in technical practice is critical. Psychologists Fernand Gobet (an international master) and Guillermo Campitelli (coach to future grandmasters) found that the chances of a competitive chess player reaching international master status (a level down from grandmaster) dropped from one in four to one in fifty-five if rigorous training had not begun by age twelve. 
Chunking can seem like magic, but it comes from extensive, repetitive practice. Laszlo Polgar was right to believe in it. His daughters don’t even constitute the most extreme evidence. For more than fifty years, psychiatrist Darold Treffert studied savants, individuals with an insatiable drive to practice in one domain, and ability in that area that far outstrips their abilities in other areas. “Islands of genius,” Treffert calls it.* Treffert documented the almost unbelievable feats of savants like pianist Leslie Lemke, who can play thousands of songs from memory. Because Lemke and other savants have seemingly limitless retrieval capacity, Treffert initially attributed their abilities to perfect memories; they are human tape recorders. Except, when they are tested after hearing a piece of music for the first time, musical savants reproduce “tonal” music—the genre of nearly all pop and most classical music—more easily than “atonal” music, in which successive notes do not follow familiar harmonic structures. If savants were human tape recorders playing notes back, it would make no difference whether they were asked to re-create music that follows popular rules of composition or not. But in practice, it makes an enormous difference. In one study of a savant pianist, the researcher, who had heard the man play hundreds of songs flawlessly, was dumbstruck when the savant could not re-create an atonal piece even after a practice session with it. “What I heard seemed so unlikely that I felt obliged to check that the keyboard had not somehow slipped into transposing mode,” the researcher recorded. “But he really had made a mistake, and the errors continued.” Patterns and familiar structures were critical to the savant’s extraordinary recall ability. Similarly, when artistic savants are briefly shown pictures and asked to reproduce them, they do much better with images of real-life objects than with more abstract depictions. 
It took Treffert decades to realize he had been wrong, and that savants have more in common with prodigies like the Polgar sisters than he thought. They do not merely regurgitate. Their brilliance, just like the Polgar brilliance, relies on repetitive structures, which is precisely what made the Polgars’ skill so easy to automate. • • • With the advances made by the AlphaZero chess program (owned by an AI arm of Google’s parent company), perhaps even the top centaurs would be vanquished in a freestyle tournament. Unlike previous chess programs, which used brute processing force to calculate an enormous number of possible moves and rate them according to criteria set by programmers, AlphaZero actually taught itself to play. It needed only the rules, and then to play itself a gargantuan number of times, keeping track of what tends to work and what doesn’t, and using that to improve. In short order, it beat the best chess programs. It did the same with the game of Go, which has many more possible positions. But the centaur lesson remains: the more a task shifts to an open world of big-picture strategy, the more humans have to add. AlphaZero programmers touted their impressive feat by declaring that their creation had gone from “tabula rasa” (blank slate) to master on its own. But starting with a game is anything but a blank slate. The program is still operating in a constrained, rule-bound world. Even in video games that are less bound by tactical patterns, computers have faced a greater challenge. The latest video game challenge for artificial intelligence is StarCraft, a franchise of real-time strategy games in which fictional species go to war for supremacy in some distant reach of the Milky Way. It requires much more complex decision making than chess. There are battles to manage, infrastructure to plan, spying to do, geography to explore, and resources to collect, all of which inform one another. 
Computers struggled to win at StarCraft, Julian Togelius, an NYU professor who studies gaming AI, told me in 2017. Even when they did beat humans in individual games, human players adjusted with “long-term adaptive strategy” and started winning. “There are so many layers of thinking,” he said. “We humans sort of suck at all of them individually, but we have some kind of very approximate idea about each of them and can combine them and be somewhat adaptive. That seems to be what the trick is.” In 2019, in a limited version of StarCraft, AI beat a pro for the first time. (The pro adapted and earned a win after a string of losses.) But the game’s strategic complexity provides a lesson: the bigger the picture, the more unique the potential human contribution. Our greatest strength is the exact opposite of narrow specialization. It is the ability to integrate broadly. According to Gary Marcus, a psychology and neural science professor who sold his machine learning company to Uber, “In narrow enough worlds, humans may not have much to contribute much longer. In more open-ended games, I think they certainly will. Not just games, in open ended real-world problems we’re still crushing the machines.” The progress of AI in the closed and orderly world of chess, with instant feedback and bottomless data, has been exponential. In the rule-bound but messier world of driving, AI has made tremendous progress, but challenges remain. In a truly open-world problem devoid of rigid rules and reams of perfect historical data, AI has been disastrous. IBM’s Watson destroyed at Jeopardy! and was subsequently pitched as a revolution in cancer care, where it flopped so spectacularly that several AI experts told me they worried its reputation would taint AI research in health-related fields. As one oncologist put it, “The difference between winning at Jeopardy! and curing all cancer is that we know the answer to Jeopardy! 
questions.” With cancer, we’re still working on posing the right questions in the first place. In 2009, a report in the esteemed journal Nature announced that Google Flu Trends could use search query patterns to predict the winter spread of flu more rapidly than and just as accurately as the Centers for Disease Control and Prevention. But Google Flu Trends soon got shakier, and in the winter of 2013 it predicted more than double the prevalence of flu that actually occurred in the United States. Today, Google Flu Trends is no longer publishing estimates, and just has a holding page saying that “it is still early days” for this kind of forecasting. Tellingly, Marcus gave me this analogy for the current limits of expert machines: “AI systems are like savants.” They need stable structures and narrow worlds. When we know the rules and answers, and they don’t change over time—chess, golf, playing classical music—an argument can be made for savant-like hyperspecialized practice from day one. But those are poor models of most things humans want to learn. When narrow specialization is combined with an unkind domain, the human tendency to rely on experience of familiar patterns can backfire horribly—like the expert firefighters who suddenly make poor choices when faced with a fire in an unfamiliar structure. Chris Argyris, who helped create the Yale School of Management, noted the danger of treating the wicked world as if it is kind. He studied high-powered consultants from top business schools for fifteen years, and saw that they did really well on business school problems that were well defined and quickly assessed. But they employed what Argyris called single-loop learning, the kind that favors the first familiar solution that comes to mind. Whenever those solutions went wrong, the consultant usually got defensive. 
Argyris found their “brittle personalities” particularly surprising given that “the essence of their job is to teach others how to do things differently.” Psychologist Barry Schwartz demonstrated a similar, learned inflexibility among experienced practitioners when he gave college students a logic puzzle that involved hitting switches to turn light bulbs on and off in sequence, and that they could play over and over. It could be solved in seventy different ways, with a tiny money reward for each success. The students were not given any rules, and so had to proceed by trial and error.* If a student found a solution, they repeated it over and over to get more money, even if they had no idea why it worked. Later on, new students were added, and all were now asked to discover the general rule of all solutions. Incredibly, every student who was brand-new to the puzzle discovered the rule for all seventy solutions, while only one of the students who had been getting rewarded for a single solution did. The subtitle of Schwartz’s paper: “How Not to Teach People to Discover Rules”—that is, by providing rewards for repetitive short-term success with a narrow range of solutions. All this is bad news for some of the business world’s favorite successful-learning analogies—the Polgars, Tiger, and to some degree analogies based in any sport or game. Compared to golf, a sport like tennis is much more dynamic, with players adjusting to opponents every second, to surfaces, and sometimes to their own teammates. (Federer was a 2008 Olympic gold medalist in doubles.) But tennis is still very much on the kind end of the spectrum compared to, say, a hospital emergency room, where doctors and nurses do not automatically find out what happens to a patient after their encounter. They have to find ways to learn beyond practice, and to assimilate lessons that might even contradict their direct experience. The world is not golf, and most of it isn’t even tennis. 
As Robin Hogarth put it, much of the world is “Martian tennis.” You can see the players on a court with balls and rackets, but nobody has shared the rules. It is up to you to derive them, and they are subject to change without notice. • • • We have been using the wrong stories. Tiger’s story and the Polgar story give the false impression that human skill is always developed in an extremely kind learning environment. If that were the case, specialization that is both narrow and technical and that begins as soon as possible would usually work. But it doesn’t even work in most sports. If the amount of early, specialized practice in a narrow area were the key to innovative performance, savants would dominate every domain they touched, and child prodigies would always go on to adult eminence. As psychologist Ellen Winner, one of the foremost authorities on gifted children, noted, no savant has ever been known to become a “Big-C creator,” who changed their field. There are domains beyond chess in which massive amounts of narrow practice make for grandmaster-like intuition. Like golfers, surgeons improve with repetition of the same procedure. Accountants and bridge and poker players develop accurate intuition through repetitive experience. Kahneman pointed to those domains’ “robust statistical regularities.” But when the rules are altered just slightly, it makes experts appear to have traded flexibility for narrow skill. In research in the game of bridge where the order of play was altered, experts had a more difficult time adapting to new rules than did nonexperts. When experienced accountants were asked in a study to use a new tax law for deductions that replaced a previous one, they did worse than novices. 
Erik Dane, a Rice University professor who studies organizational behavior, calls this phenomenon “cognitive entrenchment.” His suggestions for avoiding it are about the polar opposite of the strict version of the ten-thousand-hours school of thought: vary challenges within a domain drastically, and, as a fellow researcher put it, insist on “having one foot outside your world.” Scientists and members of the general public are about equally likely to have artistic hobbies, but scientists inducted into the highest national academies are much more likely to have avocations outside of their vocation. And those who have won the Nobel Prize are more likely still. Compared to other scientists, Nobel laureates are at least twenty-two times more likely to partake as an amateur actor, dancer, magician, or other type of performer. Nationally recognized scientists are much more likely than other scientists to be musicians, sculptors, painters, printmakers, woodworkers, mechanics, electronics tinkerers, glassblowers, poets, or writers, of both fiction and nonfiction. And, again, Nobel laureates are far more likely still. The most successful experts also belong to the wider world. “To him who observes them from afar,” said Spanish Nobel laureate Santiago Ramón y Cajal, the father of modern neuroscience, “it appears as though they are scattering and dissipating their energies, while in reality they are channeling and strengthening them.” The main conclusion of work that took years of studying scientists and engineers, all of whom were regarded by peers as true technical experts, was that those who did not make a creative contribution to their field lacked aesthetic interests outside their narrow area. As psychologist and prominent creativity researcher Dean Keith Simonton observed, “rather than obsessively focus[ing] on a narrow topic,” creative achievers tend to have broad interests. 
“This breadth often supports insights that cannot be attributed to domain-specific expertise alone.” Those findings are reminiscent of a speech Steve Jobs gave, in which he famously recounted the importance of a calligraphy class to his design aesthetics. “When we were designing the first Macintosh computer, it all came back to me,” he said. “If I had never dropped in on that single course in college, the Mac would have never had multiple typefaces or proportionally spaced fonts.” Or electrical engineer Claude Shannon, who launched the Information Age thanks to a philosophy course he took to fulfill a requirement at the University of Michigan. In it, he was exposed to the work of self-taught nineteenth-century English logician George Boole, who assigned a value of 1 to true statements and 0 to false statements and showed that logic problems could be solved like math equations. It resulted in absolutely nothing of practical importance until seventy years after Boole passed away, when Shannon did a summer internship at AT&T’s Bell Labs research facility. There he recognized that he could combine telephone call-routing technology with Boole’s logic system to encode and transmit any type of information electronically. It was the fundamental insight on which computers rely. “It just happened that no one else was familiar with both those fields at the same time,” Shannon said. In 1979, Christopher Connolly cofounded a psychology consultancy in the United Kingdom to help high achievers (initially athletes, but then others) perform at their best. Over the years, Connolly became curious about why some professionals floundered outside a narrow expertise, while others were remarkably adept at expanding their careers—moving from playing in a world-class orchestra, for example, to running one. Thirty years after he started, Connolly returned to school to do a PhD investigating that very question, under Fernand Gobet, the psychologist and chess international master. 
Connolly’s primary finding was that early in their careers, those who later made successful transitions had broader training and kept multiple “career streams” open even as they pursued a primary specialty. They “traveled on an eight-lane highway,” he wrote, rather than down a single-lane one-way street. They had range. The successful adapters were excellent at taking knowledge from one pursuit and applying it creatively to another, and at avoiding cognitive entrenchment. They employed what Hogarth called a “circuit breaker.” They drew on outside experiences and analogies to interrupt their inclination toward a previous solution that may no longer work. Their skill was in avoiding the same old patterns. In the wicked world, with ill-defined challenges and few rigid rules, range can be a life hack. Pretending the world is like golf and chess is comforting. It makes for a tidy kind-world message, and some very compelling books. The rest of this one will begin where those end—in a place where the popular sport is Martian tennis, with a view into how the modern world became so wicked in the first place. CHAPTER 2 How the Wicked World Was Made THE TOWN OF DUNEDIN sits at the base of a hilly peninsula that juts off of New Zealand’s South Island into the South Pacific. The peninsula is famous for yellow-eyed penguins, and Dunedin boasts, demurely, the world’s steepest residential street. It also features the University of Otago, the oldest university in New Zealand, and home to James Flynn, a professor of political studies who changed how psychologists think about thinking. He started in 1981, intrigued by a thirty-year-old paper that reported IQ test scores of American soldiers in World Wars I and II. The World War II soldiers had performed better, by a lot. A World War I soldier who scored smack in the middle of his peers—the 50th percentile—would have made only the 22nd percentile compared to soldiers in World War II. 
Flynn wondered if perhaps civilians had experienced a similar improvement. “I thought, if IQ gains had occurred anywhere,” he told me, “maybe they had occurred everywhere.” If he was right, psychologists had been missing something big right before their eyes. Flynn wrote to researchers in other countries asking for data, and on a dull November Saturday in 1984, he found a letter in his university mailbox. It was from a Dutch researcher, and it contained years of raw data from IQ tests given to young men in the Netherlands. The data were from a test known as Raven’s Progressive Matrices, designed to gauge the test taker’s ability to make sense of complexity. Each question of the test shows a set of abstract designs with one design missing. The test taker must try to fill in the missing design to complete a pattern. Raven’s was conceived to be the epitome of a “culturally reduced” test; performance should be unaffected by material learned in life, inside or outside of school. Should Martians alight on Earth, Raven’s should be the test capable of determining how bright they are. And yet Flynn could immediately see that young Dutchmen had made enormous gains from one generation to the next. Flynn found more clues in test reference manuals. IQ tests are all standardized so that the average score is always 100 points. (They are graded based on a curve, with 100 in the middle.) Flynn noticed that the tests had to be restandardized from time to time to keep the average at 100, because test takers were giving more correct answers than they had in the past. In the twelve months after he received the Dutch letter, Flynn collected data from fourteen countries. Every single one showed huge gains for both children and adults. “Our advantage over our ancestors,” as he put it, is “from the cradle to the grave.” Flynn had asked the right question. Score gains had occurred everywhere. 
Other academics had stumbled upon pieces of the same data earlier, but none had investigated whether it was part of a global pattern, even those who were having to tweak the test scoring system to keep the average at 100. “As an outsider,” Flynn told me, “things strike me as surprising that I think people trained in psychometrics just accepted.” • • • The Flynn effect—the increase in correct IQ test answers with each new generation in the twentieth century—has now been documented in more than thirty countries. The gains are startling: three points every ten years. To put that in perspective, if an adult who scored average today were compared to adults a century ago, she would be in the 98th percentile. When Flynn published his revelation in 1987, it hit the community of researchers who study cognitive ability like a firebomb. The American Psychological Association convened an entire meeting on the issue, and psychologists invested in the immutable nature of IQ test scores offered an array of explanations to usher the effect away, from more education and better nutrition—which presumably contributed—to test-taking experience, but none fit the unusual pattern of score improvements. On tests that gauged material picked up in school or with independent reading or study—general knowledge, arithmetic, vocabulary—scores hardly budged. Meanwhile, performance on more abstract tasks that are never formally taught, like the Raven’s matrices, or “similarities” tests, which require a description of how two things are alike, skyrocketed. A young person today asked to give similarities between “dusk” and “dawn” might immediately realize that both connote times of day. But they would be far more likely than their grandmothers to produce a higher-level similarity: both separate day from night. A child today who scores average on similarities would be in the 94th percentile of her grandparents’ generation. 
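The percentile arithmetic behind the Flynn effect can be checked directly: IQ tests are standardized to a mean of 100 with a standard deviation of 15 (the SD value is a standard convention, not stated in the text), so a century of gains at three points per decade puts a modern average scorer two standard deviations above the old mean. A minimal sketch, using only the normal distribution:

```python
from math import erf, sqrt

def normal_percentile(z):
    # Cumulative probability of a standard normal score,
    # via the error function (no external libraries needed).
    return 0.5 * (1 + erf(z / sqrt(2)))

gain_per_decade = 3          # Flynn effect: ~3 IQ points per decade
total_gain = gain_per_decade * 10   # 30 points over a century
sd = 15                      # conventional IQ standard deviation

# A modern average scorer, measured against the century-old distribution:
z = total_gain / sd          # z = 2.0
print(round(normal_percentile(z) * 100, 1))  # ~97.7, i.e. roughly the 98th percentile
```

The same arithmetic reproduces the "94th percentile" figure for the similarities subtest, which rose faster than overall IQ.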
When a group of Estonian researchers used national test scores to compare word understandings of schoolkids in the 1930s to those in 2006, they saw that improvement came very specifically on the most abstract words. The more abstract the word, the bigger the improvement. The kids barely bested their grandparents on words for directly observable objects or phenomena (“hen,” “eating,” “illness”), but they improved massively on imperceptible concepts (“law,” “pledge,” “citizen”). The gains around the world on Raven’s Progressive Matrices—where change was least expected—were the biggest of all. “The huge Raven’s gains show that today’s children are far better at solving problems on the spot without a previously learned method for doing so,” Flynn concluded. They are more able to extract rules and patterns where none are given. Even in countries that have recently had a decrease in verbal and math IQ test scores, Raven’s scores went up. The cause, it seemed, was some ineffable thing in modern air. Not only that, but the mystery air additive somehow supercharged modern brains specifically for the most abstract tests. What manner of change, Flynn wondered, could be at once so large and yet so particular? • • • Through the late 1920s and early 1930s, remote reaches of the Soviet Union were forced through social and economic changes that would normally take generations. Individual farmers in isolated areas of what is now Uzbekistan had long survived by cultivating small gardens for food, and cotton for everything else. Nearby in the mountain pasturelands of present-day Kyrgyzstan, herders kept animals. The population was entirely illiterate, and a hierarchical social structure was enforced by strict religious rules. The socialist revolution dismantled that way of life almost overnight. The Soviet government forced all that agricultural land to become large collective farms and began industrial development. The economy quickly became interconnected and complex. 
Farmers had to form collective work strategies, plan ahead for production, divvy up functions, and assess work along the way. Remote villages began communicating with distant cities. A network of schools opened in regions with 100 percent illiteracy, and adults began learning a system of matching symbols to sounds. Villagers had used numbers before, but only in practical transactions. Now they were taught the concept of a number as an abstraction that existed even without reference to counting animals or apportioning food. Some village women remained fully illiterate but took short courses on how to teach kindergartners. Other women were admitted for longer study at a teachers’ school. Classes in preschool education and the science and technology of agriculture were offered to students who had no formal education of any kind. Secondary schools and technical institutes soon followed. In 1931, amid that incredible transformation, a brilliant young Russian psychologist named Alexander Luria recognized a fleeting “natural experiment,” unique in the history of the world. He wondered if changing citizens’ work might also change their minds. When Luria arrived, the most remote villages had not yet been touched by the warp-speed restructuring of traditional society. Those villages gave him a control group. He learned the local language and brought fellow psychologists to engage villagers in relaxed social situations—teahouses or pastures—and discuss questions or tasks designed to discern their habits of mind. Some were very simple: present skeins of wool or silk in an array of hues and ask participants to describe them. The collective farmers and farm leaders, as well as the female students, easily picked out blue, red, and yellow, sometimes with variations, like dark blue or light yellow. The most remote villagers, who were still “premodern,” gave more diversified descriptions: cotton in bloom, decayed teeth, a lot of water, sky, pistachio. 
Then they were asked to sort the skeins into groups. The collective farmers, and young people with even a little formal education, did so easily, naturally forming color groups. Even when they did not know the name of a particular color, they had little trouble putting together darker and lighter shades of the same one. The remote villagers, on the other hand, refused, even those whose work was embroidery. “It can’t be done,” they said, or, “None of them are the same, you can’t put them together.” When prodded vigorously, and only if they were allowed to make many small groups, some relented and created sets that were apparently random. A few others appeared to sort the skeins according to color saturation, without regard to the color. Geometric shapes followed suit. The greater the dose of modernity, the more likely an individual grasped the abstract concept of “shapes” and made groups of triangles, rectangles, and circles, even if they had no formal education and did not know the shapes’ names. The remote villagers, meanwhile, saw nothing alike in a square drawn with solid lines and the same exact square drawn with dotted lines. To Alieva, a twenty-six-year-old remote villager, the solid-line square was obviously a map, and the dotted-line square was a watch. “How can a map and a watch be put together?” she asked, incredulous. Khamid, a twenty-four-year-old remote villager, insisted that filled and unfilled circles could not go together because one was a coin and the other a moon. The pattern continued for every genre of question. Pressed to make conceptual groupings—akin to the similarities questions on IQ tests—remote villagers reverted to practical narratives based on their direct experience. When psychologists attempted to explain a “which one does not belong” grouping exercise to thirty-nine-year-old Rakmat, they gave him the example of three adults and one child, with the child obviously different from the others. Except Rakmat could not see it that way. 
“The boy must stay with the others!” he argued. The adults are working, “and if they have to keep running out to fetch things, they’ll never get the job done, but the boy can do the running for them.” Okay, then, how about a hammer, a saw, a hatchet, and a log—three of them are tools. They are not a group, Rakmat replied, because they are useless without the log, so why would they be together? Other villagers removed either the hammer or the hatchet, which they saw as less versatile for use with the log, unless they considered pounding the hatchet into the log with the hammer, in which case it could stay. Perhaps, then, bird/rifle/dagger/bullet? You can’t possibly remove one and have a group, a remote villager insisted. The bullet must be loaded in the rifle to kill the bird, and “then you have to cut the bird up with the dagger, since there’s no other way to do it.” These were just the introductions explaining the grouping task, not the actual questions. No amount of cajoling, explanation, or examples could get remote villagers to use reasoning based on any concept that was not a concrete part of their daily lives. The farmers and students who had begun to join the modern world were able to practice a kind of thinking called “eduction,” to work out guiding principles when given facts or materials, even in the absence of instructions, and even when they had never seen the material before. This, it turns out, is precisely what Raven’s Progressive Matrices tests. Imagine presenting the villagers living in premodern circumstances with abstract designs from the Raven’s test. Some of the changes wrought by modernity and collective culture seem almost magical. Luria found that most remote villagers were not subject to the same optical illusions as citizens of the industrialized world, like the Ebbinghaus illusion. Which middle circle below looks bigger? If you said the one on the right, you’re probably a citizen of the industrialized world. 
The remote villagers saw, correctly, that they are the same, while the collective farmers and women in teachers’ school picked the one on the right. Those findings have been repeated in other traditional societies, and scientists have suggested it may reflect the fact that premodern people are not as drawn to the holistic context—the relationship of the various circles to one another—so their perception is not changed by the presence of extra circles. To use a common metaphor, premodern people miss the forest for the trees; modern people miss the trees for the forest. Since Luria’s voyage to the interior, scientists have replicated his work in other cultures. The Kpelle people in Liberia were subsistence rice farmers, but in the 1970s roads began snaking toward them, connecting the Kpelle to cities. Given similarities tests, teenagers who were engaged with modern institutions grouped items by abstract categories (“All of these things can keep us warm”), while the traditional teens generated groups that were comparatively arbitrary, and changed frequently even when they were asked to repeat the exact same task. Because the touched-by-modernity teens had constructed meaningful thematic groups, they also had far superior recall when asked later to recount the items. The more they had moved toward modernity, the more powerful their abstract thinking, and the less they had to rely on their concrete experience of the world as a reference point. • • • In Flynn’s terms, we now see the world through “scientific spectacles.” He means that rather than relying on our own direct experiences, we make sense of reality through classification schemes, using layers of abstract concepts to understand how pieces of information relate to one another. 
We have grown up in a world of classification schemes totally foreign to the remote villagers; we classify some animals as mammals, and inside of that class make more detailed connections based on the similarity of their physiology and DNA. Words that represent concepts that were previously the domain of scholars became widely understood in a few generations. The word “percent” was almost absent from books in 1900. By 2000 it appeared about once every five thousand words. (This chapter is 5,500 words long.) Computer programmers pile layers of abstraction. (They do very well on Raven’s.) In the progress bar on your computer screen that fills up to indicate a download, abstractions are legion, from the fundamental—the programming language that created it is a representation of binary code, the raw 1s and 0s the computer uses—to the psychological: the bar is a visual projection of time that provides peace of mind by estimating the progress of an immense number of underlying activities. Lawyers might consider how results of one court case brought by an individual in Oklahoma could be relevant to a different one brought by a company in California. In order to prep, they might try out different hypothetical arguments while putting themselves in the shoes of an opposing attorney to predict how they will argue. Conceptual schemes are flexible, able to arrange information and ideas for a wide variety of uses, and to transfer knowledge between domains. Modern work demands knowledge transfer: the ability to apply knowledge to new situations and different domains. Our most fundamental thought processes have changed to accommodate increasing complexity and the need to derive new patterns rather than rely only on familiar ones. Our conceptual classification schemes provide a scaffolding for connecting knowledge, making it accessible and flexible. 
Research on thousands of adults in six industrializing nations found that exposure to modern work with self-directed problem solving and nonrepetitive challenges was correlated with being “cognitively flexible.” As Flynn makes sure to point out, this does not mean that brains now have more inherent potential than a generation ago, but rather that utilitarian spectacles have been swapped for spectacles through which the world is classified by concepts.* Even recently, within some very traditional or orthodox religious communities that have modernized but that still block women from engaging in modern work, the Flynn effect has proceeded more slowly for women than for men in the same community. Exposure to the modern world has made us better adapted for complexity, and that has manifested as flexibility, with profound implications for the breadth of our intellectual world. In every cognitive direction, the minds of premodern citizens were severely constrained by the concrete world before them. With cajoling, some solved the following logic sequence: “Cotton grows well where it is hot and dry. England is cold and damp. Can cotton grow there or not?” They had direct experience growing cotton, so some of them could answer (tentatively and when pushed) for a country they had never visited. The same exact puzzle with different details stumped them: “In the Far North, where there is snow, all bears are white. Novaya Zemlya is in the Far North and there is always snow there. What colors are the bears there?” That time, no amount of pushing could get the remote villagers to answer. They would respond only with principles. “Your words can be answered only by someone who was there,” one man said, even though he had never been to England but had just answered the cotton question. But even a faint taste of modern work began to change that. 
Given the white bear puzzle, Abdull, forty-five and barely literate but chairman of a collective farm, would not give an answer confidently, but he did exercise formal logic. “To go by your words,” he said, “they should all be white.” The transition completely transformed the villagers’ inner worlds. When the scientists from Moscow asked the villagers what they would like to know about them or the place they came from, the isolated farmers and herders generally could not come up with a single question. “I haven’t seen what people do in other cities,” one said, “so how can I ask?” Whereas those engaged in collective farming were readily curious. “Well, you just spoke about white bears,” said thirty-one-year-old Akhmetzhan, a collective farmer. “I don’t understand where they come from.” He stopped for a moment to ponder. “And then you mentioned America. Is it governed by us or by some other power?” Nineteen-year-old Siddakh, who worked on a collective farm and had studied in a school for two years, was brimming with imaginative questions that probed self-improvement, from the personal to the local and global: “Well, what could I do to make our kolkhozniks [collective farmers] better people? How can we obtain bigger plants, or plant ones which will grow to be like big trees? And then I’m interested in how the world exists, where things come from, how the rich became rich and why the poor are poor.” Where the very thoughts of premodern villagers were circumscribed by their direct experiences, modern minds are comparatively free. This is not to say that one way of life is uniformly better than another. As Arab historiographer Ibn Khaldun, considered a founder of sociology, pointed out centuries ago, a city dweller traveling through the desert will be completely dependent on a nomad to keep him alive. So long as they remain in the desert, the nomad is a genius. 
But it is certainly true that modern life requires range, making connections across far-flung domains and ideas. Luria addressed this kind of “categorical” thinking, which Flynn would later style as scientific spectacles. “[It] is usually quite flexible,” Luria wrote. “Subjects readily shift from one attribute to another and construct suitable categories. They classify objects by substance (animals, flowers, tools), materials (wood, metal, glass), size (large, small), and color (light, dark), or other property. The ability to move freely, to shift from one category to another, is one of the chief characteristics of ‘abstract thinking.’” • • • Flynn’s great disappointment is the degree to which society, and particularly higher education, has responded to the broadening of the mind by pushing specialization, rather than focusing early training on conceptual, transferable knowledge. Flynn conducted a study in which he compared the grade point averages of seniors at one of America’s top state universities, from neuroscience to English majors, to their performance on a test of critical thinking. The test gauged students’ ability to apply fundamental abstract concepts from economics, social and physical sciences, and logic to common, real-world scenarios. Flynn was bemused to find that the correlation between the test of broad conceptual thinking and GPA was about zero. In Flynn’s words, “the traits that earn good grades at [the university] do not include critical ability of any broad significance.”* Each of twenty test questions gauged a form of conceptual thinking that can be put to widespread use in the modern world. For test items that required the kind of conceptual reasoning that can be gleaned with no formal training—detecting circular logic, for example—the students did well. But in terms of frameworks that can best put their conceptual reasoning skills to use, they were horrible. 
Biology and English majors did poorly on everything that was not directly related to their field. None of the majors, including psychology, understood social science methods. Science students learned the facts of their specific field without understanding how science should work in order to draw true conclusions. Neuroscience majors did not do particularly well on anything. Business majors performed very poorly across the board, including in economics. Econ majors did the best overall. Economics is a broad field by nature, and econ professors have been shown to apply the reasoning principles they’ve learned to problems outside their area.* Chemists, on the other hand, are extraordinarily bright, but in several studies struggled to apply scientific reasoning to nonchemistry problems. Students Flynn tested often mistook subtle value judgments for scientific conclusions, and in a question that presented a tricky scenario and required students not to mistake a correlation for evidence of causation, they performed worse than random. Almost none of the students in any major showed a consistent understanding of how to apply methods of evaluating truth they had learned in their own discipline to other areas. In that way, the students had something in common with Luria’s remote villagers—even the science majors were typically unable to generalize research methods from their own field to other fields. Flynn’s conclusion: “There is no sign that any department attempts to develop [anything] other than narrow critical competence.” • • • Flynn is now in his eighties. He has a full white beard, the wind-buffeted cheeks of a lifelong runner, and piles of white curls that tuft and billow like cumulus clouds around his head. His house on a hill in Dunedin looks out over a gently rolling green farmscape. When he recounts his own education at the University of Chicago, where he was captain of the cross-country team, he raises his voice. 
“Even the best universities aren’t developing critical intelligence,” he told me. “They aren’t giving students the tools to analyze the modern world, except in their area of specialization. Their education is too narrow.” He does not mean this in the simple sense that every computer science major needs an art history class, but rather that everyone needs habits of mind that allow them to dance across disciplines. Chicago has long prided itself on a core curriculum dedicated to interdisciplinary critical thinking. The two-year core, according to the university, “is intended as an introduction to the tools of inquiry used in every discipline—science, mathematics, humanities, and social sciences. The goal is not just to transfer knowledge, but to raise fundamental questions and to become familiar with the powerful ideas that shape our society.” But even at Chicago, Flynn argues, his education did not maximize the modern potential for applying conceptual thinking across domains. Professors, he told me, are just too eager to share their favorite facts gleaned from years of acceleratingly narrow study. He has taught for fifty years, from Cornell to Canterbury, and is quick to include himself in that criticism. When he taught intro to moral and political philosophy, he couldn’t resist the urge to impart his favorite minutiae from Plato, Aristotle, Hobbes, Marx, and Nietzsche. Flynn introduced broad concepts in class, but he is sure that he often buried them in a mountain of other information specific to that class alone—a bad habit he worked to overcome. The study he conducted at the state university convinced him that college departments rush to develop students in a narrow specialty area, while failing to sharpen the tools of thinking that can serve them in every area. This must change, he argues, if students are to capitalize on their unprecedented capacity for abstract thought. They must be taught to think before being taught what to think about. 
Students come prepared with scientific spectacles, but do not leave carrying a scientific-reasoning Swiss Army knife. Here and there, professors have begun to pick up the challenge. A class at the University of Washington titled “Calling Bullshit” (in staid coursebook language: INFO 198/BIOL 106B), focused on broad principles fundamental to understanding the interdisciplinary world and critically evaluating the daily firehose of information. When the class was first posted in 2017, registration filled up in the first minute. Jeannette Wing, a computer science professor at Columbia University and former corporate vice president of Microsoft Research, has pushed broad “computational thinking” as the mental Swiss Army knife. She advocated that it become as fundamental as reading, even for those who will have nothing to do with computer science or programming. “Computational thinking is using abstraction and decomposition when attacking a large complex task,” she wrote. “It is choosing an appropriate representation for a problem.” Mostly, though, students get what economist Bryan Caplan called narrow vocational training for jobs few of them will ever have. Three-quarters of American college graduates go on to a career unrelated to their major—a trend that includes math and science majors—after having become competent only with the tools of a single discipline. One good tool is rarely enough in a complex, interconnected, rapidly changing world. As the historian and philosopher Arnold Toynbee said when he described analyzing the world in an age of technological and social change, “No tool is omnicompetent.” • • • Flynn’s passion resonated deeply with me. Before turning to journalism, I was in grad school, living in a tent in the Arctic, studying how changes in plant life might impact the subterranean permafrost. Classes consisted of stuffing my brain with the details of Arctic plant physiology. 
Only years later—as an investigative journalist writing about poor scientific research—did I realize that I had committed statistical malpractice in one section of the thesis that earned me a master’s degree from Columbia University. Like many a grad student, I had a big database and hit a computer button to run a common statistical analysis, never having been taught to think deeply (or at all) about how that statistical analysis even worked. The stat program spit out a number summarily deemed “statistically significant.” Unfortunately, it was almost certainly a false positive, because I did not understand the limitations of the statistical test in the context in which I applied it. Nor did the scientists who reviewed the work. As statistician Doug Altman put it, “Everyone is so busy doing research they don’t have time to stop and think about the way they’re doing it.” I rushed into extremely specialized scientific research without having learned scientific reasoning. (And then I was rewarded for it, with a master’s degree, which made for a very wicked learning environment.) As backward as it sounds, I only began to think broadly about how science should work years after I left it. Fortunately, as an undergrad, I did have a chemistry professor who embodied Flynn’s ideal. On every exam, amid typical chemistry questions, was something like this: “How many piano tuners are there in New York City?” Students had to estimate, just by reasoning, and try to get the right order of magnitude. The professor later explained that these were “Fermi problems,” because Enrico Fermi—who created the first nuclear reactor beneath the University of Chicago football field—constantly made back-of-the-envelope estimates to help him approach problems.* The ultimate lesson of the question was that detailed prior knowledge was less important than a way of thinking. On the first exam, I went with gut instinct (“I have no clue, maybe ten thousand?”)—way too high. 
By the end of the class, I had a new tool in my conceptual Swiss Army knife, a way of using what little I did know to make a guess at what I didn’t. I knew the population of New York City; most single people in studio apartments probably don’t have pianos that get tuned, and most of my friends’ parents had one to three children, so how many households are in New York? What portion might have pianos? How often are pianos tuned? How long might it take to tune a piano? How many homes can one tuner reach in a day? How many days a year does a tuner work? None of the individual estimates has to be particularly accurate in order to get a reasonable overall answer. Remote Uzbek villagers would not perform well on Fermi problems, but neither did I before taking that class. It was easy to learn, though. Having grown up in the twentieth century, I was already wearing the spectacles, I just needed help capitalizing on them. I remember nothing about stoichiometry, but I use Fermi thinking regularly, breaking down a problem so I can leverage what little I know to start investigating what I don’t, a “similarities” problem of sorts. Fortunately, several studies have found that a little training in broad thinking strategies, like Fermi-izing, can go a long way, and can be applied across domains. Unsurprisingly, Fermi problems were a topic in the “Calling Bullshit” course. It used a deceptive cable news report as a case study to demonstrate “how Fermi estimation can cut through bullshit like a hot knife through butter.” It gives anyone consuming numbers, from news articles to advertisements, the ability quickly to sniff out deceptive stats. That’s a pretty handy hot butter knife. I would have been a much better researcher in any domain, including Arctic plant physiology, had I learned broadly applicable reasoning tools rather than the finer details of Arctic plant physiology. 
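The piano-tuner estimate sketched above can be written out as a chain of rough guesses. Every number below is an illustrative assumption in the Fermi spirit, not data; the point is that the individual guesses need only be order-of-magnitude right for the product to land in a reasonable range:

```python
# Fermi estimate: how many piano tuners are there in New York City?
# All inputs are rough, hypothetical guesses.
population = 8_000_000          # people in New York City
people_per_household = 2        # rough average household size
households = population / people_per_household

piano_share = 1 / 20            # guess: 1 in 20 households keeps a tuned piano
pianos = households * piano_share
tunings_per_year = 1            # a maintained piano is tuned about once a year
tunings_needed = pianos * tunings_per_year

tunings_per_day = 4             # ~2 hours per tuning, plus travel
work_days_per_year = 250
tunings_per_tuner = tunings_per_day * work_days_per_year

tuners = tunings_needed / tunings_per_tuner
print(round(tuners))  # → 200: a few hundred, not ten thousand
```

Swapping any guess for one twice as big or small shifts the answer by only a factor of two, which is why a gut-instinct "ten thousand" is recognizably way too high while the decomposed estimate is not.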
• • • Like chess masters and firefighters, premodern villagers relied on things being the same tomorrow as they were yesterday. They were extremely well prepared for what they had experienced before, and extremely poorly equipped for everything else. Their very thinking was highly specialized in a manner that the modern world has been telling us is increasingly obsolete. They were perfectly capable of learning from experience, but failed at learning without experience. And that is what a rapidly changing, wicked world demands—conceptual reasoning skills that can connect new ideas and work across contexts. Faced with any problem they had not directly experienced before, the remote villagers were completely lost. That is not an option for us. The more constrained and repetitive a challenge, the more likely it will be automated, while great rewards will accrue to those who can take conceptual knowledge from one problem or domain and apply it in an entirely new one. The ability to apply knowledge broadly comes from broad training. A particularly skilled group of performers in another place and time turned broad training into an art form. Their story is older, and yet a much better parable than chess prodigies for the modern age.

CHAPTER 3

When Less of the Same Is More

ANYWHERE A TRAVELER to seventeenth-century Venice turned an ear, they could hear music exploding from its traditional bounds. Even the name of the musical era, “Baroque,” is taken from a jewelers’ term to describe a pearl that was extravagantly large and unusually shaped. Instrumental music—music that did not depend on words—underwent a complete revolution. Some of the instruments were brand-new, like the piano; others were enhanced—violins made by Antonio Stradivari would sell centuries later for millions of dollars. The modern system of major and minor keys was created. Virtuosos, the original musical celebrities, were anointed.
Composers seized on their skill and wrote elaborate solos to push the boundaries of the best players’ abilities. The concerto was born—in which a virtuoso soloist plays back and forth against an orchestra—and Venetian composer Antonio Vivaldi (known as il Prete Rosso, the Red Priest, for his flame-red hair) became the form’s undisputed champion. The Four Seasons is as close to a pop hit as three-hundred-year-old music gets. (A mashup with a song from Disney’s Frozen has ninety million YouTube plays.) Vivaldi’s creativity was facilitated by a particular group of musicians who could learn new music quickly on a staggering array of instruments. They drew emperors, kings, princes, cardinals, and countesses from across Europe to be regaled by the most innovative music of the time. They were the all-female cast known as the figlie del coro, literally, “daughters of the choir.” Leisure activities like horseback riding and field sports were scarce in the floating city, so music bore the full weight of entertainment for its citizens. The sounds of violins, flutes, horns, and voices spilled into the night from every bobbing barge and gondola. And in a time and place seething with music, the figlie dominated for a century. “Only in Venice,” a prominent visitor wrote, “can one see these musical prodigies.” They were both ground zero of a musical revolution and an oddity. Elsewhere, their instruments were reserved for men. “They sing like angels, play the violin, the flute, the organ, the oboe, the cello, and the bassoon,” an astonished French politician remarked. “In short, no instrument is large enough to frighten them.” Others were less diplomatic. Aristocratic British writer Hester Thrale complained, “The sight of girls handling the double bass, and blowing into the bassoon did not much please me.” After all, “suitable feminine instruments” were more along the lines of the harpsichord or musical glasses. The figlie left the king of Sweden in awe. 
Literary rogue Casanova marveled at the standing-room-only crowds. A dour French concert reviewer singled out a particular violinist: “She is the first of her sex to challenge the success of our great artists.” Even listeners not obviously disposed to support the arts were moved. Francesco Coli described “angelic Sirens,” who exceeded “even the most ethereal of birds” and “threw open for listeners the doors of Paradise.” Especially surprising praise, perhaps, considering that Coli was the official book censor for the Venetian Inquisition. The best figlie became Europe-wide celebrities, like Anna Maria della Pietà. A German baron flatly declared her “the premier violinist in Europe.” The president of the parliament of Burgundy said she was “unsurpassed” even in Paris. An expense report that Vivaldi recorded in 1712 shows that he spent twenty ducats on a violin for sixteen-year-old Anna Maria, an engagement-ring-like sum for Vivaldi, who made that much in four months. Among the hundreds of concertos Vivaldi wrote for the figlie del coro are twenty-eight that survived in the “Anna Maria notebook.” Bound in leather and dyed Venetian scarlet, it bears Anna Maria’s name in gold leaf calligraphy. The concertos, written specifically to showcase her prowess, are filled with high-speed passages that require different notes to be played on multiple strings at the same time. In 1716, Anna Maria and the figlie were ordered by the Senate to intensify their musical work in an effort to bring God’s favor to the Venetian armies as they battled the Ottoman Empire on the island of Corfu. (In that siege, the Venetian violin, and a well-timed storm, proved mightier than the Turkish cannon.) Anna Maria was middle-aged in the 1740s, when Jean-Jacques Rousseau came to visit. The rebel philosopher who would fuel the French Revolution was also a composer. “I had brought with me from Paris the national prejudice against Italian music,” Rousseau wrote. 
And yet he declared that the music played by the figlie del coro “has not its like, either in Italy, or the rest of the world.” Rousseau had one problem, though, that “drove me to despair.” He could not see the women. They performed behind a thin crepe hung in front of wrought-iron latticework grilles in elevated church balconies. They could be heard, but only their silhouettes seen, tilting and swaying with the tides of the music, like shadow pictures in a vaudeville stage set. The grilles “concealed from me the angels of beauty,” Rousseau wrote. “I could talk of nothing else.” He talked about it so much that he happened to talk about it with one of the figlie’s important patrons. “If you are so desirous to see those little girls,” the man told Rousseau, “it will be an easy matter to satisfy your wishes.” Rousseau was so desirous. He pestered the man incessantly until he took him to meet the musicians. And there, Rousseau, whose fearless writing would be banned and burned before it fertilized the soil of democracy, grew anxious. “When we entered the salon which confined these longed-for beauties,” he wrote, “I felt an amorous trembling, which I had never before experienced.” The patron introduced the women, the siren prodigies whose fame had spread like a grassfire through Europe—and Rousseau was stunned. • • • There was Sophia—“horrid,” Rousseau wrote. Cattina—“she had but one eye.” Bettina—“the smallpox had entirely disfigured her.” “Scarcely one of them,” according to Rousseau, “was without some striking defect.” A poem had recently been written about one of the best singers: “Missing are the fingers of her left hand / Also absent is her left foot.” An accomplished instrumentalist was the “poor limping lady.” Other guests left even less considerate records. Like Rousseau, English visitor Lady Anna Miller was entranced by the music and pleaded to see the women perform with no barrier hiding them. 
“My request was granted,” Miller wrote, “but when I entered I was seized with so violent a fit of laughter, that I am surprised they had not driven me out again. . . . My eyes were struck with the sight of a dozen or fourteen beldams ugly and old . . . these with several young girls.” Miller changed her mind about watching them play, “so much had the sight of the performers disgusted me.” The girls and women who delighted delicate ears had not lived delicate lives. Many of their mothers worked in Venice’s vibrant sex industry and contracted syphilis before they had babies and dropped them off at the Ospedale della Pietà. The name literally means “Hospital of Pity,” but figuratively it was the House of Mercy, where the girls grew up and learned music. It was the largest of four ospedali, charitable institutions in Venice founded to ameliorate particular social ills. In the Pietà’s case, the ill was that fatherless babies (mostly girls) frequently ended up in the canals. Most of them would never know their mothers. They were dropped off in the scaffetta, a drawer built into the outer wall of the Pietà. Like the size tester for carry-on luggage at the airport, if a baby was small enough to fit, the Pietà would raise her. The great Anna Maria was a representative example. Someone, probably her mother, who was probably a prostitute, took baby Anna Maria to the doorstep of the Pietà on the waterfront of Venice’s St. Mark’s Basin, along a bustling promenade. A bell attached to the scaffetta alerted staff of each new arrival. Babies were frequently delivered with a piece of fabric, a coin, ring, or some trinket left in the scaffetta as a form of identification should anyone ever return to claim them. One mother left half of a brilliantly illustrated weather chart, hoping one day to return with the other half. As with many of the objects, and many of the girls, it remained forever in the Pietà. 
Like Anna Maria, most of the foundlings would never know a blood relative, and so they were named for their home: Anna Maria della Pietà—Anna Maria of the Pietà. An eighteenth-century roster lists Anna Maria’s de facto sisters: Adelaide della Pietà, Agata della Pietà, Ambrosina della Pietà, and on and on, all the way through Violeta, Virginia, and Vittoria della Pietà. The ospedali were public-private partnerships, each overseen by a volunteer board of upper-class Venetians. The institutions were officially secular, but they were adjoined to churches, and life inside ran according to quasi-monastic rules. Residents were separated according to age and gender. Daily Mass was required before breakfast, and regular confession was expected. Everyone, even children, worked constantly to keep the institution running. One day a year, girls were allowed a trip to the countryside, chaperoned, of course. It was a rigid existence, but there were benefits. The children were taught to read, write, and do arithmetic, as well as vocational skills. Some became pharmacists for the residents, others laundered silk or sewed ship sails that could be sold. The ospedali were fully functioning, self-contained communities. Everyone was compensated for their work, and the Pietà had its own interest-paying bank meant to help wards learn to manage their own money. Boys learned a trade or joined the navy and left as teenagers. For girls, marriage was the primary route to emancipation. Dowries were kept ready, but many wards stayed forever. As the ospedali accrued instruments, music was added to the education of dozens of girls so that they could play during religious ceremonies in the adjacent churches. After a plague in 1630 wiped out one-third of the population, Venetians found themselves in an especially “penitential mood,” as one historian put it. The musicians suddenly became more important. 
The ospedali governors noticed that a lot more people were attending church, and that the institutional endowments swelled with donations proportional to the quality of the girls’ music. By the eighteenth century, the governors were openly promoting the musicians for fund-raising. Each Saturday and Sunday, concerts began before sunset. The church was so packed that the Eucharist had to be moved. Visitors were still welcome for free, of course, but if a guest wanted to sit, ospedali staff were happy to rent out chairs. Once the indoor space was full, listeners crowded outside windows, or paused their gondolas in the basin outside. Foundlings became an economic engine not just sustaining the social welfare system in Venice, but drawing tourists from abroad. Entertainment and penitence mixed in amusing ways. Audience members were not allowed to applaud in church, so after the final note they coughed and hemmed and scraped their feet and blew their noses in admiration. The ospedali commissioned composers for original works. Over one six-year period, Vivaldi wrote 140 concertos exclusively for the Pietà musicians. A teaching system evolved, where the older figlie taught the younger, and the younger the beginners. They held multiple jobs—Anna Maria was a teacher and copyist—and yet they produced star after virtuoso star. After Anna Maria, her soloist successor, Chiara della Pietà, was hailed as the greatest violinist in all of Europe. It all raises the question: Just what magical training mechanism was deployed to transform the orphan foundlings of the Venetian sex industry, who but for the grace of charity would have died in the city’s canals, into the world’s original international rock stars? • • • The Pietà’s music program was not unique for its rigor. According to a list of Pietà directives, formal lessons were Tuesdays, Thursdays, and Saturdays, and figlie were free to practice on their own. 
Early in the rise of the figlie del coro, work and chores took most of their time, so they were only allowed an hour a day of music study. The most surprising feature was how many instruments they learned. Shortly after he received his music doctorate from Oxford, eighteenth-century English composer and historian Charles Burney set out to write a definitive history of modern music, which involved several ospedali visits. Burney, who became famous as both a travel writer and the foremost music scholar of the day, was astounded by what he saw in Venice. On one ospedali trip, he was given a two-hour private performance, with no curtain between him and the performers. “It was really curious to see, as well as to hear, every part of this excellent concert, performed by female violins, hautbois [oboes], tenors, bases, harpsichords, french-horns, and even double bases,” Burney wrote. More curious still, “these young persons frequently change instruments.” Figlie took singing lessons, and learned to play every instrument their institution owned. It helped that they were paid for learning new skills. A musician named Maddalena married and left institutional life, and toured from London to St. Petersburg, performing as a violinist, harpsichordist, cellist, and soprano. She wrote of “acquiring skills not expected of my sex,” and became so famous that her personal life was covered by one of the day’s gossip writers. For those who stayed a lifetime in the institution, their multi-instrument background had practical importance. Pelegrina della Pietà, who arrived at the scaffetta swaddled in rags, started on the bass, moved to violin, and then to oboe, all while working as a nurse. Vivaldi wrote oboe parts specifically for Pelegrina, but in her sixties her teeth fell out, abruptly ending her oboe career. So she switched back to violin, and continued performing into her seventies. The Pietà’s musicians loved to show off their versatility. 
According to a French writer, they were trained “in all styles of music, sacred or profane,” and gave concerts that “lent themselves to the most varied vocal and instrumental combinations.” Audience members commonly remarked on the wide range of instruments the figlie could play, or on their surprise at seeing a virtuosa singer come out during intermission to improvise an instrumental solo. Beyond instruments the figlie played in concert, they learned instruments that were probably used primarily for teaching or experimentation: a harpsichord-like spinet; a chamber organ; a giant string instrument known as a tromba marina; a wooden, flutelike instrument covered with leather called a zink; and a viola da gamba, a string instrument played upright and with a bow like a cello, but with more strings, a subtly different shape, and frets befitting a guitar. The figlie weren’t merely playing well, they were participants in an extraordinary period for instrument invention and reinvention. According to musicologist Marc Pincherle, in the multiskilled figlie and their menagerie of instruments, “Vivaldi had at his disposal a musical laboratory of unlimited resources.” Some of the instruments the figlie learned are so obscure that nobody knows what exactly they were. A young Pietà musician named Prudenza apparently sang beautifully, and performed fluently with the violin and the “violoncello all’inglese.” Music scholars have argued about what that even is, but, as with anything else the Pietà could get its musical mitts on—like the chalumeau (wind) and the psaltery (string)—the figlie learned to play it. They lifted composers to unexplored heights. 
They were part of the bridge that carried music from Baroque composers to the classical masters: Bach (who transcribed Vivaldi’s concertos); Haydn (who composed specifically for one of the figlie, Bianchetta, a singer, harpist, and organist); and perhaps Mozart, who visited an ospedale with his father as a boy, and returned as a teen. The figlie’s skills on a vast array of instruments enabled musical experimentation so profound that it laid a foundation for the modern orchestra. According to musicologist Denis Arnold, the modernization of church music that occurred through the figlie was so influential that one of Mozart’s iconic sacred pieces, without the girls of the Venetian orphanages, “might never have been composed at all.” But their stories were largely forgotten, or thrown away, literally. When Napoleon’s troops arrived in 1797, they tossed manuscripts and records out the ospedali windows. When, two hundred years later, a famous eighteenth-century painting of women giving a concert was displayed at the National Gallery of Art in Washington, D.C., the mysterious figures dressed in black, in an upper balcony above the audience, went entirely unidentified. Maybe the memories of the figlie faded because they were women—playing music in public religious ceremonies defied papal authority. Or because so many of them neither came with families nor left any behind. They lacked family names, but the abandoned girls were so synonymous with their instruments that those became their names. The baby who came through a notch in the wall and began her way in the world as Anna Maria della Piet? left the world having been, by various stages, Anna Maria del violino, Anna Maria del theorbo, Anna Maria del cembalo, Anna Maria del violoncello, Anna Maria del luta, Anna Maria della viola d’amore, and Anna Maria del mandolin. 
• • • Imagine it today: click a tourism site and the entertainment recommendation is the world-famous orchestra comprised of orphans left at the doorstep of the music venue. You will be treated to virtuoso solos on instruments you know and love, as well as those you’ve never heard of. Occasionally the musicians will switch instruments during the show. And please follow us on Twitter, @FamousFoundlings. Never mind 200-ducat dowries, the figlie would have speaking agents and feature film deals. Just like Tiger Woods’s television appearance when he was two, it would foment a frenzy of parents and media seeking to excavate the mysterious secret to success. Parents actually did flock in the eighteenth century. Noblemen vied (and paid) to get their daughters a chance to play with those “able indigents,” as one historian put it. But the strategies of their musical development would be a hard sell. Today, the massively multi-instrument approach seems to go against everything we know about how to get good at a skill like playing music. It certainly goes against the deliberate practice framework, which only counts highly focused attempts at exactly the skill to be performed. Multiple instruments, in that view, should be a waste of time. In the genre of modern self-help narratives, music training has stood beside golf atop the podium, exemplars of the power of a narrowly focused head start in highly technical training. Whether it is the story of Tiger Woods or the Yale law professor known as the Tiger Mother, the message is the same: choose early, focus narrowly, never waver. The Tiger Mother’s real name is Amy Chua, and she coined the term in her 2011 book Battle Hymn of the Tiger Mother. Like Tiger, the Tiger Mother permeated popular culture. 
Chua advertised the secrets to “how Chinese parents raise such stereotypically successful kids.” On the very first page of the very first chapter is the litany of things Sophia and Lulu must never do, including: “play any instrument other than the piano or the violin.” (Sophia gets piano, Lulu is assigned violin.) Chua supervised three, four, and sometimes five hours of music practice a day. Parents in online forums agonize over what instrument to pick for their child, because the child is too young to pick for herself and will fall irredeemably behind if she waits. “I am slowly trying to convince him how nice playing music is,” a parent of a two-and-a-half-year-old posted. “I am just not too sure which instrument would be best.” Another post advised nixing violin if a child has not started by seven, as she will be too far behind. In response to such concerns, the director of a private music school wrote a “how to choose” advice column with tips for picking an instrument for a child who can’t yet stick with the same favorite color from one week to the next. There are, of course, many routes to expertise. Some outstanding musicians have focused very young. The supreme cellist Yo-Yo Ma is a well-known example. Less well known, though, is that Ma started on violin, moved to piano, and then to the cello because he didn’t really like the first two instruments. He just went through the sampling period a lot faster than the typical student. Tiger parents are trying to skip that phase entirely. It reminds me of a conversation I had with Ian Yates, a British sports scientist and coach who helped develop future professional athletes in a range of sports. 
Parents, Yates told me, increasingly come to him and “want their kids doing what the Olympians are doing right now, not what the Olympians were doing when they were twelve or thirteen,” which included a wider variety of activities that developed their general athleticism and allowed them to probe their talents and interests before they focused narrowly on technical skills. The sampling period is not incidental to the development of great performers—something to be excised in the interest of a head start—it is integral. • • • John Sloboda is undoubtedly one of the most influential researchers in the psychology of music. His 1985 book The Musical Mind ranged from the origins of music to the acquisition of playing skill, and set a research agenda that the field is still carrying out today. Through the 1990s, Sloboda and his colleagues studied strategies for musical growth. Practice, unsurprisingly, was crucial in the development of musicians. But the details were less intuitive. A study of music students aged eight to eighteen and ranging in skill from rank novices to students in a highly selective music school found that when they began training there was no difference in the amount of practice undertaken between any of the groups of players, from the least to the most accomplished. The students who would go on to be most successful only started practicing much more once they identified an instrument they wanted to focus on, whether because they were better at it or just liked it more. The instrument, it appeared, was driving the practitioner, rather than the reverse. 
In a separate study of twelve hundred young musicians, those who quit reported “a mismatch between the instruments [they] wanted to learn to play and the instruments they actually played.” Amy Chua described her daughter Lulu as a “natural musician.” Chua’s singer friend called Lulu “extraordinary,” with a gift “no one can teach.” Lulu made rapid progress on the violin, but pretty soon told her mother ominously, “You picked it, not me.” At thirteen, she quit most of her violin activities. Chua, candid and introspective, wondered in the coda of her book if Lulu would still be playing if she had been allowed to choose her own instrument. When Sloboda and a colleague conducted a study with students at a British boarding school that recruited from around the country—admission rested entirely on an audition—they were surprised to find that the students classified as exceptional by the school came from less musically active families compared to less accomplished students, did not start playing at a younger age, were less likely to have had an instrument in the home at a very young age, had taken fewer lessons prior to entering the school, and had simply practiced less overall before arriving—a lot less. “It seems very clear,” the psychologists wrote, “that sheer amount of lesson or practice time is not a good indicator of exceptionality.” As to structured lessons, every single one of the students who had received a large amount of structured lesson time early in development fell into the “average” skill category, and not one was in the exceptional group. “The strong implication,” the researchers wrote, is “that too many lessons at a young age may not be helpful.” “However,” they added, “the distribution of effort across different instruments seems important. 
Those children identified as exceptional by [the school] turn out to be those children who distributed their effort more evenly across three instruments.” The less skilled students tended to spend their time on the first instrument they picked up, as if they could not give up a perceived head start. The exceptional students developed more like the figlie del coro. “The modest investment in a third instrument paid off handsomely for the exceptional children,” the scientists concluded. The psychologists highlighted the variety of paths to excellence, but the most common was a sampling period, often lightly structured with some lessons and a breadth of instruments and activities, followed only later by a narrowing of focus, increased structure, and an explosion of practice volume. Sound familiar? A study that followed up on Sloboda’s work two decades later compared young musicians admitted to a competitive conservatory to similarly committed but less skilled music students. Nearly all of the more accomplished students had played at least three instruments, proportionally much more than the lower-level students, and more than half played four or five. Learning to play classical music is a narrative linchpin for the cult of the head start; as music goes, it is a relatively golflike endeavor. It comes with a blueprint; errors are immediately apparent; it requires repetitive practice of the exact same task until execution becomes automatic and deviation is minimal. How could picking an instrument as early as possible and starting in technical training not be the standard path to success? And yet even classical music defies a simple Tiger story. The Cambridge Handbook of Expertise and Expert Performance, published in 2006, is a sort of bible for popular writers, speakers, and researchers in the ten-thousand-hours school. It is a compilation of essayistic chapters, each written by different researchers who delve into dance, math, sports, surgery, writing, and chess. 
The music section focuses very conspicuously on classical playing. At nine hundred oversized pages, it is a handbook for large hands. In the chapter on developing music expertise, there is just one single substantive mention of the beginnings of expert players in all the genres of music in the world that are not classical. The Handbook simply notes that, in contrast to classical players, jazz and folk and modern popular musicians and singers do not follow a simple, narrow trajectory of technical training, and they “start much later.” • • • Jack Cecchini can thank two stumbles, one metaphorical and one literal, for making him one of the rare musicians who is world class in both jazz and classical. The first was in 1950 in Chicago, when he was thirteen and stumbled across a guitar resting on his landlord’s couch. He ran his fingers over the strings as he walked by. The landlord picked it up, demonstrated two chords, and immediately asked Cecchini to play accompaniment with them. Of course, he couldn’t. “He’d shake his head when it was time for me to change the chord, and if I didn’t he’d start swearing,” Cecchini recalled with a chuckle. Cecchini’s interest was ignited, and he started trying to imitate songs he heard on the radio. By sixteen, he was playing jazz in the background of Chicago clubs he was too young to patronize. “It was like a factory,” he told me. “If you had to go to the bathroom, you had to get one of the other guys to pick it up. But you’re experimenting every night.” He took the only free music lessons he could find, in clarinet, and tried to transfer what he learned to the guitar. “There are eight million places on the guitar to play the same notes,” he said. “I was just trying to find solutions to problems, and you start to learn the fingerboard.” Pretty soon he was performing with Frank Sinatra at the Villa Venice, Miriam Makeba at the Apollo, and touring with Harry Belafonte from Carnegie Hall to packed baseball stadiums. 
That’s where the second stumble came in. During a show when Cecchini was twenty-three, one of Belafonte’s stage dancers stepped on the cable that connected his guitar to an amplifier. His instrument was reduced to a whisper. “Harry freaked out,” Cecchini recalled. “He said, ‘Get rid of that thing and get yourself a classical guitar!’” Getting one was easy, but he had been using a pick, and for acoustic he had to learn fingering, so the trouble was learning to play it on tour. He fell in love with the instrument, and by thirty-one was so adept that he was chosen as the soloist to play a concerto by none other than Vivaldi accompanied by an orchestra for a crowd in Chicago’s Grant Park. The next day, the Chicago Tribune’s music critic began his review: “Despite the ever-increasing number of enthusiasts who untiringly promote the resurrection of the guitar as a classical instrument, there are but few men who possess the talent and patience to master what remains one of the most beautiful but obstinately difficult of all instruments.” Cecchini, he continued, “proved to be one of those few.” Despite his late and haphazard start, Cecchini also became a renowned teacher of both jazz and classical guitar. Students traveled from out of state to pick his brain, and by the early 1980s lines formed down the stairs of his Chicago school in the evenings. His own formal training, of course, had been those free clarinet lessons. “I’d say I’m 98 percent self-taught,” he told me. He switched between instruments and found his way through trial and error. It might sound unusual, but when Cecchini reeled off legends he played with or admired, there was not a Tiger among them. Duke Ellington was one of the few who ever actually took formal lessons, when he was seven, from the exuberantly named teacher Marietta Clinkscales. He lost interest immediately, before he learned to read notes, and quit music entirely to focus on baseball. In school, his interests were drawing and painting. 
(He later turned down a college art scholarship.) When he was fourteen, Ellington heard ragtime, and for the first time in seven years sat down at a piano and tried to copy what he had heard. “There was no connection between me and music, until I started fiddling with it myself,” he remembered. “As far as anyone teaching me, there was too many rules and regulations. . . . As long as I could sit down and figure it out for myself, then that was all right.” Even once he became arguably America’s preeminent composer, he relied on copyists to decode his personal musical shorthand into traditional musical notation. Johnny Smith was Cecchini’s absolute favorite. Smith grew up in a shotgun house in Alabama. Neighbors gathered to play music, and young Johnny goofed around with whatever they left in a corner overnight. “John played anything,” his brother Ben recalled. It allowed him to enter local competitions for any instrument, and the prizes were groceries. He once fiddled his way to a five-pound bag of sugar. He didn’t particularly like violin, though. Smith said he would have walked fifty miles for a guitar lesson, but there were no teachers around, so he just had to experiment. When the United States entered World War II, Smith enlisted in the Army hoping to be a pilot, but a left-eye problem disqualified him. He was sent to the marching band, which had absolutely no use for a guitar player. He could not yet read music, but was assigned to teach himself a variety of instruments so he could play at recruiting events. Wide-ranging experience set him up for his postwar work as NBC’s musical arranger. He had learned to learn, and his multi-instrument and poly-genre skill became so renowned that it got him into a tricky spot. He was leaving NBC one Friday evening when he was stopped at the elevator and asked to learn a new guitar part. The classical player hired for the job couldn’t hack it. 
It was for a live celebration of composer Arnold Schoenberg’s seventy-fifth birthday, and would feature one of Schoenberg’s atonal compositions, which had not been performed in twenty-five years. Smith had four days. He continued with his Friday night, got home at 5 a.m., and then joined an emergency rehearsal at 7 a.m. On Wednesday, he performed so beautifully that the audience demanded an encore of all seven movements. In 1998, alongside Sir Edmund Hillary, who with Tenzing Norgay was the first to summit Mount Everest, Smith was awarded Smithsonian’s Bicentennial Medal for outstanding cultural contributions. Pianist Dave Brubeck earned the medal as well. His song “Take Five” was chosen by NPR listeners as the quintessential jazz tune of all time. Brubeck’s mother tried to teach him piano, but he refused to follow instructions. He was born cross-eyed, and his childhood reluctance was related to his inability to see the musical notation. His mother gave up, but he listened when she taught others and tried to imitate. Brubeck still could not read music when he dropped out of veterinary premed at the College of the Pacific and walked across the lawn to the music department, but he was a masterful faker. He put off studying piano for instruments that would more easily allow him to improvise his way through exercises. Senior year, he could hide no longer. “I got a wonderful piano teacher,” he recalled, “who figured out I couldn’t read in about five minutes.” The dean informed Brubeck that he could not graduate and furthermore was a disgrace to the conservatory. Another teacher who had noticed his creativity stuck up for him, and the dean cut a deal. Brubeck was allowed to graduate on the condition that he promise never to embarrass the institution by teaching. Twenty years later, the college apparently felt it had sufficiently escaped embarrassment, and awarded him an honorary doctorate. Perhaps the greatest improv master of all could not read, period—words or music. 
Django Reinhardt was born in Belgium in 1910, in a Romani caravan. His early childhood talents were chicken stealing and trout tickling—feeling along a riverbank for fish and rubbing their bellies until they relaxed and could be tossed ashore. Django grew up outside Paris in an area called la Zone, where the city’s cesspool cleaners unloaded waste each night. His mother, Négros, was too busy supporting the family making bracelets out of spent artillery shell casings she gathered from a World War I battlefield to lord over anyone’s music practice. Django went to school if he felt like it, but he mostly didn’t. He crashed movie theaters and shot billiards, and was surrounded by music. Wherever Romani gathered, there were banjos, harps, pianos, and especially violins. The violin’s portability made it the classic Romani instrument, and Django started there, but he didn’t love it. He learned in the call-and-response style. An adult would play a section of music and he would try to copy it. When he was twelve, an acquaintance gave him a hybrid banjo-guitar. He had found his thing, and became obsessed. He experimented with different objects as picks when his fingers needed a break: spoons, sewing thimbles, coins, a piece of whalebone. He teamed up with a banjo-playing hunchback named Lagardère, and they wandered the Paris streets, busking and improvising duets. In his mid-teens, Django was at a restaurant in Paris where the city’s accordionists had gathered. He and his banjo-guitar were asked to the stage to play for the other musicians. Django launched into a polka that was known as a skill-proving piece for accordionists because it was so hard to play. When he finished the traditional form, rather than stopping he careened into a series of lightning improvisations, bending and twisting the song into creations none of the veteran musicians had ever heard. Django was playing “with a drawn knife,” as the lingo went. 
He was looking for a fight by warping a sacred dancehall tune, but he was so original that he got away with it. His creativity was unbound. “I wonder if, in his younger days,” one of his music partners said, “he even knew that printed music existed.” Django would soon need all the versatility he had learned. He was eighteen when a candle in his wagon ignited a batch of celluloid flowers that his wife, Bella, had fashioned for a funeral. The wagon exploded into an inferno. Django was burned over half his body and ended up bedridden for a year and a half. For the rest of his life the pinkie and ring finger of his left hand, his fret hand, were dangling flesh, useless on the strings. Django was used to improvising. Like Pelegrina of the figlie del coro when she lost her teeth, he pivoted. He taught himself how to play chords with a thumb and two fingers. His left hand had to sprint up and down the neck of his guitar, the index and middle finger flitting waterbug-like over the strings. He reemerged with a new way of handling the instrument, and his creativity erupted. With a French violinist, Django fused dancehall musette with jazz and invented a new form of improvisational music that defied easy characterization, so it was just called “Gypsy jazz.” Some of his spontaneous compositions became “standards,” pieces that enter the canon from which other musicians improvise. He revolutionized the now-familiar virtuosic guitar solo that pervaded the next generation’s music, from Jimi Hendrix, who kept an album of Django’s recordings and named one of his groups Band of Gypsys, to Prince (self-taught, played more than a half-dozen different genres of instruments on his debut album). 
Long before Hendrix melted “The Star-Spangled Banner” into his own wondrous creation, Django did it with the French national anthem, “La Marseillaise.” Even though he never learned to read music (or words—a fellow musician had to teach him to sign his autograph for fans), Django composed a symphony, playing on his guitar what he wanted each instrument in the ensemble to do while another musician struggled to transcribe it. He died of a brain hemorrhage at forty-three, but music he made nearly a century ago continues to show up in pop culture, including Hollywood blockbusters like The Matrix and The Aviator, and in the hit BioShock video games. The author of The Making of Jazz anointed the man who could neither read music nor study it with the traditional fingerings “without question, the single most important guitarist in the history of jazz.” • • • Cecchini has bushy eyebrows and a beard that parts and closes quickly like ruffled shrubbery when he talks excitedly. Like now: he’s talking Django, and he’s a huge fan. He used to have a black poodle named Django. He opens a sepia-toned YouTube clip and whispers conspiratorially, “Watch this.” There is Django, bow tie, pencil mustache, and slicked-back hair. The two useless fingers on his left hand are tucked into a claw. Suddenly, the hand shoots all the way up the guitar neck, and then all the way back down, firing a rapid succession of notes. “That’s amazing!” Cecchini says. “The synchronization between the left and right hand is phenomenal.” The strict deliberate practice school describes useful training as focused consciously on error correction. But the most comprehensive examination of development in improvisational forms, by Duke University professor Paul Berliner, described the childhoods of professionals as “one of osmosis,” not formal instruction. “Most explored the band room’s diverse options as a prelude to selecting an instrument of specialization,” he wrote. 
“It was not uncommon for youngsters to develop skills on a variety of instruments.” Berliner added that aspiring improvisational musicians “whose educational background has fostered a fundamental dependence on [formal] teachers must adopt new approaches to learning.” A number of musicians recounted Brubeck-like scenarios to Berliner, the time a teacher found out that they could not read music but had become adept enough at imitation and improvisation that “they had simply pretended to follow the notation.” Berliner relayed the advice of professional musicians to a young improvisational learner as “not to think about playing—just play.” While I was sitting with Cecchini, he reeled off an impressive improvisation. I asked him to repeat it so I could record it. “I couldn’t play that again if you put a gun to my head,” he said. Charles Limb, a musician, hearing specialist, and auditory surgeon at the University of California, San Francisco, designed an iron-free keyboard so that jazz musicians could improvise while inside an MRI scanner. Limb saw that brain areas associated with focused attention, inhibition, and self-censoring turned down when the musicians were creating. “It’s almost as if the brain turned off its own ability to criticize itself,” he told National Geographic. While improvising, musicians do pretty much the opposite of consciously identifying errors and stopping to correct them. Improv masters learn like babies: dive in and imitate and improvise first, learn the formal rules later. “At the beginning, your mom didn’t give you a book and say, ‘This is a noun, this is a pronoun, this is a dangling participle,’” Cecchini told me. “You acquired the sound first. And then you acquire the grammar later.” Django Reinhardt was once in a taxi with Les Paul, inventor of the solid-body electric guitar. Paul was a self-taught musician, and the only person in both the Rock and Roll and National Inventors halls of fame. 
Reinhardt tapped Paul on the shoulder and asked if he could read music. “I said no, I didn’t,” Paul recounted, “and he laughed till he was crying and said, ‘Well, I can’t read either. I don’t even know what a C is; I just play them.’” Cecchini told me that he was regularly stunned when he would ask an exceptional jazz performer onstage to play a certain note, and find the musician could not understand him. “It’s an old joke among jazz musicians,” Cecchini said. “You ask, ‘Can you read music?’ And the guy says, ‘Not enough to hurt my playing.’” There is truth in the joke. Cecchini has taught musicians who played professionally for the Chicago Symphony, which in 2015 was rated as the top orchestra in the country and fifth in the world by a panel of critics. “It’s easier for a jazz musician to learn to play classical literature than for a classical player to learn how to play jazz,” he said. “The jazz musician is a creative artist, the classical musician is a re-creative artist.” After Django Reinhardt lit the nightclub music scene on fire, classically trained musicians began trying to transition to jazz. According to Michael Dregni, who wrote multiple books on that period, improvisation was “a concept that went against conservatory training. . . . After years of rigorous conservatory training, it was an impossible transition for some.” Leon Fleisher, regarded as one of the great classical pianists of the twentieth century, told the coauthor of his 2010 memoir that his “greatest wish” was to be able to improvise. But despite a lifetime of masterful interpretation of notes on the page, he said, “I can’t improvise at all.” • • • Cecchini’s analogy to language learning is hardly unique. Even the Suzuki Method of music instruction, synonymous in the public consciousness with early drilling, was designed by Shinichi Suzuki to mimic natural language acquisition. Suzuki grew up around his father’s violin factory, but considered the instrument nothing more than a toy. 
When he fought with his siblings, they beat one another with violins. He did not attempt to play the instrument until he was seventeen, moved by a recording of Ave Maria. He brought a violin home from the factory and tried to imitate a classical recording by ear. “My complete self-taught technique was more a scraping than anything else,” he said of that initial foray, “but somehow I finally got so I could play the piece.” Only later did he seek out technical lessons and become a performer and then an educator. According to the Suzuki Association of the Americas, “Children do not practice exercises to learn to talk. . . . Children learn to read after their ability to talk has been well established.” In totality, the picture is in line with a classic research finding that is not specific to music: breadth of training predicts breadth of transfer. That is, the more contexts in which something is learned, the more the learner creates abstract models, and the less they rely on any particular example. Learners become better at applying their knowledge to a situation they’ve never seen before, which is the essence of creativity. Compared to the Tiger Mother’s tome, a parenting manual oriented toward creative achievement would have to open with a much shorter list of rules. In offering advice to parents, psychologist Adam Grant noted that creativity may be difficult to nurture, but it is easy to thwart. He pointed to a study that found an average of six household rules for typical children, compared to one in households with extremely creative children. The parents with creative children made their opinions known after their kids did something they didn’t like, they just did not proscribe it beforehand. Their households were low on prior restraint. “It’s strange,” Cecchini told me at the end of one of our hours-long discussions, “that some of the greatest musicians were self-taught or never learned to read music. 
I’m not saying one way is the best, but now I get a lot of students from schools that are teaching jazz, and they all sound the same. They don’t seem to find their own voice. I think when you’re self-taught you experiment more, trying to find the same sound in different places, you learn how to solve problems.” Cecchini stopped speaking for a moment, reclined in his chair, and stared at the ceiling. A few moments passed. “I could show somebody in two minutes what would take them years of screwing around on the fingerboard like I did to find it. You don’t know what’s right or what’s wrong. You don’t have that in your head. You’re just trying to find a solution to problems, and after fifty lifetimes, it starts to come together for you. It’s slow,” he told me, “but at the same time, there’s something to learning that way.”

CHAPTER 4

Learning, Fast and Slow

“OKAY? YOU’RE GOING to an Eagles game,” the charismatic math teacher tells her eighth-grade class. She takes care to frame problems using situations that motivate students. “They’re selling hot dogs,” she continues. “They’re very good, by the way, in Philadelphia.” Students giggle. One interjects, “So are the cheesesteaks.” The teacher brings them back to today’s lesson, simple algebraic expressions: “The hot dogs at [the] stadium where the Eagles play sell for three dollars. I want you to give me a variable expression for [the cost of] N hot dogs.” The students need to learn what it means for a letter to represent an undetermined number. It is an abstraction they must grasp in order to progress in math, but not a particularly easy one to explain. Marcus volunteers: “N over three dollars.” “Not over,” the teacher responds, “because that means divided.” She gives the correct expression: “Three N. Three N means however many I buy I have to pay three dollars for [each], right?” Another student is confused. “Where do you get the N from?” he asks. “That’s the N number of hot dogs,” the teacher explains. 
“That’s what I’m using as my variable.” A student named Jen asks if that means you should multiply. “That’s right. So if I got two hot dogs, how much money am I spending?” Six dollars, Jen answers correctly. “Three times two. Good, Jen.” Another hand shoots up. “Yes?” “Can it be any letter?” Michelle wants to know. Yes, it can. “But isn’t it confusing?” Brandon asks. It can be any letter at all, the teacher explains. On to part two of today’s lesson: evaluating expressions. “What I just did with the three dollars for a hot dog was ‘evaluating an expression,’” the teacher explains. She points to “7H” on the board and asks, if you make seven dollars an hour and work two hours this week, how much would you earn? Fourteen, Ryan answers correctly. What about if you worked ten hours? Seventy, Josh says. The teacher can see they’re getting it. Soon, though, it will become clear that they never actually understood the expression, they just figured out to multiply whatever two numbers the teacher said aloud. “What we just did was we took the number of hours and did what? Michelle?” Multiplied it by seven, Michelle answers. Right, but really what we did, the teacher explains, was put it into the expression where H is. “That’s what evaluating means,” she adds, “substituting a number for a variable.” But now another girl is confused. “So for the hot-dog thing, would the N be two?” she asks. “Yes. We substituted two for the N,” the teacher replies. “We evaluated that example.” Why, then, the girl wants to know, can’t you just write however many dollars a hot dog costs times two? If N is just two, what sense does it make to write “N” instead of “2”? The students ask more questions that slowly make clear they have failed to connect the abstraction of a variable to more than a single particular number for any given example. When she tries to move back to a realistic context—“social studies class is three times as long as math”—they are totally lost. 
“I thought fifth period was the longest?” one chimes in. When the students are asked to turn phrases into variable expressions, they have to start guessing. “What if I say ‘six less than a number’? Michelle?” the teacher asks. “Six minus N,” Michelle answers. Incorrect. Aubrey guesses the only other possibility: “N minus six.” Great. The kids repeat this form of platoon multiple choice. Watched in real time, it can give the impression that they understand. “What if I gave you 15 minus B?” the teacher asks the class, telling them to transform that back into words. Multiple-choice time. “Fifteen less than B?” Patrick offers. The teacher does not respond immediately, so he tries something else. “B less than 15.” This time the response is immediate; he nailed it. The pattern repeats. Kim is six inches shorter than her mother. “N minus negative six,” Steve offers. No. “N minus six.” Good. Mike is three years older than Jill. Ryan? “Three X,” he says. No, that would be multiply, wouldn’t it? “Three plus X.” Great. Marcus has now figured out the surefire way to get to the right answer. His hand shoots up for the next question. Three divided by W. Marcus? “W over three, or three over W,” he answers, covering his bases. Good, three over W, got it. Despite the teacher’s clever vignettes, it is clear that students do not understand how these numbers and letters might be useful anywhere but on a school worksheet. When she asks where variable expressions might be used in the world, Patrick answers: when you’re trying to figure out math problems. Still, the students have figured out how to get the right answers on their worksheets: shrewdly interrogating their teacher. She mistakes the multiple-choice game they are mastering for productive exploration. Sometimes, the students team up. In staccato succession: “K over eight,” one offers, “K into eight,” another says, “K of eight,” a third tries. 
The teacher is kind and encouraging even if they don’t manage to toss out the right answer. “It’s okay,” she says, “you’re thinking.” The problem, though, is the way in which they are thinking. • • • That was one American class period out of hundreds in the United States, Asia, and Europe that were filmed and analyzed in an effort to understand effective math teaching. Needless to say, classrooms were very different. In the Netherlands, students regularly trickled into class late, and spent a lot of class time working on their own. In Hong Kong, class looked pretty similar to the United States: lectures rather than individual work filled most of the time. Some countries used a lot of problems in real-world contexts, others relied more on symbolic math. Some classes kept kids in their seats, others had them approach the blackboard. Some teachers were very energetic, others staid. The litany of differences was long, but not one of those features was associated with differences in student achievement across countries. There were similarities too. In every classroom in every country, teachers relied on two main types of questions. The more common were “using procedures” questions: basically, practice at something that was just learned. For instance, take the formula for the sum of the interior angles of a polygon (180 × (number of polygon sides − 2)), and apply it to polygons on a worksheet. The other common variety was “making connections” questions, which connected students to a broader concept, rather than just a procedure. That was more like when the teacher asked students why the formula works, or made them try to figure out if it works for absolutely any polygon from a triangle to an octagon. Both types of questions are useful and both were posed by teachers in every classroom in every country studied. But an important difference emerged in what teachers did after they asked a making-connections problem. 
Rather than letting students grapple with some confusion, teachers often responded to their solicitations with hint-giving that morphed a making-connections problem into a using-procedures one. That is exactly what the charismatic teacher in the American classroom was doing. Lindsey Richland, a University of Chicago professor who studies learning, watched that video with me, and told me that when the students were playing multiple choice with the teacher, “what they’re actually doing is seeking rules.” They were trying to turn a conceptual problem they didn’t understand into a procedural one they could just execute. “We’re very good, humans are, at trying to do the least amount of work that we have to in order to accomplish a task,” Richland told me. Soliciting hints toward a solution is both clever and expedient. The problem is that when it comes to learning concepts that can be broadly wielded, expedience can backfire. In the United States, about one-fifth of questions posed to students began as making-connections problems. But by the time the students were done soliciting hints from the teacher and solving the problems, a grand total of zero percent remained making-connections problems. Making-connections problems did not survive the teacher-student interactions. Teachers in every country fell into the same trap at times, but in the higher-performing countries plenty of making-connections problems remained that way as the class struggled to figure them out. In Japan, a little more than half of all problems were making-connections problems, and half of those stayed that way through the solving. An entire class period could be just one problem with many parts. When a student offered an idea for how to approach a problem, rather than engaging in multiple choice, the teacher had them come to the board and put a magnet with their name on it next to the idea. 
By the end of class, one problem on a blackboard the size of an entire wall served as a captain’s log of the class’s collective intellectual voyage, dead ends and all. Richland originally tried to label the videotaped lessons with a single topic of the day, “but we couldn’t do it with Japan,” she said, “because you could engage with these problems using so much different content.” (There is a specific Japanese word to describe chalkboard writing that tracks conceptual connections over the course of collective problem solving: bansho.) Just as it is in golf, procedure practice is important in math. But when it comprises the entire math training strategy, it’s a problem. “Students do not view mathematics as a system,” Richland and her colleagues wrote. They view it as just a set of procedures. Like when Patrick was asked how variable expressions connected to the world, and answered that they were good for answering questions in math class. In their research, Richland and her collaborators highlighted the stunning degree of reliance community college students—41 percent of all undergraduate students in the United States—have on memorized algorithms. Asked whether a/5 or a/8 is greater, 53 percent of students answered correctly, barely better than guessing. Asked to explain their answers, students frequently pointed to some algorithm. Students remembered that they should focus on the bottom number, but a lot of them recalled that a larger denominator meant a/8 was bigger than a/5. Others remembered that they should try to get a common denominator, but weren’t sure why. There were students who reflexively cross-multiplied, because they knew that’s what you do when you see fractions, even though it had no relevance to the problem at hand. Only 15 percent of the students began with broad, conceptual reasoning that if you divide something into five parts, each piece will be larger than if you divide the same thing into eight parts. 
Every single one of those students got the correct answer. Some of the college students seemed to have unlearned number sense that most children have, like that adding two numbers gives you a third comprised of the first two. A student who was asked to verify that 462 + 253 = 715, subtracted 253 from 715, and got 462. When he was asked for another strategy, he could not come up with subtracting 462 from 715 to see that it equals 253, because the rule he learned was to subtract the number to the right of the plus sign to check the answer. When younger students bring home problems that force them to make connections, Richland told me, “parents are like, ‘Lemme show you, there’s a faster, easier way.’” If the teacher didn’t already turn the work into using-procedures practice, well-meaning parents will. They aren’t comfortable with bewildered kids, and they want understanding to come quickly and easily. But for learning that is both durable (it sticks) and flexible (it can be applied broadly), fast and easy is precisely the problem. • • • “Some people argue that part of the reason U.S. students don’t do as well on international measures of high school knowledge is that they’re doing too well in class,” Nate Kornell, a cognitive psychologist at Williams College, told me. “What you want is to make it easy to make it hard.” Kornell was explaining the concept of “desirable difficulties,” obstacles that make learning more challenging, slower, and more frustrating in the short term, but better in the long term. Excessive hint-giving, like in the eighth-grade math classroom, does the opposite; it bolsters immediate performance, but undermines progress in the long run. Several desirable difficulties that can be used in the classroom are among the most rigorously supported methods of enhancing learning, and the engaging eighth-grade math teacher accidentally subverted all of them in the well-intended interest of before-your-eyes progress. 
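The community-college questions above (is a/5 or a/8 greater? does 462 + 253 = 715?) reward exactly the conceptual reasoning the students lacked, and both are easy to state mechanically. A minimal sketch, with function names of my own invention rather than anything from the study:

```python
# Illustrative sketch of the two number-sense checks discussed above.

def larger_fraction(a: float) -> str:
    """For any positive a, splitting it into 5 parts gives bigger
    pieces than splitting it into 8, since a/5 > a/8."""
    return "a/5" if a / 5 > a / 8 else "a/8"

def addition_checks(x: int, y: int, total: int) -> bool:
    """Both inverse checks are valid, not just the rote rule of
    'subtract the number to the right of the plus sign':
    total - y should recover x, and total - x should recover y."""
    return total - y == x and total - x == y

print(larger_fraction(3))              # a/5
print(addition_checks(462, 253, 715))  # True
```

The point is not the code but the invariance: the fraction answer does not depend on the particular value of a, which is precisely the abstraction the students never connected to their memorized procedures.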
One of those desirable difficulties is known as the “generation effect.” Struggling to generate an answer on your own, even a wrong one, enhances subsequent learning. Socrates was apparently on to something when he forced pupils to generate answers rather than bestowing them. It requires the learner to intentionally sacrifice current performance for future benefit. Kornell and psychologist Janet Metcalfe tested sixth graders in the South Bronx on vocabulary learning, and varied how they studied in order to explore the generation effect. Students were given some of the words and definitions together. For example, To discuss something in order to come to an agreement: Negotiate. For others, they were shown only the definition and given a little time to think of the right word, even if they had no clue, before it was revealed. When they were tested later, students did way better on the definition-first words. The experiment was repeated on students at Columbia University, with more obscure words (Characterized by haughty scorn: Supercilious). The results were the same. Being forced to generate answers improves subsequent learning even if the generated answer is wrong. It can even help to be wildly wrong. Metcalfe and colleagues have repeatedly demonstrated a “hypercorrection effect.” The more confident a learner is of their wrong answer, the better the information sticks when they subsequently learn the right answer. Tolerating big mistakes can create the best learning opportunities.* Kornell helped show that the long-run benefits of facilitated screwups extend to primates only slightly less studious than Columbia students. Specifically, to Oberon and Macduff, two rhesus macaques trained to learn lists by trial and error. In a fascinating experiment, Kornell worked with an animal cognition expert to give Oberon and Macduff lists of random pictures to memorize, in a particular order. (Example: a tulip, a school of fish, a cardinal, Halle Berry, and a raven.) 
The pictures were all displayed simultaneously on a screen. By pressing them in trial-and-error fashion, the monkeys had to learn the desired order and then practice it repeatedly. But all practice was not designed equal. In some practice sessions, Oberon (who was generally brighter) and Macduff were automatically given hints on every trial, showing them the next picture in the list. For other lists, they could voluntarily touch a hint box on the screen whenever they were stuck and wanted to be shown the next item. For still other lists, they could ask for a hint on half of their practice attempts. And for a final group of lists, no hints at all. In the practice sessions with hints upon request, the monkeys behaved a lot like humans. They almost always requested hints when they were available, and thus got a lot of the lists right. Overall, they had about 250 trials to learn each list. After three days of practice, the scientists took off the training wheels. Starting on day four, the memorizing monkeys had to repeat all the lists from every training condition without any hints whatsoever. It was a performance disaster. Oberon only got about one-third of the lists right. Macduff got less than one in five. There was, though, an exception: the lists on which they never had hints at all. For those lists, on day one of practice the duo had performed terribly. They were literally monkeys hitting buttons. But they improved steadily each training day. On test day, Oberon nailed almost three-quarters of the lists that he had learned with no hints. Macduff got about half of them. The overall experiment results went like this: the more hints that were available during training, the better the monkeys performed during early practice, and the worse they performed on test day. For the lists that Macduff spent three days practicing with automatic hints, he got zero correct. It was as if the pair had suddenly unlearned every list that they practiced with hints. 
The study conclusion was simple: “training with hints did not produce any lasting learning.” Training without hints is slow and error-ridden. It is, essentially, what we normally think of as testing, except for the purpose of learning rather than evaluation—when “test” becomes a dreaded four-letter word. The eighth-grade math teacher was essentially testing her students in class, but she was facilitating or outright giving them the answers. Used for learning, testing, including self-testing, is a very desirable difficulty. Even testing prior to studying works, at the point when wrong answers are assured. In one of Kornell’s experiments, participants were made to learn pairs of words and later tested on recall. At test time, they did the best with pairs that they learned via practice quizzes, even if they had gotten the answers on those quizzes wrong. Struggling to retrieve information primes the brain for subsequent learning, even when the retrieval itself is unsuccessful. The struggle is real, and really useful. “Like life,” Kornell and team wrote, “retrieval is all about the journey.” • • • If that eighth-grade classroom followed a typical academic plan over the course of the year, it is precisely the opposite of what science recommends for durable learning—one topic was probably confined to one week and another to the next. Like a lot of professional development efforts, each particular concept or skill gets a short period of intense focus, and then on to the next thing, never to return. That structure makes intuitive sense, but it forgoes another important desirable difficulty: “spacing,” or distributed practice. It is what it sounds like—leaving time between practice sessions for the same material. You might call it deliberate not-practicing between bouts of deliberate practice. “There’s a limit to how long you should wait,” Kornell told me, “but it’s longer than people think. 
It could be anything, studying foreign language vocabulary or learning how to fly a plane, the harder it is, the more you learn.” Space between practice sessions creates the hardness that enhances learning. One study separated Spanish vocabulary learners into two groups—a group that learned the vocab and then was tested on it the same day, and a second that learned the vocab but was tested on it a month later. Eight years later, with no studying in the interim, the latter group retained 250 percent more. For a given amount of Spanish study, spacing made learning more productive by making it easy to make it hard. It does not take nearly that long to see the spacing effect. Iowa State researchers read people lists of words, and then asked for each list to be recited back either right away, after fifteen seconds of rehearsal, or after fifteen seconds of doing very simple math problems that prevented rehearsal. The subjects who were allowed to reproduce the lists right after hearing them did the best. Those who had fifteen seconds to rehearse before reciting came in second. The group distracted with math problems finished last. Later, when everyone thought they were finished, they were all surprised with a pop quiz: write down every word you can recall from the lists. Suddenly, the worst group became the best. Short-term rehearsal gave purely short-term benefits. Struggling to hold on to information and then recall it had helped the group distracted by math problems transfer the information from short-term to long-term memory. The group with more and immediate rehearsal opportunity recalled nearly nothing on the pop quiz. Repetition, it turned out, was less important than struggle. It isn’t bad to get an answer right while studying. Progress just should not happen too quickly, unless the learner wants to end up like Oberon (or, worse, Macduff), with a knowledge mirage that evaporates when it matters most. 
As with excessive hint-giving, it will, as a group of psychologists put it, “produce misleadingly high levels of immediate mastery that will not survive the passage of substantial periods of time.” For a given amount of material, learning is most efficient in the long run when it is really inefficient in the short run. If you are doing too well when you test yourself, the simple antidote is to wait longer before practicing the same material again, so that the test will be more difficult when you do. Frustration is not a sign you are not learning, but ease is. Platforms like Medium and LinkedIn are rife with posts about shiny new, unsupported learning hacks that promise mind-blowingly rapid progress—from special dietary supplements and “brain-training” apps to audio cues meant to alter brain waves. In 2007, the U.S. Department of Education published a report by six scientists and an accomplished teacher who were asked to identify learning strategies that truly have scientific backing. Spacing, testing, and using making-connections questions were on the extremely short list. All three impair performance in the short term. As with the making-connections questions Richland studied, it is difficult to accept that the best learning road is slow, and that doing poorly now is essential for better performance later. It is so deeply counterintuitive that it fools the learners themselves, both about their own progress and their teachers’ skill. Demonstrating that required an extraordinary study, one that only a setting like the U.S. Air Force Academy could provide. • • • In return for full scholarships, cadets at the Air Force Academy commit to serve as military officers for a minimum of eight years after graduation.* They submit to a highly structured and rigorous academic program heavy on science and engineering. It includes a minimum of three math courses for every student.
Every year, an algorithm randomly assigns incoming cadets to sections of Calculus I, each with about twenty students. To examine the impact of professors, two economists compiled data on more than ten thousand cadets who had been randomly assigned to calculus sections taught by nearly a hundred professors over a decade. Every section used the exact same syllabus, the exact same exam, and the exact same post-course professor evaluation form for cadets to fill out. After Calculus I, students were randomized again to Calculus II sections, again with the same syllabus and exam, and then again to more advanced math, science, and engineering courses. The economists confirmed that standardized test scores and high school grades were spread evenly across sections, so the instructors were facing similar challenges. The Academy even standardized test-grading procedures, so every student was evaluated in the same manner. “Potential ‘bleeding heart’ professors,” the economists wrote, “had no discretion to boost grades.” That was important, because they wanted to see what differences individual teachers made. Unsurprisingly, there was a group of Calculus I professors whose instruction most strongly boosted student performance on the Calculus I exam, and who got sterling student evaluation ratings. Another group of professors consistently added less to student performance on the exam, and students judged them more harshly in evaluations. But when the economists looked at another, longer-term measure of teacher value added—how those students did on subsequent math and engineering courses that required Calculus I as a prerequisite—the results were stunning. The Calculus I teachers who were the best at promoting student overachievement in their own class were somehow not great for their students in the long run. 
“Professors who excel at promoting contemporaneous student achievement,” the economists wrote, “on average, harm the subsequent performance of their students in more advanced classes.” What looked like a head start evaporated. The economists suggested that the professors who caused short-term struggle but long-term gains were facilitating “deep learning” by making connections. They “broaden the curriculum and produce students with a deeper understanding of the material.” It also made their courses more difficult and frustrating, as evidenced by both the students’ lower Calculus I exam scores and their harsher evaluations of their instructors. And vice versa. The calculus professor who ranked dead last in deep learning out of the hundred studied—that is, his students underperformed in subsequent classes—was sixth in student evaluations, and seventh in student performance during his own class. Students evaluated their instructors based on how they performed on tests right now—a poor measure of how well the teachers set them up for later development—so they gave the best marks to professors who provided them with the least long-term benefit. The economists concluded that students were actually selectively punishing the teachers who provided them the most long-term benefit. Tellingly, Calculus I students whose teachers had fewer qualifications and less experience did better in that class, while the students of more experienced and qualified teachers struggled in Calculus I but did better in subsequent courses. A similar study was conducted at Italy’s Bocconi University, on twelve hundred first-year students who were randomized into introductory course sections in management, economics, or law, and then the courses that followed them in a prescribed sequence over four years. It showed precisely the same pattern. Teachers who guided students to overachievement in their own course were rated highly, and undermined student performance in the long run. 
Psychologist Robert Bjork first used the phrase “desirable difficulties” in 1994. Twenty years later, he and a coauthor concluded a book chapter on applying the science of learning like this: “Above all, the most basic message is that teachers and students must avoid interpreting current performance as learning. Good performance on a test during the learning process can indicate mastery, but learners and teachers need to be aware that such performance will often index, instead, fast but fleeting progress.” • • • Here is the bright side: over the past forty years, Americans have increasingly said in national surveys that current students are getting a worse education than they themselves did, and they have been wrong. Scores from the National Assessment of Educational Progress, “the nation’s report card,” have risen steadily since the 1970s. Unquestionably, students today have mastery of basic skills that is superior to students of the past. School has not gotten worse. The goals of education have just become loftier. Education economist Greg Duncan, one of the most influential education professors in the world, has documented this trend. Focusing on “using procedures” problems worked well forty years ago when the world was flush with jobs that paid middle-class salaries for procedural tasks, like typing, filing, and working on an assembly line. “Increasingly,” according to Duncan, “jobs that pay well require employees to be able to solve unexpected problems, often while working in groups. . . . These shifts in labor force demands have in turn put new and increasingly stringent demands on schools.” Here is a math question from the early 1980s basic skills test of all public school sixth graders in Massachusetts:

Carol can ride her bike 10 miles per hour. If Carol rides her bike to the store, how long will it take? To solve this problem, you would need to know:
A) How far it is to the store.
B) What kind of bike Carol has.
C) What time Carol will leave.
D) How much Carol has to spend.

And here is a question Massachusetts sixth graders got in 2011:

Paige, Rosie, and Cheryl each spent exactly $9.00 at the same snack bar.
• Paige bought 3 bags of peanuts.
• Rosie bought 2 bags of peanuts and 2 pretzels.
• Cheryl bought 1 bag of peanuts, 1 pretzel, and 1 milk shake.
A. What is the cost, in dollars, of 1 bag of peanuts? Show or explain how you got your answer.
B. What is the cost, in dollars, of 1 pretzel? Show or explain how you got your answer.
C. What is the total number of pretzels that can be bought for the cost of 1 milk shake? Show or explain how you got your answer.

For every problem like the first one, the simple formula “distance = rate × time” could be memorized and applied. The second problem requires the connection of multiple concepts that are then applied to a new situation. The teaching strategies that current teachers experienced when they were students are no longer good enough. Knowledge increasingly needs not merely to be durable, but also flexible—both sticky and capable of broad application. Toward the end of the eighth-grade math class that I watched with Lindsey Richland, the students settled into a worksheet for what psychologists call “blocked” practice. That is, practicing the same thing repeatedly, each problem employing the same procedure. It leads to excellent immediate performance, but for knowledge to be flexible, it should be learned under varied conditions, an approach called varied or mixed practice, or, to researchers, “interleaving.” Interleaving has been shown to improve inductive reasoning. When presented with different examples mixed together, students learn to create abstract generalizations that allow them to apply what they learned to material they have never encountered before. For example, say you plan to visit a museum and want to be able to identify the artist (Cézanne, Picasso, or Renoir) of paintings there that you have never seen.
Before you go, instead of studying a stack of Cézanne flash cards, and then a stack of Picasso flash cards, and then a stack of Renoir, you should put the cards together and shuffle, so they will be interleaved. You will struggle more (and probably feel less confident) during practice, but be better equipped on museum day to discern each painter’s style, even for paintings that weren’t in the flash cards. In a study using college math problems, students who learned in blocks—all examples of a particular type of problem at once—performed a lot worse come test time than students who studied the exact same problems but all mixed up. The blocked-practice students learned procedures for each type of problem through repetition. The mixed-practice students learned how to differentiate types of problems. The same effect has appeared among learners studying everything from butterfly species identification to psychological-disorder diagnosis. In research on naval air defense simulations, individuals who engaged in highly mixed practice performed worse than blocked practicers during training, when they had to respond to potential threat scenarios that became familiar over the course of the training. At test time, everyone faced completely new scenarios, and the mixed-practice group destroyed the blocked-practice group. And yet interleaving tends to fool learners about their own progress. In one of Kornell and Bjork’s interleaving studies, 80 percent of students were sure they had learned better with blocked than mixed practice, whereas 80 percent performed in a manner that proved the opposite. The feeling of learning, it turns out, is based on before-your-eyes progress, while deep learning is not. “When your intuition says block,” Kornell told me, “you should probably interleave.” Interleaving is a desirable difficulty that frequently holds for both physical and mental skills.
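The difference between the two flash-card schedules can be sketched in a few lines of code. This is a minimal illustration, not anything from the studies themselves: the card labels, dictionary, and function names are invented, and the artist names are written without accents for plain identifiers.

```python
import random

def blocked_schedule(cards_by_artist):
    """Blocked practice: finish one artist's entire stack before the next."""
    schedule = []
    for artist, cards in cards_by_artist.items():
        schedule.extend((artist, card) for card in cards)
    return schedule

def interleaved_schedule(cards_by_artist, seed=0):
    """Interleaved practice: pool every artist's cards and shuffle together."""
    pooled = [(artist, card)
              for artist, cards in cards_by_artist.items()
              for card in cards]
    random.Random(seed).shuffle(pooled)  # seeded for a repeatable illustration
    return pooled

stacks = {
    "Cezanne": ["card1", "card2", "card3"],
    "Picasso": ["card1", "card2", "card3"],
    "Renoir":  ["card1", "card2", "card3"],
}

blocked = blocked_schedule(stacks)          # all Cezanne, then all Picasso, ...
interleaved = interleaved_schedule(stacks)  # the same nine cards, mixed order
```

Both schedules contain exactly the same cards; the only thing interleaving changes is the order, so that nearly every draw forces the learner to decide which artist they are looking at rather than repeating the answer from the previous card.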
A simple motor-skill example is an experiment in which piano students were asked to learn to execute, in one-fifth of a second, a particular left-hand jump across fifteen keys. They were allowed 190 practice attempts. Some used all of those practicing the fifteen-key jump, while others switched between eight-, twelve-, fifteen-, and twenty-two-key jumps. When the piano students were invited back for a test, those who underwent the mixed practice were faster and more accurate at the fifteen-key jump than the students who had only practiced that exact jump. The “desirable difficulty” coiner himself, Robert Bjork, once commented on Shaquille O’Neal’s perpetual free-throw woes to say that instead of continuing to practice from the free-throw line, O’Neal should practice from a foot in front of and behind it to learn the motor modulation he needed. Whether the task is mental or physical, interleaving improves the ability to match the right strategy to a problem. That happens to be a hallmark of expert problem solving. Whether chemists, physicists, or political scientists, the most successful problem solvers spend mental energy figuring out what type of problem they are facing before matching a strategy to it, rather than jumping in with memorized procedures. In that way, they are just about the precise opposite of experts who develop in kind learning environments, like chess masters, who rely heavily on intuition. Kind learning environment experts choose a strategy and then evaluate; experts in less repetitive environments evaluate and then choose. • • • Desirable difficulties like testing and spacing make knowledge stick. It becomes durable. Desirable difficulties like making connections and interleaving make knowledge flexible, useful for problems that never appeared in training. All slow down learning and make performance suffer, in the short term. 
That can be a problem, because like the Air Force cadets, we all reflexively assess our progress by how we are doing right now. And like the Air Force cadets, we are often wrong. In 2017, Greg Duncan, the education economist, along with psychologist Drew Bailey and colleagues, reviewed sixty-seven early childhood education programs meant to boost academic achievement. Programs like Head Start did give a head start, but academically that was about it. The researchers found a pervasive “fadeout” effect, where a temporary academic advantage quickly diminished and often completely vanished. On a graph, it looks eerily like the graphs that show future elite athletes catching up to their peers who got a head start in deliberate practice. A reason for this, the researchers concluded, is that early childhood education programs teach “closed” skills that can be acquired quickly with repetition of procedures, but that everyone will pick up at some point anyway. The fadeout was not a disappearance of skill so much as the rest of the world catching up. The motor-skill equivalent would be teaching a kid to walk a little early. Everyone is going to learn it anyway, and while it might be temporarily impressive, there is no evidence that rushing it matters. The research team recommended that if programs want to impart lasting academic benefits they should focus instead on “open” skills that scaffold later knowledge. Teaching kids to read a little early is not a lasting advantage. Teaching them how to hunt for and connect contextual clues to understand what they read can be. As with all desirable difficulties, the trouble is that a head start comes fast, but deep learning is slow. “The slowest growth,” the researchers wrote, occurs “for the most complex skills.” Duncan landed on the Today show discussing his team’s findings. The counteropinion was supplied by parents and an early childhood teacher who were confident that they could see a child’s progress. That is not in dispute.
The question is how well they can judge the impact on future learning, and the evidence says that, like the Air Force cadets, the answer is not very well.* Before-our-eyes progress reinforces our instinct to do more of the same, but just like the case of the typhoid doctor, the feedback teaches the wrong lesson. Learning deeply means learning slowly. The cult of the head start fails the learners it seeks to serve. Knowledge with enduring utility must be very flexible, composed of mental schemes that can be matched to new problems. The virtual naval officers in the air defense simulation and the math students who engaged in interleaved practice were learning to recognize deep structural commonalities in types of problems. They could not rely on the same type of problem repeating, so they had to identify underlying conceptual connections in simulated battle threats, or math problems, that they had never actually seen before. They then matched a strategy to each new problem. When a knowledge structure is so flexible that it can be applied effectively even in new domains or extremely novel situations, it is called “far transfer.” There is a particular type of thinking that facilitates far transfer—a type that Alexander Luria’s Uzbek villagers could not employ—and that can seem far-fetched precisely because of how far it transfers. And it’s a mode of broad thinking that none of us employ enough. CHAPTER 5 Thinking Outside Experience THE SEVENTEENTH CENTURY was approaching. The universe was one in which celestial bodies moved around the stationary Earth powered by individual spirits, ineffable planetary souls. The Polish astronomer Nicolaus Copernicus had proposed that planets moved around the sun, but the idea was still so unorthodox that Italian philosopher Giordano Bruno was censured for teaching it, and later burned at the stake as a heretic for insisting there were other suns surrounded by other planets. 
Their spirits may have been driving, but the planets also needed a vehicle for motion, so they were assumed to be riding on pure crystalline spheres. The spheres were invisible from Earth and interlocked, like the gears of a clock, to produce collective motion at a constant speed for all eternity. Plato and Aristotle had laid the foundation for the accepted model, and it dominated for two thousand years. That clockwork universe was the one German astronomer Johannes Kepler inherited. He accepted it, at first. When the constellation Cassiopeia suddenly gained a new star (it was actually a supernova, the bright explosion at the end of a star’s life), Kepler recognized that the idea of the unchanging heavens could not be correct. A few years later, a comet tracked across the European sky. Shouldn’t it have cracked the crystalline spheres as it traveled, Kepler wondered? He began to doubt two millennia worth of accepted wisdom. By 1596, when he turned twenty-five, Kepler had accepted the Copernican model of planets orbiting the sun, and now he posed another profound question: Why do planets that are farther away from the sun move more slowly? Perhaps the more distant planets had weaker “moving souls.” But why would that be? Just coincidence? Maybe, he thought, rather than many spirits, there was just one, inside the sun, which for some reason acted more powerfully on nearby planets. Kepler was so far outside the bounds of previous thought that there was no evidence in existence for him to work from. He had to use analogies. Smells and heat dissipate predictably farther from their source, which meant that a mysterious planet-moving power from the sun might as well. But smells and heat are also detectable everywhere along their path, whereas the sun’s moving soul, Kepler wrote, is “poured out throughout the whole world, and yet does not exist anywhere but where there is something movable.” Was there any proof that such a thing could exist? 
Light “makes its nest in the sun,” Kepler wrote, and yet appears not to exist between its source and an object it lights up. If light can do it, so could some other physical entity. He began using the words “power” or “force” instead of “soul” and “spirit.” Kepler’s “moving power” was a precursor to gravity, an astounding mental leap because it came before science embraced the notion of physical forces that act throughout the universe. Given how the moving power seemed to emanate from the sun and disperse in space, Kepler wondered if light itself or some light-like force caused planetary motion. Well, then, could the moving power be blocked like light? Planetary motion did not stop during an eclipse, Kepler reasoned, so the moving power could not be just like light, or depend on light. He needed a new analogy. Kepler read a newly published description of magnetism, and thought maybe the planets were like magnets, with poles at either end. He realized that each planet moved more slowly when it was farther in its orbit from the sun, so perhaps the planets and the sun were attracting and repelling one another depending on which poles were nearby. That might explain why the planets moved toward and away from the sun, but why did they keep moving forward in their orbits? The sun’s power seemed somehow to also push them forward. On to the next analogy. The sun rotates on its axis and creates a whirlpool of moving power that sweeps the planets around like boats in a current. Kepler liked that, but it raised a new problem. He had realized that orbits were not perfectly circular, so what kind of strange current was the sun creating? The whirlpool analogy was incomplete without boatmen. Boatmen in a whirling river can steer their boats perpendicular to the current, so maybe planets could steer in the sun’s current, Kepler surmised. 
A circular current could explain why all the planets move in the same direction, and then each planet steered through the current to keep from getting sucked into the center, which made the orbits not quite circular. But then who was captaining each ship? That brought Kepler all the way back to spirits, and he was not happy about it. “Kepler,” he wrote to himself, “dost thou wish then to equip each planet with two eyes?” Each time he got stuck, Kepler unleashed a fusillade of analogies. Not just light, heat, odor, currents and boatmen, but optics of lenses, balance scales, a broom, magnets, a magnetic broom, orators gazing at a crowd, and more. He interrogated each one ruthlessly, every time alighting on new questions. He eventually decided that celestial bodies pulled one another, and larger bodies had more pull. That led him to claim (correctly) that the moon influenced tides on Earth. Galileo, the embodiment of bold truths, mocked him for the ridiculous idea of “the moon’s dominion over the waters.” Kepler’s intellectual wanderings traced a staggering journey, from planets imbued with souls and riding on interlocking crystalline spheres in perfect circles around the stationary Earth, to his illumination of the laws of planetary motion, which showed that the planets move in ellipses that are predictable based on their relation to the sun. More important, Kepler invented astrophysics. He did not inherit an idea of universal physical forces. There was no concept of gravity as a force, and he had no notion of momentum that keeps the planets in motion. Analogies were all he had. He became the first discoverer of causal physical laws for phenomena in the heavens, and he realized it. “Ye physicists,” he wrote when he published his laws of planetary motion, “prick your ears, for now we are going to invade your territory.” The title of his magnum opus: A New Astronomy Based upon Causes.
In an age when alchemy was still a common approach to natural phenomena, Kepler filled the universe with invisible forces acting all around us, and helped usher in the Scientific Revolution. His fastidious documentation of every meandering path his brain blazed is one of the great records of a mind undergoing creative transformation. It is a truism to say that Kepler thought outside the box. But what he really did, whenever he was stuck, was to think entirely outside the domain. He left a brightly lit trail of his favorite tools for doing that, the ones that allowed him to cast outside eyes upon wisdom his peers simply accepted. “I especially love analogies,” he wrote, “my most faithful masters, acquainted with all the secrets of nature. . . . One should make great use of them.” • • • Mention Kepler if you want to get Northwestern University psychologist Dedre Gentner excited. She gesticulates. Her tortoiseshell glasses bob up and down. She is probably the world’s foremost authority on analogical thinking. Deep analogical thinking is the practice of recognizing conceptual similarities in multiple domains or scenarios that may seem to have little in common on the surface. It is a powerful tool for solving wicked problems, and Kepler was an analogy addict, so Gentner is naturally very fond of him. When she mentions a trivial historical detail about him that might be misunderstood by modern readers, she suggests that maybe it’s best not to publish it as it might make him look bad, though he has been dead for nearly four hundred years. “In my opinion,” Gentner told me, “our ability to think relationally is one of the reasons we’re running the planet. Relations are really hard for other species.” Analogical thinking takes the new and makes it familiar, or takes the familiar and puts it in a new light, and allows humans to reason through problems they have never seen in unfamiliar contexts. It also allows us to understand that which we cannot see at all. 
Students might learn about the motion of molecules by analogy to billiard-ball collisions; principles of electricity can be understood with analogies to water flow through plumbing. Concepts from biology serve as analogies to inform the cutting edge of artificial intelligence: “neural networks” that learn how to identify images from examples (when you search cat pictures, for instance) were conceived as akin to the neurons of the brain, and “genetic algorithms” are conceptually based on evolution by natural selection—solutions are tried, evaluated, and the more successful solutions pass on properties to the next round of solutions, ad infinitum. It is the furthest extension of the type of thinking that was foreign to Luria’s premodern villagers, whose problem solving depended on direct experience. Kepler was facing a problem not just new to himself, but to all humanity. There was no experience database to draw on. To investigate whether he should be the first ever to propose “action at a distance” in the heavens (a mysterious power invisibly traversing space and then appearing at its target), he turned to analogy (odor, heat, light) to consider whether it was conceptually possible. He followed that up with a litany of distant analogies (magnets, boats) to think through the problem. Most problems, of course, are not new, so we can rely on what Gentner calls “surface” analogies from our own experience. “Most of the time, if you’re reminded of things that are similar on the surface, they’re going to be relationally similar as well,” she explained. Remember how you fixed the clogged bathtub drain in the old apartment? That will probably come to mind when the kitchen sink is clogged in the new one. But the idea that surface analogies that pop to mind work for novel problems is a “kind world” hypothesis, Gentner told me. Like kind learning environments, a kind world is based on repeating patterns. 
“It’s perfectly fine,” she said, “if you stay in the same village or the same savannah all your life.” The current world is not so kind; it requires thinking that cannot fall back on previous experience. Like math students, we need to be able to pick a strategy for problems we have never seen before. “In the life we lead today,” Gentner told me, “we need to be reminded of things that are only abstractly or relationally similar. And the more creative you want to be, the more important that is.” • • • In the course of studying problem solving in the 1930s, Karl Duncker posed one of the most famous hypothetical problems in all of cognitive psychology. It goes like this: Suppose you are a doctor faced with a patient who has a malignant stomach tumor. It is impossible to operate on this patient, but unless the tumor is destroyed the patient will die. There is a kind of ray that can be used to destroy the tumor. If the rays reach the tumor all at once at a sufficiently high intensity, the tumor will be destroyed. Unfortunately, at this intensity the healthy tissue that the rays pass through on the way to the tumor will also be destroyed. At lower intensities the rays are harmless to healthy tissue, but they will not affect the tumor either. What type of procedure might be used to destroy the tumor with the rays, and at the same time avoid destroying the healthy tissue? It’s on you to excise the tumor and save the patient, but the rays are either too powerful or too weak. How can you solve this? While you’re thinking, a little story to pass the time: There once was a general who needed to capture a fortress in the middle of a country from a brutal dictator. If the general could get all of his troops to the fortress at the same time, they would have no problem taking it. Plenty of roads that the troops could travel radiated out from the fort like wheel spokes, but they were strewn with mines, so only small groups of soldiers could safely traverse any one road. 
The general came up with a plan. He divided the army into small groups, and each group traveled a different road leading to the fortress. They synchronized their watches, and made sure to converge on the fortress at the same time via their separate roads. The plan worked. The general captured the fortress and overthrew the dictator. Have you saved the patient yet? Just one last story while you’re still thinking: Years ago, a small-town fire chief arrived at a woodshed fire, concerned that it would spread to a nearby house if it was not extinguished quickly. There was no hydrant nearby, but the shed was next to a lake, so there was plenty of water. Dozens of neighbors were already taking turns with buckets throwing water on the shed, but they weren’t making any progress. The neighbors were surprised when the fire chief yelled at them to stop, and to all go fill their buckets in the lake. When they returned, the chief arranged them in a circle around the shed, and on the count of three had them all throw their water at once. The fire was immediately dampened, and soon thereafter extinguished. The town gave the fire chief a pay raise as a reward for quick thinking. Are you done saving your patient? Don’t feel bad, almost no one solves it. At least not at first, and then nearly everyone solves it. Only about 10 percent of people solve “Duncker’s radiation problem” initially. Presented with both the radiation problem and the fortress story, about 30 percent solve it and save the patient. Given both of those plus the fire chief story, half solve it. Given the fortress and the fire chief stories and then told to use them to help solve the radiation problem, 80 percent save the patient. The answer is that you (the doctor) could direct multiple low-intensity rays at the tumor from different directions, leaving healthy tissue intact, but converging at the tumor site with enough collective intensity to destroy it. 
Just like how the general divided up troops and directed them to converge at the fortress, and how the fire chief arranged neighbors with their buckets around the burning shed so that their water would converge on the fire simultaneously. Those results are from a series of 1980s analogical thinking studies. Really, don’t feel bad if you didn’t get it. In a real experiment you would have taken more time, and whether you got it or not is unimportant. The important part is what it shows about problem solving. A gift of a single analogy from a different domain tripled the proportion of solvers who got the radiation problem. Two analogies from disparate domains gave an even bigger boost. The impact of the fortress story alone was as large as if solvers were just straight out told this guiding principle: “If you need a large force to accomplish some purpose, but are prevented from applying such a force directly, many smaller forces applied simultaneously from different directions may work just as well.” The scientists who did that work expected that analogies would be fuel for problem solving, but they were surprised that most solvers working on the radiation problem did not find clues in the fortress story until they were directed to do so. “One might well have supposed,” the scientists wrote, that “being in a psychology experiment would have led virtually all subjects to consider how the first part [of the study] might be related to the second.” Human intuition, it appears, is not very well engineered to make use of the best tools when faced with what the researchers called “ill-defined” problems. Our experience-based instincts are set up well for Tiger domains, the kind world Gentner described, where problems and solutions repeat. An experiment on Stanford international relations students during the Cold War provided a cautionary tale about relying on kind-world reasoning—that is, drawing only on the first analogy that feels familiar. 
The students were told that a small, fictional democratic country was under threat from a totalitarian neighbor, and they had to decide how the United States should respond. Some students were given descriptions that likened the situation to World War II (refugees in boxcars; a president “from New York, the same state as FDR”; a meeting in “Winston Churchill Hall”). For others, it was likened to Vietnam (a president “from Texas, the same state as LBJ,” and refugees in boats). The international relations students who were reminded of World War II were far more likely to choose to go to war; the students reminded of Vietnam opted for nonmilitary diplomacy. That phenomenon has been documented all over the place. College football coaches rated the same player’s potential very differently depending on what former player he was likened to in an introductory description, even with all other information kept exactly the same. With the difficult radiation problem, the most successful strategy employed multiple situations that were not at all alike on the surface, but held deep structural similarities. Most problem solvers are not like Kepler. They will stay inside of the problem at hand, focused on the internal details, and perhaps summon other medical knowledge, since it is on the surface a medical problem. They will not intuitively turn to distant analogies to probe solutions. They should, though, and they should make sure some of those analogies are, on the surface, far removed from the current problem. In a wicked world, relying upon experience from a single domain is not only limiting, it can be disastrous. • • • The trouble with using no more than a single analogy, particularly one from a very similar situation, is that it does not help battle the natural impulse to employ the “inside view,” a term coined by psychologists Daniel Kahneman and Amos Tversky. 
We take the inside view when we make judgments based narrowly on the details of a particular project that are right in front of us. Kahneman had a personal experience with the dangers of the inside view when he assembled a team to write a high school curriculum on the science of decision making. After a full year of weekly meetings, he surveyed the entire team to find out how long everyone thought the project would take. The lowest estimate was one and a half years, the highest two and a half years. Kahneman then asked a team member named Seymour, a distinguished curriculum expert who had seen the process with other teams, how this one compared. Seymour thought for a while. Moments earlier, he had estimated it would take about two more years. Faced with Kahneman’s question about other teams, he said he had never even thought to compare this instance to separate projects, but that about 40 percent of the teams he’d seen never finished at all, and not a single one he could think of took less than seven years. Kahneman’s group was not willing to spend six more years on a curriculum project that might fail. They spent a few minutes debating the new opinion, and decided to forge ahead trusting the about-two-years wisdom of the group. Eight years later, they finished, by which point Kahneman was not even on the team or living in the country, and the agency that asked for the curriculum was no longer interested. Our natural inclination to take the inside view can be defeated by following analogies to the “outside view.” The outside view probes for deep structural similarities to the current problem in different ones. The outside view is deeply counterintuitive because it requires a decision maker to ignore unique surface features of the current project, on which they are the expert, and instead look outside for structurally similar analogies. It requires a mindset switch from narrow to broad. 
For a unique 2012 experiment, University of Sydney business strategy professor Dan Lovallo—who had conducted inside-view research with Kahneman—and a pair of economists theorized that starting out by making loads of diverse analogies, Kepler style, would naturally lead to the outside view and improve decisions. They recruited investors from large private equity firms who consider a huge number of potential projects in a variety of domains. The researchers thought the investors’ work might naturally lend itself to the outside view. The private equity investors were told to assess a real project they were currently working on with a detailed description of the steps to success, and to predict the project’s return on investment. They were then asked to write down a batch of other investment projects they knew of with broad conceptual similarity to theirs—for instance, other examples of a business owner looking to sell, or a start-up with a technologically risky product. They were instructed to estimate the return for each of those examples too. In the end, the investors estimated that the return on their own project would be about 50 percent higher than the outside projects they had identified as conceptually similar. When given the chance at the end to rethink and revise, they slashed their own initial estimate. “They were sort of shocked,” Lovallo told me, “and the senior people were the most shocked.” The investors initially judged their own projects, where they knew all the details, completely differently from similar projects to which they were outsiders. This is a widespread phenomenon. If you’re asked to predict whether a particular horse will win a race or a particular politician will win an election, the more internal details you learn about any particular scenario—physical qualities of the specific horse, the background and strategy of the particular politician—the more likely you are to say that the scenario you are investigating will occur. 
Psychologists have shown repeatedly that the more internal details an individual can be made to consider, the more extreme their judgment becomes. The venture capitalists knew more details about their own project, and judged that it would be an extreme success, until they were forced to consider other projects with broad conceptual similarities. In another example, students rated a university a lot better if they were told about a few specific science departments that were ranked in the top ten nationally than if they were simply told that every science department at the university was ranked among the top ten. In one famous study, participants judged an individual as more likely to die from “heart disease, cancer, or other natural causes” than from “natural causes.” Focusing narrowly on many fine details specific to a problem at hand feels like the exact right thing to do, when it is often exactly wrong. Bent Flyvbjerg, chair of Major Programme Management at Oxford University’s business school, has shown that around 90 percent of major infrastructure projects worldwide go over budget (by an average of 28 percent) in part because managers focus on the details of their project and become overly optimistic. Project managers can become like Kahneman’s curriculum-building team, which decided that thanks to its roster of experts it would certainly not encounter the same delays as did other groups. Flyvbjerg studied a project to build a tram system in Scotland, in which an outside consulting team actually went through an analogy process akin to what the private equity investors were instructed to do. They ignored specifics of the project at hand and focused on others with structural similarities. The consulting team saw that the project group had made a rigorous analysis using all of the details of the work to be done. 
And yet, using analogies to separate projects, the consulting team concluded that the cost projection of £320 million (more than $400 million) was probably a massive underestimate. When the tram opened three years late, it was headed toward £1 billion. After that, other UK infrastructure projects began implementing outside-view approaches, essentially forcing managers to make analogies to many outside projects of the past. Following their private-equity-investor experiment, the outside-view researchers turned to the movie business, a notoriously uncertain realm with high risk, high reward, and a huge store of data on actual outcomes. They wondered if forcing analogical thinking on moviegoers could lead to accurate forecasts of film success. They started by giving hundreds of movie fans basic film information—lead actor names, the promotional poster, and a synopsis—for an upcoming release. At the time, those included Wedding Crashers, Fantastic Four, Deuce Bigalow: European Gigolo, and others. The moviegoers were also given a list of forty older movies, and asked to score how well each one probably served as an analogy to each upcoming release. The researchers used those similarity scores (and a little basic film information, like whether it was a sequel) to predict the eventual revenue of the upcoming releases. They pitted those predictions against a mathematical model stuffed with information about seventeen hundred past movies and each upcoming film, including genre, budget, star actors, release year, and whether it was a holiday release. Even without all that detailed information, the revenue predictions that used moviegoer analogy scores were vastly better. The moviegoer-analogies forecast performed better on fifteen of nineteen upcoming releases. Using the moviegoers’ analogies gave revenue projections that were less than 4 percent off for War of the Worlds, Bewitched, and Red Eye, and 1.7 percent off for Deuce Bigalow: European Gigolo. 
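The mechanics of that forecast can be sketched in a few lines. This is a minimal illustration of the reference-class idea, not the researchers’ actual model: the films, similarity scores, and revenue figures below are invented for the example, and the real study also folded in basic film information like sequel status.

```python
# Reference-class forecasting sketch: predict a new film's revenue as a
# similarity-weighted average of the revenues of older films, using the
# fans' analogy scores as weights. All numbers here are made up.

def forecast(analogies):
    """analogies: list of (similarity_score, past_revenue) pairs."""
    num = sum(score * revenue for score, revenue in analogies)
    den = sum(score for score, _ in analogies)
    return num / den if den else None

# A full reference class of analogous past films...
reference_class = [(0.9, 200.0), (0.7, 120.0), (0.4, 60.0), (0.2, 300.0)]
print(forecast(reference_class))

# ...versus leaning on only the single "best" analogy, which is the
# move the chapter warns collapses predictive power.
best_only = [max(reference_class)]
print(forecast(best_only))
```

With the whole reference class, the outlier analogy is diluted by the others; with only the single most similar film, the forecast simply inherits that one film’s revenue, noise and all.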
Netflix came to a similar conclusion for improving its recommendation algorithm. Decoding movies’ traits to figure out what you like was very complex and less accurate than simply analogizing you to many other customers with similar viewing histories. Instead of predicting what you might like, they examine who you are like, and the complexity is captured therein. Interestingly, if the researchers used only the single film that the movie fans ranked as most analogous to the new release, predictive power collapsed. What seemed like the single best analogy did not do well on its own. Using a full “reference class” of analogies—the pillar of the outside view—was immensely more accurate. Think back to chapter 1, to the types of intuitive experts that Gary Klein studied in kind learning environments, like chess masters and firefighters. Rather than beginning by generating options, they leap to a decision based on pattern recognition of surface features. They may then evaluate it, if they have time, but often stick with it. This time will probably be like the last time, so extensive narrow experience works. Generating new ideas or facing novel problems with high uncertainty is nothing like that. Evaluating an array of options before letting intuition reign is a trick for the wicked world. In another experiment, Lovallo and his collaborator Ferdinand Dubin asked 150 business students to generate strategies to help the fictitious Mickey Company, which was struggling with its computer mouse business in Australia and China. After business students learned about the company’s challenges, they were told to write down all the strategies they could think of to try to improve Mickey’s position. Lovallo and Dubin gave some students one or more analogies in their instructions. (For example: “The profile of Nike Inc. and McDonald’s Corp. may be helpful to supplement your recommendations but should not limit them.”) Other students got none. 
The students prompted with one analogy came up with more strategies than those given no analogies, and students given multiple analogies came up with more strategies than those reminded only of one. And the more distant the analogy, the better it was for idea generation. Students who were pointed to Nike and McDonald’s generated more strategic options than their peers who were reminded of computer companies Apple and Dell. Just being reminded to analogize widely made the business students more creative. Unfortunately, students also said that if they were to use analogy companies at all, they believed the best way to generate strategic options would be to focus on a single example in the same field. Like the venture capitalists, their intuition was to use too few analogies, and to rely on those that were the most superficially similar. “That’s usually exactly the wrong way to go about it regardless of what you’re using analogy for,” Lovallo told me. The good news is that it is easy to ride analogies from the intuitive inside view to the outside view. In 2001, the Boston Consulting Group, one of the most successful consulting firms in the world, created an intranet site to provide consultants with collections of material to facilitate wide-ranging analogical thinking. The interactive “exhibits” were sorted by discipline (anthropology, psychology, history, and others), concept (change, logistics, productivity, and so on), and strategic theme (competition, cooperation, unions and alliances, and more). A consultant generating strategies for a post-merger integration might have perused the exhibit on how William the Conqueror “merged” England with the Norman Kingdom in the eleventh century. An exhibit that described Sherlock Holmes’s observational strategies could have provided ideas for learning from details that experienced professionals take for granted. 
And a consultant working with a rapidly expanding start-up might have gleaned ideas from the writing of a Prussian military strategist who studied the fragile equilibrium between maintaining momentum after a victory and overshooting a goal by so much that it turns into a defeat. If that all sounds incredibly remote from pressing business concerns, that is exactly the point. • • • Dedre Gentner wanted to find out if everyone can be a bit more like Kepler, capable of wielding distant analogies to understand problems. So she helped create the “Ambiguous Sorting Task.” It consists of twenty-five cards, each one describing a real-world phenomenon, like how internet routers or economic bubbles work. Each card falls into two main categories, one for its domain (economics, biology, and so on) and one for its deep structure. Participants are asked to sort the cards into like categories. For a deep structure example, you might put economic bubbles and melting polar ice caps together as positive-feedback loops. (In economic bubbles, consumers buy stocks or property with the idea that the price will increase; that buying causes the price to increase, which leads to more buying. When ice caps melt, they reflect less sunlight back to space, which warms the planet, causing more ice to melt.) Or perhaps you would put the act of sweating and actions of the Federal Reserve together as negative-feedback loops. (Sweating cools the body so that more sweating is no longer required. The Fed lowers interest rates to spur the economy; if the economy grows too quickly, the Fed raises rates to slow down the activity it launched.) The way gas prices lead to an increase in grocery prices and the steps needed for a message to traverse neurons in your brain are both examples of causal chains, where one event leads to another, which leads to another, in linear order. 
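The two deep structures behind those card groupings are easy to make concrete. Below is a toy sketch, with arbitrary gains and starting values chosen purely for illustration: a positive-feedback loop amplifies its own output each step (the bubble, the melting ice caps), while a negative-feedback loop corrects a fraction of the deviation from a set point each step (sweating, the Fed).

```python
# Toy dynamics for the two deep structures in the sorting task.

def positive_feedback(x, gain=1.5, steps=5):
    """Each step feeds the output back in with gain > 1, so the
    quantity amplifies itself (bubble-style)."""
    trajectory = [x]
    for _ in range(steps):
        x *= gain
        trajectory.append(x)
    return trajectory

def negative_feedback(x, setpoint=0.0, gain=0.5, steps=5):
    """Each step removes a fraction of the deviation from the set
    point, so the quantity is damped back toward it (sweating-style)."""
    trajectory = [x]
    for _ in range(steps):
        x -= gain * (x - setpoint)
        trajectory.append(x)
    return trajectory

print(positive_feedback(1.0))   # the deviation grows each step
print(negative_feedback(8.0))   # the deviation shrinks toward the set point
```

The surface content (stocks, ice, sweat, interest rates) never appears in the code; only the causal structure does, which is exactly the distinction the sorting task asks participants to notice.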
Alternatively, you might group Federal Reserve rate changes, economic bubbles, and gas price changes together because they are all in the same domain: economics. And you might put sweating and neurotransmission together under biology. Gentner and colleagues gave the Ambiguous Sorting Task to Northwestern University students from an array of majors and found that all of the students figured out how to group phenomena by domains. But fewer could come up with groupings based on causal structure. There was a group of students, however, who were particularly good at finding common deep structures: students who had taken classes in a range of domains, like those in the Integrated Science Program. Northwestern’s website for the program features an alum’s description: “Think of the Integrated Science Program as a biology minor, chemistry minor, physics minor, and math minor combined into a single major. The primary intent of this program is to expose students to all fields of the natural and mathematical sciences so that they can see commonalities among different fields of the natural sciences. . . . The ISP major allows you to see connections across different disciplines.” A professor I asked about the Integrated Science Program told me that specific academic departments are generally not big fans. They want students to take more specialized classes in a single department. They are concerned about the students falling behind. They would rather rush them to specialization than equip them with ideas from what Gentner referred to as a “variety of base domains,” which foster analogical thinking and conceptual connections that can help students categorize the type of problem they are facing. That is precisely a skill that sets the most adept problem solvers apart. 
In one of the most cited studies of expert problem solving ever conducted, an interdisciplinary team of scientists came to a pretty simple conclusion: successful problem solvers are more able to determine the deep structure of a problem before they proceed to match a strategy to it. Less successful problem solvers are more like most students in the Ambiguous Sorting Task: they mentally classify problems only by superficial, overtly stated features, like the domain context. For the best performers, they wrote, problem solving “begins with the typing of the problem.” As education pioneer John Dewey put it in Logic: The Theory of Inquiry, “a problem well put is half-solved.” • • • Before he began his tortuous march of analogies toward reimagining the universe, Kepler had to get very confused on his homework. Unlike Galileo and Isaac Newton, he documented his confusion. “What matters to me,” Kepler wrote, “is not merely to impart to the reader what I have to say, but above all to convey to him the reasons, subterfuges, and lucky hazards which led me to my discoveries.” Kepler was a young man when he showed up to work at Tycho Brahe’s observatory—so cutting edge at the time that it cost 1 percent of the national budget of Denmark. He was given the assignment nobody wanted: Mars and its perplexing orbit. The orbit had to be a circle, Kepler was told, so he had to figure out why Brahe’s observations didn’t match that. Every once in a while, Mars appears to reverse course in the sky, do a little loop, and then carry on in the original direction, a feat known as retrograde motion. Astronomers proposed elaborate contortions to explain how Mars could accomplish this while riding the interlocking spheres of the sky. As usual, Kepler could not accept contortions. He asked peers for help, but his pleas fell on deaf ears. His predecessors had always managed to explain away the Mars deviations without scrapping the overall scheme. 
Kepler’s short Mars assignment (he guessed it would take eight days) turned into five years of calculations trying to describe where Mars appeared in the sky at any given moment. No sooner had Kepler done it with great accuracy than he threw it away. It was close, but not perfect. The imperfection was minuscule. Just two of Brahe’s observations differed from Kepler’s calculations of where Mars should be, and by just eight minutes of arc, a sliver of sky one-eighth the width of a pinkie finger held at arm’s length. Kepler could have assumed his model was correct and those two observations were slightly off, or he could dispense with five years of work. He chose to trash his model. “If I had believed we could ignore these eight minutes,” he wrote, “I would have patched my hypothesis accordingly.” The assignment no one wanted became Kepler’s keyhole view into a new understanding of the universe. He was in uncharted territory. The analogies began in earnest, and he reinvented astronomy. Light, heat, smells, boats, brooms, magnets—it began with those pesky observations that didn’t quite fit, and ended in the complete undoing of Aristotle’s clockwork universe. Kepler did something that turns out to be characteristic of today’s world-class research labs. Psychologist Kevin Dunbar began documenting how productive labs work in the 1990s, and stumbled upon a modern version of Keplerian thinking. Faced with an unexpected finding, rather than assuming the current theory is correct and that an observation must be off, the unexpected became an opportunity to venture somewhere new—and analogies served as the wilderness guide. When Dunbar started, he simply set out to document the process of discovery in real time. He focused on molecular biology labs because they were blazing new trails, particularly in genetics and treatments for viruses, like HIV. 
He spent a year with four labs in the United States, playing a fly on the wall, visiting the labs every day for months, and later extended the work to more labs in the United States, Canada, and Italy. He became such a familiar presence that scientists called him to make sure he knew about impromptu meetings. The surface features of the labs were very different. One had dozens of members, others were small. A few were all men, one was all women. All had international reputations. The weekly lab meetings made the most interesting viewing. Once a week, the entire team came together—lab director, grad students, postdoctoral fellows, technicians—to discuss some challenge a lab member was facing. The meetings were nothing like the heads-down, solitary work in stereotypical portrayals of scientists, huddled over their test tubes. Dunbar saw free-flowing and spontaneous exchange. Ideas were batted back and forth, new experiments proposed, obstacles discussed. “Those are some of the most creative moments in science,” he told me. So he recorded them. The first fifteen minutes could be housekeeping—whose turn it was to order supplies, or who had left a mess. Then the action started. Someone presented an unexpected or confusing finding, their version of Kepler’s Mars orbit. Prudently, scientists’ first instinct was to blame themselves, some error in calculation or poorly calibrated equipment. If it kept up, the lab accepted the result as real, and ideas about what to try and what might be going on started flying. Every hour of lab meeting Dunbar recorded required eight hours of transcribing and labeling problem-solving behaviors so that he could analyze the process of scientific creativity, and he found an analogy fest. Dunbar witnessed important breakthroughs live, and saw that the labs most likely to turn unexpected findings into new knowledge for humanity made a lot of analogies, and made them from a variety of base domains. 
The labs in which scientists had more diverse professional backgrounds were the ones where more and more varied analogies were offered, and where breakthroughs were more reliably produced when the unexpected arose. Those labs were Keplers by committee. They included members with a wide variety of experiences and interests. When the moment came to either dismiss or embrace and grapple with information that puzzled them, they drew on their range to make analogies. Lots of them. For relatively straightforward challenges, labs started with analogies to other, very similar experiments. The more unusual the challenge, the more distant the analogies, moving away from surface similarities and toward deep structural similarities. In some lab meetings a new analogy entered the conversation every four minutes on average, some of them from outside of biology entirely. In one instance, Dunbar actually saw two labs encounter the same experimental problem at around the same time. Proteins they wanted to measure would get stuck to a filter, which made them hard to analyze. One of the labs was entirely E. coli experts, and the other had scientists with chemistry, physics, biology, and genetics backgrounds, plus medical students. “One lab made an analogy drawing on knowledge from the person with a medical degree, and they figured it out right there at the meeting,” Dunbar told me. “The other lab used E. coli knowledge to deal with every problem. That didn’t work here so they had to just start experimenting for weeks to get rid of the problem. It put me in an awkward position because I had seen the answer in another lab’s meeting.” (As part of the conditions of the study, he was not allowed to share information between labs.) In the face of the unexpected, the range of available analogies helped determine who learned something new. 
In the lone lab that did not make any new findings during Dunbar’s project, everyone had similar and highly specialized backgrounds, and analogies were almost never used. “When all the members of the laboratory have the same knowledge at their disposal, then when a problem arises, a group of similar minded individuals will not provide more information to make analogies than a single individual,” Dunbar concluded. “It’s sort of like the stock market,” he told me. “You need a mixture of strategies.” • • • The trouble with courses of study like Northwestern’s Integrated Science Program, which impart a broad mixture of strategies, is that they may require abandoning a head start toward a major or career. That is a tough sell, even if it better serves learners in the long run. Whether it is the making-connections knowledge Lindsey Richland studied, or the broad concepts that Flynn tested, or the distant, deep structural analogical reasoning that Gentner assessed, there is often no entrenched interest fighting on the side of range, or of knowledge that must be slowly acquired. All forces align to incentivize a head start and early, narrow specialization, even if that is a poor long-term strategy. That is a problem, because another kind of knowledge, perhaps the most important of all, is necessarily slowly acquired—the kind that helps you match yourself to the right challenge in the first place.