Humans: A Brief History of How We F*cked It All Up


  Change now happens so quickly that the modern world can be a confusing place; in Chapter 10, “A Brief History of Not Seeing Things Coming,” we’ll look back at exactly how frequently we’ve failed to predict the awful new things that are about to happen to us.

  And finally, in “Fucking Up the Future,” we’ll take an educated guess at what the next few centuries of human foolishness will look like, and conclude that it probably means becoming trapped in a space prison we’ve made for ourselves out of our own garbage.

  * * *

  This is a book about history, and about getting things wrong. So naturally, it’s worth pointing out that we often get history very, very wrong.

  The problem is that history is slippery: nobody bothered to write down the vast majority of stuff that happened in it, and lots of the people who did write stuff down might have been mistaken, or mad, or lying, or extremely racist (and frequently a combination of all those things). We know about Sigurd the Mighty because his story appears in two documents, the Heimskringla and Orkneyinga sagas. But how do we know if they’re accurate? Can we be entirely sure that this wasn’t just some sort of extremely funny Old Norse in-joke that we don’t get?

  We can’t. Not really, despite the amazing work done by historians and archaeologists and experts in a dozen other fields. The number of things that we know for certain is pretty tiny compared to the number of things that we know we don’t know. The number of things that we don’t even know we don’t know is probably far bigger still, but unfortunately we don’t know for sure.

  What I’m saying is: the chance of this book about fuck-ups not including any fuck-ups in it is, frankly, minimal. I’ll try to make it clear where there’s uncertainty: which are the bits we’re pretty sure about, and which are the bits where the best we can do is an educated guess. I’ve tried to avoid any “too good to be true” stories, the apocryphal tales and pithy historical anecdotes that seem to grow with each retelling. I hope I don’t get it wrong.

  Which brings us back to Lucy, falling out of her tree 3.2 million years ago. How do we know she fell out of that tree? Well, in 2016, a group of researchers from the USA and Ethiopia published a paper in Nature, the world’s leading scientific journal. They CT-scanned Lucy’s fossilized bones, creating 3-D computer maps of them to reconstruct her skeleton. They found that the fractures in her bones were the kind that happen to living bones, and that these fractures never healed: suggesting that she was alive when they broke but died soon after. They consulted numerous orthopedic surgeons, who all said the same thing: this is the pattern of broken bones that you see in a patient who has fallen from a height. The way her arm is fractured suggests that she reached out to try to break her fall. From geological studies, they knew the area she lived in was flat woodland, near a stream: no cliffs or outcrops for her to fall off. The conclusion? Lucy fell out of a tree.

  It’s a remarkable piece of work, and one that was well received by many other experts in the field. The only problem is that a few others—including Donald Johanson, the man who discovered Lucy in the first place—weren’t convinced. They effectively said: “Nah, mate, the reason her bones are broken is because that’s what happens to bones when they’re buried in the ground for 3.2 million years.” (I’m paraphrasing a bit here.)

  So...did Lucy fall out of a tree? Maybe. Probably, even. In many ways, that’s the point of this book: we have this incredible feat of scientific deduction, and it still might be wrong. You can be a world leader in your field, doing the best work of your career, a groundbreaking study published in the world’s most prestigious journal that weaves together jaw-dropping advances in the fields of paleontology and physics, computing and medicine, forensics and geology, to give us an unprecedented window into a time millions of years ago...and you still run the risk that someone will come along and go: “Hahahaha, nope.”

  Just when you think you’ve got it all sorted out, that’s when the ever-looming specter of fuck-ups will strike.

  Remember Sigurd the Mighty.

  1

  Why Your Brain Is an Idiot

  It was about 70,000 years ago that human beings first started to really ruin things for everybody.

  That’s when our ancestors began to migrate out of Africa and spread across the globe—first into Asia, and a while later into Europe. The reason this made a lot of people rather unhappy is that back then our species, Homo sapiens, weren’t the only humans on the planet—far from it. Exactly how many other species of humans were knocking around at that point is a matter of some debate. The business of taking fragmentary skeletons or fragmentary DNA and trying to work out exactly what counts as a separate species, or subspecies, or just a slightly weird version of the same species, is a tricky one. (It’s also an ideal way to start an argument should you ever find yourself among a group of paleoanthropologists with some time to kill.) But however you classify them, there were at least a couple of other types of humans on the planet back then, of which the most famous is Homo neanderthalensis—or, as they’re better known, the Neanderthals. The result of previous human migrations from Africa, they’d been living across much of Europe and large parts of Asia for over 100,000 years. They basically had quite a good thing going.

  Unfortunately for them, just a few tens of thousands of years after our ancestors rocked up on the scene—the blink of an eye in evolutionary terms—the Neanderthals and all our other relatives were gone from the face of the earth. In a pattern that would quickly establish itself throughout human history, as soon as we arrive, there goes the neighborhood. Within a few thousand years of modern humans moving into an area, the Neanderthals start to vanish from the fossil record, leaving behind only a few ghostly genes that still haunt our DNA. (There was clearly a bit of interbreeding between the Neanderthals and the interlopers who were replacing them; if you’re of European or Asian descent, for example, there’s a good chance that somewhere between 1 and 4 percent of your DNA is Neanderthal in origin.)

  Exactly why and how we survived while our cousins got the fast train to Extinctionville is another subject of debate. In fact, lots of the most likely explanations are themes that will keep cropping up again and again in this book. We might have accidentally wiped out the Neanderthals by bringing diseases with us as we migrated that they didn’t have any resistance to. (A large part of the history of humanity is really just the history of the diseases we manage to pick up on our travels and then give to each other.) We might have got lucky with a fluctuating climate that we were better able to adapt to; the evidence suggests our ancestors lived in bigger social groups, and communicated and traded over a much larger area than the more isolated, stick-in-the-mud Neanderthals, meaning they could draw on more resources when a cold spell hit.

  Or maybe we just murdered them, because, hey, that’s what we do.

  In all likelihood there isn’t a single neat explanation, because that’s not how things normally work. But many of the most plausible explanations have one thing in common—our brains, and how we use them. It’s not quite as simple as the idea that “we were smart and they were dumb”; Neanderthals weren’t the lumbering numbskulls of popular stereotype. They had brains as big as ours, and were making tools, controlling fire and producing abstract art and jewelry in Europe tens of thousands of years before Homo sapiens ever came along and started gentrifying everything. But most of the plausible advantages we had over our Neanderthal cousins relate to our thinking, whether that’s in our adaptability, our more advanced tools, our more complex social structures or the ways we communicated within and between groups.

  There’s something about the way we humans think that marks us out as special. I mean, obviously. It’s right there in the name of our species: Homo sapiens is Latin for “wise man.” (Modesty, let’s be honest, has never really been one of our species’ defining traits.)

  And in fairness to our egos, the human brain is a truly remarkable machine. We can spot patterns in our environment and make educated guesses from those about the way things work, building up a complex mental model of the world that includes more than what we can see with our eyes. Then we can build upon that mental model to make imaginative leaps: we’re able to envisage the changes to the world that would improve our situation. We can communicate these ideas to our fellow humans, so that others can make improvements to them that we wouldn’t have thought of, turning knowledge and invention into a communal effort that gets passed down the generations. After that, we can convince others to work collectively in the service of a plan that previously existed only in our imagination, in order to achieve breakthroughs that none of us could have made alone. And then we repeat this many times in a hundred thousand different ways, over and over again, and what were once wild innovations turn into traditions, which spawn new innovations in turn, until eventually you end up with something that you’d call “culture” or “society.”

  Think of it this way: the first step is noticing that round things roll down hills better than jagged lumpy things. The second is working out that if you use a tool to chip away at something and make it more round, it’ll roll better. The third step is showing your friend your new round rolling things, whereupon they come up with the idea of putting four of them together to make a wagon. The fourth step is building a fleet of ceremonial chariots, so that the people may better understand the glory of your benevolent yet merciless rule. And the fifth step is being stuck in a traffic jam on the New Jersey Turnpike listening to talk radio while flipping off some asshole who’s put Truck Nutz on the back of his SUV.

  (IMPORTANT NOTE IN THE INTERESTS OF PEDANTRY: this is a wildly inaccurate cartoon description of the invention of the wheel. Wheels actually get invented surprisingly late in the scheme of things, well after civilization has been cheerfully muddling along without them for thousands of years. The first wheel in archaeological history, which pops up about 5,500 years ago in Mesopotamia, wasn’t even used for transport: it was a potter’s wheel. It seems to have been several hundred more years before somebody had the bright idea of turning potters’ wheels on their side and using them to roll stuff around, thus beginning the process that would ultimately lead to assholes who put Truck Nutz on their SUVs. Apologies to any wheel scholars who were offended by the previous paragraph, which was intended for illustrative purposes only.)

  But while the human brain is remarkable, it is also extremely weird, and prone to going badly wrong at the worst possible moment. We routinely make terrible decisions, believe ridiculous things, ignore evidence that’s right in front of our eyes and come up with plans that make absolutely no sense. Our minds are capable of imagining concertos and cities and the theory of relativity into existence, and yet apparently incapable of deciding which type of potato chips we want to buy at the shop without five minutes’ painful deliberation.

  How has our unique way of thinking allowed us to shape the world to our desires in incredible ways, but also to consistently make absolutely the worst possible choices despite it being very clear what bad ideas they are? In short: How can we put a man on the moon and yet still send THAT text to our ex? It all boils down to the ways that our brains evolved.

  The thing is that evolution, as a process, is not smart—but it is at least dumb in a very persistent way. All that matters to evolution is that you survive the thousand possible horrible deaths that lurk at every turn for just long enough to ensure that your genes make it through to the next generation. If you manage that, job done. If not, tough luck. This means that evolution doesn’t really do foresight. If a trait gives you an advantage right now, it’ll be selected for the next generation, regardless of whether or not it’s going to end up lumbering your great-great-great-great-great-grandchildren with something that’s woefully outdated. Equally, it doesn’t give points for prescience—saying, “Oh, this trait is kind of a hindrance now, but it’ll come in really useful for my descendants in a million years’ time, trust me” cuts absolutely no ice. Evolution gets results not by planning ahead, but rather by simply hurling a ridiculously large number of hungry, horny organisms at a dangerous and unforgiving world and seeing who fails least.

  This means that our brains aren’t the result of a meticulous design process aimed at creating the best possible thinking machines; instead, they’re a loose collection of hacks and bodges and shortcuts that made our distant ancestors 2 percent better at finding food, or 3 percent better at communicating the concept “Oh shit, watch out, it’s a lion.”

  Those mental shortcuts (they’re called “heuristics,” if you want to get technical) are absolutely necessary for surviving, for interacting with others and for learning from experience: you can’t sit down and work out everything you need to do from first principles. If we had to conduct the cognitive equivalent of a large-scale randomized controlled trial every time we wanted to avoid being shocked by the sun rising in the morning, we’d never have got anywhere as a species. It’s a lot more sensible for your brain to go, “Oh yeah, sun rises” after you’ve seen it happen a few times. Likewise, if Jeff tells you that eating the purple berries from that bush by the lake made him violently ill, it’s probably best to just believe him, rather than try it out for yourself.

  But this is also where the problems begin. As useful as they are, our mental shortcuts (like all shortcuts) will sometimes lead us down the wrong path. And in a world where the issues we have to deal with are a lot more complex than “Should I eat the purple berries?” they get it wrong a lot. To be blunt, much of the time your brain (and my brain, and basically everybody’s brain) is a massive idiot.

  For a start, there’s that ability to spot patterns. The problem here is that our brains are so into spotting patterns that they start seeing them all over the place—even where they don’t exist. That’s not a huge problem when it just means stuff like pointing at the stars in the night sky and going, “Ooh, look, it’s a fox chasing a llama.” But once the imaginary pattern you’re seeing is something like “most crimes are committed by one particular ethnic group,” it’s...well, it’s a really big problem.

  There are a bunch of terms for this kind of faulty pattern-spotting—things like “illusory correlation” and the “clustering illusion.” During World War II, many people in London became convinced that German V-1 and V-2 missiles (an already pretty terrifying new technology) were falling on the city in targeted clusters—leading Londoners to seek shelter in supposedly safer parts of the city, or suspect that certain seemingly untouched neighborhoods housed German spies. This was concerning enough that the British government got a statistician named R. D. Clarke to check whether it was true.

  His conclusion? The “clusters” were no more than our minds playing tricks on us, the illusory ghosts of pattern-matching. The Germans hadn’t made a dramatic breakthrough in guided missile technology, after all, and Clerkenwell was not a hotbed of Wehrmacht secret agents; the doodlebugs were just being lobbed in the general direction of the city entirely at random. People only saw patterns because that’s what our brains do.
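
  Clarke’s check is simple enough to re-create. The sketch below is a simulation, not his actual impact data (though the 576 map squares and 537 hits match the setup of his published note): it lobs every bomb at a square chosen completely at random, then compares how many squares took 0, 1, 2... hits against what the Poisson distribution, the statistics of pure chance, predicts.

```python
# A minimal sketch of the kind of check Clarke ran. The impact points
# here are simulated, NOT his real data; only the 576 squares and
# 537 hits match his published setup. If the bombs are falling at
# random, hits per square should follow a Poisson distribution.
import math
import random
from collections import Counter

random.seed(42)

N_SQUARES = 576   # Clarke divided south London into 576 quarter-km squares
N_HITS = 537      # number of V-1 impacts in his data

# Lob each bomb at a square chosen uniformly at random: no targeting at all.
hits_per_square = Counter(random.randrange(N_SQUARES) for _ in range(N_HITS))
observed = Counter(hits_per_square.values())
observed[0] = N_SQUARES - len(hits_per_square)  # squares never hit

# Expected number of squares with k hits under a Poisson distribution
# with the same mean rate.
lam = N_HITS / N_SQUARES
for k in range(6):
    expected = N_SQUARES * math.exp(-lam) * lam**k / math.factorial(k)
    print(f"{k} hits: observed {observed.get(k, 0):3d} squares, "
          f"expected {expected:6.1f}")
```

  Run it and you’ll see a few squares take three or even four hits while whole swaths go untouched: exactly the “clusters” and “safe areas” Londoners thought they could see, produced by pure chance. Clarke’s real counts matched the Poisson predictions just as closely, which is how he knew the bombs weren’t being aimed.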

  Even skilled professionals can fall victim to these types of illusions. For example, plenty of medical workers will tell you with certainty that a full moon invariably leads to a bad night in the ER—a surge of patients, bizarre injuries and psychotic behavior. The only trouble is that studies have looked at this, and as far as they can tell, it’s just not true: there’s no link between the phases of the moon and how busy emergency rooms get. And yet a bunch of talented, experienced professionals will swear blind that there is a connection.

  Why? Well, the belief doesn’t come from nowhere. The idea that the moon makes people go weird is one that’s been around for centuries. It’s literally where the word lunacy comes from; it’s why we have werewolf mythology. (It may also be related to the supposed correlation between the phases of the moon and women’s menstrual cycles.) And the thing is, it actually might have been sort of true at one time! Before the invention of artificial lighting—street lighting especially—the light of the moon had a much greater effect on people’s lives. One theory suggests that homeless people sleeping outdoors would have been kept awake by the full moon, with sleeplessness exacerbating any mental health problems they had. (Because I like theories that involve beer, I’d also float an alternative suggestion: people probably got way more drunk on evenings when they knew they could see their way home and so were less worried about getting lost, or robbed, or tripping over and dying in a ditch.)

  Wherever it comes from, it’s an idea that’s been fixed in culture for a long time. And once you’ve been told about the idea that the full moon means crazytime, you’re much more likely to remember all the times that it did happen—and forget the times it didn’t. Without meaning to, your brain has created a pattern out of randomness.

  Again, this is because of those mental shortcuts our brains use. Two of the main shortcuts are the “anchoring heuristic” and the “availability heuristic,” and they both cause us no end of bother.

  Anchoring means that when you make up your mind about something, especially if you don’t have much to go on, you’re disproportionately influenced by the first piece of information you hear. For example, imagine you’re asked to estimate how much something costs, in a situation where you’re unlikely to have the knowledge to make a fully informed judgment—say, a house you’re shown a picture of. (Note for millennials: houses are those big things made of bricks you’ll never be able to buy.) Without anything else to go on, you might just look at the picture, see roughly how fancy it looks and make a wild stab in the dark. But your guess can be dramatically skewed if you’re given a suggested figure to begin with—for example, in the form of a preceding question such as “Do you think this house is worth more or less than $400,000?” Now, it’s important to realize that question hasn’t actually given you any useful information at all (it’s not like, say, being told what other houses in the area have recently sold for). And yet people who get prompted with a figure of $600,000 will end up estimating the house’s value much higher on average than people who are prompted with $200,000. Even though the preceding question isn’t informative at all, it still affects your judgment, because you’ve been given an “anchor”—your brain seizes on it as a starting point for making its guess, and adjusts from there.