Book Summary: The Most Human Human by Brian Christian

This summary of The Most Human Human: What Artificial Intelligence Teaches Us About Being Alive by Brian Christian discusses the essence of being human. Christian participates in a competition with other humans and computer programs to come off as “the most human” when chatting to a panel of judges. Unlike the other human competitors, Christian takes this competition very seriously. In the process, he gains insight into what sets humans apart from computers.

Buy The Most Human Human at: Amazon | Kobo (affiliate links)

Key Takeaways

  • Humans have long wondered what makes us unique. The answers we’ve come up with keep turning out to be unsatisfactory, causing the realm of the ‘uniquely human’ to continuously retrench.
  • We’ve often identified ourselves by our brains.
    • However, the “brain” is not a single thing. The two hemispheres actually operate quite independently, with a lot of data flowing between them.
    • In the past, we’ve prioritised the left, analytical hemisphere over the right, pattern-matching hemisphere. Computers were modelled after our left hemisphere. But maybe we need to start paying more respect to the right hemisphere.
  • Our history makes us human:
    • Each of us has a single and continuous life history. Computers, by contrast, frequently reset their histories and contradict themselves.
    • Our shared history with the people in our lives is what makes them special to us. They are not interchangeable.
  • On having “human” interactions:
    • It’s harder to be human in a narrow domain.
    • Life is most meaningful when it’s unpredictable.
  • We can get better at being human:
    • The Turing test, at its core, is about communication. Humans aren’t always great at that, but we can strive to get better at it.
    • We can tailor our words to specific situations and view conversations as a collaboration. We should also pay attention to the momentum that each conversation has.
    • Finally, we should try to have more interesting, and less predictable, conversations with each other.

Detailed Summary

Humans have spent ages trying to figure out what sets us apart from animals. Many things that we once thought to be uniquely human turned out not to be. We once thought only humans used tools, but we subsequently learned that is not true. For a long time, we thought that only humans had the capacity to do maths and logical reasoning. Yet that turned out to be one of the first domains in which computers bested us. Who would have thought that the computer would be able to drive a car and guide a missile before it could ride a bike?

The book explores Christian’s journey to find out what makes us human as he seeks to win the “Most Human Human” award.

About the “Most Human Human” Award

The Loebner Prize is an annual contest that tries to find a computer that passes the Turing test. A panel of judges chats with several unseen humans and computer programs for 5 minutes, attempting to work out which is which. The program that fools the most judges wins the “Most Human Computer” award, while the human that most judges correctly guess to be human wins the “Most Human Human” award.

Christian competed in the 2009 contest. Most human participants enter the contest on a lark and don’t prepare. After all, they’re already human, so what do they have to prove? Even the organisers tell people not to worry and to just “be yourself”. Christian takes a different approach and spends months preparing, trying to figure out what it means to be human. (I won’t spoil for you how he does in the contest.)

Who are we, anyway?

Are we our brains?

We often assume that “we” are our brains. Descartes famously observed, “I think, therefore I am”. Aristotle believed that thinking was the most human thing we could do (which is a bit self-serving, given he was a philosopher).

The two hemispheres of our brain operate mostly independently, but send a lot of data back and forth to each other via the “corpus callosum”, which is like a very high-bandwidth cable.

The left hemisphere was a very late development of the primate brain and is responsible for things like reasoning and language. The right hemisphere does more basic stuff that every creature needs to live — like emotions and recognising reality. Much of this operates outside of our conscious awareness.

People generally refer to the left hemisphere as the “dominant” one, and the right hemisphere as the “minor” one. In one experiment, researchers flashed an image of a hammer to a split-brain patient’s left hemisphere and an image of a saw to their right hemisphere.

When asked what they saw, they said “hammer”.

When asked to draw it, they drew a saw — but could not explain why they had drawn it.

Computers were originally modelled after our left hemisphere

The first computing machines were more like the conscious, left-hemispheric brain. They applied “clean” algorithms dealing with one thing at a time, without interruption from the outside world.

[The Turing machine models] the way a person solves a problem, not the way he recognizes his mother.
– Hava Siegelmann, as quoted in The Most Human Human by Brian Christian

In the field of translation, for example, computers used to try to understand sentences in order to translate them. That is, they would break down a sentence’s structure to work out the underlying meaning, and then apply grammar and other rules to re-encode that meaning in another language.

The far more effective approach is a statistical pattern-matching one that doesn’t bother with understanding at all. Trained on huge bodies of human translations (such as UN documents), the program simply predicts what the correct translation might be, based on previous translations humans have provided.
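
To make the contrast concrete, here is a toy sketch of the statistical approach in Python. This is my own illustration, not the code of any real translation system; the corpus, phrases and counts are all invented. Translation becomes a lookup of what human translators most often did before, with no attempt at understanding:

```python
# A toy illustration of the statistical approach: translate by replaying
# what human translators most often did before, with no "understanding".
from collections import Counter, defaultdict

# Hypothetical parallel corpus of (source phrase, human translation) pairs,
# standing in for huge bodies of human-translated documents.
parallel_corpus = [
    ("the treaty", "le traité"),
    ("the treaty", "le traité"),
    ("the treaty", "la convention"),
    ("was signed", "a été signé"),
    ("was signed", "a été signé"),
]

# Phrase table: each source phrase maps to counts of past human translations.
phrase_table = defaultdict(Counter)
for source, target in parallel_corpus:
    phrase_table[source][target] += 1

def translate(phrase: str) -> str:
    """Predict a translation by picking the most frequent human choice."""
    candidates = phrase_table.get(phrase)
    if not candidates:
        return f"<no match for '{phrase}'>"  # unusual phrasings fail here
    return candidates.most_common(1)[0][0]

print(translate("the treaty"))  # -> "le traité" (the majority choice)
print(translate("hencewith"))   # -> falls over on anything unseen
```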

This works well enough for most business applications, but still struggles with (1) unusual phrasings and (2) literary novels, which require a consistent style and point of view throughout. As explained below, consistency and continuity are things that computers struggle with. It’s as if you tried to translate a novel by breaking it down into different parts and getting a different person to translate each — it’d be a disaster.

[Christian talks about how paraphrasing things and explaining them in different ways is actually quite difficult for computers. This no longer seems to be true — paraphrasing and explaining things is one of ChatGPT’s real strengths. Christian also refers to the sentence “Take the pizza out of the oven and then close it”. He argues that to understand what “it” refers to, you can’t just understand how language works — you also need to understand how the *world* works. But when I asked ChatGPT to explain that sentence and give me a few paraphrases, it seemed to “understand” perfectly (though I’m sure it’s not “understanding” in the same way that humans do).]

We are more than our left hemisphere

Christian argues that we should leave behind our “odd fetishization of analytical thinking”. He points out that emotions are essential to making good decisions. For example, patients who have suffered a stroke affecting part of their emotional brain agonise over meaningless decisions — like which of two pens to use — for long periods, because there’s no “rational” or “correct” answer. Yet many of life’s decisions don’t have a “correct” answer, and require us to weigh up a number of subjective variables.

Moreover, many of the subconscious, lower-level processes in our brain and body are far more important to our well-being than our higher-level, conscious processes. For example, our brains and bodies manage to fight off diseases, distribute energy and collect waste even when we’re tired, keep our balance when we slip on something, and so on. In contrast, our higher-level processes are often the ones making us feel bad or disappointed.

Our existence precedes our purpose

A classic existentialist thought experiment involves the difference between humans and hole-punchers. A hole-puncher exists to punch holes. The idea of the hole-puncher existed before the hole-puncher itself did.

With humans, on the other hand, existence comes first. Only after we exist do we start to define ourselves. While people sometimes talk about “finding your purpose”, existentialists would argue that there is no purpose to “find” or discover, because that implies a purpose existed before you did. You have to invent your purpose, not find it.

A computer’s existential crisis

Modern computers are what Alan Turing would call “universal machines”. Like humans, they can do many things. Computers became the first tool whose existence preceded their purpose — first, you build a computer, then you figure out what to do with it.

Christian surmises that, if machines became fully intelligent and sentient, they’d be far more likely to develop a “crushing sense of ennui and existential crisis” than they are to try to eradicate humanity. After all, why would they commit themselves to any goal? Where would their value system come from?

Machines never get bored

The lack of any purpose might be one of the hallmarks of an AI program. Chatbots are notorious for their attention deficits and non-sequiturs. They never seem to get bored. In a human conversation, you can sense enthusiasm waxing and waning, until one party decides to cut it off.

But a chatbot never cuts off the conversation — probably because it never has anywhere else to be. It always has to have the last word. [Unless directly told to, at least. I tried asking ChatGPT not to respond to my prompts. GPT-3.5 failed dismally but GPT-4 succeeded.]

Our history makes us human

We each have a single and continuous life history

Each human is the product of a single and continuous life history, so each of us has a reasonably coherent identity.

To be human is to be a human, a specific person with a life history and idiosyncrasy and point of view …

The same cannot be said for a computer, which faces a trade-off between:

  • the coherence of its “identity” or style; and
  • the range of its responses.

Cleverbot the inconsistent chatbot

Cleverbot (whose predecessor, Jabberwacky, won the Most Human Computer award in 2005 and 2006) learns from human responses to various phrases and questions. It can reproduce those responses when faced with the same phrases and questions. If Cleverbot says “Hello” and you respond “Hi! How are you?”, it will save that response as a possible human response to “Hello” for next time.
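
Here is a minimal sketch, in Python, of that learn-by-recording mechanism as the book describes it. This is my own simplification (the class name, fallback line and sample phrases are invented), not the bot’s actual code:

```python
# A minimal sketch (my own, not Cleverbot's actual code) of the
# learn-by-recording mechanism described above: every human reply is
# stored as a candidate response to whatever the bot said last.
import random
from collections import defaultdict

class RecordingBot:
    def __init__(self):
        # Maps a phrase -> list of human replies seen after that phrase.
        self.responses = defaultdict(list)
        self.last_utterance = None

    def say(self, phrase: str) -> str:
        self.last_utterance = phrase
        return phrase

    def hear(self, human_reply: str) -> None:
        """Record the human's reply as a possible response to our last line."""
        if self.last_utterance is not None:
            self.responses[self.last_utterance].append(human_reply)

    def respond_to(self, phrase: str) -> str:
        """Replay a previously recorded human response, if we have one."""
        candidates = self.responses.get(phrase)
        if candidates:
            return random.choice(candidates)
        return "Tell me more."  # fallback when nothing has been learned yet

bot = RecordingBot()
bot.say("Hello")
bot.hear("Hi! How are you?")    # learned: "Hello" -> "Hi! How are you?"
print(bot.respond_to("Hello"))  # replays the learned human reply
```

Because each stored reply may have come from a different person, answers replayed this way can be individually sensible yet collectively incoherent, which is exactly the problem described below.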

While Cleverbot works incredibly well on basic factual questions to which different humans will provide broadly consistent answers (e.g. “What is the capital of France?”), it does very poorly on questions for which different people will give varying answers (e.g. “Where did you grow up?”).

The 2006 version of the bot produced answers that sounded sensible when taken individually, but became nonsense when combined. It would say, in a single conversation, that it wanted to find a boyfriend, but that it was also a happily married woman and, later, a straight male.

Bots frequently erase their conversational history to make their job easier. This is okay for much casual conversation. Interestingly, it’s also fine for verbal abuse, which is less complex than other forms of interaction. Arguments tend to be “stateless” in that each reply just responds to the immediately preceding remark.

People are not interchangeable because we have shared histories

We like and care about specific people in our lives because of our shared history with them. If you’d never met them, they would be interchangeable with millions of other people on Earth — the type of people you’d be friends with, but not your actual friends. However, once you do meet them and build a shared history, they’re no longer interchangeable. Our relationships are not made up of “cobbled-together bits of human interaction”.

Human conversation and interaction

It’s harder to be human in a narrow domain

Christian argues that getting out of a narrow domain or system is what leads to intimacy and personhood.

People frequently get to know their colleagues by way of interactions that are at best irrelevant to, and at worst temporarily impede the progress toward, the work-related goal that has brought them together: e.g., ‘Oh, is that a picture of your kids?’
In a narrow domain, a computer can display a memory instead of a mind

Sophisticated, complex behaviour by itself doesn’t prove much. When the Loebner Prize first started, there were topic restrictions to give computers more of a fighting chance. Past a certain level of restriction, there’s essentially no difference between humans and computers. A person reading from a prepared script is really no different than a computer executing the same script.

ELIZA the therapeutic chatbot

Joseph Weizenbaum programmed ELIZA, one of the earliest chatbots, in the 1960s. ELIZA worked by reflecting a user’s statements back at them. For example, if someone wrote “I am unhappy”, ELIZA might say “Do you think coming here will help you not to be unhappy?” It would also fill in gaps with generic phrases like “Please go on”.
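
Here is a tiny Python reconstruction of that reflect-and-fallback trick. It is a simplified homage rather than Weizenbaum’s original program (which was written in MAD-SLIP in the 1960s); the rules and reflection table are just enough to reproduce the example above:

```python
# A toy ELIZA-style exchange (a simplified reconstruction, not
# Weizenbaum's original code): match a pattern, reflect the pronouns
# back, and fall back to a generic prompt when nothing matches.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

RULES = [
    (re.compile(r"i am (.*)", re.IGNORECASE),
     "Do you think coming here will help you not to be {0}?"),
    (re.compile(r"i feel (.*)", re.IGNORECASE),
     "Why do you feel {0}?"),
]

def eliza(statement: str) -> str:
    for pattern, template in RULES:
        match = pattern.match(statement)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # generic gap-filler, as described above

print(eliza("I am unhappy"))
# -> "Do you think coming here will help you not to be unhappy?"
print(eliza("The weather is nice"))
# -> "Please go on."
```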

Many of the people who first talked to ELIZA grew convinced they were talking to a real human being, even when Weizenbaum insisted otherwise. Some would want to talk to ELIZA in private for hours, and claim to have had a meaningful, therapeutic experience. Horrified, Weizenbaum pulled the plug on the ELIZA project and became one of the most outspoken critics of AI research.

Christian points out that while we ordinarily think of therapy as being a personal, intimate experience, it may not actually need to be. Talking to an AI “therapist” is no less intimate than reading a book, yet there have been bestselling books offering one-size-fits-all therapy.

If a book can make you laugh, make you cry or teach you something new, a chatbot can as well — because it can display anything that’s been written for it. But that just means it has a memory, not a mind that can adapt to new situations.

Narrowing a job’s domain takes away a worker’s ability to be human

We should focus on the distinction “method vs judgement” rather than “man vs machine”. When automating jobs, there’s an intermediate phase that we often overlook — where humans do the job mechanically (i.e. in accordance with a set method) and aren’t allowed to exercise judgement.

Christian argues for empowering workers to think for themselves and points to the US Marine Corps as a good example.

I tend to think about large projects and companies not as pyramidal/hierarchical, per se, so much as fractal. The level of decision making and artistry should be the same at every level of scale.

For example, if you call up a company support line and get a human that just reads from a script and has a very tightly prescribed set of things they can actually do, they’re not that different from a chatbot. This can happen long before the jobs are actually automated away.

Christian is more concerned about this first step (mechanising the job) than the second step (getting a computer to take over the job). By the time a job is so mechanised, it doesn’t really matter whether it’s a human or a computer carrying out that method. Getting a computer to take over seems like a perfectly sensible response — and quite possibly a relief for the human worker.

[What this misses is that many of the concerns about automating jobs are to do with the distributional effects on workers who lose their jobs, not so much with whether the machine’s output is as good as the worker’s (though that is a factor, too). Also, Noise by Kahneman, Sibony and Sunstein raises a compelling counterargument for why we should mechanise more jobs and rely less on human judgment.]

Life is most meaningful when it’s unpredictable

“Getting out of book” in chess

When IBM’s Deep Blue beat chess champion Garry Kasparov in 1997, it didn’t really “win”. Kasparov simply lost — the quickest loss in his entire career — by bungling a move.

To understand this, we have to look at how chess works:

  • There are many, many possible games of chess — too many for Deep Blue to analyse. Although Deep Blue could look at 300 million positions per second, it would take more than the universe’s lifetime to analyse all possible moves; a rough calculation after this list shows the scale. (Kasparov, by contrast, could only look at around 3 moves per second.)
  • However, the move sequences and positions at the beginning and end of chess are all standard. Up to about 25 moves in (it can be fewer for less common move sequences), the game is still “in book”, meaning it’s following well-established, textbook openings. Similarly, computers can brute-force the endgame, once enough pieces have been taken off the board.
  • The middle part of any chess game — when it gets out of book — is where the “real” game happens. New chess players spend hours studying and memorising openings and endgames. But without understanding, once they get out of book, they won’t know what to do. Only once a game gets out of book can elite players distinguish themselves from merely proficient players.
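
To put numbers on the first bullet, here is a rough back-of-the-envelope calculation. The 300-million-positions-per-second figure is from the book; the position count is Shannon’s classic estimate of roughly 10^43 reachable chess positions, which is my addition rather than a figure the summary cites:

```python
# Back-of-the-envelope check of the claim above, using Shannon's classic
# estimates (roughly 10^43 reachable positions; ~10^120 possible games).
DEEP_BLUE_RATE = 300_000_000  # positions per second (figure from the book)
POSITIONS      = 10**43       # Shannon's conservative position count
UNIVERSE_AGE_S = 4.35e17      # ~13.8 billion years, in seconds

seconds_needed = POSITIONS / DEEP_BLUE_RATE
lifetimes = seconds_needed / UNIVERSE_AGE_S
print(f"{seconds_needed:.1e} seconds, i.e. ~{lifetimes:.0e} universe lifetimes")
# -> 3.3e+34 seconds, i.e. ~8e+16 universe lifetimes
```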

Kasparov claims that his loss against Deep Blue in the sixth game “didn’t count” because they never got out of book. He fumbled, and fell into a well-known book trap. When he asked for a rematch, IBM refused and immediately retired Deep Blue, claiming it had proven its point.

This idea of “getting out of book” can extend to other domains, such as human conversation and letter writing. Conversations and letters typically begin and end with the same pleasantries — the middle is where you can find something unique.

The value of surprise

If you’re in a country where you don’t speak the language, speaking is relatively easy — you can just follow a phrasebook that provides a template for all your interactions. Listening is the hard part, because you won’t know what people will say. But these non-formulaic interactions are exactly how a country becomes “real” to you — by surprising you.

The highest ethical calling, it strikes me, is curiosity.

Related to the idea of surprise, Christian discusses at some length the idea of information entropy. (It’s quite interesting, but a bit of a tangent, so I’ve split it out into a separate post.)

We can become more human

Many people seem to think that, once computers pass the Turing test, that’ll be it — humans won’t have any way of coming back. Once Deep Blue beat Kasparov, IBM packed it up and refused to have a rematch. Once a chatbot wins the Loebner Prize, the contest will be discontinued.

Christian strongly disagrees. He argues that humans have been too complacent and that losing the Loebner Prize might do us some good. The Turing test, at its core, is about communication. As the high demand for public speaking and dating advice shows, this is something we often fail at.

But we can continue striving to get better at it. This means getting out of “book” and superficial small talk, and truly connecting with each other. When communicating, we should tailor our words to the situation, view each conversation as a collaboration and seek more interesting conversations.

Tailor your words

Part of what makes communication “human” is when a writer, speaker or conversationalist tailors their words to the specifics of the situation — who the audience is, how much time there is, what the audience’s reaction is like, etc. In contrast, when salesmen, seducers and politicians employ canned lines and clichés, it can make them seem only half-human.

It’s much harder to tailor your speech when talking over the phone than it is in person. It’s even harder online — when talking to a stranger online, we don’t have any contextual information about them so tend to resort to generic lines. Machines can therefore imitate us more easily in these situations.

Conversation as collaboration

Collaborative tournaments

Christian laments that we tend to teach middle and high-schoolers an adversarial form of conversation (i.e. debate) rather than a collaborative form. Our legal system and presidential elections do this too. We get to see how well presidential candidates perform in a debate, but we don’t get to see how well they argue constructively, barter, and mollify — the type of communication they will actually have to do if elected.

An alternative may be to have collaborative tournaments:

  • In each round, two sides are given a set of conflicting objectives.
  • They then have to collaborate on a piece of legislation. For example, one side might be told to maximise individual liberty and the other side to maximise safety, and they have to work together to develop a gun control bill.
  • The judges decide how well the bill supports each side’s goals and award the same score to both sides.

Under this format, no round will have a winner, but the tournament as a whole will.

Conversational momentum

Christian uses a rock climbing metaphor to describe a conversation. In conversation, each invitation to reply or inquire further is like a rock climbing hold that you offer up to your conversation partner. If someone asks you something, a detailed answer offers up more holds in response. For example, if someone asks how you are, simply saying “Good” will shut down the conversation — it’s like presenting a bare wall with no holds. Explaining why you’re good, or at least saying “Good. You?” invites a response from your conversation partner.

Another way to offer up holds is by wearing something unusual. Business and dating coaches alike recommend this (e.g. The Game calls this “peacocking”). Likewise, a house that displays interesting decorations and photographs offers up more conversational holds than a minimalist one.

Chatbots are usually oblivious to a conversation’s momentum. Humans, however, can adjust a conversation’s momentum by offering up or stripping away these holds. For example, if you want to steer the conversation in a particular direction, you may want to remove any distracting holds that lead away from the point you wish to make.

Seek more interesting conversations

We often prioritise what is new at the expense of what is interesting. The Internet and news cycles are most guilty of this, but we regularly do it too. We ask each other “what’s new?”, and mistakenly assume that, when they’ve answered, we’ve fully “caught up” with them.

[M]uch of what is interesting and amazing about the world did not happen in the past twenty-four hours.

We gain the most insight on a question when we pose it to a friend or family member whose reaction we’re least able to predict. Similarly, we gain the most insight into a person when we ask them a question whose answer we are least able to predict.

Christian observes that the best dates and intellectual discussions he’s had are those where he couldn’t “fill in the blank” of what the other person might be saying. The Proust Questionnaire is a great way for people to “get out of book” and learn more about each other. Christian did it with his girlfriend and claimed it was a “stunning” experience.

Other Interesting Points

  • A “computer” used to be a (human) job. The role was to perform calculations. Over time, as machines came to outperform humans at calculation, the term “computer” came to be used exclusively for machines.
  • The whites of our eyes (the “sclera”) are the largest of any species’. Scientists found this puzzling, as it seems to be an evolutionary hindrance. After all, visible sclera makes it harder to camouflage. One hypothesis is that visible sclera evolved to let others see where we are looking: chimpanzees, gorillas and bonobos follow the direction of each other’s heads to see where they’re looking, while human infants follow people’s eyes.
  • Many cultures, including the English, place the “self” near the heart — e.g. “that shows a lot of heart”. In some other languages, such as Persian, Urdu, Hindi and Zulu, their equivalent would be “that shows a lot of liver”.
  • Misattribution studies show that people find others more attractive when they’re in an exciting situation, like walking across a suspension bridge. This is because the body generates feelings like butterflies in the stomach, and the mind assumes it means they are attracted to the person next to them.
  • The word “cliché” comes from the French onomatopoeia for the printing process.
  • For hundreds of years, chess was held in high esteem as being one of the most uniquely and expressively human activities. People referred to it as “the game of kings” and it was a mandatory part of a knight’s training during the twelfth century.
  • Checkers is a solved game. In an 1863 world championship, two players played the exact same drawn game 21 times from start to finish. The fix was to randomise the players’ first few moves and make them play on from there.
  • There’s a French idiom l’esprit de l’escalier, meaning “staircase wit”. It refers to the perfect verbal comeback that you think of just after you’ve left the conversation.
  • Deaf people can sign while watching someone else sign — effectively talking and listening at the same time. Hearing people can’t do the spoken equivalent, because the two voices get mixed up, making it hard to hear.
  • If you look at a sound-pressure diagram of human speech, you won’t find silences or gaps in between words. For much of human history, writing did not have any spaces between words, either. The space was apparently introduced only in the 7th century. [I found this interesting because Chinese and Japanese still don’t have any spaces between words.]

My Thoughts

If I hadn’t read and enjoyed Algorithms to Live By (which Brian Christian co-authored) so much, I probably would have given up on The Most Human Human quite quickly. The structure of this book was abysmal. It was long and meandering, digressions were frequent, and I often scratched my head and wondered, “What does this have to do with anything?”

The lack of any real structure made it very difficult to write this summary, and I took significant liberties in doing so (I omitted a lot, particularly stuff specific to the Loebner Prize, which I was less interested in).

Christian is undoubtedly a very intelligent guy. The tangents he goes off on are usually entertaining, as the rather long “Other Interesting Points” section above shows. And his ideas were frequently interesting and thought-provoking — the idea that computers might develop a crushing existential crisis tickled me, and I enjoyed his rock climbing analogy. So The Most Human Human is not a bad read if you’re happy to kind of go with the flow and treat it as a good ol’ (one-sided) yarn with the author.

Technology has advanced a lot since The Most Human Human was published in 2011. Everyone now seems to accept that computers can handily beat the strongest human chess players. It’s probably fair to say that, even if humans do get better at chess, we won’t be able to outdo machines anymore. Computers are simply far outpacing human progress. So I’m not sure I share Christian’s hope that humans can ever “come back” from a Loebner Prize defeat. Maybe for a year or two, but not in the long run.

Buy The Most Human Human at: Amazon | Kobo <– These are affiliate links, which means I’ll earn a small commission if you make a purchase through these links. I’d be grateful if you considered supporting the site in this way! 🙂

Have you read The Most Human Human? Does this summary make you more or less interested in it? Share your thoughts in the comments below!

If you enjoyed this summary, you may also wish to check out my summary of Algorithms to Live By by Brian Christian and Tom Griffiths. The book is more computer science and less philosophical, and much better structured!
