Problems with the Total View of Population Ethics

I recently read parts of The Precipice by Toby Ord as part of the online Intro to EA Virtual Program. One thing that piqued my interest was the population ethics section. Ord just covered it in a short Appendix, but it’s crucial to many of his arguments about preserving humanity’s potential. He adopts what is called the “Total View” of population ethics, but I think there are problems with the Total View.

The same issue seems to crop up in What We Owe the Future by William MacAskill. Population ethics appears central to his conclusions but does not seem to be covered in any depth. I have not yet read MacAskill’s book, so I will focus on Ord’s in this post.

What’s population ethics and why does anyone care about it?

Population ethics concerns how we view actions that affect the numbers of lives in existence.

Ord points out that humans are a relatively young species, only about 200,000 years old. Most mammalian species survive for around a million years and, with our intelligence, we could potentially survive for much longer. Ord therefore argues it would be really, really bad if the human species went extinct prematurely. Like, over and above the “badness” inherent in lives being cut short by some catastrophe and any attendant suffering. The extra “badness” is because many future lives that could have been born wouldn’t have existed at all. That argument rests on the Total View in population ethics.

In this post, I summarise three common views of population ethics: the Total View, Average View and Person-Affecting View. I also outline some objections against each of them.

Some of this stuff can get pretty theoretical. I’ve tried to focus on what the views might mean for us in practice because, personally, that’s what I find more interesting.

What is the Total View?

The Total View holds that an outcome with more total wellbeing is better than an outcome with less total wellbeing. Under the Total View, the outcome with more total wellbeing is preferred even if the other outcome has higher average wellbeing.


The Total View’s Implications for Extinction Risks

A key implication of the Total View is that an extinction event would be far worse than a non-extinction catastrophe involving the same (or even greater) suffering and death toll. Compare, for example, two hypothetical pandemics:

  • A global pandemic in 1950 that kills all 2.5 billion people on Earth.
  • A global pandemic in 1960 that kills 99% of the 3 billion people on Earth (total 2.97 billion killed).

Which pandemic is worse? Under the Total View, that 1950 pandemic would be much worse than the 1960 pandemic. The latter causes more deaths in absolute terms but the former would prevent all lives born after 1950. There’s also an implicit assumption that those lives would have been net positive on average. The 1960 pandemic kills more people, but leaves 30 million survivors to go on and repopulate the Earth.
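To make the Total View’s ranking concrete, here is a toy Python sketch. The “+1 wellbeing per life” simplification and the 100 billion future lives are made-up illustrative numbers of my own, not figures from Ord:

```python
# Toy Total View comparison of the two pandemics.
# Assumptions (illustrative only): each life counts as one unit of
# wellbeing, and survival implies some large number of future lives.

def total_view_loss(deaths, future_lives_foregone):
    """Wellbeing lost = lives cut short + lives never born."""
    return deaths + future_lives_foregone

FUTURE_LIVES = 100e9  # assumed future lives if humanity survives

loss_1950 = total_view_loss(2.5e9, FUTURE_LIVES)  # extinction: all future lives lost
loss_1960 = total_view_loss(2.97e9, 0)            # survivors repopulate the Earth

print(loss_1950 > loss_1960)  # True: fewer deaths, yet a far worse outcome
```

Whatever value you pick for the future lives, as long as it exceeds the 0.47 billion difference in deaths, the Total View ranks the 1950 pandemic as worse.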

Now in practice, we don’t really get to “choose” between different catastrophes so this example may feel a bit abstract. However, the Total View influences what risks we prioritise and how we choose to go about addressing those risks. The Total View could suggest that developing communities that are permanently “socially distanced” on islands or bunkers is a better way to guard against a pandemic extinction risk than trying to reduce the risk of such a pandemic occurring in the first place.

Ord also points out that extinction need not involve untimely death or suffering – for example, if everyone simply chose not to have children. Unlike Ord, I don’t really see such an extinction as a problem. (Though it might suck to be one of the last people to die.) We’ll probably go extinct at some point, so why not have perhaps the most peaceful extinction possible?

Problems with the Total View

The repugnant conclusion

The most commonly cited problem with the Total View is the repugnant conclusion. This is the idea that:

For any world A, there is a better world Z in which no one has a life that is more than barely worth living, provided the number of lives in Z is sufficiently great.

For example, say world A has 100 lives, each with 5 wellbeing. The total wellbeing in world A would be 500. In contrast, world Z could have lots of people with 0.1 wellbeing (barely worth living). As long as world Z had at least 5001 people, it would be better than world A.
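The arithmetic behind this example is easy to check directly:

```python
# Repugnant conclusion arithmetic: total wellbeing in each world.
world_a = 100 * 5      # 100 lives at wellbeing 5   -> 500
world_z = 5001 * 0.1   # 5001 lives at wellbeing 0.1 -> just over 500

print(world_z > world_a)  # True: the Total View prefers world Z
```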

I wasn’t that bothered by the repugnant conclusion when I first heard it. It sounded very hypothetical and a natural consequence of prioritising totals over averages. On further reflection, however, the repugnant conclusion is not just hypothetical. I think some practical implications of the Total View could be harmful, which I outline below. These include:

  • justification of near-term harms;
  • abortion and contraception policies;
  • individual decisions to have children; and
  • animal farming and meat consumption.

Justification of near-term harms

Under the Total View, the “good” that could be done by slightly reducing extinction risk would be so large that it could justify some pretty serious near-term harms. After all, when you’re talking about the future of humanity, you’re not talking about millions or even billions of lives. You’re talking more like decillions.

This is not necessarily a problem if you genuinely hold the Total View and adjust appropriately for uncertainty. But the fact that the Total View makes such justifications possible suggests it should be treated with considerable caution.

Abortion and Contraception

The Total View could also support anti-abortion or anti-contraception policies. Adherents to the Total View might still justify a pro-choice stance with arguments along the following lines:

  1. Allowing abortion or contraception would prevent bringing into existence lives that would be net negative on average. This doesn’t necessarily mean those prevented lives would themselves have been full of suffering if they had existed. It could also mean that the negative effects on others (e.g. the parents) outweigh the positives from the child’s existence.
  2. Even if a prevented life would have been net positive, the resources that would have gone into raising that child could be deployed in other ways that are even more positive.
  3. Banning abortion or contraception may just affect the timing of births, rather than the absolute number of lives created. For example, a couple may decide that they’d only ever have one child. With contraception or abortion, they’d be able to choose to have the child when it suited their life circumstances. Without contraception or abortion, they may still just have one child through abstinence and natural family planning, but with a higher risk that things don’t go as planned.

I think each of these arguments is plausible. But there is still an inherent tension that Total View believers will have to grapple with. And the additional lives could be very numerous, especially once you take into account all the children that those lives could go on to have. So any negative impacts of banning contraception or abortion (e.g. negative effects on parents, greater anxiety, less sex) would have to be very large to offset the increased total wellbeing from all the additional lives.

Individual decisions to have children

The counterarguments above may not apply when considering whether you should have a child, rather than whether abortion or contraception should be banned.

If you’re in a position to raise a happy child, the Total View arguably implies that you have a moral obligation to do so. Even if you don’t want to be a parent, you should have a child if their wellbeing will likely be positive. The contribution of that child’s wellbeing, and all of their future children’s wellbeing, to total wellbeing will likely be great enough to outweigh any reduction in your wellbeing. This moral obligation will be stronger if the future of humanity is at risk due to low birthrates (see e.g. The Handmaid’s Tale).

Now, I understand that most Effective Altruists accept that no one is perfectly altruistic. So people who hold the Total View may still reason (at least in our current world, where existing birthrates do not pose an extinction risk) that people could decide not to have children even if it would be more moral to have them. But still, do we really think that it’s more moral to have children than not, all else being equal?

Does the Total View imply that it is more moral to have children?

Animal farming and meat consumption

Another implication of the Total View is that bringing a new life into existence, and then killing it in a humane and peaceful way, could still be viewed as “good” – or at least better than never bringing that life into existence at all – if that life was net positive overall.

At first, this sounds hypothetical. No one brings human lives into existence intending to kill them later. But there doesn’t seem to be any reason why the Total View should focus only on human lives. Surely the same logic applies to animal lives too? Provided animals raised for meat are treated humanely and killed humanely, would eating meat be more moral than not eating meat? After all, many pigs, cows and chickens would never have existed if humans did not eat meat.

I know that in practice, there are lots of problems with factory farming. Animal suffering is such that many farmed animals’ lives are probably net negative overall (at least for pigs and chickens). But that is not universally the case. Some farms raise and kill animals humanely. While most people would agree that, if animals are farmed for food, it is more moral to treat and kill those animals humanely, the implications of the Total View go further than that. The Total View seems to suggest that not eating animals (and therefore not bringing them into existence at all) is less moral than eating animals. This seems, well, bizarre.

What is the Average View?

The Average View holds that one outcome is better than another if it contains higher average wellbeing. For example, a world with 10 people, each with 5 wellbeing would be better than a world with 100 people, each with 3 wellbeing.

Problems with the Average View

The Average View has very little support among moral philosophers because of the three problems described below.

  1. If average wellbeing was negative, you could do good by creating a new person whose wellbeing was also negative. That person’s wellbeing would just have to be less negative than the existing average. Most people don’t accept this, and think that creating a negative wellbeing life is never good.
  2. There is a sadistic conclusion that it can sometimes be better to create lives with negative wellbeing than to create lives with positive wellbeing from the same starting point. An example helps illustrate this:
    • Assume the current world had 1000 people with an average wellbeing of, say, 5.
    • It would be better to add 10 people with wellbeing of -5 to this world than to add 500 people with wellbeing of 4.
    • Adding the 10 people would only drag the average down to about 4.90, whereas adding the 500 people would drag it down to about 4.67.
  3. The average view prefers arbitrarily small populations over very large populations, as long as average wellbeing is higher. While true, I don’t find this to be a very strong objection.
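The sadistic-conclusion arithmetic in the second problem can be checked directly:

```python
# Average wellbeing after adding new lives to an existing population.
def new_average(base_pop, base_avg, added_pop, added_level):
    total = base_pop * base_avg + added_pop * added_level
    return total / (base_pop + added_pop)

with_negative_lives = new_average(1000, 5, 10, -5)   # ~4.90
with_positive_lives = new_average(1000, 5, 500, 4)   # ~4.67

# The Average View perversely prefers creating the 10 miserable lives:
print(with_negative_lives > with_positive_lives)  # True
```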

There is potentially a fourth problem with the Average View, similar to an issue with the Total View described above: if you think you’d raise a child with above-average wellbeing, the Average View implies you should have a child even if you don’t want to. However, this objection holds much less force under the Average View than under the Total View. First, because the amount by which your child could lift average wellbeing will be incredibly small. Secondly, even if you’re quite certain your child’s wellbeing will be above average, it’s much less certain that their children’s wellbeing will also be above average.

Average View and Sunk Costs

The first two problems mentioned above seem to stem from the fact that, under the Average View, what you are prioritising is the average of all lives in the sample population, rather than your marginal effect on lives.

It’s kind of like the sunk cost fallacy. Put simply, the idea is that when deciding what to do, we should disregard “sunk” costs we can’t get back. Instead, we should focus on the consequences of our actions on the margin. If you’ve paid $20 to see a movie and realise partway through the movie is awful, you should leave. The fact that you’d already paid $20 is irrelevant – you’re not getting that money back. The only thing you can affect is how you spend the rest of your time.

Similarly, when deciding whether to add negative lives to the world, you should focus only on the lives you are affecting. But the Average View includes existing lives in calculating the average, even if those lives aren’t affected by your decision. This will lead to irrational decisions.

What is the Person-Affecting View?

The Person-Affecting View is the one I favour, and is probably the most intuitive to most people. The Person-Affecting View holds that there is no moral good in bringing additional happy people into existence. As one philosopher put it:

“We are in favour of making people happy, but neutral about making happy people.”

– Jan Narveson

I’ve heard one person attempt to summarise the Person-Affecting View by saying that it only cares about people already in existence. While there are different versions of the Person-Affecting View, that’s not true under most conceptions of it. Most conceptions of the Person-Affecting View care about “inevitable” lives. That is, lives that would exist regardless of what we do, even if those lives don’t yet exist. I might be neutral about making new happy people, but if those people are going to exist regardless of what I do, I would rather they be happy. So I can hold the Person-Affecting View and still care about climate change and toxic waste negatively affecting future generations.

Problems with the Person-Affecting View

The two main problems with the Person-Affecting View seem to involve the procreative asymmetry and the transitivity principle.

Procreative asymmetry

If creating a happy person is neutral, that implies that creating a miserable person (by which I mean a life with so much suffering that living is worse – for that person and those around them – than never being born) is also neutral. Most people don’t accept this, and will prefer to live with an asymmetry – i.e. they’ll hold that creating a happy person is neutral, but creating a miserable person is bad.

This asymmetry seems intuitive but may be hard to justify. One possible justification is that suffering is inherently bad and worth fixing. In contrast, the absence of happiness is not bad and therefore not worth “fixing”. This sounds sensible to me, and I think most people consider morality in a similar way. We generally accept that it’s immoral to stand by and let someone else get hurt without doing anything, if you could help them at low or no cost to yourself. However, I don’t think most people would say it’s immoral to stand by and not make a happy person happier, even if you could do so easily.


Transitivity

Say you had a choice between the following three options:

  • Option A: creating no lives.
  • Option B: creating 100 lives, each with wellbeing +5.
  • Option C: creating the same 100 lives, each with wellbeing +10.

Under the Person-Affecting View, A = B and A = C, which by transitivity implies B = C. But C clearly seems better than B (i.e. C > B), since the same 100 people are better off. So unless you rate B and C equally, the view violates the principle of transitivity.

Why I still prefer the Person-Affecting View

I accept there are problems with the Person-Affecting View but, unlike the problems with the Total View, they seem largely theoretical. Perhaps my intuition is blinding me but I can’t think of examples where the Person-Affecting View leads to conclusions that in practice would be bad.

In reality, we aren’t distant gods choosing between worlds with positive and negative lives. Any life we can cause to be created (or avoided) will have a non-neutral impact on an existing life. I can hold the Person-Affecting View and still favour technologies like IVF, which help bring happy children into existence. I’ll justify this not because of the happy child it might bring into existence, but because of the joy and happiness that child would bring to its parents (who already exist).

The only practical example I can think of that compellingly challenges the Person-Affecting View is where a person chooses to have a child, knowing that the child’s life would be full of suffering (perhaps because of a congenital disease). I find it hard to view such a decision as morally neutral, so I would probably accept the procreative asymmetry.

Another reason I am partial to the Person-Affecting View is because I see moral judgments – such as whether a life is positive or negative overall – as being inherently subjective. I’m not sure it’s possible to say that a life is objectively positive or negative. Unless a person exists to form that judgment, that judgment does not exist. It’s like that old philosophical chestnut about the tree falling in the woods. If no one is around to hear it, does it still make a sound? So I feel comfortable saying that a world with no life is no worse than a world with happy lives. Because in the world with no life, no one exists to form a judgment about the “goodness” of that world. At the very least, I would say that the two are not comparable.

That said, I don’t have a background in moral philosophy (or even philosophy). And I’ve thought about these issues for weeks, not years. So my view is still a tentative one and could change. I’d particularly welcome any counterarguments that point out practical problems with the Person-Affecting View.

My issues with Ord’s arguments

The main reasons why I don’t care as much as Ord does about preserving humanity’s long-term potential are as follows:

Rejection of the Total View

For the reasons given above, I am not persuaded by the Total View. I can understand that, as humans, we may feel sad about the idea of humanity going extinct, especially if that happens before we fulfil our “potential” (whatever that may be).

Even under a Person-Affecting View, it could be worth trying to prevent extinction risk on the grounds that human extinction would make existing humans sad. But obviously, if humanity does go extinct, none of us will be around to know that or feel sad about it. So under the Person-Affecting View, reducing extinction risk would only be about as good as making people think you are reducing extinction risk. (The latter would be slightly worse as there could be negative consequences such as loss of trust if found out.)

Excessive focus on humans at the expense of other living things

Ord is very optimistic about humanity’s long-term potential for good. For example, he writes:

“Our present world remains marred by malaria and HIV; depression and dementia; racism and sexism; torture and oppression. But with enough time, we can end these horrors – building a society that is truly just and humane.”


“If we fail, that upward force, that capacity to push toward what is best or what is just, will vanish from the world.”

However, I would prefer to focus on probabilities rather than capacities. There’s a high level of uncertainty about where humanity will go. I certainly accept that we’re a powerful species, but power has the potential to do bad as well as good. As Yuval Noah Harari points out in Sapiens, it’s possible there is more suffering now than 12,000 years ago if you factor in animal wellbeing. We’ve already caused or contributed to many species’ extinctions. We’ve also created risks like nuclear winter and climate change that could jeopardise the survival of other species.

I certainly wish I shared Ord’s optimism, but I am much more sceptical of humanity as a whole. Ord doesn’t really explain the basis for his optimism either. I accept that doing so would be a pretty tall task and would probably require a separate book altogether! But based on The Precipice at least, Ord’s optimism seems like a leap of faith more than anything.

Ord claims that he focuses on humans not because he thinks only humans count, but because we are the only beings that are “responsive to moral reasons and moral argument”. I’m not sure I follow that reasoning. Being responsive to moral reasons and arguments might be a reason for appealing to humans to focus on a particular risk. But it doesn’t seem to be a reason to focus on preventing humanity’s extinction over that of other species. Frankly, Ord’s focus on humans seems to simply be a case of speciesism. Which I think is fine – most people, including myself, value human lives above those of other species. But it would be better if Ord just said that.

Since there’s so much overlap between preventing global catastrophes and preventing human extinction, does it matter?

Even though Ord points out that extinction risks don’t have to involve global catastrophes, widespread suffering and untimely deaths, the risks he focuses on (nuclear weapons, climate change, pandemics) generally do. I agree with Ord that these risks are worth mitigating, albeit for different reasons. I’m concerned about the suffering and untimely deaths rather than the loss of humanity’s potential. I also agree that these risks are neglected and that devoting more resources to them would be good. We therefore have a fair amount of common ground.

However, I still think that teasing out these philosophical differences matters. There are opportunity costs in deciding to focus on any particular cause. Under the Total View, the optimal amount of resources we should devote to reducing extinction risks would be much, much higher than the optimal amount justified by the Person-Affecting View. Also, as pointed out above, some policies might be justified under a Total View but not under a Person-Affecting View.

Currently, however, my concerns may be more theoretical than actual. The resources we’re devoting to reducing extinction risks are probably sub-optimal regardless of one’s stance on population ethics. And as far as I’m aware, Effective Altruism (EA) does not promote building bunkers or banning contraception and abortion as effective ways of doing good. So even though the Total View appears to be the most prevalent view in the EA community, it does not seem to be causing EAs to act in a way that is inconsistent with a Person-Affecting View (yet, anyway).

Final thoughts

A few concluding thoughts:

  • Population ethics is hard.
  • The Total View seems to be the most common EA view while the Person-Affecting View seems more common among laypeople. If my impression is correct, I’d like to find out why:
    • Is the Total View correct but simply unintuitive and impossible to understand without a lot of thought? (Sidenote: is it even possible for a view to be “correct”? Or is it just a question about what we value, and therefore inherently subjective?)
    • Or is the Total View missing something important, something that matters to most people? I think this possibility is worth considering seriously. There do seem to be people who have thought about this a lot and are still left unconvinced by the Total View.
  • Could we adopt a conservative position and only take actions that are “good” under both views, i.e. actions that have a positive net benefit on existing and inevitable lives? But this would seem to lead us back to the Person-Affecting View (with a procreative asymmetry).
