Podcast Summary: 80k Hours – Hilary Greaves on Pascal’s mugging, strong longtermism and whether existing can be good for us

Summarising a podcast is a new one for me, and my decision to attempt this was a bit experimental. The original podcast for this is over 2 hours long, so I thought a summary could be useful for others and could help me better understand the issues. Unfortunately, the topic in this podcast is probably too dense and not a good candidate for my first podcast summary. Unless you already have an interest in longtermism and population ethics, or are at least very open to abstract philosophical arguments, you may prefer to skip this one.

Key Takeaways

Longtermism

  • Axiological strong longtermism is what most people in the EA community refer to when talking about longtermism.
    • “Axiological” means it’s concerned with which action has the best consequences (cf. “deontological”, which focuses on whether the action itself is “right”, rather than on its consequences).
    • “Strong” just means it’s talking about a very long time scale – i.e. millions of years, rather than decades.
  • Longtermism isn’t just about extinction risk. Extinction risk is one example of a locked-in change, but there are other examples.
  • Longtermism is compatible with many moral views:
    • Longtermism is not itself a moral theory. It’s more like applied ethics or practical ethics.
    • While the argument for longtermism is simplest under the Total View of Population Ethics, it’s still compatible with other population ethics views. Greaves does note that the Person-Affecting View might undermine longtermism, as it’s plausible that improving the wellbeing of very long-run future generations is too intractable. But we should at least do more research on it.
    • Longtermism is also compatible with prioritarianism (the idea that you should give more weight to the very worst-off).
    • Given how high the stakes are in longtermism, it may even be compatible with non-consequentialist theories. Most non-consequentialist theories are not willing to say you should disregard consequences when the stakes are very high – e.g. they’d generally say it’s okay to lie if a murderer asks where your friend is hiding, even if they hold that lying is generally wrong.
    • Time discounting (i.e. giving less moral weight to people further out in the future) would undermine longtermism.

Cluelessness and Moral Uncertainty

  • Cluelessness is not an objection to longtermism, though it may put people off longtermism:
    • Actions that are motivated by short-term benefits (e.g. funding bed nets) still have long-term consequences. So cluelessness is still present there.
    • Cluelessness may make it psychologically tempting to stop caring about long-term consequences. But this doesn’t mean it’s a defensible position.
  • There are several approaches for acting under moral uncertainty, when you’re not sure which theory of morality is correct:
    • Greaves is partial to the maximising expected choice-worthiness approach, which just treats moral uncertainty like other aspects of uncertainty. There are problems with that approach in practice, but the other approaches seem equally problematic.
    • For example, the bargaining theory approach was developed to address concerns with fanaticism. However, once people modelled it, it ended up looking very similar to the maximising expected choice-worthiness approach.

Existence Comparativism

  • Hilary Greaves and John Cusbert wrote a paper about Existence Comparativism.
    • Existence Comparativism is the idea that it can be better for someone to exist (compared to never existing at all) if they have a good life in the actual world.
    • While they disagreed on whether they ultimately favour Existence Comparativism, they agreed that the current objections to it were not very good, and suggested some other ways to look at it.
    • In particular, one argument against Existence Comparativism is that relations can’t hold unless the things they’re relating exist. But we talk about things that don’t exist all the time.
  • Other possible ways of looking at Existence Comparativism:
    • Re-analysis – taking a sentence and re-analysing it in a different way that doesn’t rely on the person’s existence.
    • Lives framework – instead of focusing on a person, focus on the lives that a person has led (or not). This avoids the issue of the person having to exist in a comparison.

Detailed Summary

The Case for Strong Longtermism

In 2019, Hilary Greaves and William MacAskill put out a paper called “The Case for Strong Longtermism”. [This is a working paper, and they’ve updated it since 2019.]

The definition of axiological strong longtermism

In the paper, Greaves and MacAskill defined axiological strong longtermism as:

the thesis that, in a wide class of decision situations, the option that is best, ex-ante, is one that’s contained in some small subset of options whose effects on the very long-run future are best.

Greaves thinks that axiological strong longtermism is roughly what Effective Altruists usually refer to when talking about longtermism. Greaves and MacAskill just formulate it a bit differently – talking in terms of comparisons between actions, rather than where value lies for a single action.

“Axiological”

Moral philosophers draw a distinction between axiology and deontology.

Consequentialists think that you should do things that lead to the best consequences. In other words, what you should do and what has the best consequences are one and the same. This is the axiological view.

Consequentialism is very controversial and Greaves thinks most moral philosophers are probably not consequentialists. Most think that what we’re morally obliged to do doesn’t always overlap with what’s best. So they might recognise that something is axiologically good (has the best consequences) but is nevertheless inappropriate.

“Strong”

Adding the term “strong” just indicates that you’re talking about a much longer time scale (e.g. a million years) than what most people outside the EA community mean when they talk about “longtermism” (e.g. 10 or 20 years).

How they settled on the above definition

Greaves and MacAskill went through several definitions before settling on the above. The final definition is still surprisingly messy – e.g. how small is the “small subset”? How wide is the “wide class of decision situations”?

Another option considered was to say “the best option is almost always the one that’s the very best for the long-run future”. That definition would be less messy. But then they realised longtermists normally wouldn’t agree with that. For example, say the second-best option was only ever so slightly worse for the long term, but massively better for the short term. Longtermists usually wouldn’t say that you have to take the very best option – they recognise that you can make trade-offs. That’s why the definition refers to a small subset of options – to allow those trade-offs to exist.

The above definition, though messy, can still guide decisions in our messy complex world because it lets you start by looking at a relatively small subset of actions whose effects on the very long-term future are best. Then you can do trade-offs within that relatively small subset.

The case for axiological strong longtermism

The basic argument is that either:

  • there’s a very long future for humanity, with an enormous number of future people; or
  • there may or may not be a long future for humanity, and there are things we can do now to change the probability that there’s an enormous number of future people.

In the paper, Greaves and MacAskill come up with a plausible ballpark estimate of something like 10^15 expected future people. When you’re dealing with such an enormous number of possible future people, anything you can do to nontrivially change how well off they are, or how likely it is that they get to exist, is going to compare very well to the best things you can do to improve the near term.

For example, compare an action that would improve things for the world’s poorest people now, with no knock-on effects down the centuries, with something that might reduce extinction risk or improve the course of the very long-run future just a bit. The thing that would improve the very long-run future would be better under a longtermist view.
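[To make that scale argument concrete, here’s a rough back-of-the-envelope sketch in Python. Apart from the 10^15 ballpark from the paper, every number here is made up by me purely for illustration – it’s a sketch of the kind of expected-value comparison being made, not the paper’s actual calculation.]

```python
# Illustrative numbers only (except the 10^15 ballpark from the paper).
expected_future_people = 1e15        # ballpark expected future people

# A near-term intervention: a certain benefit to some present people.
near_term_beneficiaries = 1e6        # hypothetical
benefit_per_present_person = 1.0     # hypothetical wellbeing units

# A longtermist intervention: a tiny reduction in extinction probability.
extinction_risk_reduction = 1e-7     # hypothetical one-in-ten-million shift
benefit_per_future_person = 0.1      # hypothetical wellbeing units

ev_near_term = near_term_beneficiaries * benefit_per_present_person
ev_longtermist = (expected_future_people * extinction_risk_reduction
                  * benefit_per_future_person)

print(f"Near-term expected value:   {ev_near_term:,.0f}")    # 1,000,000
print(f"Longtermist expected value: {ev_longtermist:,.0f}")  # 10,000,000
```

Even with a minuscule probability of making any difference, the sheer number of expected future people means the longtermist intervention comes out ahead in this toy comparison.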


Longtermism isn’t just about extinction risk

Extinction risk is an example of a locked-in change. Once extinction happens, it’s extremely unlikely it could be reversed, so an extinction event would persist for millennia.

However, there are other risks that could also be locked-in. For example, if a particular system of world governance got instituted, there may be reasons (e.g. lack of competition) why that system would get locked-in and persist for millennia also. We might be able to do things now to increase the probability that a good system of world governance got locked-in. Similar arguments apply to value systems or AI. Those things would all be worth doing under a longtermist view, without involving extinction risk.

Many moral views are compatible with longtermism

Longtermism is not a moral theory. Moral theories are meant to apply in any situation.

Longtermism is something more like applied ethics or practical ethics. It’s asking: given your moral theory, what should we do in a particular situation? A common misconception is that longtermism is only for total utilitarians [i.e. utilitarians who subscribe to the Total View of population ethics]. Instead, Greaves argues that axiological strong longtermism is very robust to quite a few different moral theories and decision theories.

Population ethics

Even if you don’t care about numbers of future people, the longtermist thesis is still plausible, as there are still things you could do to influence future average wellbeing levels over a very long time scale. Under any view in population ethics, if people are going to exist, it’s better for their well-being to be higher than lower.

However, Greaves admits that the argument is simplest under the Total View of population ethics [she uses the term “totalism”, but I’m pretty sure that’s what she means]. This is because estimating the cost-effectiveness of measures that might affect future average well-being is significantly more difficult than estimating the cost-effectiveness of measures that might affect future numbers.

Prioritarianism

Prioritarianism gives more weight to the very worst-off. That view arguably suggests that you should focus on reducing global poverty today because, as the world gets richer, people in the future are going to be better off than people today.

Greaves’ response is that the prioritarian argument depends on well-being improving over time, simply because GDP improves over time. However, many possible longtermist interventions are not about making future people who are already well-off slightly better off. Instead, the interventions are aimed at preventing cases where things have gone badly wrong – e.g. a totalitarian world government, an AI takeover gone wrong, etc. In those scenarios, the future people could be much worse-off than people today, so longtermist interventions could still be favoured under prioritarianism.

The case for longtermism doesn’t depend on you believing that future people would be much worse-off than people today. An intervention that reduces the probability of a totalitarian world government, even if you think that it’s unlikely, could still be justified under prioritarianism.

Non-consequentialists

Deontologists believe that the morality of an action should be based on whether that act itself is right or wrong, rather than whether its consequences are good.

Greaves and MacAskill’s argument is that if the best longtermist things you could do are in fact better than the best short-termist things you could do, they’ll be a lot better, maybe by many orders of magnitude. So for a non-consequentialist theory to undermine the longtermist argument, it would have to say that you should use a non-consequentialist principle to guide your decision, even when the stakes are so large. Most non-consequentialist theories just aren’t willing to do that. For example, even if a theory holds that lying is wrong, if a murderer asks where your friend is hiding, most non-consequentialists would be okay with a lie in that situation given the high stakes.

Views that do undermine longtermism

Greaves suggests that the following views would undermine longtermism (they aren’t necessarily philosophical views):

  • Time discounting. Discounting means you place less moral weight on the welfare of future people than on the welfare of present people. If you discount heavily enough, the welfare of people 100 or more years in the future gets effectively zero weight – at which point it doesn’t matter how astronomically many future people there are (see the sketch after this list).
  • Person-affecting view of population ethics. As noted above, Greaves thinks non-totalist views of population ethics are compatible with longtermism. But as an empirical matter, she finds it plausible that improving average well-being over the very long run may be too intractable. She notes that MacAskill may have a different view to hers. However, what they wrote in the paper (which she stands by) is that we should at least do more research into it, as it seems extremely neglected and important. [Note that Greaves doesn’t use the term “person-affecting view”, but it’s clear from her description that that’s what she’s referring to.]
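[Here’s a minimal sketch of the discounting point. The 3% annual discount rate is a hypothetical number I’ve picked purely for illustration – it’s not a figure from the podcast.]

```python
# Illustrative only: a hypothetical 3% per year "pure time preference".
discount_rate = 0.03

def moral_weight(years_in_future, rate=discount_rate):
    """Weight given to welfare occurring `years_in_future` years from now."""
    return (1 + rate) ** -years_in_future

for years in [0, 100, 1_000, 10_000]:
    print(f"{years:>6} years out: weight = {moral_weight(years):.3e}")

# With any constant positive discount rate, welfare a few centuries out gets a
# negligible weight, so even astronomically many future people contribute
# almost nothing to an action's expected value.
```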

Cluelessness and Moral Uncertainty

Simple cluelessness vs Complex cluelessness

Both simple and complex cluelessness arise within expected value theory, where we have to assign probabilities and values to the various possible consequences of our actions.

With simple cluelessness, there are an enormous number of possible consequences of our actions but there’s symmetry in the reasons. For example, helping an old lady across the road might increase the future population. But you could equally say, for the same reasons, that not helping the old lady could lead to the same consequence. The probabilities of either of those two actions increasing the future population then cancel out. So you can effectively ignore them for purposes of making your decision.

That symmetry doesn’t arise for complex cluelessness, though. With complex cluelessness, you have some structured reason for thinking that your action could have a particular long-term consequence. For example, you may have some plausible, concrete reason for thinking that funding bed-nets would lead to higher population or environmental degradation. Even if you have some other plausible, structured reason for thinking that not funding bed-nets would also lead to higher population or environmental degradation, it seems unlikely those two would just “cancel out” in the same way. The probabilities you assign under the two structured reasons will likely be different, and therefore asymmetric.
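[A toy way to see why the symmetric case washes out of the comparison and the asymmetric case doesn’t – all the probabilities and values here are invented by me for illustration.]

```python
# Invented numbers: the point is only the structure of the comparison.
long_run_value = -1_000  # hypothetical (dis)value if the long-run effect occurs

# Simple cluelessness: no reason to think the unforeseen effect is more likely
# under one action than the other, so it drops out of the comparison.
p_if_help     = 0.001
p_if_not_help = 0.001
print("simple:", (p_if_help - p_if_not_help) * long_run_value)   # -0.0: the terms cancel

# Complex cluelessness: a structured reason (e.g. bed-nets leading to population
# growth) makes the effect more likely under one action, so it doesn't drop out.
p_if_fund     = 0.010
p_if_not_fund = 0.002
print("complex:", (p_if_fund - p_if_not_fund) * long_run_value)  # roughly -8: no cancellation
```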

Greaves is not sure how we should model cluelessness or respond to it. For now, she’s just pointing out a problem, rather than suggesting a solution.

Cluelessness is not an objection to longtermism

Wiblin and Koehler ask Greaves why she doesn’t think cluelessness is an objection to longtermism, since that seems to be one of the top concerns with longtermism.

Greaves’ response is basically:

  • Cluelessness just makes you less certain about things. It should make you less confident about longtermism just as much as it should make you less confident about short-termism. There isn’t an asymmetry against longtermism.
  • All interventions are subject to cluelessness. Even when thinking about something like funding bed-nets, where the main motivation is the short-term benefits, the fact is that action still has long-term implications. So if you care about long-term impacts, you’ll still need to think about those. It isn’t necessarily easier to think through the long-term impacts of an action that is motivated by its short-term benefits than it is to think through the long-term impacts of an action motivated by expected long-term benefits. It may be tempting to say that cluelessness will favour more robust things. Greaves notes this is plausible, but not at all obvious.
  • Greaves accepts that there are different degrees of cluelessness, and that things generally get more uncertain further into the future. But longtermists in particular try to look for places where that stops happening. [It seems to me that the degree of cluelessness for a small, local, act is relatively low. For example, take donating to a local school so they can buy new sports uniforms. Sure, that donation could give rise to many unforeseen consequences. But that seems to be more a case of simple cluelessness, where unforeseen consequences largely cancel out and you can effectively ignore them. There doesn’t seem to be much complex cluelessness with such an action. So complex cluelessness seems perhaps more escapable than simple cluelessness? (Though obviously there is the trade-off that with small local actions, you’d have less expected impact – positive or negative.)]

Cluelessness might put people off longtermism, but that doesn’t mean that longtermism is wrong

Wiblin suggests that once people start thinking about the very long-term, and realise how difficult it is to understand the long-term impacts of actions, they may just give up and try to focus on more immediate concerns (e.g. caring about my family, following social rules).

Similarly, Koehler points out that Greaves’ position seems to stem from a different definition of “longtermism” than what people often use casually. When people refer to “longtermism”, often they just mean “caring about long-term consequences”. So cluelessness is an objection to the idea that we should even care about long-term consequences.

Greaves’ response is that these lines of thinking may be psychologically natural, and indeed psychologically very tempting. However, that doesn’t mean they’re defensible and she feels it’s a bit like wishful thinking. To the extent that cluelessness discourages people from trying to do anything to improve the world, or to focus on more immediate concerns, Greaves thinks that’s a dangerous tendency.

Moral uncertainty

There are several approaches for what to do when you aren’t sure which theory of morality is correct:

  • True moral theory approach. This approach says that you should do what the “true” moral theory requires. If you don’t know what that is, then you won’t know what to do – but tough. Quite a lot of people are sympathetic to that.
  • Maximise expected choice-worthiness approach. This is the dominant approach. It treats moral uncertainty like other types of uncertainty, so we can apply expected utility theory to it.
  • My favourite theory approach. You should just act according to the moral theory you give the highest credence to, and ignore all the rest.
  • Bargaining theory approach. This approach thinks of different moral theories as different people who disagree about what you should do. So under this approach, you can apply tools of bargaining theory to see what that leads to.

Greaves is partial to the maximising expected choice-worthiness approach. She accepts that there are problems with it in practice, mainly arising from the fact that there’s no master theory that explains how you should compare theory A to theory B. However, she doesn’t see that as an objection to the view that maximising expected choice-worthiness is the correct approach. It just seems like another instance of uncertainty where we’re really unsure of what the probabilities are.
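[For the curious, here’s a toy sketch of how maximising expected choice-worthiness works. The theories, credences and choice-worthiness numbers are all invented by me, and the sketch simply assumes the numbers are comparable across theories – which is exactly the “no master theory” problem Greaves mentions.]

```python
# All names and numbers are hypothetical, for illustration only.
credences = {"total utilitarianism": 0.5, "deontology": 0.3, "prioritarianism": 0.2}

# choice_worthiness[theory][option]: how choice-worthy each option is by that theory's lights
choice_worthiness = {
    "total utilitarianism": {"fund bed-nets": 10, "reduce extinction risk": 100},
    "deontology":           {"fund bed-nets": 20, "reduce extinction risk": 15},
    "prioritarianism":      {"fund bed-nets": 30, "reduce extinction risk": 25},
}

def expected_choice_worthiness(option):
    # Weight each theory's verdict by your credence in that theory.
    return sum(credences[t] * choice_worthiness[t][option] for t in credences)

options = ["fund bed-nets", "reduce extinction risk"]
for option in options:
    print(option, "->", expected_choice_worthiness(option))  # 17.0 and 59.5

print("MEC recommends:", max(options, key=expected_choice_worthiness))
```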

Koehler asks if Greaves could suggest a useful heuristic for busy people who would like to take moral uncertainty into account in their decisions, without going through all the complicated calculations. Greaves suggests looking for options that are robustly good across a broad range of moral theories. However, she notes that there can be exceptions, and sometimes the appropriate thing to do is gamble.

The maximising expected choice-worthiness approach and Pascal’s mugging

Wiblin points out that one issue people might have with the maximising expected choice-worthiness approach is that your choices might end up getting dominated by very improbable theories with incredibly high stakes, such that the tail ends up wagging the dog. This is like Pascal’s mugging.

Greaves responds that it’s a really important and largely open research question. She points out it’s an issue in cases of empirical uncertainty too, and not limited to moral uncertainty. For example, the standard arguments for longtermism and extinction risk mitigation rely on small probabilities of generating astronomical amounts of value.

She recognises the problem with Pascal’s mugging, but she hasn’t seen a credible alternative to expected utility theory. Some people have suggested a de minimis principle where you can ignore very low probabilities below a certain threshold. The problem with that is you can always define outcomes so narrowly such that every individual outcome is below that de minimis. But of course you don’t want to ignore everything.
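[Here’s a small sketch of that problem with a de minimis threshold, using numbers I’ve made up: by describing an outcome in ever finer detail, each sub-outcome can be pushed below any fixed threshold, even when the outcome as a whole obviously matters.]

```python
# Invented numbers for illustration.
threshold = 1e-6   # "ignore any outcome less probable than this"
p_outcome = 0.5    # a clearly non-negligible outcome

# Split the same outcome into n equally likely, finer-grained sub-outcomes
# (e.g. conjoined with irrelevant details about how exactly it comes about).
for n in [1, 10, 1_000_000]:
    p_each = p_outcome / n
    print(f"split into {n:>9} pieces: p = {p_each:.1e} each, "
          f"ignored = {p_each < threshold}")

# With n = 1,000,000 every piece falls below the threshold, so a naive
# de minimis rule tells you to ignore all of them, i.e. to ignore an
# outcome with probability 0.5.
```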

Bargaining theory approach ends up similar to the maximising expected choice-worthiness approach

The bargaining theory approach was partly motivated by the idea that it will be more resistant to fanaticism/Pascal’s mugging concerns.

The dominant approach in bargaining theory is to use what’s called the Nash bargaining solution. However, once people actually modelled how that approach would apply to moral uncertainty, they found the result was extremely similar to the maximising expected choice-worthiness approach. The problems with the maximising expected choice-worthiness approach still seemed to apply to the bargaining theory approach and, where there were differences, the bargaining theory approach looked worse on the details.

Comparing Existence and Non-existence

Greaves talks about a paper she co-authored with John Cusbert, Comparing Existence and Non-Existence. The paper looks at the question of whether some states of affairs can be better than other states of affairs for particular people. In particular, say we compare the actual world (World 1) where Greaves exists with an alternative world in which Greaves was never born (World 2).

Existence Comparativism

Existence Comparativism argues that if Greaves has a good life, we can say World 1 is better for her than World 2. Greaves personally is sympathetic to this view. However, she points out that many people think this view is incoherent. To make that comparison, the wellbeing subject (i.e. Greaves herself) has to exist in both worlds, but she doesn’t exist in World 2.

The reason why this matters is that it can make a big difference to which “population axiology” you adopt. [By which I think she means whether you adopt the Total View of Population Ethics or the Person-Affecting View.] So if you think that being born with a good life is better than not being born, you’ll probably adopt something like the Total View and think that premature human extinction is astronomically bad. Whereas if you don’t think those comparisons make sense, you’ll be less concerned about premature human extinction.

Variantism

Koehler points out that some people could hold asymmetric views. In the example above, people may be willing to accept that World 1 is better for Greaves than World 2. But they may be less comfortable with the idea that World 2 is worse for Greaves because she didn’t get to exist.

Greaves calls this variantism in the paper because the truth of a statement like “World 1 is better than World 2 for Greaves” varies from one possible world to another. However, it doesn’t seem like the truth value of that statement should vary. The statement just compares how good two possible worlds are, so it shouldn’t make a difference which world is actual for purposes of that statement.

Better “for people” or better in absolute terms?

There’s a bit of a tangent in the middle of the podcast where Wiblin questions the idea of things being better or worse “for people”. When he thinks about ethics or morality, he tends to think about the goodness of things in impartial, absolute terms rather than their goodness for particular people.

Greaves counters that talking about what’s better or worse for people is still useful to figure out which states of affairs are better in absolute terms. If you can’t talk in terms of what’s better or worse for people, that means you don’t care about people’s wellbeing levels. Then you can’t formulate a theory like utilitarianism at all. And she’s not sure how you would go about formulating any way of deciding which states of affairs are better or worse overall.

Koehler suggests that Wiblin may be thinking of something like the idea that people are just containers for happiness. So we may just care how much happiness there is in the world, and not care who it belongs to. But Greaves doesn’t think that the container idea is irrelevant to this. She says that’s more of an anonymity principle, which is the idea that if you change people’s wellbeing levels while keeping overall wellbeing levels the same, it doesn’t change how good the world is.

Utilitarianism abides by the anonymity principle but some moral theories don’t. Regardless of which theory you use (Greaves mentions prioritarianism, egalitarianism, sufficientarianism), they all start by evaluating the state of affairs. To do that, you have to look at each person’s wellbeing. Wellbeing is just a representation of how good different states of affairs are for various people.

Wiblin tries to explain his view a bit further. He says he thinks that what’s valuable is the kind of experiences that have been had, and whether the experiences attach to any individual is not really relevant. Both Wiblin and Greaves acknowledge that the counter to this is whether experiences can even be had if not by persons or some other subject. I found this part quite hard to follow, but the upshot seems to be that Greaves’ phrasing of “better or worse for people” depends on where you draw the “person” boundaries, whereas Wiblin’s view doesn’t care where those boundaries are drawn.

Arguments against Existence Comparativism

Greaves’ sense from the literature is that most people seem to reject Existence Comparativism. Greaves and Cusbert disagree on whether they ultimately favour Existence Comparativism but they agree that the usual arguments against it are not very good.

The argument against Existence Comparativism rests on two main premises.

1. The Semantic Argument

The first premise is that the truth value of a statement like “World 1 is better for Rob than World 2” shouldn’t vary from one world to the next. So it can’t be true in a world where Rob exists, but untrue in a world where Rob doesn’t exist.

While this argument is called the “semantic argument”, Koehler and Greaves note that the label might be unhelpful. It may (inaccurately) suggest that the argument is based on some language trick.

2. Relations can’t hold unless the things they’re relating exist

The second premise says that relations can’t hold unless the things they’re relating exist. Since Rob doesn’t exist in World 2, the statement “World 1 is better than World 2 for Rob” can’t be true in World 2. So even if Rob exists in the actual world, World 1, that statement still can’t be true.

Why these are not good arguments against Existence Comparativism

Greaves sets out two problems with the arguments against Existence Comparativism. She points out that while the second premise (that relations can’t hold unless the things being related exist) looks reasonable at first sight, it leads to crazy conclusions unrelated to population ethics. This is apparently explained in her paper, but it gets quite nitty-gritty and detailed, so she doesn’t go into it in the podcast.

The first problem is the transitivity problem referred to in my Population Ethics post. The issue is that even if Rob exists in two worlds being compared (A and B), we could always add a World C where he doesn’t exist. That is, the worlds would be:

  • World A – actual world, where Rob exists.
  • World B – Rob exists, but things are worse for him.
  • World C – Rob doesn’t exist.

Under the second premise, the statement “World A is better for Rob than World B” would automatically be untrue in World C (where he doesn’t exist). And given the first premise (that the statement’s truth value shouldn’t vary from one world to another), if it’s not true in World C, it can’t be true in Worlds A or B either. That’s clearly wrong.

The second problem is that we can make true statements about things even when they don’t exist. A lot of our talk deals with “modal” things, which are about what’s going on in other possible worlds. For example, people accept that you can say “it might’ve rained yesterday”, even if it didn’t. So if you accept that you can talk about modal things, the second premise isn’t a good argument against Existence Comparativism.

[Perhaps I’m missing something here, but I think “World A is better for Rob than World B” is a qualitatively different type of sentence from “It might’ve rained yesterday”. The former includes a judgement about the “goodness” of a world for a person who doesn’t exist in both worlds. The latter is just saying that an event that did not happen yesterday could have happened – there’s no judgement involved, especially not a judgement “for” something that doesn’t exist in both worlds. We can say things like “It’s better that it didn’t rain yesterday” – but the implication in that sentence is that it’s better for me, and I exist in both the actual world (where it didn’t rain) and the hypothetical world (where it did rain). So I don’t think the two are really comparable.]


Other ways of looking at Existence Comparativism

Greaves and Cusbert’s paper suggests directions to be explored rather than outlining a definitive position.

Re-analysis

For example, take a sentence like “the average woman has 2.2 children”. The grammatical structure of that sentence suggests that the “average woman” is the noun phrase. But of course the “average woman” doesn’t exist, and the sentence isn’t trying to suggest that she does – instead, it’s taking an average over all the women who exist. This example shows that, in general, you can’t go straight from the grammatical structure of a sentence to a story about what the world has to be like in order for the sentence to be true.

Similarly, the paper explores ways in which you might re-analyse the sentence “A is better for Rob than B”, rather than take it at face value. You may analyse the sentence in a way that doesn’t involve the existence of Rob.

Lives Framework

Greaves suggests that instead of focusing on a person (e.g. Rob, in the examples discussed so far), we could think of the lives that person has led (or not). This may make sense if you don’t think it matters morally which people had a certain level of wellbeing, only how many people had that wellbeing level.

So instead of talking about World A being better for Rob than World B, we could talk about comparing the possible life Rob lives in World A with the possible life Rob lives (if any) in World B. This avoids the objection that Rob has to be there to stand in a relation because it’s not “Rob” that stands in the relation in this framework.

Another example is comparing Rob’s life in the actual world to Rob’s life in a hypothetical world where he didn’t have any education. Under the lives framework, you’d say Rob’s actual life is better than that other possible life in the hypothetical world. The fact that the other possible life didn’t end up being lived doesn’t stop us from making that comparison, even for people who are not sympathetic to Existence Comparativism. [I can sort of see why it would get around that objection, but I think this “lives framework” only makes sense if you accept the Total View of population ethics to begin with. That is, you have to accept that a life with positive wellbeing is better than a non-existent life with no wellbeing. In which case it seems circular.]

[I haven’t summarised the last 15 mins or so which talks about Greaves’ work with the Global Priorities Institute.]

My Thoughts

This was a very dense and difficult-to-follow podcast. I think Greaves is too much of an expert on this topic, and I found it hard to follow her explanations compared to Wiblin and (especially) Koehler’s paraphrasing of her explanations.

I selected this podcast because I had been poking around the Effective Altruism Forum looking for discussions of Population Ethics and several posts referred to this podcast. I’m in two minds about whether I should have summarised it. On one hand, I’m glad I sat down and listened to it carefully because – at least with my current (very limited) level of experience in philosophy – there’s no way I could’ve followed it if I had listened to it like I do a “normal” podcast. At the same time, I wish I’d picked an easier podcast for my first podcast summary!

On longtermism – I feel like the podcast gave me a better understanding of some of the arguments against longtermism and the counterarguments to those. I largely accept Greaves’ argument that cluelessness is not a principled objection to longtermism, though it can certainly undermine people’s motivation to work on longtermist causes.

On Existence Comparativism – this was the main reason I chose to listen to this podcast and, honestly, it hasn’t changed my mind at all. My responses to some of Greaves’ points are outlined above, and it’s entirely possible that other parts of her arguments just flew over my head. I think I have a greater tolerance for abstract theoretical concepts than most, yet this was too much for me and I was tempted to give up partway through. It’s discussions and arguments like these that just make me want to put Population Ethics in the “too hard” basket. But at the same time, Population Ethics has some very important implications. It would be a shame if people who were more focused on practical implications (like myself) dismiss it as being “too hard” and the only people thinking about it were overly academic philosophy wonks (no offence to Greaves).

I think I will summarise other podcasts in the future, but I’ll try to pick something easier next time!

Do you have any thoughts on this summary or the podcast? Please share them below!

