CAREER GUIDE: PART 4
What do we mean when we say doing good? When thinking through career options, there are specific dilemmas that depend on your answer: Should you focus on helping people or animals? Should you work at a charity or earn more money in order to donate? Should you try to help people who live now or secure the future of our descendants? We’ll dive into specific options later in this guide, but first we need to clarify what doing “the most good” means.
As you may know, the question of “what is good” has been discussed at length by many people. This chapter gives very brief introductions to some of the basic questions that may arise when deciding between career paths. There’s much more to say about these deep questions than we can fit in this guide, which is why we link to interesting and relevant articles that can serve as jumping-off points for further reading and thinking about each of these questions.
Whether or not you have thought about these issues before, you’re likely to have some intuitions already about the moral dilemmas presented here. We encourage you to read with an open mind and not to immediately accept your gut reaction, which can lead you either to dismiss a question outright or to look only for reasons why your original position is true. This chapter won’t be very valuable if you read it as a “Which moral philosopher am I?” quiz (though if somebody writes a good one, we might link it here). Take the time to think of the ways in which your current view or intuition might be wrong. Try to notice when the logic you’re following leads to conclusions that you aren’t happy with. Your moral views can make a large difference in how you evaluate the impact of your actions, career and donations, so they’re worth careful consideration.
The goal of this chapter isn’t to solve morality, or even determine your precise moral views. After reading through it, you will hopefully have a better general idea of what your personal value system considers morally good. It’s completely fine to remain unsure, and we encourage you to keep reading while still mulling over these questions. Many of the most talented and inspiring people we know have continued to evolve their views about these questions, even after considering them over long periods of time. Tackling these questions is challenging, but also critically important for prioritizing the work you do.
Doing Good: Increasing Happiness and Reducing Suffering
Many people, including us at Probably Good, believe that making people happier and reducing their suffering is important to doing good in the world. “More happiness is good and more suffering is bad” might sound trivially true, but there are nuances around this claim that aren’t obvious at all. While nearly everyone believes that, in general, you should try to act in ways that increase happiness, how does this interact with other considerations? If you haven’t before, it’s not a bad idea to take a few moments to think about this: In what situations would it be ok to lie in order to reduce someone’s suffering? Are there actions that we would never take, no matter what consequences they may have?
In the next few paragraphs, we’ll describe the components of utilitarianism - a family of moral views that considers only actions’ effects on happiness and suffering when evaluating morality. As you’re reading, it’s worth thinking about where you stand with regard to these principles: Do you believe they’re completely true? Mostly true, but with important exceptions? Completely misguided? When touching on aspects you have strong opinions about, try to see if some of the links include support for or replies to your thoughts on the subject.
Consequentialism is the idea that when we are evaluating the moral value of an action (deciding whether it is the right thing to do) we only care about the action's consequences. Other ideals are only valuable to the extent that they might eventually promote good consequences. Thinking about this raises some difficult questions: Are there actions that are wrong (stealing, lying, hurting someone) even if they would have positive consequences overall, such as helping many others?
For example, breaking the law is usually considered a bad thing to do. From the point of view of consequentialism, breaking the law is only wrong if it leads to bad outcomes (either for the victims of the crime or for society in general). If we were completely confident that breaking the law would do more good than harm (including all of the very-difficult-to-follow indirect effects), the consequentialist view would be that it is a morally good thing to do.
If you think that’s weird, wait till you start thinking about the consequences of your actions on the long-term future….
Lying is another interesting example: Most people believe that some lies are justified (e.g. white lies) and some lies aren’t. Is lying always morally wrong regardless of consequences? Is the lie itself unimportant and should we only consider consequences? Or is there some difficult-to-draw border between good and bad lies? Can we, even generally, say where that border lies?
Not all moral theories are consequentialist. A prominent family of non-consequentialist moral theories is deontology. Deontological moral theories state that actions themselves are right or wrong according to some set of rules. These rules could include anything from “It is wrong to lie” to “Respect your parents” or “Do not drink alcohol”. These rules determine morality first and foremost, even if they prescribe actions that have negative consequences.
You don’t need to choose a single moral theory and disregard the rest, or even fully commit to a decision about whether an idea like consequentialism is completely correct. While it makes sense to give names (like ‘consequentialism’) to specific, and often absolute, ways of thinking, your own views don’t need to be absolute. There are many views and ways of thinking about morality which take consequences into account alongside other considerations. In fact, your own moral beliefs can integrate multiple moral theories. They could also follow some specific moral theory in some cases but not in others. You may even be unsure about which moral theory is correct, and take this uncertainty into account in your decisions, as we’ll discuss later in this chapter.
Welfarism is the idea that when evaluating how good the consequences of an action are, we only care about the welfare (or wellbeing, or happiness and suffering) of individuals. This would imply that actions that bring about more beauty, honor, knowledge or other things we might consider good, only matter to the extent that they eventually bring about more well-being. This is worthwhile to think about: Is there value in positive things, other than the wellbeing they create? For example, when we preserve nature, are we doing this to create better lives for humans and animals? Or is there some other value in nature, other than its importance to our happiness?
An example of an alternative to welfarism is rights-based ethics. This family of moral views claims that there are some inalienable rights, which it is immoral to take away, no matter the consequences. Different views in this family can consider different rights as protected, so may have very different implications on how you should act. As an example, most rights-based views would consider basic rights like the right to life and bodily autonomy as protected rights. This means that it is immoral to actively hurt one person’s right to bodily autonomy, even in cases where it would save many others.
Impartiality is the principle that when looking at people’s well-being and considering morality, we don’t care about the identity of the specific people. This means that we prefer creating more happiness no matter who ‘gets to enjoy’ this happiness. This view is very intuitive in the context of opposing racism, sexism, nationalism, etc. It means that we should fight against our own biases, which often give preferential treatment to those who are like us. It becomes less intuitive (and much more difficult to follow) if you follow the logic of this view to its further conclusions: that there is no moral justification for preferring your own family, friends or country over others.
Key &amp; Peele comment on the quirky tendency of people to care more about things happening right next to them than about things happening in the rest of the world.
One common objection to many of these principles, and impartiality specifically, is that they introduce unrealistic moral demands that almost no one would be able to follow. For example, as long as you’re living in conditions that are considerably better than the conditions of the poorest people in the world, do you have a moral obligation to donate your money to people who need it more, until you live in similar poverty to those worst off?
We encourage you to read more about this and the various responses to this objection (including some which simply don’t see it as a reason to object). Maybe you believe in these principles to some extent but think there are relevant cases where it’s morally permissible not to follow them fully.
Regardless of your specific answers, we would also like to make a general point here about morality: No one is a perfectly moral person. It is legitimate, and potentially also a very good thing, to aspire to live in as much accordance with your moral beliefs as you can manage and no more. In almost every single case, people who believe in these principles still privilege their friends and family and live a lifestyle that they feel comfortable in. But they also do their best to help those who need it most, be it by donating 10% of their income, using their career, or doing whatever they feel comfortable with.
Combining The Three Principles Above: Utilitarianism
Depending on your views on the principles above (and many others), you may hold a wide range of moral views. The family of views that accept all three principles above is called utilitarianism, and it’s a fairly prominent view among philosophers. Utilitarianism states that we should choose our actions to create as much well-being as possible, counting every individual’s well-being equally. It’s a useful concept to know when discussing morality - whether you consider yourself a utilitarian, somewhere close to utilitarianism, or very far from it.
Much of our research focuses on maximizing the impact of your career. You don't need to be a utilitarian to care about this research - in fact, you can disagree with all three principles above and still care about your concrete impact on the well-being of others. But the more you care about how your work affects the well-being of others, the more our research will be useful in your decision making.
Moral Patienthood: Who Do We Care About?
If we want to increase welfare, we have to ask: Whose welfare? This question can be tricky because our own biases and instincts tend to prefer whoever is more similar or closer to us.
Moral philosopher Peter Singer describes the concept of the expanding moral circle: how, throughout history, humans have been willing to consider the welfare of more and more beings. Earlier in human history, most people cared almost exclusively for themselves, their family or their tribe. Over time, the ‘circle’ of those we care about expanded to include more people who are less and less like us. Today, more than ever before, we’re aware of how pernicious these biases can be and how easy it is to disregard certain communities or people when we consider morality, be it because of their gender, skin color, nationality or beliefs.
We look back at people in history who claimed that those who are different should be afforded less moral consideration (or none at all), often call them racist or sexist, and view their lack of care as abhorrent. Of course, most people living in past centuries held these views not because they wanted to be immoral, but because they thought they were right. Some were never exposed to the harms of their views and societal structures, still more had never given the question any thought, and almost all were taught these morals from a young age. These points don’t aim to excuse their behavior - quite the contrary: we should apply an equally critical view to what we might still be missing today!
Understanding this principle should challenge us to consider others whom we don’t currently count among the individuals we care about (what philosophers like to call “moral patients”). We should ask ourselves: are they also deserving of the same value as us, our family and others we care about? It’s incredibly unlikely that our generation, of all generations, has arrived at exactly the right moral circle. When future generations look back at us, which of the views we hold will they find awful? If we can figure that out, we might be able to help take the step forward that we’d have wanted our ancestors to take sooner.
Helping Where It’s Most Needed
When considering who we should care about, it’s important to remember the most basic and least controversial part of the impartiality principle. Our personal experiences constantly move us to see and address the local issues around us: They are the most visible and obvious to us, and they’re usually easiest for us to empathize with.
For many people, those local issues, while absolutely important in their own right, won’t be the issues that need their skills and help the most, or that can use them most effectively. We encourage you to take a global perspective and ask where your time and effort would make the biggest difference. This is particularly important because the areas where people tend to have the most resources to give (time, money, etc.) tend not to be the areas where people need help the most. This is easier to measure for donations than for career choices: there are many cases where donating to a charity working to alleviate the worst global poverty can help over 100 times more than donating the same amount to a charity alleviating poverty where most donors live - in high-income countries.
While there are very real and valid considerations that may push you to give to those closest to you, it’s at the very least worthwhile to introspect about whether they direct you towards the biggest positive change that you can make.
Note: Sometimes there are practical differences that make it easier for you to help locally, since you understand local issues better and are more involved with your community. It’s also difficult (and sometimes even harmful) to try to help communities when you’re an outsider and don’t understand local complexities. We’ll discuss this in more detail later in the guide, but we’d still encourage you not to think of this as a catch-all answer to helping where it’s needed most. Well-run organizations should (and often do) work with local partners who have the necessary local knowledge, while respecting and following the lead of local communities. When this is done well, it allows them to address the world’s most important problems and do much more good than they could by only helping locally.
After thinking about expanding our moral circle to include all humans (and specifically, those who are different from us and need our help), we should turn to think of the moral patienthood of animals. To what extent do they need and deserve our consideration and help? Most people have the intuition that we should care about animal suffering in some situations, such as preventing the abuse of stray dogs. But many don’t have a clear picture of “How much should we care?” or “In which situations should we care?”.
Think of the variety of different living creatures: between humans, chickens, cows, mosquitoes, fish, dogs and bald eagles - why do we care for some more than others? Having read this far, you may suspect that some of our reasons are our own biases: Do we prefer creatures that are closer to us and that we have more positive interactions with? And what differences might be genuine reasons to give more or less moral consideration?
At the very least, you should consider different creatures’ ability to feel suffering. It seems reasonable that we could prefer helping creatures who would benefit more from the alleviation of suffering. The main difficulty is that animals’ ability to communicate with us is limited, which makes it very difficult to fully understand their needs and feelings. There is a lot of brilliant research being done to improve our (still very partial) understanding of animals’ experience, and it should probably inform your views on animal moral patienthood.
Be warned though: As has been demonstrated repeatedly in human history, it’s very easy to discount suffering that looks different from our own as ‘not really suffering’. Keep an open mind as to how animals, even those very different from you, might still feel pain.
Note: Sometimes it’s useful to ask specific, abstract questions: If you could alleviate a year’s suffering from a human or a number of animals, would you do it? The answer might depend on which animal we’re helping, and definitely depends on how many animals. A human or a dog? A human or 10 chickens? A hundred chickens? One chicken? It’s not important to have a precise number, but it can be helpful to have a general idea. It’s a way of forcing you to really think through the implications of where you stand.
And in some cases, this can actually help in a specific decision, like whether to work to alleviate global poverty or to campaign against factory farming. We’ll see examples like this later in the guide.
In the end, your thoughts on this matter may have a very large effect on the paths you choose to make a larger impact. Many careers either target helping humans or helping animals. In most cases, with similar resources, we can help many more animals than humans. Your views on the extent to which we consider animals moral patients may have a decisive effect on your eventual choice.
The Long Term Future
It’s easier to care about those who are close to us not only geographically, but also in time. It’s easy to care about people who need our help right now, and much harder to imagine (and care about) the consequences of our actions ten or one hundred years down the line. In recent years we’ve seen an encouraging phenomenon of people taking responsibility for the long-term consequences of their actions as part of the growing climate change movement. In addition to climate change, there are other potential risks and catastrophes, such as nuclear war, pandemics or risks arising from new technologies, that could impact future generations’ welfare. If we believe future people matter and deserve our concern, there are strong reasons to think that securing a long and positive future is an important cause area. At the very least, there are (potentially) far more people in the future than currently exist.
When thinking about future generations, there are a lot of questions ranging from the practical (how do we know if we’re making an impact if most of the impact is in the far future?) to the abstract (do we prefer a future world with few people who are extremely happy or many people who are reasonably happy?). Regardless of the specific answers, our actions today can make a big difference in the happiness of future generations even if we don’t see the effects ourselves.
Given the potentially huge impact we can have on the long term future, we recommend reading more about long-termism and averting global catastrophic risks.
You may appreciate that some of these questions are quite difficult, especially if you read further into the links provided in this chapter. For most people, there isn’t a clean way to articulate a single, coherent moral theory that they believe in fully and would want to follow in all circumstances. Even those who do have a clear, coherent moral system in mind, should be at least somewhat concerned about the question “But are you sure?”.
Career choices naturally involve the sort of decisions that can be difficult to make and become even more difficult if you’re uncertain about your moral view:
How should you prioritize helping specific animals in awful conditions when you're uncertain if those animals can feel pain?
Is it okay to participate in a lucrative but harmful industry if the money you earn is donated to highly impactful charities?
Should you spend years in order to help avert a potential disaster which might never come to pass even without your work?
If you believe in a certain cause area, but most people whose opinions you value think it’s a waste of your time - how much value do you place on their view?
It’s important to note what you’re unsure about. This can become important if other factors are clearer (indeed, in many cases, your personal fit or the importance of some causes can be more easily evaluated than questions in moral philosophy). It’s completely fine not to know your precise moral stance, and it might be easier to clarify where you stand in a general sense. With many moral questions, it’s easier to think about what’s definitely outside of your moral view than about exactly where you stand.
This can be done by trying to make specific statements about the bounds of your uncertainty like “helping one person avoid a painful illness for one year is more important to me than helping one chicken avoid a lifetime of abusive conditions, but is still probably less important than significantly improving the lives of 1,000 chickens”. The precise numbers aren’t the point. The important part is having an idea about where your moral range is. In some cases, even a very large range can still be informative enough to be very helpful in your eventual decision making.
Recognizing your own uncertainty also has the important benefit of giving you more motivation to read, listen, think and learn more about the subject. Being uncertain, at least to some degree, helps you avoid the pitfalls of blindly defending your current view even in light of new arguments or evidence.
Allowing for Multiple Moral Views
In many cases, there isn’t a single moral view that we feel comfortable following completely. That’s okay. Take the parts that make sense to you, keep reading and thinking, keep looking to refine and improve your understanding. Keep in mind that even if your overall view isn’t fully defined, or even fully consistent, that may be a step in your own journey.
Note: An example: Eleanor has just read about utilitarianism and its logic makes sense to her. Thinking through this, she agrees with most of the principles but has some specific reservations:
Family: Eleanor takes care of herself and her family before others. She is unsure whether this is an issue where she is unwilling to live up to the principles of utilitarianism or if this is an exception where utilitarianism is simply wrong. She has the general intuition that her obligation to take care of her children is a true exception - she doesn’t see it as moral to weigh their welfare equally against strangers. In any case, she is confident she’ll continue caring for her family before others.
Basic rights: Eleanor is very unsure about basic rights. She feels intuitively that taking away some rights would be horrible, even if it truly were for the greater good. She can’t find a logical explanation for this, and she plans to look further into this and think about it more.
For now, while reading more on the subject, Eleanor will try to make decisions mostly in a way that she believes will do as much good as possible, while preferring her family and, though she hopes it will never be an issue, doing her best not to hurt others’ basic human rights.
When thinking about moral philosophy, there’s more to read and consider than anyone could in a lifetime. Don’t let that deter you from taking time to read and think on these issues: They’re both interesting and can sometimes be important in big decisions. In many cases, it can be useful to talk about these with other people, and not only think and read by yourself.
On the other hand, don’t feel bad about moving on with this guide even while you have a lot of open questions that still need to be answered. Just remember the questions that you’re unsure about, come back to them, and take notice when you make decisions that are affected by those questions.
Philosophy and utilitarianism
Utilitarianism.net - a great introduction to utilitarianism.
Justice: What’s The Right Thing To Do - A Harvard University lecture by Professor Michael Sandel on the trolley problem and utilitarianism.
Crucial Considerations and Wise Philanthropy - Nick Bostrom’s talk on how some considerations can have an outsized impact on your decisions and practical conclusions from this (a long talk available both in audio and text format).
Purchase Fuzzies and Utilons Separately - a blog post on differentiating between doing good to help others and doing good in order to feel good and stay motivated.
Expanding circle and animal rights
The Drowning Child and the Expanding Circle - Peter Singer’s paper that was discussed in this chapter.
Famine, Affluence and Morality - Singer’s 1971 essay on the obligation to donate to disaster relief and people in need.
Should animals, plants and robots have the same rights as you?: A Vox article on the basics of moral patienthood and the expanding circle.
Report: Animal consciousness and moral patienthood: A very thorough and detailed report on the evidence for and against consciousness in different animals.
Wild animal suffering: An introduction: An introduction to the cause area of wild animal suffering. How much animals suffer in the wild, why it matters, and how we can help them.