The counterfactual impact of some action or decision is the additional benefit (or harm) caused by that action, compared to what would have happened if you hadn’t done it. For example, if you help out a person in need but replace someone else who would have done an equally good job, then you have not made a significant counterfactual impact in that person’s life.
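At its core, this definition is just a difference between two outcomes. As a minimal sketch in Python (the function name and the numeric outcomes are illustrative, not from the original):

```python
def counterfactual_impact(outcome_with_action, outcome_without_action):
    """The extra benefit (or harm) an action caused, compared to
    what would have happened without it."""
    return outcome_with_action - outcome_without_action

# If someone else would have done an equally good job in your place,
# the outcome is the same either way, so your counterfactual impact is zero:
counterfactual_impact(outcome_with_action=1.0, outcome_without_action=1.0)  # 0.0
```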
When deciding what to do, it's standard to think about the outcomes of our actions. For example, if I volunteer in a soup kitchen that serves dozens of people, it's reasonable in some sense to say that I've helped feed dozens of people. When estimating the counterfactual impact of an action, though, we care about both the outcomes of the action and the outcomes of not performing it. If I applied to volunteer at my soup kitchen alongside twenty other applicants and was arbitrarily chosen for the position, then it seems likely that even if I hadn't volunteered, the same number of people would have been fed (assuming I don't have any exceptional soup-distribution abilities relative to the other applicants). In this case, my counterfactual impact - the difference between the outcomes of this action and the outcomes of what would have happened otherwise - may be nothing at all.

If our goal is to make the world a better place, understanding counterfactual impact can have profound implications. One important counterfactual consideration is replaceability. When we look only at the direct outcomes of our actions, we can overestimate the difference we're making, because we ignore others who might replace us if we didn't act. Gregory Lewis, a medical doctor, was interested in how many lives a doctor saves over their career. Counting the direct benefits doctors provide to their patients, he estimated that the average doctor saves about 90 lives over their career, which is absolutely incredible. However, comparing countries with different numbers of doctors, he found that adding more doctors doesn't improve health at nearly the rate that figure suggests: the marginal doctor saves closer to 25 lives (which is still amazing, even if it's less amazing than we originally thought).
This is because if a hospital has one fewer doctor, its patients aren't left on the curb - instead, the remaining doctors prioritize the most important cases and take over most of the missing doctor's critical workload. If you're an individual considering becoming a doctor and you happen to live in a country where the number of doctors trained each year is bottlenecked by the capacity of medical schools or residencies, then becoming a doctor might make no counterfactual difference to the number of doctors in your country. The counterfactual impact of your medical career would only be the extent to which you'd be better than the student or resident you'd replace.

Another important concept related to counterfactual considerations is opportunity cost. Whenever you spend a resource (such as money or time), your opportunity cost is the value of what you could have spent that resource on instead. Imagine, for example, that you found a charity that raises a million dollars and uses it to provide medication that saves 10 lives. At face value, this seems obviously good. But imagine you then find out that, thanks to your excellent fundraising pitch, your donors gave you this money instead of giving it to an extremely effective charity that could have saved hundreds of lives with the same amount. In this hypothetical case, what looks like doing good directly could actually end up doing more harm than good once we consider the opportunity cost - the counterfactual use of the resources you spend.
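To make the opportunity-cost arithmetic in the charity example concrete, here's a small sketch. The 300-life figure is a purely illustrative stand-in for "hundreds of lives"; all the variable names are made up for this example:

```python
money_raised = 1_000_000   # dollars your charity raises
lives_saved_directly = 10  # lives your charity's medication saves

# Illustrative stand-in for "hundreds of lives": what the extremely
# effective charity could have saved with the same donations.
lives_saved_by_alternative = 300

# Looking only at direct outcomes, the picture seems positive:
direct_impact = lives_saved_directly

# Counting the opportunity cost - the counterfactual use of the money -
# the net impact of diverting those donations is strongly negative:
net_impact = lives_saved_directly - lives_saved_by_alternative
```

Under these (hypothetical) numbers, raising the money did net harm: 290 fewer lives saved than if the donors had given elsewhere.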
Ignoring opportunity costs: Superman just saved a kid’s life.
Including opportunity costs: Superman just cost the rail system tens of millions of dollars.
Thanks, Superman.

Counterfactual considerations don't always reduce our impact - sometimes they can significantly increase it. One classic example is donation matching. If some organization or person is willing to donate a dollar for every dollar we donate to a cause, then by giving some amount of money (and helping to some extent directly), our actions counterfactually provide twice the benefit. Of course, to know whether we're truly doubling our counterfactual impact, we need to know what the matcher would have done with the money if we hadn't donated, whether others would likely have taken up the matching offer instead of us, and other similar questions.

As these simple examples show, estimating counterfactual impact is challenging. Just measuring (or estimating) the actual outcomes of our actions can be hard, but evaluating outcomes in a hypothetical world that will never occur is even harder. Where extensive experimentation is possible, people often use randomized controlled trials to estimate counterfactual impact. This methodology has been used in medicine for centuries, and eventually spread to psychology, agriculture, and even international development (for which Esther Duflo, Abhijit Banerjee and Michael Kremer won the Nobel Prize in Economics). When we can't experiment to find the answer empirically, many counterfactual considerations have to be reasoned about carefully, and they remain a source of much uncertainty in impact estimates (for example, they're a big issue when trying to estimate carbon offsets). In our own lives, and especially for speculative questions like the counterfactual impact of future career paths, we usually won't be able to reliably estimate the full counterfactual impact of our decisions.
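The donation-matching case can be sketched with illustrative numbers. The `counterfactual_match_use` parameter is a made-up knob for the question the text raises: what fraction of the matcher's money would have gone to equally good uses anyway?

```python
def matched_donation_benefit(your_donation, match_rate, counterfactual_match_use=0.0):
    """Benefit delivered by your donation under a matching scheme.

    counterfactual_match_use: fraction of the matched money that would have
    gone to equally good uses even without you (if the matcher would have
    donated it regardless, the match adds nothing counterfactually).
    """
    matched = your_donation * match_rate
    return your_donation + matched * (1 - counterfactual_match_use)

# Naive dollar-for-dollar matching doubles your counterfactual impact:
matched_donation_benefit(100, match_rate=1.0)  # 200.0

# But if the matcher would have donated the money anyway, only your
# own dollars count:
matched_donation_benefit(100, match_rate=1.0, counterfactual_match_use=1.0)  # 100.0
```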
However, taking into account counterfactual considerations where these are clear and significant can still help us make better career decisions - or at least avoid some common pitfalls.
Marginal impact - coming soon!
Expected value - coming soon!