## Summary

The **expected value** of something we are uncertain about is the average outcome, weighting each possibility according to its likelihood. It is, in some sense, what we should “expect on average” from an uncertain risk we’re about to take. Since almost all meaningful decisions are made with some uncertainty about their outcomes, this is an important concept for real-world decision making.

## Overview

When making decisions, we rarely know everything that is relevant to the outcomes of our choices. This is true even in small decisions we make every day, like whether to ride a bike or catch a bus in the morning. We might not know how much traffic there will be today, or whether it will rain on the way back - both of which might affect our decision. But it’s even more true (and more important) in large decisions, like __which career path to pursue__.

For example, imagine you’re considering two career paths. The first is becoming a computer programmer. Based on some small coding projects you’ve worked on, you’re fairly confident that you’d be a decent programmer: you’d be able to get a job, you’d earn a very respectable salary, and, even more importantly, you’d be able to contribute through your direct work to almost any cause you’re interested in. Still, you don’t think you’re an exceptionally good programmer who would revolutionize the field you work in. The second option you’re considering is becoming a singer/songwriter. On this path, there’s a pretty good chance you’ll end up a decade later with no money and no work experience. However, there’s some small chance you’ll become a world-famous star, achieving incredible fame and enormous amounts of money (allowing you to __donate a significant portion__ of it to charity, of course). Given the uncertainty inherent in this kind of decision, you don’t *know* which option will end up having the better outcome. How can you still make such a decision intelligently? Should you choose the path with the best possible outcome? Or a “safe bet” that performs reasonably well even if things go badly?

**Expected values** are a __tool in statistics__ for capturing what the outcome will be “on average” when we can’t predict it with certainty. To calculate one, we take the average across all possible outcomes, weighting each outcome by the probability that it will happen. For example, let’s assume that buying a lottery ticket costs $2. Buying one almost always reaps no reward, but has a __one in 13,983,816__ chance of winning you a million dollars. In this case, the expected value of buying a lottery ticket is (1/13,983,816) × $1,000,000 − $2, or roughly __minus one dollar and 93 cents__ - and therefore __not a great investment__.
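The lottery arithmetic can be reproduced in a few lines of Python, using the ticket price, jackpot, and odds from the example:

```python
# Expected value of a $2 lottery ticket with a 1-in-13,983,816
# chance of winning $1,000,000 (the numbers from the example above).
ticket_price = 2.0
jackpot = 1_000_000.0
p_win = 1 / 13_983_816

# EV = sum over outcomes of (probability * value), minus the ticket price.
expected_value = p_win * jackpot + (1 - p_win) * 0.0 - ticket_price
print(f"Expected value: ${expected_value:.2f}")  # about -$1.93
```

The winning outcome contributes only about 7 cents of value, so the $2 ticket price dominates.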

Most decisions we make are more complicated than whether or not to buy lottery tickets. It can be hard to identify or define the exact value of different outcomes (for example, how much more would you enjoy being a pop star than a programmer?), as well as the exact probabilities involved (how likely are you to succeed in show business, given your skills and personality?). Yet we still need to make decisions under uncertainty. That’s especially true when trying to help others through __donations__, and even more so through our careers. Estimating expected values can be an extremely useful tool for doing so - especially when comparing large impacts or low probabilities. Our brains just aren’t very good at dealing with large numbers and probabilities, which means that, using intuition alone, we can prefer options that are worse than the alternatives by orders of magnitude. We also tend to fall prey to various biases, like optimism bias (focusing overly on positive outcomes) or the neglect of low-probability events (pandemics are a particularly relevant example these days). Expected values, even when based on rough and imperfect real-world estimates, can provide a general guideline on what is promising (even if they can, and should, be __complemented with other modes of thinking__).
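As a toy illustration of comparing options this way, here is a sketch in Python; the probabilities and payoffs are entirely made up for the example, and each option is just a list of (probability, value) pairs:

```python
def expected_value(outcomes):
    """Probability-weighted average over (probability, value) pairs."""
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * v for p, v in outcomes)

# Hypothetical options (all numbers invented for illustration):
safe_bet  = [(0.9, 100), (0.1, 50)]       # reliable, modest value
long_shot = [(0.99, 0), (0.01, 20_000)]   # usually nothing, rarely huge

print(expected_value(safe_bet))   # roughly 95
print(expected_value(long_shot))  # roughly 200
```

Despite almost always paying nothing, the long shot here has the higher expected value - exactly the kind of comparison that intuition alone tends to get wrong.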

When we do know all the relevant factors, it can actually be shown that choosing the option that maximizes the expected value of what we care about is __the only rational thing to do__ (under certain assumptions). However, often what we can measure is only an instrumental proxy for what we actually care about. For example, we often want more money - but that money only matters to us because it can make us and others happier. In these cases it might make sense to be __risk averse__ - prioritizing certainty in an outcome even at the cost of a lower expected value.
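A minimal sketch of this point, using a square-root utility function as a stand-in for diminishing returns (the amounts and probabilities are illustrative, not from any real scenario):

```python
import math

# Two options: a sure $50,000, or a 50/50 gamble on $0 vs $110,000.
sure_thing = [(1.0, 50_000)]
gamble     = [(0.5, 0), (0.5, 110_000)]

def ev(outcomes, value=lambda x: x):
    """Expected value of (probability, amount) pairs under a value function."""
    return sum(p * value(x) for p, x in outcomes)

# The gamble has the higher expected *dollar* value...
print(ev(sure_thing), ev(gamble))  # 50000.0 vs 55000.0

# ...but under a concave (diminishing-returns) utility like sqrt,
# the sure thing has the higher *expected utility*:
print(ev(sure_thing, math.sqrt), ev(gamble, math.sqrt))
```

Because the utility curve flattens out, the rare large payoff counts for less than its dollar amount suggests, so a risk-averse chooser can rationally prefer the lower-expected-value sure thing.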

However, if we’re evaluating what we truly care about, or working on truly scalable projects with little in the way of __diminishing returns__, there is a strong case that expected value should be the primary thing we care about when faced with uncertainty.