Method Efficacy

Tackling the problem

So we’ve defined a problem and assessed its overall significance in the world. Now, we need to assess the efficacy of our method or means of solving it. In other words, what are we going to do about this problem? And how well will this method work?

By “method,” we mean the means through which impact is made.

No matter how significant a problem may be, an ineffective method will do little to solve it. And if you’re dedicating so much of your time, energy, talents, and attention to improving the world, you want it to count.

When you carefully assess a method’s efficacy, you can find ways to best utilize your career to make a much bigger impact. To do so, we’ll look at two major considerations:

  1. How efficient is the method at solving the problem?
  2. How likely is the method to work?

Then, we’ll look at a few extra considerations that could significantly affect our assessment—unintended negative consequences and uncertain estimates.

Taking the wrong approach

It’s no surprise that some methods are better than others, but it can be surprising how big these differences are. In many cases, one organization could be 10-100 times more effective than another organization tackling the same problem. In other cases, a misguided approach could lead to an unhelpful or even harmful outcome.

First established in the 1970s, the U.S. government program Scared Straight took a new approach to preventing crime: bring kids who had committed crimes into prisons, expose them to the harsh realities of confinement, and scare them into better behavior. The awareness program received billions of dollars in funding and was implemented throughout the U.S. But did Scared Straight effectively prevent crime, or have any significant impact on behavior?

It turns out this solution to juvenile crime didn’t achieve its intended impact. Numerous studies revealed that the program actually made kids more likely to commit crimes. Beyond its ineffectiveness and unintended consequences, “Scared Straight” was also very costly. A review of all studies of the program estimated that every $1 spent on “Scared Straight” caused roughly $200 in costs from future crimes. Further, the billions of dollars spent on the program could have been used on more rigorously studied crime prevention efforts, or even put toward addressing the problem’s root causes.

How efficient is the method?

Extreme cases like “Scared Straight” are not the norm. But it is fairly common for charities and nonprofits to spend their resources inefficiently. In these cases, you could make a bigger impact at organizations that use fewer resources to solve the same problem (or use the same resources to achieve much more significant progress).

Let’s say, for example, you want to help alleviate blindness.

You could donate money to an organization that trains seeing eye dogs. It’s a credible nonprofit with inspiring stories of impact, and it spends most of its donations directly on helping people, with little going to overhead. Alternatively, you could donate to an organization that tackles blindness by funding cataract surgeries in low-income countries.

Both approaches could help with the problem, but one method is much more efficient and effective than the other. Yes, training a seeing eye dog could help someone overcome some of the challenges of blindness, but it can cost upwards of $40,000. Providing cataract surgery could potentially save a person from losing their vision altogether and often costs around $1,000. By funding cataract surgeries instead of seeing eye dogs, you could spend fewer resources and solve more of the problem.
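
To make that comparison concrete, here is a minimal back-of-the-envelope sketch using the article’s rough figures. The exact costs vary by program, so treat the numbers as illustrative rather than precise:

```python
# Back-of-the-envelope cost-effectiveness comparison.
# The costs are the article's ballpark figures, not exact program numbers.
budget = 40_000  # a donation roughly equal to the cost of training one seeing eye dog (USD)

cost_per_guide_dog = 40_000        # helps one person manage blindness
cost_per_cataract_surgery = 1_000  # can prevent or restore one person's vision

people_helped_by_dogs = budget / cost_per_guide_dog              # 1 person
people_helped_by_surgeries = budget / cost_per_cataract_surgery  # 40 people

print(f"Guide dogs:       {people_helped_by_dogs:.0f} person helped")
print(f"Cataract surgery: {people_helped_by_surgeries:.0f} people helped")
```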

In this scenario, it’s clear that one method of solving the problem is more cost-effective than the other. In other cases, efficiency might not come down to money. It could also be influenced by the number of employees needed or the amount of time spent solving a problem. When an organization or company lacks strategic coordination or structure, it might use a lot of people to make little impact. Not only does this reduce the organization’s efficiency, it also reduces its employees’ ability to influence the problem (which we’ll discuss a bit more in the next chapter).

Assessing efficacy

An effective method is both efficient and scalable; it uses fewer resources per unit of impact. To get a better sense of how effective a solution is, we can ask a few questions to help clarify our uncertainties:

  • Are there studies, impact reports, or research articles that show this intervention works? What exactly do they show, and do they spell out explicit estimates of how well it works? What are the differences and similarities between the study conditions and the method in practice?
  • How many people are working on this method and how much impact are they able to generate? How does this ratio scale or change when an organization or company grows?
  • Are there other solutions to the same problem that use fewer resources (time, money, talent, etc.)? Do these alternatives provide a better or worse outcome? 

After establishing some idea of how well a method could solve a problem (i.e., how efficient and impactful it is), we can move on to our second major consideration.

How likely is it to work?

Even if a method is efficient, it might be unclear how it will actually work or lead to impact. This may sound a little abstract, so let’s take another hypothetical example. Imagine you want to tackle climate change. How might you go about it?

One method is to plant a lot of trees, which could help reduce the amount of CO2 in the atmosphere. An alternative approach is to advocate for changing U.S. emissions policies, which could significantly reduce the amount of greenhouse gases in the atmosphere.

First, let’s look at each method’s efficiency. Planting enough trees to offset 10% of carbon emissions could cost nearly $400 billion, and it may not be the most impactful means of alleviating climate change. Advocating for policy change might only take a few key people to make a difference, and it could have a bigger impact than planting trees. It’s safe to say that (at least in our simplified example) changing policy is the more efficient method. Before making a final assessment, though, we need to look at another factor: how likely is it that these methods would work?

Planting trees can get complicated in practice, but let’s say that planting a lot of trees would directly solve a significant portion of the problem. We know the process through which trees can lower CO2 levels, and some studies suggest that they could be an extremely effective method of tackling climate change.

Changing U.S. policy on emissions gets a bit more complicated. It could potentially solve a large part of the problem, but how likely is it that robust policy change will happen? We could look at previous campaigns and estimate how many people and resources we’d need, but we couldn’t be sure our efforts would work or lead to the progress we hope for.

Ultimately, the success of policy change depends on a lot of external factors (political leadership, budgets, public opinion, etc.). We could say this method is less likely to work than planting trees, but if it does work, it would have a vastly bigger impact for the time and resources spent. As a result, it is probably the more promising method of the two.
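
One informal way to weigh a lower chance of success against a bigger potential payoff is a rough expected-value estimate. The sketch below uses entirely made-up probabilities and impact figures (only the roughly $400 billion cost comes from the example above); the point is the structure of the comparison, not the numbers:

```python
# Rough expected-value sketch for the simplified trees-vs-policy example.
# All probabilities and impact figures are hypothetical placeholders.

def expected_impact_per_dollar(prob_success, impact_if_success, cost):
    """Expected impact per dollar spent = P(success) * impact / cost."""
    return prob_success * impact_if_success / cost

# Hypothetical: tree planting very likely "works," but costs a lot per ton of CO2 avoided.
trees = expected_impact_per_dollar(prob_success=0.9,
                                   impact_if_success=5_000_000_000,  # tons of CO2 (made up)
                                   cost=400_000_000_000)             # ~$400B from the example

# Hypothetical: policy advocacy rarely succeeds, but succeeds big and is comparatively cheap.
policy = expected_impact_per_dollar(prob_success=0.05,
                                    impact_if_success=50_000_000_000,  # tons of CO2 (made up)
                                    cost=100_000_000)                  # advocacy budget (made up)

print(f"Trees:  {trees:.4f} tons of CO2 avoided per dollar (expected)")
print(f"Policy: {policy:.2f} tons of CO2 avoided per dollar (expected)")
```

Under these illustrative assumptions, the riskier policy route still comes out ahead in expectation, which is the intuition behind calling it the more promising method.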

Assessing likelihood of success

We don’t always have a convenient answer like “this works 1 out of 5 times,” but in many cases there are questions you can ask to get a better general sense of a method’s likelihood of success:

  • Is this solution’s success dependent on some other event or change happening, like a cultural shift, a policy change, or a new technology?
  • Do a lot of external factors need to come together in order for this approach to work?
  • Are there reasons to think this method might work based on science? Or on specific evidence? Or on our own logic in a new and unexplored area?

After some initial investigating, we can zoom in to consider some of the external factors that might make a method more or less likely to succeed:

  • What would be needed for this intervention to lead to an impactful outcome? Does it require many assumptions or a lot of luck along the way?
  • Are there examples of similar projects or approaches that have/haven’t worked before? How are they similar or different?
  • Have people been trying and failing at this approach for a long time? When possible, it can be very useful to talk to people who are doing or have done similar things.

Once we gain some sense of how likely a method is to work, we can start to make some assessments about its overall efficacy—that is, the combination of its efficiency and its likelihood of success.

Ideally, a method would score high on both of these factors. In practice, they often trade off against each other. A method might be much less likely to work, but incredibly cost-effective if it did. Another approach may be more costly, but would almost certainly succeed. What’s important is to look out for cases where either factor is extremely low, and thus the method is unlikely to make much of an impact at all.

Other considerations

We can’t quite conclude our efficacy assessment until we look at some extra considerations. Usually, these considerations don’t affect our whole assessment, but in some cases, they could significantly impact a method’s efficacy.

Unintended consequences

First, there are scenarios in which a method could have negative consequences that outweigh its benefits. As an example, we can think back to “Scared Straight.” Not only did this program fail to achieve its intended goal (preventing crime), it actually ended up creating a bigger problem (by sparking an increase in crime). This is an extreme case, but if we’re not careful, a social intervention can lead to unintentional harm.

It could be that providing aid to a region as an outsider creates a dependency problem, which can lead to unhealthy and inequitable power dynamics. It could also be that an organization tries to tackle a problem without sufficient understanding or collaboration, which can disempower the people it intends to help. 

A thorough assessment of a method’s efficacy will probably help you spot when it could cause unintended consequences. Still, it can help to ask yourself:

  • Has a similar method had unintended consequences in the past? What is this organization or company doing differently to prevent them?
  • What is the dynamic between an organization and its beneficiaries? Does this seem fair and equitable?

There are some areas where it’s extremely hard to avoid unintentional harm, as is the case with many large-scale policy changes. Still, it is important to think through such consequences and ensure they don’t outweigh the overall impact.

In any case, we should actively look for considerations we might have missed, possible downsides, and potential harms that would change whether or not a method is a good idea.

Uncertain estimates

Another consideration to take into account is your confidence level. There are often scenarios in which we are pretty uncertain about our own estimates. Maybe we are working on problems that haven’t happened yet. Or maybe we are trying a completely new approach. In these cases, estimating the chances of success is qualitatively different. It’s less an evaluation than a guesstimate (which also happens to be the name of a great online tool for doing exactly these types of estimates).

Let’s say, for example, you’re going to work on a new technology that prevents harm from solar flares, cosmic events that could have catastrophic effects on our power infrastructure. Tackling this problem will require a lot of money and talented minds, but it will also require a bit of experimentation to find what technology could work. 

Beyond our uncertain estimates about the method’s efficiency and likelihood of success, we also aren’t sure about its potential impact. What are the chances of a dangerous, massive solar flare happening? How will we know if our efforts worked?

Because this method entails so many guesstimates, we may be less certain in assessing its efficacy.

Having low certainty in our estimates has its costs, but it can be worth it for extremely important problems or impactful potential solutions (like solar flares or other, more likely global catastrophic risks).

Other times, uncertainty may not be worth it.

It can be very difficult, for instance, to measure the outcomes of some educational interventions (like reducing or eliminating school fees). Some health-based interventions (like providing deworming treatment in highly affected regions) are both more cost-effective and have outcomes that are easier to estimate.

While there’s some interesting debate on the matter, rigorous studies have found that deworming leads to substantial increases in school attendance, improved test scores, and higher future earnings and productivity. In this case, it might not be worth wrestling with the uncertain estimates involved in reducing school fees when we can be more confident in our estimates for a different method.

When we are uncertain about our estimates and assessment, the best we can do is think through the reasons why a method may be worth (or not worth) trying anyway. If we have enough information to estimate the outcome’s expected value, we may be more confident in pursuing a more speculative approach.
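
When single-point estimates feel too shaky, one practical option is to put a range on each uncertain input and sample from those ranges, which is essentially what a tool like Guesstimate does. Here is a minimal sketch of that approach; every range below is a made-up placeholder, not a real estimate for any particular intervention:

```python
# Minimal Monte Carlo sketch for an uncertain expected-value estimate.
# All ranges are hypothetical placeholders; the point is the approach:
# put ranges on inputs you're unsure about and look at the spread of outcomes.
import random

def sample_expected_value():
    prob_success = random.uniform(0.01, 0.20)      # chance the method works at all
    impact_if_success = random.uniform(1e6, 1e8)   # impact units if it does work
    cost = random.uniform(5e6, 5e7)                # resources required
    return prob_success * impact_if_success / cost  # expected impact per dollar

samples = sorted(sample_expected_value() for _ in range(10_000))

median = samples[len(samples) // 2]
p5, p95 = samples[int(0.05 * len(samples))], samples[int(0.95 * len(samples))]
print(f"Median expected impact per dollar: {median:.2f}")
print(f"90% interval: {p5:.2f} to {p95:.2f}")
```

If the resulting interval is wide but even the low end looks worthwhile, that can support pursuing a more speculative approach; if the low end is negligible, the uncertainty itself becomes a reason for caution.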

Summing it up

When it comes to a method’s efficacy, we are looking for any information that indicates: 

  1. How well and cost-effectively it could solve the problem.
  2. How likely it is to work.
  3. How other considerations might affect this assessment.

Investing time and thought into these considerations could sharpen your intuition about a particular solution’s path to impact, even if there are some unknowns along the way. Next up in our impact assessment, we’ll look at how much leverage specific roles and jobs have to enable progress on a problem.