Why preventing negative scenarios might lead to disapproval

In March 2020, early in the COVID-19 pandemic, I read a tweet by Kaila Colbin in which she warned:

“Here is the thing to understand about flattening the curve. It only works if we take necessary measures before they seem necessary. And if it works, people will think we over-reacted. We have to be willing to look like we over-reacted.”

The tweet really resonated with me and got me thinking about the general phenomenon she was describing: the success of a mitigation effort might lead to the perception that the effort itself was not required in the first place.

Since then I have been looking at data around post-lockdown skepticism, trying to understand how this dynamic works. I assumed it was some form of unintended consequence, but I wasn’t able to find information (studies, papers, …) about this specific effect. So I reached out via LinkedIn to ask whether other people had been looking into it already.

I got amazing answers, and while nobody could point to a description of the actual effect, the discussion gave me great impulses and made me realize that this is not only about the extreme case where people doubt in hindsight that the change was necessary at all, but a spectrum along which the perceived value of the achieved change diminishes to varying degrees.

I wanted to share some of my findings and thoughts here to raise awareness, compare notes and start a discussion. My goal is to better understand this specific dynamic so that I can anticipate, incorporate and potentially mitigate its effect when working on longer-term futures scenarios. Let me know if this was useful to you and if you have additional thoughts, insights and links. Thank you!

Defining the effect

As a frame of reference we assume that:

a) A negative scenario will happen if no positive action is taken to avoid it.

b) The causes for the negative scenario are identified and understood with a fair amount of certainty.

c) The causes are mitigated through positive action and the negative scenario is avoided, leading to a more positive outcome scenario.

However, in some cases we can observe that after the positive action is taken, the resulting scenario becomes disconnected from the original necessity for change, leading to a diminished perception of value for the new reality once it is achieved.

In other words: The success of a mitigation effort might lead to the perception that the positive action was taken in isolation and not to prevent a negative scenario (diminished relationship).

Let’s look at how this dynamic plays out, with examples based on flattening the COVID-19 curve (a small numeric sketch follows the list):

  • It is largely agreed that action A1 will lead to scenario S1, which is undesirable: “If we ignore the COVID-19 virus (A1), hundreds of thousands of people will likely die (S1).”
  • This can be prevented by taking action A2, instead leading to scenario S2, which is preferable to S1: “We can socially distance (A2) to flatten the curve and only thousands will die (S2).”
  • A2 is now seen as the cost of avoiding S1: “To avoid mass death we need to socially distance.”
  • Action A2 is taken, S1 is avoided and instead a version of S2 is achieved: “We have socially distanced and only thousands of people died, not hundreds of thousands.”
  • In public perception, however, A2 might now be seen as the price for achieving S2, ignoring A1 and S1 since neither became reality: “We socially distanced and thousands of people died.”
  • This might lead to a perceived diminished value of A2 and S2: “We did socially distance, but thousands of people still died.”
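
To make this shift concrete, here is a minimal numeric sketch in Python. The death tolls are invented purely for illustration; the point is that the perceived value of A2 depends on whether the averted scenario S1 remains part of the comparison:

```python
# Toy model of the dynamic above. The numbers are invented purely for
# illustration; only the relative comparison between the two framings matters.

deaths_S1 = 500_000  # predicted toll of the averted scenario (A1 -> S1)
deaths_S2 = 5_000    # realized toll after mitigation (A2 -> S2)

# While the averted scenario S1 is still part of the frame, the mitigation A2
# is judged against the counterfactual:
lives_saved = deaths_S1 - deaths_S2
print(f"A2 judged against S1: {lives_saved:,} lives saved")

# Once S1 drops out of public perception, the counterfactual term disappears
# and only the realized cost of S2 is attributed to A2:
print(f"A2 judged in isolation: {deaths_S2:,} people died despite A2")
```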

In the LinkedIn discussion, John McGarr wondered if the effect might be a form of recency bias, where people put greater importance on the recent chain of events (the positive scenario achieved through mitigation, S2) than on the original scenario (the predicted but averted negative scenario, S1), diminishing the latter’s perceived value.

Marko Müller mentioned the prevention paradox, which brought me to self-defeating prophecies.

Self-defeating prophecies and futures scenarios

Self-defeating prophecies are predictions that prevent the scenario they describe from happening simply by stating it (as opposed to a self-fulfilling prophecy). In the context of potential futures, these can be, for example, warnings meant to inspire the action or inaction needed to avoid a predicted negative scenario: if action is then taken and the predicted negative outcome is prevented, the prophecy has defeated itself.

Simplified example:

  • Scenario: “Millions will die as a direct result of global warming”
  • Potential result: “People working actively to prevent this scenario”

By definition self-defeating prophecies follow the anatomy of the described effect, so the question is: Can we find some form of diminished perceived value for examples of self-defeating prophecies?

Indeed we can. Looking through examples of and papers on self-defeating prophecies, I noticed a couple of things.

Desired scenarios that aim to preserve the status quo are valued less in hindsight, regardless of the adverse scenarios they prevented

The year 2000 problem is seen as a self-defeating prophecy because fear of massive technology failures encouraged the changes needed to avoid those failures. The desired scenario was “nothing happens” (S2) as an alternative to “command-and-control computers crash, leading to potentially catastrophic outages of essential systems” (S1). The cost was billions of dollars (A2).

This led to the perception: “We paid billions for nothing.” There was nothing tangible to point at and say “See? We achieved that,” and there was no reinforcing lesson from actually experiencing the predicted negative impact, since it was (almost) completely avoided.

In extreme cases the value of preventing S1 can diminish completely, disconnecting A2 and S2 from the initial premise. After averting the negative scenario through positive action, the results might then be perceived as a standalone cause and effect, without any dependency on the original negative scenario.

This is the effect that Kaila Colbin’s tweet from the beginning describes: the success of a mitigation effort might lead to the perception that the effort itself was not required in the first place, leading to uncertainty and doubt: “Did we even need to spend billions of dollars when nothing happened?”

Paul Orlando writes about this in his post The Self-Defeating Prophecy (and How it Works): “If crisis situations pass without incident, will people note that their behavior was hacked in a direction one group wanted and thus be less likely to trust future worst-case scenarios? This is a risk.”

He writes this in the context of deliberately designed worst-case scenarios being used as the basis for warnings to inspire action more effectively. But even if you don’t exaggerate the potential negative impact, if the best possible positive outcome is “nothing changes”, every warning of a negative scenario can be seen as unnecessary in hindsight.

In contrast, the results of protecting the ozone layer were perceived more favorably. Although nobody could directly see or feel the negative results for themselves on a human scale, environmental agencies did a good job in the ’80s and ’90s of continuously publishing and marketing the positive results of international regulations. Laura Schulte described this in the LinkedIn discussion as the need to “keep what could have happened current”. Bastian Dietz mentioned something similar: “Individuals documenting their perception to mitigate these errors”.

It also helped that the negative effects could be felt indirectly. In certain regions people were discouraged (though not forbidden) from going outside in summer, or at least urged to use strong sunblock to avoid skin cancer. So the result was also a return to the status quo (the ability to go outside), not just the retention of it. And it feels good to “fight to regain something lost”.

By creating context and framing the current efforts in terms of how the current reality tracks against the predicted negative scenario, the diminishing of value seems to be at least partly avoidable.

The more complex the relationship between cause (the reason for the negative scenario) and effect (the negative scenario becoming a potential reality), the less valuable the perceived positive result

This observation is related to the second assumption in the frame of reference: “The causes for the negative scenario are identified and understood with a fair amount of certainty.” If there is doubt around the causality, then the entire negative scenario can be doubted as well, leading to diminished trust in urgency (“Did we need to do it now?”) or necessity (“Did we need to do it at all?”).

But even if there is consensus within a group of experts, the causality might be so complex that it is not obvious to the group that needs to take the preventive action. Sometimes this disconnects A2 and S2 from the initial premise even before the preventive measures are taken.

At the beginning of the COVID-19 crisis (February to May 2020), experts were not sure how effective different measures were at stopping the spread of the virus. For example: How effective are masks in stopping the spread? How about cloth masks vs. surgical masks vs. FFP2 masks? Assumptions were made based on previous coronavirus strains, which in some cases turned out to be misleading or wrong and had to be corrected as more information about the new virus became available.

Reacting to change and updating scenario predictions accordingly is not a bad thing, but the suboptimal way this was communicated led to mistrust: “Are we really sure the causes and effects have been correctly identified?” And thus it left room to doubt the predicted negative scenario: “Are we really sure hundreds of thousands of people will die?”

Another current example is climate change, where 97% of experts agree that humans cause global warming and climate change. There is practically no disagreement here. As we cross the threshold into some of the negative scenarios, we are beginning to feel their effects. There is also data around the causes that would allow a prioritized and measured response to avoid them.

But explaining the effects and the data is hard. The planet is a very complex system, and even supposedly simple explanations need to draw on a lot of data. Explaining the causes, how they relate, and how they can be tracked in isolation against the overall effect is even more complicated. That doesn’t mean these models and scenarios are not true; it just means they might not be relatable enough for the group that needs to take action, which in this case means entire societies working together.

So the better a frame of reference is explained, and the more relatable it is to the group that needs to take action, the more willing they will be to engage in the preventive action, and the more positively they will perceive the positive result once it has been achieved. Creating a strong frame of reference will also make it easier to track progress and successes against the predicted negative scenario, as described above.

The longer the time between cause (the reason for the negative scenario) and effect (the negative scenario becoming a potential reality), the less valuable the perceived potential result

This is not so much an issue of the diminished value of change as of predatory delay, but I think the two are related. Usually these scenarios require massive amounts of investment (A2) to achieve an outcome (S2) that is not felt by the ones making the investment. Not only that, even the negative effects (S1) might not apply to the investors either. So by definition there is a perceived personal disconnect between A2 and S2 right from the start: “I need to invest and never get to experience the positive outcome, while not facing any negative consequences from inaction myself”.

Martin Wettig brought up hyperbolic discounting in the LinkedIn discussion, which I think fits very well here.
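
Hyperbolic discounting fits because it describes exactly how delay erodes perceived value. Here is a minimal sketch using the common one-parameter form V = A / (1 + kD); the discount rate k and the example values are my own assumptions for illustration:

```python
def hyperbolic_value(amount: float, delay_years: float, k: float = 0.5) -> float:
    """Perceived present value of `amount` received after `delay_years`,
    using the one-parameter hyperbolic form V = A / (1 + k * D)."""
    return amount / (1 + k * delay_years)

# The same benefit feels far smaller the further away it lies:
print(round(hyperbolic_value(100, 1), 1))   # 66.7 -> one year out
print(round(hyperbolic_value(100, 30), 1))  # 6.2  -> a generation out
```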

Where to go from here?

To me the effect still seems related to unintended consequences, in the sense that the outcomes of a purposeful action might be perceived in unintended or unforeseen ways. Many of the described types seem related, and the learnings are somewhat transferable.

In terms of improving my longer-term futures scenarios, I will make some changes to the way I describe normative scenarios. While not all of them are about describing negative scenarios as warnings, by definition they describe a desired state or vision, which can be contrasted with other, undesired states.

  • I want to make sure that my frame of reference is not only relatable to the scenario audience, but also to the groups that need to take action to achieve the scenario. In case these diverge too much, I will make recommendations on how to adapt.
  • Assumptions made in the frame of reference are not only stated; causes and effects are also isolated as much as possible to increase relatability for the target audience.
  • Ideally the described scenario already offers or at least recommends some way of tracking the progress of achieving the desired future or preventing the undesirable one.
  • A future nice-to-have is the ability to give the scenario a “risk factor” for how likely a diminished relationship is to happen. In other words: “How likely is it that achieving the desired scenario will not be valued as intended?” This matters especially if the scenario is intended as a warning / self-defeating prophecy (a rough sketch of such a score follows this list).
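
For that last item, here is a hypothetical sketch of what such a risk factor could look like. The three inputs mirror the observations above; the 0–1 scale and the equal weighting are my own assumptions, not a validated model:

```python
def diminished_value_risk(status_quo_outcome: float,
                          causal_complexity: float,
                          time_delay: float) -> float:
    """Rough 0..1 estimate of how likely an achieved scenario is to be
    undervalued. Each input is a 0..1 judgment call:
      status_quo_outcome: how much the desired scenario is "nothing changes"
      causal_complexity:  how hard the cause-effect chain is to relate to
      time_delay:         how far the effect lags behind the cause
    The equal weights are an arbitrary starting assumption."""
    return (status_quo_outcome + causal_complexity + time_delay) / 3

# Example: a Y2K-style scenario -- pure status-quo outcome, moderately
# complex causality, very short delay between cause and effect:
print(round(diminished_value_risk(1.0, 0.5, 0.1), 2))  # 0.53
```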

What are your learnings? Let me know if this was useful to you and if you have additional thoughts, insights and links.

Thank you and take care!
