Examples of Mitigating Assumption Risk

One of the subskills mentioned in Eliezer's Security Mindset post is mitigating assumption risk: the risk of losing utility because some of your assumptions are wrong. There are two main ways to do this:

  1. Gain more information about whether your assumptions hold
  2. Make the assumption irrelevant (as in the password-hashing example)

Here are a bunch more examples:

  • Repeating back what someone said in your own words, to check understanding
  • Adding a margin of safety when estimating how much load a bridge can bear
  • Using statistical models that make fewer assumptions, or have fatter tails
  • Exposing your work to attack in low-risk situations, such as comedians testing new material in small clubs, or Netflix's Chaos Monkey
  • Emphasizing fast adaptation to unexpected circumstances over better forecasting
  • Putting spare capacity in steps in your process that aren't the bottleneck
  • Testing code frequently while refactoring, to check that functionality doesn't unintentionally change
  • Doing an analysis in different ways on different datasets, and only trusting them when the conclusions match

Comments


Keeping necessary materials and short-term inputs stored locally, to avoid exposure to supply chains or service outages.

Build and use applications that don't rely on always-on internet.

Download files, including music and videos, whenever possible, in addition to any cloud storage.

Locate resource sources and/or stored reserves close to you in physical space. Keep reserves of key materials, and of tradable resources.

Gather capital that cannot be confiscated or lost, especially human capital.

Avoid debt, especially debt that is not carefully bounded, is high interest or that could potentially balloon in size.

Stress test to trigger failure modes, so you know where those modes are, and also so you know what the consequences of failure are in context (and to remind yourself that they're not so bad).

Avoid or minimize prioritizing tasks by urgency or deadline when possible.

Avoid prioritizing tasks by value beyond a certain threshold, so long as you are confident that you have sufficient resources to complete all tasks above that threshold.

Don't entirely hold off on engaging in high-value places just because you would prefer to give a higher-quality response (e.g. the failure mode where you never reply to an email because it deserves a good response, so it gets no response at all).

(This seems fun and useful, these were my first brainstorms)

Digging deeper on any of these can also be interesting. For instance:

Keeping necessary materials and short-term inputs stored locally, to avoid exposure to supply chains or service outages.

"Locally" can mean a few things:

  • Physically proximate
  • Available without relying on a given form of transportation
  • Available without relying on a given legal method of transfer or control

Making your production process more general and less dependent on your specific supply chain can be a good substitute here too. If I only know one recipe for bread, and it involves using a new packet of yeast each time, I might stockpile yeast. But if I have more general knowledge, I can substitute into creating my own sourdough starter, or making sodabread, or obtaining yeast via a different method (e.g. friends, offering to buy from bakeries), or making a conventional-yeast starter once I notice a supply disruption. Of course, then I'm still assuming a water source and a heat source and flour and probably salt. But, many of those can be handled multiple ways too.

There's definitely a lot of meat to dig into. Your mention of legal transfer reminds me that having backups for when legal options fail, or when the law is actively turned against you (for any reason), is also important.

What would you do if all of your accounts are frozen and you can't use any credit cards or other electronic sources of money? This could happen due to something like identity theft, so it's not even assuming legal trouble, let alone legal trouble you deserve.

What would you do if you needed to be off the grid entirely?

And the central point Benquo points to here, I think, is that in order to have security mindset your models must be made of gears. If your system does things you don't understand, there's no way to fix them when they break, or find workarounds to broken parts. If I'm going to need bread, ordinary paranoia might be having extra supplies or places to buy. To be secure, I'll need to know how to find additional places to buy, and/or what makes bread making work, and so forth. The more specific or black box my plans are, the less chance I have to adapt to change, even non-hostile change.

The most commonsense example of making assumptions irrelevant I've heard of is from weapons safety: always act as if the gun is loaded.


Surprised to learn I'm not the only one who went with Mealsquares instead of Soylent, due to their use of whole foods. I also only eat Mealsquares a few times a week, and eat a variety of foods with a tendency towards "Mediterranean Diet" types of foods.

Can you provide additional details regarding eating Mealsquares instead of Soylent?

Soylent, being constructed almost entirely out of specified nutrients, is nutritionally adequate only to the extent to which the nutritional guidelines it follows are adequate. Mealsquares are mostly made of food.

Don't deceive yourself even if it seems like a really really good idea.

Don't falsify data, frame people for bad things they didn't do, or hide bad things your allies are doing even if it seems like a really really good idea.

Prepare ahead of time for disasters. Learn first aid. Know what to do in the event of nuclear war. Keep essential first aid and disaster preparedness supplies on hand.

Assume every new sex partner is fertile and has HIV, and decide your safer-sex risk tolerance based on that.

Build slack. Have fuck-you money. Build extra time into your schedule in case something goes wrong.

Charge your phone and your laptop before leaving the house. Always take more books (or whatever your preferred form of entertainment is) than you think you'll need.

When hanging out privately with a stranger, tell a friend when you expect to return and when they should start freaking out.

Always have an exit plan for your job, relationship, and intentional community.

Adding a margin of safety when estimating how much load a bridge can bear

Yes, but after a certain point, "total load" will stop being your important metric and you need to think about other things like:

  • Am I assuming a stable, evenly distributed load? (Or e.g. could an army marching across the bridge in step, or traffic only on one side, cause problems?)
  • Is "the bridge falls down" the only material way the bridge could fail to hold up the traffic on it? (E.g. maybe you want good guardrails.)
  • Are there other sources of stress other than load? (E.g. "ground is unstable" or "wind blows really fast".)
  • Is the bridge likely to stay built as-designed? (Bridges mostly don't have people trying to take them apart, but a reasonably common source of reduced automobile performance is catalytic converter theft, since it's a fairly easily accessible part containing precious metals. There are also nonagentic problems like erosion.)
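A minimal sketch of that point, with made-up numbers: a design can pass a generous total-load check while still being under-margin on one of the other failure modes above, so each mode needs its own check.

```python
# Toy design check. All figures are illustrative, not real engineering.
SAFETY_FACTOR = 2.0

checks = {
    # failure mode: (estimated demand, designed capacity)
    "static load (tonnes)":    (180.0, 400.0),
    "one-sided load (tonnes)": (90.0, 150.0),
    "wind gust (km/h)":        (120.0, 260.0),
}

# Apply the margin of safety per failure mode, not just to total load.
results = {mode: capacity >= demand * SAFETY_FACTOR
           for mode, (demand, capacity) in checks.items()}

for mode, ok in results.items():
    print(f"{mode}: {'ok' if ok else 'UNDER MARGIN'}")
```

Here the total static-load check passes comfortably, but the one-sided-load mode fails its margin, which a single aggregate safety factor would have hidden.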

I love posts of examples! This one in particular I'm interested in, as I'm writing a post about making life-plans that involve mitigating assumption risk. I've moved this to the frontpage (currently there isn't functionality for you to prevent posts from being moved to the frontpage, so do move it back if that's your preference).

Turning high-level concepts into a bunch of concrete examples is one of the best ways to make a deep insight practical, and it's also super helpful for other people to read such examples. These examples are solid, and help the community better understand a key concept currently being discussed. Also, as usual, if there's something this valuable while being short I'm much more likely to promote it to Featured. And (for all these reasons) I have.

"Statistical models with fewer assumptions" is a tricky one, because the conditions under which your inferences work are not identical to the conditions you assume when deriving your inferences.

I mostly have in mind a historical controversy in the mathematical study of evolution. Joseph Felsenstein introduced maximum likelihood methods for inferring phylogenetic trees. He assumed a probabilistic model for how DNA sequences change over time, and from that he derived maximum likelihood estimates of phylogenetic trees of species based on their DNA sequences.

Felsenstein's maximum likelihood method was an alternative to another method, the "maximum parsimony" method. The maximum parsimony tree is the tree that requires you to assume the fewest possible sequence changes when explaining the data.

Some people criticized Felsenstein's maximum likelihood method, since it assumed a statistical model, whereas the maximum parsimony method did not. Felsenstein's response was to exhibit a phylogenetic tree and model of sequence change where maximum parsimony failed. Specifically, it was a tree connecting four species. And when you randomly generate DNA sequences using this tree and the specified probability model for sequence change, maximum parsimony gives the wrong result. When you generate short sequences, it may give the right result by chance, but as you generate longer sequences, maximum parsimony will, with probability 1, converge on the wrong tree. In statistical terms, maximum parsimony is inconsistent: it fails in the infinite-data limit, at least when that is the data-generating process.
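The four-taxon failure can be sketched in a toy simulation. The two-state symmetric model and the branch-flip probabilities below are illustrative stand-ins for the actual models in the literature; the shape is the classic one, where two long branches sit on opposite sides of a short internal branch.

```python
import random

random.seed(0)

# True tree: ((A,B),(C,D)), with long branches leading to A and C.
# Each character is binary and flips along a branch with the given probability.
P_LONG, P_SHORT = 0.4, 0.05

def flip(state, p):
    """Evolve a binary character along one branch."""
    return state ^ 1 if random.random() < p else state

def simulate_site():
    u = 0                          # node above A and B (root state 0)
    v = flip(u, P_SHORT)           # short internal branch to node above C and D
    return (flip(u, P_LONG),       # A: long branch
            flip(u, P_SHORT),      # B: short branch
            flip(v, P_LONG),       # C: long branch
            flip(v, P_SHORT))      # D: short branch

# Parsimony-informative site patterns: two states, each appearing twice.
counts = {"AB|CD": 0, "AC|BD": 0, "AD|BC": 0}
for _ in range(5000):
    a, b, c, d = simulate_site()
    if a == b and c == d and a != c:
        counts["AB|CD"] += 1       # supports the true tree
    elif a == c and b == d and a != b:
        counts["AC|BD"] += 1       # supports the wrong tree (long branches paired)
    elif a == d and b == c and a != b:
        counts["AD|BC"] += 1

print(counts)  # maximum parsimony picks the split with the most informative sites
```

With these parameters the pattern pairing the two long branches (A with C) dominates, so as sequences get longer, parsimony becomes ever more confident in the wrong tree.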

What does this mean for the criticism that maximum likelihood makes assumptions? Well, it's true that maximum likelihood works when the data-generating process matches our assumptions, and may not work otherwise. But maximum parsimony also works for a limited set of data-generating processes. Can users of maximum parsimony, then, be accused of making the assumption that the data-generating process is one on which maximum parsimony is consistent?

The field of phylogenetic inference has since become very simulation-heavy. They assume a data-generating process, and test the output of maximum likelihood, maximum parsimony, and other methods. The concern, therefore, is not so much how many assumptions the statistical method makes, but over what range of data-generating processes it gives correct results.

This is an important distinction because, while we can assume that the maximum likelihood method works when its assumptions are true, it may also work when its assumptions are false. We have to explore with theory and simulations what is the set of data-generating processes on which it is effective, just like we do with "assumption-free" methods like maximum parsimony.

For more info, some of this story can be found in Felsenstein's book "Inferring Phylogenies", which also contains references to many of the original papers.

Other things equal, choose the reversible alternative.

Back up your stuff: my backup process is to scan or take pictures of items, put the files on an external hard drive, and back up that drive to a second external hard drive. Then I back up everything to the cloud through Backblaze. That way you have the physical items themselves, the items on hard drive 1 that you take with you, the items on hard drive 2 that you store somewhere else, AND everything in the cloud. It may seem excessive, but it's easy once you have everything set up. There are more risk mitigation tips in my post on my experiences prepping for a hurricane, for those interested.
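The layered local part of a scheme like this can be sketched in a few lines. The paths here are temp-dir stand-ins for real drives, the file is a dummy, and the cloud layer is left to whatever client you use (e.g. Backblaze's), so treat this as a shape, not a tool.

```python
import filecmp
import shutil
import tempfile
from pathlib import Path

# Stand-ins for the real locations: laptop storage plus two external drives.
src = Path(tempfile.mkdtemp())              # working files
drive1 = Path(tempfile.mkdtemp()) / "backup"  # drive you carry with you
drive2 = Path(tempfile.mkdtemp()) / "backup"  # drive stored somewhere else

(src / "doc.txt").write_text("scanned receipt 001")

shutil.copytree(src, drive1)     # working files -> external drive 1
shutil.copytree(drive1, drive2)  # drive 1 -> drive 2
# A cloud copy would be the fourth layer, on top of the physical originals.

# Verify the chain preserved the file end to end.
assert filecmp.cmp(src / "doc.txt", drive2 / "doc.txt", shallow=False)
```

The end-to-end check matters: a backup chain you never verify is itself an untested assumption.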