Comments

I think a nicer analogy is spectral gaps. Obviously, no reasonable finite model will be both correct and useful, outside of maybe particle physics; so you need to choose some cut-off for your model's complexity. The cheapest example is when you try to learn a linear model, e.g. PCA/SVD/LSA (they're all the same thing).

A good model is one that hits a nice spectral gap: adding a couple of extra epicycles gives only a very moderate gain in accuracy. If there are multiple nice spectral gaps, then you should keep in mind a hierarchy of successively more complex and accurate models. If there are no good spectral gaps, then there is no real preferred model (of course, model accuracy is only partially ordered in real life). When someone proposes a specific model, you need to ask both "Why not simpler? How much power does the model lose by simplification?" and "Why not more complex? Why is any enhancement of the model necessarily very complex?"

However, what constitutes a good spectral gap is mostly a matter of taste.
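To make the spectral-gap heuristic concrete, here's a minimal sketch in the PCA/SVD setting. The data, parameters, and the "largest ratio between successive singular values" cut-off rule are all illustrative choices on my part, not something from the comment; it just shows one simple way a spectral gap can pick the model complexity for you.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a rank-3 linear signal buried in noise.
n, d, true_rank = 200, 50, 3
signal = rng.normal(size=(n, true_rank)) @ rng.normal(size=(true_rank, d))
X = signal + 0.1 * rng.normal(size=(n, d))

# Singular values of the centered data: the spectrum that PCA/SVD/LSA share.
s = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)

# A "nice spectral gap" shows up as a large ratio between successive
# singular values; cut the model off just before the biggest drop.
ratios = s[:-1] / s[1:]
rank = int(np.argmax(ratios)) + 1

print(rank)  # the gap after the 3rd singular value suggests keeping rank 3
```

Past that gap, each extra component (epicycle) buys almost no accuracy, which is exactly the cut-off the comment describes; and when no ratio stands out, the spectrum is giving you no preferred model.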

here's the tl;dr money quote for you:

To the extent you want to be right — to have an accurate map of the territory — you by necessity have an equal want to not be wrong. When you learn something new, as much as you want to change your thinking to integrate the new information, you have an opposite want to find the new information already accounted for so that no update is necessary. You don’t want to have metaphorical comets shattering your epicycles all the time: you mostly want them to pass cleanly between the orbits because there is space in your model for them. But if you’re lost at sea and need to navigate by the stars, and positing the existence of epicycles is the only thing that allows you to find your way home, then suppose the epicycles exist with all your heart and worry about comets later when you’re out of existential danger.