The "best" mathematically-informed topics?

Recently, I asked LessWrong about the important math of rationality. I found the responses extremely helpful, but on reflection, I think there's a better approach.

I come from a new-age-y background. As such, I hear a lot about “quantum physics.”


Quantum Mechanics

Accordingly, I have developed a heuristic that I have found broadly useful: If a field involves math, and you cannot do the math, you are not qualified to comment on that field. If you can't solve the Schrödinger equation, I discount whatever you may say about what quantum physics reveals about reality.

Instead of asking which fields of math are "necessary" (or useful) to "rationality," I think it's more productive to ask, "what key questions or ideas, involving math, would I like to understand?" Instead of going out of my way to learn the math that I predict will be useful, I'll just embark on trying to understand the problems that I'm learning the math for, and work backwards to figure out what math I need for any particular problem. This has the advantage of never causing me to waste time on extraneous topics: I'll come to understand the concepts I need most frequently best, because I'll encounter them most frequently (for instance, I expect I'll quickly realize that I need a solid understanding of calculus, and so study calculus, but there may be parts of math that don't crop up much, so I'll effectively skip those).

While I usually appreciate the aesthetic beauty of abstract math, I think this sort of approach will also help keep me focused and motivated. Note that at this point, I'm trying to fill in the gaps in my understanding and attain "mathematical literacy" rather than a complete and comprehensive mathematical understanding (a worthy goal that I would like to pursue, but one of lesser priority to me).

I think even a cursory familiarity with these subjects is likely to be very useful: when someone mentions, say, an economic concept, I suspect that even just vaguely remembering having solved a basic version of the problem will give me significant insight into what the person is talking about, instead of a hand-wavy, non-mathematical conception.

Eliezer said in The Simple Math of Everything:

It seems to me that there's a substantial advantage in knowing the drop-dead basic fundamental embarrassingly simple mathematics in as many different subjects as you can manage.  Not, necessarily, the high-falutin' complicated damn math that appears in the latest journal articles.  Not unless you plan to become a professional in the field.  But for people who can read calculus, and sometimes just plain algebra, the drop-dead basic mathematics of a field may not take that long to learn.  And it's likely to change your outlook on life more than the math-free popularizations or the highly technical math.

(Does anyone with more experience than me foresee problems with this approach? Has this been tried before? How did it work?)

So, I’m asking you: what are some mathematically-founded concepts that are worth learning? Feel free to suggest things for their practical utility or their philosophical insight. Keep in mind that there is a relevant cost-benefit analysis to consider: some concepts are really cool to understand, but require many levels of math to get to. (I think after people have responded here, I’ll put out another post for people to vote on a good order to study these things, starting with those topics that have the minimal required mathematical foundation and working up to the complex higher-level topics that require calculus, linear algebra, matrices, and analysis.)

These are some things that interest me:

-       The math of natural selection and evolution

-       The Schrödinger equation

-       The math governing the dynamics of political elections

-       Basic optimization problems of economics? Other things from economics? (I don’t know much about these. Are they interesting? Useful?)

-       The basic math of neural networks (or “the differential equations for gradient descent in a non-recurrent multilayer network with sigmoid units”) (Eliezer says it’s simpler than it sounds, but he was also a literal child prodigy, so I don’t know how much that counts for.)

-       Basic statistics

-       Whatever the foundations of Bayesianism are

-       Information theory?

-       Decision theory

-       Game theory (does this even involve math?)

-       Probability theory

-       Things from physics? (While I like physics, I don’t think learning more of it would significantly improve my understanding of the macro-level processes that would impact my decisions. It's not as interesting to me as some of the other things on this list, right now. Tell me if I'm wrong, or which particular sub-fields of physics are most worthwhile.)

-       Some common computer science algorithms (What are these?)

-       The math that makes reddit work?

-       Is there a math of sociology?

-       Chaos theory?

-       Musical math

-       “Sacred geometry” (an old interest of mine)

-       Whatever math is used in meta analyses

-       Epidemiology

I’m posting most of these below. Please upvote and downvote to tell me how interesting or useful you think a given topic is. Please don’t vote on how difficult they are; that’s a different metric that I want to capture separately. Please do add your own suggestions and any comments on each of the topics.

Note: looking around, I found this. If you’re interested in this post, go there. I’ll be starting with it.

Edit: Looking at the page, I fear that putting a sort of "vote" in the comments might subtly dissuade people from commenting and responding in the usual way. Please don't be dissuaded. I want your ideas and comments and especially your own suggestions. Also, I have a karma-sink post under Artaxerxes's comment (here). If you want to vote, but not add to my karma, you can balance the cosmic scales there.

Edit2: If you know of the specific major equations, problems, theorems, or algorithms that relate to a given subject, please list them. For instance, I just added Price's Equation as a comment to the listed "math of natural selection and evolution" and the Median Voter Theorem has been listed under "the math of politics."

Comments


If a field involves math, and you cannot do the math, you are not qualified to comment on that field.

+1 to this post.


The basic math of neural networks

Learn about first and second derivatives and finding a maximum of a function. Then think about how you might find a maximum if you can only make little hops at a time.
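The "little hops" idea is gradient ascent. A minimal sketch in Python (the function, step size, and iteration count here are illustrative choices, not anything prescribed above):

```python
def grad_ascent(df, x, step=0.1, iters=100):
    """Climb toward a maximum by repeatedly hopping uphill.

    df -- the first derivative of the function to maximize
    """
    for _ in range(iters):
        x = x + step * df(x)  # hop in the uphill direction
    return x

# Example: f(x) = -(x - 3)^2 has its maximum at x = 3.
df = lambda x: -2 * (x - 3)
x_max = grad_ascent(df, x=0.0)  # converges to roughly 3.0
```

Where a closed-form derivative would let you jump straight to the maximum, the little hops only need local slope information, which is why the same idea scales up to neural networks.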

Learn a little linear algebra (what a matrix inverse, determinant, etc. is). Understand the relationship between solving a system of linear equations and matrix inverse. Then think about what you might want to do if you have more equations than unknowns (can't invert exactly but can find something that's "as close to an inverse as possible" in some sense). A huge chunk of stuff that falls under the heading of "statistics/machine learning/neural networks/etc" is basically variations of that idea.
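To make the "more equations than unknowns" idea concrete, here is a hedged sketch: fitting a line a*x + b to three points (so three equations, two unknowns) by solving the normal equations in closed form. The data values are invented for illustration.

```python
# Three equations, two unknowns (a, b): a*x_i + b ≈ y_i.
# No exact solution in general, so take the least-squares one,
# obtained from the normal equations (A^T A) [a, b] = A^T y,
# written out in closed form for this two-unknown case.
xs = [0.0, 1.0, 2.0]
ys = [0.1, 0.9, 2.1]  # roughly y = x, with noise

n = len(xs)
sx = sum(xs)
sy = sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))

a = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
b = (sy - a * sx) / n                          # intercept
```

The "as close to an inverse as possible" object here is the pseudo-inverse hiding inside the normal equations; ordinary least-squares regression is exactly this computation.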

Some common computer science algorithms

Read Structure and Interpretation of Computer Programs: it has one of the highest concept-per-page densities of any computer science book.

Important algorithmic ideas are, in my opinion: hashing, dynamic programming/memoization, divide and conquer by recursion, splitting up tasks to be done in parallel, and locality (things you want at a particular point are often close in space and time).
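Of these, memoization is perhaps the quickest to demonstrate. A small sketch, using the standard Fibonacci example (the call counter is just there to make the cost of the naive version visible):

```python
def fib_naive(n, counter):
    """Exponential-time Fibonacci; counter[0] tallies calls."""
    counter[0] += 1
    if n < 2:
        return n
    return fib_naive(n - 1, counter) + fib_naive(n - 2, counter)

def fib_memo(n, cache={}):
    """Same recurrence, but each subproblem is solved only once."""
    if n < 2:
        return n
    if n not in cache:
        cache[n] = fib_memo(n - 1) + fib_memo(n - 2)
    return cache[n]

calls = [0]
naive_result = fib_naive(20, calls)   # tens of thousands of calls
memo_result = fib_memo(20)            # ~20 subproblems, computed once each
```

The naive version recomputes the same subproblems over and over; the memoized version turns the exponential call tree into a linear one. Dynamic programming is this idea applied systematically.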

Locality is sort of like "a smoothness assumption on access." The reason your laptop is fast even though your hard disk is slow is that locality generally holds.


"I will always link to my ingroup", says Scott. So it is with me: I always recommend learning about association vs. causation. If you are into learning by doing, try to find some media articles that make claims of the form "scientists report that to achieve [Y], do [X]," then look up the original study and think about whether the media claim actually follows (it generally does not). This will also give you practice reading empirical papers, which is a good skill to have. Stuff the authors do in such papers isn't magic, after all: the set of statistical ideas that come up over and over again in them is fairly small.
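A simulation makes the association-vs-causation point vivid. In this sketch (the variables and noise levels are invented for illustration), a confounder Z drives both X and Y; X has no effect on Y whatsoever, yet the two come out strongly correlated:

```python
import random

random.seed(0)

# Z causes both X and Y; X does not cause Y.
n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 0.5) for zi in z]
y = [zi + random.gauss(0, 0.5) for zi in z]

def corr(a, b):
    """Pearson correlation coefficient."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

r = corr(x, y)  # strongly positive, despite no causal link
```

Any observational study on X and Y alone would find this association; only knowledge of the causal structure (here, Z) tells you that intervening on X would do nothing to Y.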


he was also a literal child prodigy

Don't think like that. There are no wizards, just people doing sensible things.

+1 for Structure and Interpretation of Computer Programs (aka SICP, aka "the wizard book") - this is a legendary programming book. Here is an interactive version: https://xuanji.appspot.com/isicp/.

I also agree on the important algorithmic ideas, with one addition: algorithmic analysis. Just as you can describe the movement of the planets with a few simple equations, and that's beautiful, you can describe any sequence of steps to finish a task as an algorithm. And you can mathematically analyze the efficiency of that sequence: as the task gets larger, do the number of steps required to finish it grow linearly, quadratically, logarithmically (we hope)? This is a broadly applicable and powerful idea, since pretty much everything (even learning) involves a sequence of steps or process.
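The linear-vs-logarithmic contrast can be shown directly by counting steps rather than measuring time. A minimal sketch comparing linear search to binary search on the same sorted data:

```python
def linear_search_steps(sorted_list, target):
    """Scan front to back, counting comparisons."""
    steps = 0
    for item in sorted_list:
        steps += 1
        if item == target:
            break
    return steps

def binary_search_steps(sorted_list, target):
    """Halve the search interval each step, counting iterations."""
    steps, lo, hi = 0, 0, len(sorted_list) - 1
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            break
        elif sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

data = list(range(1_000_000))
# Worst-case target (the last element): linear takes ~n steps,
# binary takes ~log2(n) steps.
lin_steps = linear_search_steps(data, 999_999)   # 1,000,000
bin_steps = binary_search_steps(data, 999_999)   # about 20
```

Growing the input a thousandfold adds a thousandfold more steps to the linear search but only about ten extra steps to the binary search; that difference in growth rates is what algorithmic analysis studies.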

What do you mean by decision theory?

One option is what I would call 'decision analysis,' which doesn't require much more than algebra and a basic understanding of probabilities. I wrote an introduction to it. My current book recommendation on the subject is Decisive by the Heath brothers, though I remember it being much more about the psychology of decision-making (and practical heuristics you can apply) than math.
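To illustrate the level of math involved, here is a hedged sketch of the most basic decision-analysis calculation, choosing between options by expected value (the options and payoffs are invented):

```python
# Each option is a list of (probability, payoff) outcomes.
options = {
    "A": [(1.0, 50)],             # sure gain of 50
    "B": [(0.6, 100), (0.4, 0)],  # 60% chance of 100, else nothing
}

def expected_value(outcomes):
    """Probability-weighted average payoff."""
    return sum(p * v for p, v in outcomes)

best = max(options, key=lambda k: expected_value(options[k]))
# B's expected value (60) beats A's (50), so "best" is "B".
```

Real decision analysis layers utilities, value of information, and sensitivity analysis on top of this, but the arithmetic core is exactly this kind of weighted sum.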

A second option is "what's the difference between CDT and EDT?", but that's something you shouldn't really approach until you understand causal graphs, which you shouldn't really touch until you understand probabilities and graphical networks (like Bayes nets).

A third option is "what are those exotic things they talk about on LW like TDT and UDT?", and I don't feel qualified to tell you the right way to approach that.


I am currently enjoying Tim Roughgarden's course on algorithms: https://www.coursera.org/course/algo. Luay Nakhleh's course on Algorithmic Thinking is also excellent: https://www.coursera.org/course/algorithmicthink.

As the token epidemiologist in the Less Wrong community, I should probably comment on this.

The utility of learning epidemiology will depend critically on what you mean by the word:

If you interpret "epidemiology" as the modern theory of causal inference and causal reasoning applied to health and medicine, then learning epidemiology is very useful, so much so that I believe that a course on causal reasoning should be required in high school. If you are interested in learning this material, my advisor is writing a book on Causal Inference in Epidemiology, part of which is freely available at http://www.hsph.harvard.edu/miguel-hernan/causal-inference-book/. For more mathematically oriented readers, Pearl's book is also great.

If you interpret "epidemiology" to mean the material you will learn in a course called "Epidemiology," or the methods used in most papers published in epidemiologic journals (i.e., endless Cox models, p-hacking, model-selection algorithms, and incoherent reasoning about confounding), then what you will get is a broken epistemology with negative utility. Stay far away from this: people who don't have the time to learn proper causal reasoning are better off with the heuristic "if it is not randomized, don't trust it." This happens to be the mindset of most clinicians, and appropriately so.

[Hey, I thought I was the token epidemiologist! ;) ]

I largely agree with Anders' comment (leave Pearl be for now; it's a difficult book), but there are some interesting non-causal mathy epidemiology topics that might suit your needs.

Concretely: study networks. Specifically, pick up the book Networks, Crowds, and Markets: Reasoning about a Highly Connected World (or download the free pdf, or take the free MOOC).

It presents a smooth slope of increasing mathematical sophistication (assuming only basic high school math at the outset), and is endlessly interesting as it gently builds and extends concepts. It eventually touches many of the topics you've indicated interest in (game theory, voting, epidemic dynamics, etc), giving you some powerful mathematical tools to reason with. Advanced sections are clearly marked as such, and can be passed over without losing coherence.

And hey, if the math in the advanced sections frustrates your understanding... that's basically what you've said you want!

If I was once employed by a Dept. of Epidemiology does that also make me the token epidemiologist? :)

Epidemiology is defined to be things done by people in Departments of Epidemiology, correct?

If I was once employed by a Dept. of Epidemiology does that also make me the token epidemiologist? :)

That makes you an expert on epidemiology, duh :-)

As Ilya recommended, a great choice for programming in general is the legendary Structure and Interpretation of Computer Programs (aka SICP, aka "the wizard book"). Here is an interactive version: https://xuanji.appspot.com/isicp/. (You can find solutions to the problems here, but of course use sparingly if at all: http://community.schemewiki.org/?sicp-solutions)

If you benefit from more instruction than a solo journey through SICP, I cannot recommend highly enough MIT's Introduction to Computer Programming course, which remains one of the best educational experiences I have ever had: https://www.edx.org/course/mitx/mitx-6-00-1x-introduction-computer-5626

Does anyone with more experience than me foresee problems with this approach? Has this been tried before? How did it work?

I foresee (minor) problems. Nothing too serious, but it might be useful to be aware of the existence of problems with this approach. Most notably:

  • Many (sub)fields use only a single model out of a larger, overarching theory. Most of the time you want to skip the grand theory to get immediate results from a single model (so that's a plus for your approach), but sometimes having someone show you the similarities between different theories explicitly can be very useful. By going for depth rather than breadth it might be hard to compare, contrast and, most importantly, merge pieces of knowledge from different fields. For clarity: I'm talking about really, really general mathematics here (for example learning linear algebra rather than the algorithm for the Fast Fourier Transform).

  • I personally found that in general it is quite hard to figure out which math you need to fully understand a certain result if you don't already know the math. If you start with an overly ambitious goal (for example if you start with the Einstein equations of General Relativity and say to yourself: 'Let's backtrack to see which math I need') I suspect that you will have trouble figuring out which math to learn.

All in all these points are only minor - most useful math is relatively simple (it's called the simple math for a reason), and you already seem to plan to start with pretty general mathematics (e.g. learning statistics rather than just the linear least squares algorithm). But sometimes learning the math before you have a goal can be useful.

The math of natural selection and evolution