Open thread, 11-17 August 2014

Previous open thread

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one.

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Comments


Economist Scott Sumner at Econlog has heavily praised Yudkowsky and the quantum physics sequence, and applies lessons from it to economics. Excerpts:

I've recently been working my way through a long set of 2008 blog posts by Eliezer Yudkowsky. It starts with an attempt to make quantum mechanics seem "normal," and then branches out into some interesting essays on philosophy and science. I'm nowhere near as smart as Yudkowsky, so I can't offer any opinion on the science he discusses, but when the posts touched on epistemological issues his views hit home.

and

I used to have a prejudice against math/physics geniuses. I thought when they were brilliant at high level math and theory; they were likely to have loony opinions on complex social science issues. Conspiracy theories. Or policy views that the government should wave a magic wand and just ban everything bad. Now that I've read Robin Hanson, Eliezer Yudkowsky and David Deutsch, I realize that I've got it wrong. A substantial number of these geniuses have thought much more deeply about epistemological issues than the average economist. So when Hanson says we put far too little effort into existential risks, or even lesser but still massive threats like solar flares, and Yudkowsky says cryonics is under-appreciated, or when they say AI (or brain ems) is coming faster than we think and will have far more profound effects than we realize, I'm inclined to take them very seriously.

What sophisticated ideas did you come up with independently before encountering them in a more formal context?

I'm pretty sure that in my youth I independently came up with rudimentary versions of the anthropic principle and the Problem of Evil. Looking over my Livejournal archive, I was clearly not a fearsome philosophical mind in my late teens (or now, frankly), so it seems safe to say that these ideas aren't difficult to stumble across.

While discussing this at the most recent London Less Wrong meetup, another attendee claimed to have independently arrived at Pascal's Wager. I've seen a couple of different people speculate that cultural and ideological artefacts are subject to selection and evolutionary pressures without ever themselves having come across memetics as a concept.

I'm still thinking about ideas we come up with that stand to reason. Rather than prime you all with the hazy ideas I have about the sorts of ideas people converge on while armchair-theorising, I'd like to solicit some more examples. What ideas of this sort did you come up with independently, only to discover they were already "a thing"?

When I was a teenager, I imagined that if you had just a tiny, infinitesimally small piece of a curve, there would be only one natural way to extend it. Obviously, an extension would have to be connected to it, but also, you would want it to connect without any kinks. And just having straight lines connected to it wouldn't be right; it would have to be curved in the same sort of way - and so on, to higher and higher orders. Later I realized that this is essentially what a Taylor series is.
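The matching-to-higher-and-higher-orders intuition can be sketched numerically (a minimal illustration, not from the original comment): forcing a polynomial's value, slope, curvature, and so on at 0 to agree with those of sin(x) gives exactly its Taylor polynomial.

```python
import math

def taylor_sin(x, order):
    """Approximate sin(x) by a polynomial whose derivatives at 0 match
    sin's derivatives up to the given order (its Taylor polynomial)."""
    # The nth derivative of sin at 0 cycles through 0, 1, 0, -1, ...
    derivs = [0, 1, 0, -1]
    return sum(derivs[n % 4] * x**n / math.factorial(n)
               for n in range(order + 1))
```

Raising `order` forces agreement with the curve to higher and higher orders, and the extension converges to the true function.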

When I was learning category theory, I also had this idea that objects were points, morphisms were lines, composition was a triangle, and associativity was a tetrahedron. It's not especially sophisticated, but it turns out this idea is useful for n-categories.

Recently, I have been learning about neural networks. While implementing a fairly basic one, I had a few ideas for improving them: making them more modular, so that neurons in the next layer are only connected to a certain subset of neurons in the previous layer. I read about V1, and together these led to the idea that you arrange things to take into account the topology of the inputs - so for image processing, having neurons connected to small, overlapping circles of inputs. Then I realized you would want multiple neurons with the same inputs detecting different features, and that you could reuse training data for neurons with different inputs detecting the same feature, saving computation cycles. So for the whole network, you would build up from local to global features as you applied more layers - which suggested that sheaf theory may be useful for studying these. I was planning to work out the details and implement as much of this as I could (and still intend to, as an exercise), but the next day I found that this was essentially the idea behind convolutional neural networks. I'm rather pleased with myself, since CNNs are apparently state-of-the-art for many image recognition tasks (some fun examples). The sheaf theory stuff seems to be original to me, though, and I hope to see whether applying Goguen's sheaf semantics would be useful/interesting.
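The two ideas described above - local receptive fields and reusing the same feature detector at every position (weight sharing) - can be sketched in a few lines. This is a hypothetical 1-D illustration, not the commenter's code:

```python
import numpy as np

def conv1d_layer(signal, kernels):
    """Apply each kernel (a shared weight vector) to every local window
    of the input, producing one feature map per kernel."""
    k = kernels.shape[1]
    # Overlapping local windows: each output neuron sees only a small patch
    windows = np.stack([signal[i:i + k] for i in range(len(signal) - k + 1)])
    # The same weights are reused at every position (weight sharing)
    return windows @ kernels.T

# One hypothetical "edge detector" kernel slid over a step-shaped input
edges = conv1d_layer(np.array([0.0, 0.0, 1.0, 1.0, 0.0]),
                     np.array([[1.0, -1.0]]))
```

Stacking such layers is what builds up from local to global features; 2-D versions of this operation are the core of convolutional networks.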

I really wish I was better at actually implementing/working out the details of my ideas. That part is really hard.

I came up with the idea of a Basic Income by myself, by chaining together some ideas:

  • Capitalism is the most efficient economic system for fulfilling the needs of people, provided they have money.

  • The problem is that if lots of people have no money, and no way to get money (or no way to get it without terrible costs to themselves), then the system does not fulfill their needs.

  • In the future, automation will both increase economic capacity and raise the barrier to having a 'valuable skill' that allows you to get money. Society will have improved capacity to fulfill the needs of people with money, yet the barrier to having useful skills, and thus being able to get money, will increase. This leads to a scenario where society could easily produce the items everyone needs, yet does not, because many of those people have no money to pay for them.

  • If X% of the benefits accrued from ownership of the capital were taken and redistributed evenly among all humans, then the problem is averted. Average people still have some source of money with which they can purchase the fulfillment of their needs, which are pretty easy to supply in this advanced future society.

  • X = 100%, as in strict socialism, is not correct, as then we get the economic failures we saw in the socialist experiments of the past century.

  • X = 0%, as in strict libertarianism, is not correct, as then everyone whose skills are automated starves.

  • At X = some reasonable number, capitalism still functions correctly (that is, it works today with our current tax rates, and hopefully, in our economically progressed future society, it provides everyone sufficient money to supply basic needs).

Eventually I found out that my idea was pretty much a Basic Income system.
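The role of X in the chain of reasoning above can be made concrete with a toy calculation (the numbers here are assumed purely for illustration):

```python
def per_person_income(capital_returns, x, population):
    """Redistribute a fraction x of total capital returns evenly."""
    return x * capital_returns / population

# x = 0.0 is the strict-libertarian endpoint: no transfer at all.
# x = 1.0 is the strict-socialist endpoint: all capital returns socialized.
# An intermediate x funds a basic income while leaving capital incentives intact.
transfer = per_person_income(capital_returns=1e12, x=0.2, population=1e8)
```

With these made-up figures, each person receives a modest but nonzero income floor, while 80% of capital returns still flow to their owners.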

Once a Christian friend asked me why I cared so much about what he believed. Without thinking, I came up with, "What you think determines what you choose. If your idea of the world is inaccurate, your choices will fail."

This was years before I found LW and learned about the connection between epistemic and instrumental rationality.

P.S. My friend deconverted himself some years afterwards.

This is not a direct answer: every time I come up with an idea in a field I am not deeply involved in, sooner or later I realise that the phenomenon is either trivial, a misperception, or very well studied. Most recently this happened with pecuniary externalities.

Came up with the RNA-world hypothesis on my own when reading about the structure and function of ribosomes in middle school.

Decided long ago that there was a conflict between the age of the universe and the existence of improvements in space travel, the implication being that beings such as us would never be able to reach self-replicating interstellar travel. Never came to the conclusion that it meant extinction at all, and am still quite confused by people who assume it's interstellar metastasis or bust.

For as long as I can remember, I had the idea of a computer upgrading its own intelligence and getting powerful enough to make the world a utopia.

Oh, another thing: I remember thinking that it didn't make sense to favour either the many-worlds interpretation or the Copenhagen interpretation, because no empirical fact we could collect could point towards one or the other, stuck as we are in just one universe and unable to observe any others. Whichever one was true, it couldn't possibly impact one's life in any way, so the question should be discarded as meaningless, even to the extent that it didn't really make sense to talk about which one is true.

This seems like a basically positivist or postpositivist take on the topic, with shades of Occam's Razor. I was perhaps around twelve. (For the record, I haven't read the quantum mechanics sequence and this remains my default position to this day.)

Derivatives. I imagined tangent lines traveling along a function curve and thought, 'I wonder what it looks like when we measure that?' So I would try to visualize the changing slopes of the tangent lines at the same time. I also remember wondering how to reverse it. Obviously I didn't get farther than that, but I remember being very surprised when I took calculus and realized that the mind game I had been playing was hugely important and widespread, and could in fact be calculated.
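The slope-of-the-tangent-lines game is exactly what a numerical derivative computes, and "reversing it" is integration. A minimal sketch (illustrative only, not from the comment):

```python
def slope_at(f, x, h=1e-6):
    """Slope of the near-tangent line at x, via a symmetric secant."""
    return (f(x + h) - f(x - h)) / (2 * h)

def area_under(f, a, b, n=10_000):
    """Reversing the game: a crude midpoint Riemann sum that adds
    slopes back up into a change in height."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx
```

For f(x) = x², `slope_at` recovers the familiar 2x, and summing 2x back up recovers x².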

At school my explanation for the existence of bullies was that it was (what I would later discover was called) a Nash equilibrium.
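That claim can be checked with a toy payoff matrix (the payoffs below are hypothetical, assumed purely for illustration): if resisting costs a target more than submitting, then "bully, submit" is a profile from which neither side gains by unilaterally changing strategy - a Nash equilibrium, even though everyone might prefer a world with no bullying.

```python
# Hypothetical payoffs: (bully's payoff, target's payoff)
# for each (bully action, target action).
payoffs = {
    ("bully", "submit"):   (2, -1),
    ("bully", "resist"):   (1, -2),  # resisting costs the target more
    ("abstain", "submit"): (0, 0),
    ("abstain", "resist"): (0, 0),
}

def is_nash(profile):
    """True if neither player can improve by unilaterally deviating."""
    b, t = profile
    best_b = max(payoffs[(a, t)][0] for a in ("bully", "abstain"))
    best_t = max(payoffs[(b, a)][1] for a in ("submit", "resist"))
    return payoffs[profile] == (best_b, best_t)
```

Under these assumed payoffs, ("bully", "submit") is the unique equilibrium: the bully prefers bullying a submitting target, and the target prefers submitting to a bully.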

In the last open thread, Lumifer linked to a list by the American Statistical Association of points that need to be understood to be considered statistically literate. In the same open thread, in another comment, sixes_and_sevens asked for statements we know are true but the average layperson gets wrong. In response he mostly got examples from the natural sciences and mathematics. Which makes me wonder: can we make a general test of education across all of these fields of knowledge that can be automatically graded? This test would serve as a benchmark for traditional educational methods, and for autodidacts checking themselves.

I imagine having simple calculations for some things and multiple-choice tests for other scenarios where intuition suffices.

Edit: Please don't just upvote, try to point to similar ideas in your respective field or critique the idea.

What are some good paths toward good jobs, other than App Academy?

I've just finished the first draft of a series of posts on control theory, the book Behavior: The Control of Perception, and some commentary on its relevance to AI design. I'm looking for people willing to read the second draft next week and provide comments. Send me a PM or an email (I use the same username at gmail) if you're interested.

In particular, I'm looking for:

  • People with no engineering background.
  • People with tech backgrounds but no experience with control theory.
  • People with experience as controls engineers.

(Yes, that is basically a complete grouping of people. But somehow people are more likely to think you're looking for them if you specifically say you're looking for them, and I think I can learn different useful things about the post from people in those groups.)

The Unicorn Fallacy (warning, relates to politics)

Is there an existing name for that one? It's similar to the nirvana fallacy but looks sufficiently different to me...

I am not aware of an existing one, although it is related to Moloch, as described in SSC when applied to the state:

although from a god’s-eye-view everyone knows that eliminating corporate welfare is the best solution, each individual official’s personal incentives push her to maintain it.

What Munger describes as The State, SSC calls Moloch. What your link calls the Munger test may as well be called the Moloch test:

The Munger test:

In debates, I have found that it is useful to describe this problem as the "unicorn problem," precisely because it exposes a fatal weakness in the argument for statism. If you want to advocate the use of unicorns as motors for public transit, it is important that unicorns actually exist, rather than only existing in your imagination. People immediately understand why relying on imaginary creatures would be a problem in practical mass transit. But they may not immediately see why "the State" that they can imagine is a unicorn. So, to help them, I propose what I (immodestly) call "the Munger test."

Go ahead, make your argument for what you want the State to do, and what you want the State to be in charge of. Then, go back and look at your statement. Everywhere you said "the State" delete that phrase and replace it with "politicians I actually know, running in electoral systems with voters and interest groups that actually exist."

If you still believe your statement, then we have something to talk about.

I wonder why we don't see more family fortunes in the U.S. in kin groups that have lived here for generations. Estate taxes tend to inhibit the transmission of wealth down the line, but enough families have figured out how to game the system that they have held on to wealth for a century or more, notably including families which supply a disproportionate number of American politicians; they provide proof of concept of the durable family fortune. Otherwise most Americans seem to live in a futile cycle where their lifetime wealth trajectory starts from zero at birth and returns to zero by death.

Steve Sailer noted on his blog a few months back that in the UK, people with Anglo-Norman surnames in our time have held on to more wealth on average than Brits with surnames suggesting manual-laborer origins. For example, Aubrey de Grey has an Anglo-Norman surname, and he reportedly inherited several million British pounds when his mother died a few years ago. I gather that this doesn't generally happen to ordinary Brits. Apparently the warriors who came over from France with William the Conqueror in 1066, and participated in the division of the spoils, started a way of handling wealth which enabled their descendants to hold on to inherited assets down through the centuries. If the Anglo-Normans could do it, and if some American families have figured out how to do it more recently, then what keeps this practice from becoming widespread in American society?

Another possibility is that Americans are more individualistic. Maintaining a family fortune means subordinating yourself enough that it isn't spent down.

"Lacking self-control" is probably what you mean :-)

Example: the Vanderbilts.