Open Thread, Jun. 8 - Jun. 14, 2015

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Comments


Judging from the recent decline of LW, it seems that the initial success of LW wasn't due to rationality, but rather due to Eliezer's great writing. If we want LW to become a fun place again, we should probably focus on writing skills instead of rationality skills. Not everyone can be as good as Eliezer or Yvain, but there's probably a lot of low-hanging fruit. For example, we pretty much know what kind of fiction would appeal to an LWish audience (HPMOR, Worm, Homestuck...) and writing more of it seems like an easier task than writing fiction with mass-market appeal.

Does anyone else feel that it might be a promising direction for the community? Is there a more structured way to learn writing skills?

I have noticed that many people here want LW resurrection for the sake of LW resurrection.

But why do you want it in the first place?

Do you care about rationality? Then research rationality and write about it, here or anywhere else. Do you enjoy the community of LWers? Then participate in meetups, discuss random things in OTs, have nice conversations, etc. Do you want to write more rationalist fiction? Do it. And so on.

After all, if you think that Eliezer's writing constitutes most of LW's value, and Eliezer doesn't write here anymore, maybe the wise decision is to let it decay.

Beware the lost purposes.

But why do you want it in the first place?

Emotionally -- for the feeling that something new and great is happening here, and I can see it growing.

Reflecting on this: I should not optimize for my emotions (wireheading), but the emotions are important and should reflect reality. If great things are not happening, I want to know that, and I want to fix that. But if great things are happening, then I would like a mechanism that aligns my emotions with this fact.

Okay, what exactly are the "great things" I am thinking about here? What was the referent of this emotion when Eliezer was writing the Sequences?

When Eliezer was writing the Sequences, the mere fact that "there will exist a blog about rationality; without Straw Vulcanism, without Deep Wisdom" seemed like a huge improvement to the world, because it seemed that once such a blog existed, rational people would be able to meet there and conspire to optimize the universe. Did this happen? Well, we have MIRI and CFAR, and meetups in various countries (I really appreciate not having to travel across the planet just to meet people with similar values). Do they have impact beyond providing people a nice place to chat? I hope so.

Maybe the lowest-hanging fruit was already picked. If someone tried to write Sequences 2.0, what would it be about? Cognitive biases that Eliezer skipped? Or the same ones, perhaps more nicely written, with better examples? Both would be nice things to have, but their awesomeness would probably be smaller than going from zero to Sequences 1.0. (Although, if Sequences 2.0 were written so well that they became a bestseller, and thousands of students outside existing rationalist communities read them, then I would rate that as more awesome. So the possibility is there. It just requires very specialized skills.) Or maybe explaining some mathematical or programming concepts in a more accessible way: I mean those concepts that you can use in thinking about probability or about how the human brain works.

Internet vs real life -- things happening in the real world are usually more awesome than things happening merely online. For example, a rationalist meetup is usually better than reading an open thread on LW. The problem is visibility. The basic rule of bureaucracy -- if it isn't documented, it didn't happen -- is important here, too. When given a choice between writing another article and doing something in the real world, please choose the latter (unless the article is really exceptionally good). But then, please also write an article about it, so that your fellow rationalists who were not able to participate personally can share the experience. It may inspire them to do something similar.

By the way, if you are unhappy about the "decline" of LW because it will make a worse impression on new people you would like to introduce to LW culture -- point them towards the book instead.

Do you care about rationality? Then research rationality and write about it, here or anywhere else. Do you enjoy the community of LWers? Then participate in meetups, discuss random things in OTs, have nice conversations, etc. Do you want to write more rationalist fiction? Do it. And so on.

Adding: if you would like to see a rationalist community growing, research and write about creating and organizing communities. (That is advice to myself for when I have more free time.)

Maybe the lowest-hanging fruit was already picked. If someone tried to write Sequences 2.0, what would it be about? Cognitive biases that Eliezer skipped?

Something I feel Yudkowsky doesn't really talk about enough in the Sequences is how to be rational in a group, as part of a group, and as a group. There is some material in there, and HPMOR also offers some stuff, but very little of it is as formalized as the ideas around "Politics is the Mindkiller/Spiders/Hard Mode" or "the Typical Mind Fallacy."

Something Yudkowsky also mentions is that what he writes about rationality is his path. Some things generalize (most people have the same cognitive biases, but in different amounts). From reading the final parts of the Sequences and the final moments of HPMOR I get the vibe that Yudkowsky really wants people to develop their own path. Alicorn did this and Yvain also did/does it to some extent (and I'm reading the early non-Sequence posts and I think that MBlume also did this a bit), but it's something that could be written more about. Now, I agree that this is hard, the lowest fruit probably is already picked and it's not something everyone can do. But I find it hard to believe that there are just 3 or 4 people who can actually do this. The bonobo rationalists on tumblr are, in their own, weird way, trying to find a good way to exist in the world in relation to other people. Some of this is formalized, but most of it exists in conversations on tumblr (which is an incredibly annoying medium, both to read and to share). Other people/places from the Map probably do stuff like that as well. I take this as evidence that there is still fruit low enough to pick without needing a ladder.

Something I feel Yudkowsky doesn't really talk about enough in the Sequences is how to be rational in a group, as part of a group and as a group.

I've been working on a series of posts centered around this -- social rationality, if you will. So far, the best source for such materials remains Yvain's writings on the topic on his blog; he really nails the art of having sane discussions. He popularised some ways of framing debate tactics such as motte-and-bailey, steelmanning, bravery debates and so on, which entered the SSC jargon.

I'm interested in expanding on that theme with topics such as emphasis fights ("yes, but"-ing) or arguing in bad faith, as examples of failure modes in collective truth-seeking, but in the end it all hinges on an ideally shared perception of morality, or of standards to hold oneself to. My approach relies heavily on motives and on my personal conception of morality, which is why it's difficult to teach it without looking like I preach it. (At least Eliezer didn't seem too concerned about that one, but not everyone has the fortune to be him.) Besides, it's a very complex and murky field, one best learned through experience and examples.

Why do you prefer offline conversations to online?

Off the top of my head, I can name 3 advantages of online communication, which are quite important to LessWrong:

  • You don't have to go anywhere. Since the LW community is distributed all over the world, this is really important: at a meetup you can talk only with people who happen to be in the same place as you, while online you can communicate with everyone.

  • You have more time to think before replying, if you need it. For example, you can support your arguments with relevant research papers or data.

  • As you have noticed, online articles and discussions remain available on the site. You have proposed to write articles after offline events, but a) not everything will be covered by them and b) it requires additional effort.

Well, enjoy offline events if you like them; but the claim that people should always prefer offline activities over online ones is highly questionable, IMO.

Judging from the recent decline of LW

Is this a thing? Has it been measured, however imperfectly, and found to be the case?

I think we need both rationality and improved writing. This is a crowd that isn't going to put up with entertaining writing that doesn't have significant amounts to say about rationality.

Maybe a good question is "what is the most fun (interpreted very widely) we can have with rationality?" I'm not just talking about jokes and entrancing fiction and smiting the unworthy (though those are good things), but looking for emotional intensity, which can come from cracking a problem open as much as from anything else.

I'm about to start being paid for a job, and I was looking at investment advice from LW. I found this thread from a while back and it seemed good, but it's also 4 years old. Can anyone confirm if the first bullet is still accurate? (Get VTSMX or VFINX on Vanguard; it doesn't matter too much which one.)

If you want to take one more step of complexity (and assuming you have at least $6000 to invest) you can split your money between VTSMX and VGTSX as Unnamed mentioned. In doing so you would be diversified across the global economy, instead of just across the US economy. You would want 20% to 50% of your funds that are in stocks to be in international stocks.
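The split above is simple arithmetic, and a few lines of code make it concrete. This is only an illustration of the 20% to 50% international range mentioned in the comment, not financial advice; the function name, the default fraction, and the dollar amounts are hypothetical examples.

```python
# Illustrative sketch of the allocation arithmetic described above.
# The tickers come from the thread; everything else is a made-up example.

def split_allocation(total, intl_fraction=0.3):
    """Split a stock allocation between a US total-market fund (VTSMX)
    and an international fund (VGTSX)."""
    if not 0.2 <= intl_fraction <= 0.5:
        raise ValueError("the comment suggests 20% to 50% international")
    intl = total * intl_fraction
    return {"VTSMX (US)": total - intl, "VGTSX (intl)": intl}

print(split_allocation(10_000, 0.25))  # a 75/25 split of $10,000
```

With the $6000 minimum mentioned for a two-fund split and a 30% international share, that would put $4200 in VTSMX and $1800 in VGTSX.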

Vanguard Target Date funds (e.g., VFIFX) are also a good option if you want something you never have to manage, and they have a minimum investment of $1000. They allow you to invest in a pre-determined allocation of domestic and international stocks and bonds, and keep you balanced at a target allocation that gets more conservative as you get closer to retirement age.

You should also strongly consider investing in a Roth IRA if your income is not over the limit for contributions (and if it is, there are ways around that). Contributions to a Roth IRA can be withdrawn at any time, though there are restrictions on accessing the investment returns. Your employer's 401(k) plan is another good option for long-term investments.

The Bogleheads wiki and forum are excellent resources for learning about low-cost long-term investing.

But I agree with everyone else: if you want to do the simplest thing and stop thinking about it, invest in VTSMX.

My money is still in VTSMX.

(Actually, half of it is in VTSMX and half is in VGTSX, which is the non-US index fund. But putting it all into VTSMX is fine too.)

It's way better than trying to outguess the market, and way way better than doing nothing.

I recently stumbled upon the Wikipedia entry on finitism (there is even ultrafinitism). However, the article on ultrafinitism mentions that no satisfactory development of this field exists at present. I'm wondering in what way the limitation to finite mathematical objects (say, a set of natural numbers with a certain largest number n) would limit 'everyday' mathematics. What kind of mathematics would we still be able to do (cryptography, analysis, linear algebra ...)?

Is such a long answer suitable in OT? If not, where should I move it?

tl;dr Naive ultrafinitism is based on real observations, but its proposals are a bit absurd. Modern ultrafinitism has close ties with computation. Paradoxically, taking ultrafinitism seriously has led to non-trivial developments in classical (usual) mathematics. Finally: ultrafinitism would probably be able to interpret all of classical mathematics in some way, but the details would be rather messy.

1. Naive ultrafinitism

1.1. There are many different ways of representing (writing down) mathematical objects.

The naive ultrafinitist chooses a representation, calls it explicit, and says that a number is "truly" written down only when its explicit representation is known. The prototypical choice of explicit representation is the tallying system, where 6 is written as ||||||. This choice is not arbitrary either: the foundations of mathematics (e.g. Peano arithmetic) use these tally marks by necessity.

However, the integers are a special^1 case, and in the general case the naive ultrafinitist insistence on fixing a representation starts looking a bit absurd. Take Linear Algebra: should you choose one explicit basis of R^3 that you use indiscriminately for every problem, or should you use a basis (sometimes an arbitrary one) that is most appropriate for the problem at hand?

1.2. Not all representations are equally good for all purposes.

For example, enumerating the prime factors of 2*3*5 is way easier than doing the same for ||||||||||||||||||||||||||||||, even though both represent the same number.

1.3. Converting between representations is difficult, and in some cases outright impossible.

Lenstra earned $14,527 by converting the number known as RSA-100 from "positional" to "list of prime factors" representation.

Converting 3^^^3 from up-arrow representation to the binary positional representation is not possible for obvious reasons.

As usual, up-arrow notation is overkill. Just writing the decimal number 100000000000000000000000000000000000000000000000000000000000000000000000000000000 would take more tally marks than there are atoms in the observable universe. Nonetheless, we can deduce a lot about this number: it is even, and it is larger than RSA-100. In fact, I can manually convert it to "list of prime factors" representation: 2^80 * 5^80.
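The size comparisons in this subsection are easy to check mechanically. Here is a small sketch; the helper `trial_factor` is my own naive illustration of factoring, not a serious algorithm (and certainly not what Lenstra used).

```python
# Checking the size comparisons above. The positional and factored forms of
# 10**80 are tiny, while its tally form would need 10**80 marks, roughly
# the number of atoms in the observable universe.

n = 10 ** 80

print(len(str(n)))       # 81 decimal digits
print(n.bit_length())    # 266 binary digits
assert n == 2 ** 80 * 5 ** 80   # the "list of prime factors" form, by hand

def trial_factor(k):
    """Naive trial division: fine for 30, hopeless for RSA-100."""
    factors = []
    d = 2
    while d * d <= k:
        while k % d == 0:
            factors.append(d)
            k //= d
        d += 1
    if k > 1:
        factors.append(k)
    return factors

print(trial_factor(30))  # the prime factors of ||||||...| (30 tally marks)
```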

2. Constructivism

The constructivists were the first to insist that algorithmic matters be taken seriously. Constructivism separates concepts that are not computably equivalent: proofs with algorithmic content are distinguished from proofs without such content, and algorithmically inequivalent objects are kept apart.

For example, there is no algorithm for converting Dedekind cuts to equivalence classes of rational Cauchy sequences. Therefore, the concept of real number falls apart: constructively speaking, the set of Cauchy-real numbers is very different from the set of Dedekind-real numbers.

This is a tendency in non-classical mathematics: concepts that we think are the same (and are equivalent classically) fall apart into many subtly different concepts.

Constructivism separates concepts that are not computably equivalent. Computability is a qualitative notion, and even most constructivists stop here (or even backtrack, to regain some classicality, as in the foundational program known as Homotopy Type Theory).

3. Modern ultra/finitism

In the same way that constructivism distinguished qualitatively different but classically equivalent objects, one could start distinguishing things that are constructively equivalent but quantitatively different.

One path leads to the explicit approach to representation-awareness. For example, LNST^4 explicitly distinguishes between the set of binary natural numbers B and the set of tally natural numbers N. Since these sets have quantitatively different properties, it is not possible to define a bijection between B and N inside LNST.

Another path leads to ultrafinitism.

The most important thinker in modern ultra/finitism was probably Edward Nelson. He observed that the "set of effectively representable numbers" is not downward-closed: even though we have a very short notation for 3^^^3, there are lots of numbers between 0 and 3^^^3 that have no such short representation. In fact, by elementary considerations, the overwhelming majority of them cannot ever have a short representation.
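Nelson's observation has a one-line pigeonhole core: a notation whose descriptions fit in at most k bits can name at most 2^(k+1) - 2 numbers, no matter how the notation works. A sketch (the parameters below are arbitrary illustrations of my own):

```python
# Pigeonhole sketch: there are 2 + 4 + ... + 2**k = 2**(k+1) - 2 binary
# strings of length <= k, so no notation system can give short names to
# more numbers than that.

def max_describable(k_bits):
    """Upper bound on how many numbers have a description of <= k_bits bits."""
    return sum(2 ** length for length in range(1, k_bits + 1))

k = 1000                           # allow descriptions up to 1000 bits
short = max_describable(k)         # fewer than 2**1001 numbers
N = 2 ** 10_000                    # big, yet unimaginably smaller than 3^^^3

undescribable = N - short          # numbers below N with no <=1000-bit name
print(undescribable.bit_length())  # 10000 bits: almost all of them
```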

What's more, if our system of notation allows for expressing big enough numbers, then the "set of effectively representable numbers" is not even inductive because of the Berry paradox. In a sense, the growth of 'bad enough' functions can only be expressed in terms of themselves. Nelson's hope was to prove the inconsistency of arithmetic itself using a similar trick. His attempt was unsuccessful: Terry Tao pointed out why Nelson's approach could not work.

However, Nelson found a way to relate unexpressibly huge numbers to non-standard models of arithmetic^(2).

This correspondence turned out to be very powerful, leading to many paradoxical developments, including a finitistic^3 extension of Set Theory, a radically elementary treatment of Probability Theory, and new ways of formalising the Infinitesimal Calculus.

4. Answering your question

What kind of mathematics would we still be able to do (cryptography, analysis, linear algebra ...)?

All of it, modulo translating the classical results into the subtler, ultra/finitistic language. This holds even for the silliest versions of ultrafinitism. Imagine a naive ultrafinitist mathematician who declares that the largest number is m. She can't state the proposition R(n, 2^m), but she can still state its translation R(log_2 n, m), which is just as good.
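A toy instance of this translation, under the assumption that R(a, b) stands for the proposition "a < b" (the text leaves R abstract): then R(n, 2^m) is equivalent to saying that n has at most m binary digits, a statement that only mentions numbers of about m's size.

```python
# Toy version of the ultrafinitist translation above, assuming R(a, b)
# means "a < b". Using bit_length as an exact integer logarithm keeps the
# equivalence precise: n < 2**m exactly when n has at most m binary digits.

def R(a, b):
    return a < b

m = 64
for n in [1, 2**63, 2**64 - 1, 2**64, 2**64 + 1]:
    assert R(n, 2**m) == (n.bit_length() <= m)   # the translated proposition

print("the translated proposition agrees on all test cases")
```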

Translating is very difficult even in the qualitative case, as seen in this introductory video about constructive mathematics. Some theorems hold for Dedekind-reals, others for Cauchy-reals, etc. Similarly, in LNST, some theorems hold only for "binary naturals", others only for "tally naturals". It would be even harder for true ultrafinitism, where the set of representable numbers is not downward-closed.

This was a very high-level overview. Feel free to ask for more details (or clarification).


^1 The integers are absolute. Unfortunately, it is not entirely clear what this means.

^2 coincidentally, the latter notion prompted my very first contribution to LW

^3 in this so-called Internal Set Theory, all the usual mathematical constructions are still possible, but every set of standard numbers is finite.

^4 Light Naive Set Theory. Based on Linear Logic. Consistent with unrestricted comprehension.

Is such a long answer suitable in OT? If not, where should I move it?

Anywhere is better than nowhere.

I think this is sufficiently good to go directly to Main, but generally the safe option is to publish a Discussion article (which, in case of success, can later be moved to Main).

I would really like seeing more articles like this on LW -- articles written by people who deeply understand what they write about. (Preferably with more examples, because this was difficult to follow without clicking the hyperlinks. But that may be just my personal preference.)

So, here are the options:

  • leave it here; (the easiest)
  • repost as an article; (still very easy)
  • rewrite as a more detailed article or series of articles (difficult)

What we call "transhumanism" in 2015, people in a century or two will call "health care."

I think that some of what we call transhumanism will be folded into healthcare.

My bet is that the baseline of what's considered adequate health will go up, but there will also be a separate category for exploration of what's possible.

Now that my review of Plato's Camera has about 17 PDF pages of real content, does anyone want to proof-read/advance-read it to help avoid babbling?

I'd be happy to take a look, with the following caveats: (1) I haven't read Plato's Camera, (2) I am not a professional philosopher, and (3) I don't guarantee to respond quickly (though I might -- it depends on workload, procrastination level, etc.).

Sent to the email address listed on your profile here.

Received. I'll take a look. The caveats above haven't changed :-).

On commitment devices: I think this article: http://blog.beeminder.com/akrasia/ is essentially correct. However, I am not at all convinced Beeminder is the best approach to self-binding. Texting a number to a robot, or having another number deducted from my bank account, is far too impersonal for me. It surely has its uses; I just wish we had many different kinds of commitment devices to choose from.

In the ancestral environment it was all about physical needs and social needs, and these are still the strongest motivators. For example, someone who wants to get fit might as well join the armed forces: the punishments used there target people's physical and social needs, and wanting the respect of other soldiers motivates too.

Wiki says honor is probably a commitment device.

Maybe that could somehow be a good idea? When wanting to do X, surround ourselves with people who respect people who do X and disrespect people who don't. This kind of social need seems to work better for me...

What if someone made an app, perhaps as a Facebook plugin or something, where people with the same goals are put into groups of 12, and they constantly tell each other how they are progressing?

Full disclosure here regarding personal issues. I'm looking for advice on how to resolve them to the point where they no longer affect my life majorly. I don't expect an issue this ingrained into my psyche to ever be gotten rid of entirely. I'm sure there are other places more directly related to the subject that I could request this advice, but LWers have usually seemed to have something useful to add to things.

Recently (toward the end of 2013), I slowed and then stopped taking Zoloft, prescribed for what was purported to be emotional instability; I had been on it from when I was about 7 until then, at 21. I do not regret doing this in the slightest, as, quite frankly, while on it I was extremely flatlined emotionally and had hardly grown at all in that regard for years. Everything was quite dull.

I have, since then, had to resort to various techniques to calm myself, as getting off Zoloft also revealed me to be rather anxious, and to have latent abandonment issues resulting in clinginess toward my close friends. It is the latter part I need help with, as most literature I've found has been rather worthless in terms of truly actionable advice: it suggests broad things to be done with little in the way of intermediary steps, or speaks to the effects, consequences, and actions to be taken in a romantic relationship (which I am not in).

Regarding how it feels when I have an episode (for the purpose of relating to it for other people with perhaps-similar issues), I want to curl up in the corner, I get panicky, and it feels like lightning's shooting through me as a cold, heavy lump forms in my belly.

Thanks for any help you can offer.

Let's have fun offering what-have-you questions for Fermi estimates! If we're here for fun, at least partially, let's have some of it in quantifiable form. Here's mine:

Would a standard piece of soap (neutral pH, lens-like shape) sink to the bottom of the Mariana Trench, or would it dissolve on the way down?

The soap keeps more or less constant density as it sinks, but the water is denser the deeper you go. And the density of soap is really close to that of water, so I expect that there is some depth at which the soap has the same density as the water, and when it gets to that level it stays there. And eventually it dissolves or gets eaten.
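This hovering-depth intuition can be turned into a crude Fermi estimate. All the numbers below are my own rough assumptions (surface seawater around 1025 kg/m^3, water compressibility around 4.6e-10 per pascal, bar soap around 1050 kg/m^3; real soaps vary, and some float), and the model ignores temperature and salinity profiles entirely.

```python
# Crude Fermi model: seawater density grows with depth via compression,
# while the soap (assumed incompressible) does not.
# All constants are rough assumptions for estimation only.

RHO0 = 1025.0     # surface seawater density, kg/m^3
KAPPA = 4.6e-10   # compressibility of water, 1/Pa
G = 9.8           # gravitational acceleration, m/s^2
TRENCH = 10_900   # approximate depth of the Mariana Trench, m

def water_density(depth_m):
    """Seawater density at depth, linearizing pressure as rho0 * g * z."""
    pressure = RHO0 * G * depth_m
    return RHO0 * (1 + KAPPA * pressure)

def neutral_depth(rho_object):
    """Depth at which an incompressible object of this density hovers."""
    return (rho_object / RHO0 - 1) / (KAPPA * RHO0 * G)

print(round(water_density(TRENCH)))   # roughly 1077 kg/m^3 at the bottom
print(round(neutral_depth(1050.0)))   # the soap hovers roughly 5 km down
```

Under these assumptions the water at the bottom is about 5% denser than at the surface, and the soap reaches neutral buoyancy roughly halfway down, which supports the "it stays there" guess.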

I'm trying to translate some material from LessWrong for a friend (he's interested in various subjects discussed here, but can't read English...), and I'm struggling to find the best translation for "evidence". I have many candidates, but every one of them is a little off relative to the connotation of "evidence". Since it's such a central term in all the writings here, I figured it couldn't hurt to spend a little time finding a really good translation rather than a just-okayish one.

English readers:

  • Could you find a few different sentences that would cover all the (slightly different) usages of "evidence"? The goal is that if my translation fits well in all those sentences, there's a good chance it will fit well in anything I may want to translate. For example, from the wiki: "Evidence for a given theory is the observation of an event that is more likely to occur if the theory is true than if it is false"; "Generalization from fictional evidence"; "Conservation of expected evidence". I expect that a translation covering those three usages equally well will cover basically any usage, but can you think of a 4th usage that might prove problematic even for a term that fits the other 3?
  • What would be the least bad synonym of "evidence": clue, proof, observation, sign? Those are basically my best candidates, translated back into English, and I dislike all of them. (Substitute each for "evidence" in the test sentences above and you will understand my problem: "Clue for a given theory..." is somewhat good, but "conservation of expected clue" less so...)

French readers, if any:

My candidates are: « preuve », « indice », « signe », « observation ». Any other suggestions? Which one seems best to you?

Thanks for your cooperation.

(and don't get me started on "entangled with"; I think I will lose a lot of hair trying to find an acceptable translation for that one. French sucks.)

What did Laplace call it? He invented a lot of this stuff, and presumably wrote in French.

Link: Complexity-Induced Mental Illness

My personal estimate is that 75% of adults are suffering from some sort of serious mental problem because the human interface to life is broken. In the year 2015, life serves up a level of complexity and unrelenting stimulation that most folks can't handle, and I believe it is frying our brains.

I think he is right, and that's an actual insight. The typical mind fallacy suggests that you wouldn't notice this if it doesn't apply to you, and especially on LW I'd guess that most people can deal with a lot of complexity. Can you?

[pollid:1005]


Total sidenote: Scott Adams' blog has a cute feature when copying text: after copying (Ctrl+C) a whole paragraph, it automatically adds a reference URL to the post in the clipboard. I couldn't quickly find out how it's done, but it's a nice feature for a blog to have.

Now I want a fic in which Hermione wins a goat in the MH problem, on the grounds that Ron and Harry shouldn't be trusted with a car, and they do need an antidote...