Open thread, 3-8 June 2014

Previous Open Thread:  http://lesswrong.com/r/discussion/lw/k9x/open_thread_may_26_june_1_2014/

(oops, we missed a day!)

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

 

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one.

3. Open Threads should start on Monday, and end on Sunday.

4. Open Threads should be posted in Discussion, and not Main.

Comments


I'm starting to maybe figure out why I've had such difficulties with both relaxing and working in recent years.

It feels like, much of the time, my mind is constantly looking for an escape, though I'm not entirely sure what exactly it is trying to escape from. But it wants to get away from the current situation, whatever the current situation happens to be. To become so engrossed in something that it forgets about everything else.

Unfortunately, this often leads to the opposite result. My mind wants that engrossment right now, and if it can't get it, it will flinch away from whatever I'm doing and into whatever provides an immediate reward. Facebook, forums, IRC, whatever gives that quick dopamine burst. That means that I have difficulty getting into books, TV shows, computer games: if they don't grab me right away, I'll start growing restless and be unable to focus on them. Even more so with studies or work, which usually require an even longer "warm-up" period before one gets into flow.

Worse, I'm often sufficiently aware of that discomfort that my awareness of it prevents the engrossment. I go loopy: I get uncomfortable about the fact that I'm uncomfortable, and then if I have to work or study, my focus is on "how do I get rid of this feeling" rather than on "what should I do next in this project". And then my mind keeps flinching away from the project, to anything that would provide a distraction, on to Facebook, to IRC, to whatever. And I start feeling worse and worse.

Some time back, I started experimenting with teaching myself not to have any goals. That is, instead of having a bunch of stuff I try to accomplish in some given time period, I simply try to be okay with doing absolutely nothing all day (or all week, or all year...), until a natural motivation to do something develops. This seems to help. So does mindfulness, as well as ensuring that my basic needs have been met: enough sleep and food, and some nice real-life social interaction every few days.

Anybody else recognize this?

I recognize this in myself and it's been difficult to understand, much less get under control. The single biggest insight I've had about this flinching-away behavior (at least the way it arises in my own mind) is that it's most often a dissociative coping mechanism. Something intuitively clicked into place when I read Pete Walker's description of the "freeze type". From The 4Fs: A Trauma Typology in Complex PTSD:

Many freeze types unconsciously believe that people and danger are synonymous, and that safety lies in solitude. Outside of fantasy, many give up entirely on the possibility of love. The freeze response, also known as the camouflage response, often triggers the individual into hiding, isolating and eschewing human contact as much as possible. This type can be so frozen in retreat mode that it seems as if their starter button is stuck in the "off" position. It is usually the most profoundly abandoned child - "the lost child" - who is forced to "choose" and habituate to the freeze response (the most primitive of the 4Fs). Unable to successfully employ fight, flight or fawn responses, the freeze type's defenses develop around classical dissociation, which allows him to disconnect from experiencing his abandonment pain, and protects him from risky social interactions - any of which might trigger feelings of being reabandoned. Freeze types often present as ADD; they seek refuge and comfort in prolonged bouts of sleep, daydreaming, wishing and right brain-dominant activities like TV, computer and video games. They master the art of changing the internal channel whenever inner experience becomes uncomfortable. When they are especially traumatized or triggered, they may exhibit a schizoid-like detachment from ordinary reality.

Of course like with any other psychological condition there's a wide spectrum: some people had wonderful childhoods full of safe attachment and always had somebody to model healthy processing of emotions for them, some people were utterly abandoned as children, and many more had something between those extremes. The key understanding I've gained from Pete Walker's writing is that simply being left alone with upsetting inner experience too often as a child can lead to development of "freeze type" defenses, even in the absence of any overtly abusive treatment.

I suspect that using a combination of TV shows, games and web browsing as emotional analgesics (at various levels of awareness) is very common now in wealthy countries. This is one of the reasons I would like to see more discussion of emotional issues on Less Wrong.

I suspect that using a combination of TV shows, games and web browsing as emotional analgesics (at various levels of awareness) is very common now in wealthy countries. This is one of the reasons I would like to see more discussion of emotional issues on Less Wrong.

I would also like to see more such discussion, but, as with rationality, more from the viewpoint of rising above base level average than of recovering only to that level.

This is kind of funny because I came to this open thread to ask something very similar.

I have noticed that my mind has a "default mode", which is to aimlessly browse the internet. If I am engaged in some other activity, no matter how much I am enjoying it, a part of my brain will have the strong desire to go into default mode. Once I am in default mode, it takes active exertion to break away and do anything else, no matter how bored or miserable I become. As you can imagine, this is a massive source of wasted time, and I have always wished that I could stop this tendency. This has been the case more or less ever since I got my first laptop when I was thirteen.

I have recently been experimenting with taking "days off" of the internet. These days are awesome. The day just fills up with free time, and I feel much calmer and more content. I wish I could be free of the internet and do this indefinitely.

But there are obvious problems, a few of which are:

  • Most of the stuff that I wish I were doing instead of aimlessly surfing the internet involves the computer, and oftentimes the internet. A few of the things that would be "good uses of my time" are reading, making digital art, producing electronic music, or coding. Three out of four of those things rely on the computer, and those three often rely on the internet in some capacity.

  • I am inevitably going to be required to use the internet for school and work. Most likely in my graphic design and computer science classes next year I will have to be able to use the internet on my laptop during class.

  • If I have an important question that I could find the answer to on Google, I'm going to want to find that answer.

It's hard to find an elegant solution to this problem. If I come up with a plan for avoiding internet use that is too loose, it will end up getting more and more flexible until it falls apart completely. If the plan is too strict, then I inevitably will not be able to follow it and will abandon it. If the plan is too intricate and complicated, then I will not be able to make myself follow it either.

The best idea I have come up with so far is to delete all the browsers from my laptop and put a copy of Chrome on a flash drive. I would never copy this instance of Chrome onto a hard drive; instead I would just run it from the flash drive every time I wanted to use it. This way, every time I wanted to use the internet, I would have to go find the flash drive. I could also give the flash drive to someone else for a while if I felt like a moment of weakness was coming on. I've been using this for exactly one day and it seems to be working pretty well so far.

The other thing I've been doing for a few days is writing a "plan" of the next day before I go to bed, then sticking to the plan. If something happens to interrupt my plan, then I will draft a new plan as soon as possible. For example, my friend called me up today inviting me over. I wasn't about to say "No, I can't hang out, I have planned out my day and it didn't include you". So when I got back, I wrote a new one. Most of these plans involve limiting internet use to some degree, so this also seems promising. I might also do something where I keep track of how many days in a row I followed the plan and try not to break the chain.
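For what it's worth, the "don't break the chain" bookkeeping described above is simple enough to automate. Here's a toy sketch of my own (not an existing tool; the function name and log format are made up for illustration) that counts how many consecutive days, ending with the most recent, the plan was followed:

```python
def current_streak(days_followed):
    """Count consecutive plan-followed days, ending with the most recent day.

    days_followed: list of booleans, oldest day first
    (True = stuck to the plan that day).
    """
    streak = 0
    # Walk backwards from the most recent day; stop at the first broken day.
    for followed in reversed(days_followed):
        if not followed:
            break
        streak += 1
    return streak

# Example: followed the plan the last three days, broke it earlier in the week.
log = [True, False, True, True, True]
print(current_streak(log))  # 3
```

Glancing at a number like this each evening, and watching it reset to zero on a broken day, is the whole point of the "chain" trick.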

In my experience Adderall can ameliorate this problem somewhat.

Stratton's perceptual adaptation experiments a century ago showed that the brain can adapt to different kinds of visual information: e.g., if you wear glasses that turn the picture upside down, you eventually adjust and start seeing it right side up again. And recently some people have been experimenting with augmented senses, like wearing an anklet with cell-phone vibrators that lets you always know which way is north.

I wonder if we can combine these ideas? For example, if you always carry a wearable camera on your wrist and feed the information to a Google Glass-like display, will your brain eventually adapt to having effectively three eyes, one of which is movable? Will you gain better depth perception, a better sense of your surroundings, a better sense of what you look like, etc.?

One aspect of perceptual adaptation I do not often hear emphasized is the role of agency. I first encountered it in this passage:

The first hours were very difficult; nobody could move freely or do anything without going very slowly and trying to figure out and make sense of what he or she saw. Then something unexpected happened: Everything about their bodies and the immediate vicinity that they were touching began to look as before, but everything which could not be touched continued to be inverted. Gradually, by groping and touching while moving around to attain the satisfaction of normal needs, participants in the experiment found that objects further afield began to appear normal. In a few weeks, everything looked the right way up, and they could all do everything without any special attention or care. At one point in the experiment snow began to fall. Kohler looked through the window and saw the flakes rising from the earth and moving upwards. He went out, stretched out his hands, palms upwards, and felt the snow falling on them. After only a few moments of feeling the snow touch his palms, he began to see the snow falling instead of rising.

There have been other experiments with inverted spectacles. One carried out in the United States involved two people, one sitting in a wheelchair and the other pushing it, both fitted with such special glasses. The one who moved around by pushing the chair began to see normally, and after a few hours, was able to find his way without groping, while the one sitting continued to see everything the wrong way.

--- Moshe Feldenkrais, "Man and World," in "Explorers of Humankind," ed. Thomas Hanna

I read about an experiment (no link, sorry) where people wore helmets that gave them a 360-degree view of their surroundings. They were apparently able to adapt quite well, and could eventually do things like grab a ball tossed to them from behind without turning around.

I've thought about taking this idea further.

Think of applying the anklet idea to groups of people. What if soccer teams could know where their teammates are at any time, even when they can't see them? Now apply this to firemen, or infantry. This is the startup I'd be doing if I weren't doing what I'm doing. Plugging data feeds right into the brain, and in particular doing this for groups of people, sounds like the next big frontier.

From my experience with focusing on the senses I already have, the mere availability of the data is not sufficient. You really need to process it. The glasses intervention works well because it also takes away the primary way of interacting with the world. If you only add a sense, most of it can pretty much be ignored, since it doesn't bring any compelling extra value beyond being cool for a while. Color TV was a kinda nice improvement, but not many are jumping on the 3D bandwagon.

So if you really want to go three-eyed, it could be a good bet, from a sense-development perspective, to go new-eye-mono only for a while. Another option would be to have an environment where the new capabilities are handy enough to make a real difference. I could imagine that fixing and taking apart computers could benefit from that kind of sensing. You could also purposefully make a multilayered desk, so that simply seeing what is on the desk would require hand movement, but many more documents could be open at any time.

Your brain already filters out most of the massive amount of input it takes in, making it quite expensive to get it to bother paying attention to yet another sense-datum. New senses would also require their own "drivers". I could imagine that managing a movable eye would be more laborious than eye-focus micromanagement. Having a fixed separation between your viewpoints makes the calculations easy routine; that would have to be expanded into a more general approach for variable separation. There is a camera trick where you change your zoom while simultaneously moving the camera forward or backward, keeping the size of the primary target fixed but stretching the perspective. Big variance in viewpoint separation would induce similar effects. I could imagine how it could be nausea-inducing instead of just cool. Increased mental labour and confusion, at least in the short term, would press against adopting a more expanded sensory experience. Therefore, if such a transition is wanted, it is important to make the tempting upsides concrete in practical experience.

I have magnets implanted into two of my fingertips, which extend my sense of touch to be able to feel electromagnetic fields. I did an AMA on reddit a while ago that answers most of the common questions, but I'd be happy to answer any others.

To touch on it briefly, alternating fields feel like buzzing, and static fields feel like bumps or divots in space. It's definitely become a seamless part of my sensory experience, although most of the time I can't tell it's there because ambient fields are pretty weak.

if you always carry a wearable camera on your wrist

Better: put it on your personal drone which normally orbits you but can be sent out to look at things...

A quick search on Google Scholar with such queries as cryonic, cryoprotectant, cryostasis, and neuropreservation confirms my suspicion that there is very little, if any, academic research on cryonics. I realize that, being generally supportive of MIRI's mission, the Less Wrong community is probably not very judgmental of non-academic science, and I may be biased, being from academia myself, but I believe that despite all its problems, any field of study largely benefits from being a field of academic study. That makes it easier to get funding; that makes the results more likely to be noticed, verified and elaborated on by other experts, as well as taught to students; that makes it more likely to be seriously considered by the general public and government officials. The last point is particularly important, since on one hand, with the current quasi-Ponzi mechanism of funding, the position of preserved patients is secured by the arrival of new members, and on the other hand, a large legislative effort is required to make cryonics reliable: train the doctors, give the patients more legal protection than that of graves, and eventually get it covered by health insurance policies or single-payer systems.

As for the method itself, it frankly looks inadequate as well. I do believe that it's a good bet worth taking, but so did Egyptian pharaohs. And they lost, because their method of preservation turned out to be useless. I'm well aware of all the considerations about information theory, nanorobotics and brain scanning, but improving our freezing technologies to the extent that otherwise viable organisms could be brought back to life without further neural repairs seems to be the thing we should totally be doing.

Thus, I want to see this field develop. I want to see, at least once a year, a study concerning the cryonic preservation of neural tissue in a peer-reviewed journal with a high impact factor. And before I die I want to at least see a healthy chimpanzee cooled to the temperature of liquid nitrogen, and then brought back to life without losing any of its cognitive abilities.

What can we do about it? Is there an organization that is willing to collect donations and fund at least one academic study in this field? Can we use some crowdfunding platform and start such a campaign? Can we pitch it to Calico?

I think the nearest thing is the Brain Preservation Foundation. If you want to donate money towards that purpose, they are a good address.

For those of us who for whatever reason can't make it to a CFAR workshop, what are the best ways to get more or less the equivalent? A lot of the information they teach is in the Sequences (but not all of it, from what it looks like), but my impression is that much of the value from a workshop is in (a) hands-on activities, (b) interactions with others, and (c) personalized applications of rationality principles developed in numerous one-on-one and small-group sessions.

So I'm looking for:

  • Resources for getting the information that's not covered (or at least not comprehensively covered) in the Sequences.
  • Ideas for how to simulate the activities.
  • Ideas for how to simulate the small group interactions. This is mainly what LW meetups are for, but for various reasons I can't usually make it to a meetup.
  • How to simulate the one-on-one personalized training.

That last one is probably the hardest, and I suspect it's impossible without either (a) spending an awful lot of time developing the techniques yourself, or (b) getting a tutor. So, anybody interested in being a rationality tutor?

Find & read good self-help type stuff (relevant books by psychologists, Less Wrong posts, Sebastian Marshall, Getting Things Done, etc.) and invent/experiment with your own techniques in a systematic way. Do Tiny Habits. Start meditating. Watch & apply this video. Keep a log of all the failure modes you run in to and catalogue strategies for overcoming each of them. Read about habit formation. Brainstorm & collect habits that would be useful to have and pair them with habit formation techniques. Try lots of techniques and reflect about why things are or are not working.

This seems very interesting; maybe someone can look into this in depth. The costs are much more manageable, and there are hopefully fewer legal issues with preserving the brain only. Not sure why they only talk about "next of kin". Anyway, "chemical preservation" of the brain only for $2500 seems like an interesting alternative to Alcor or CI. It is also likely to go over better in an argument with people reluctant to opt for "traditional" cryonics, such as the parents of some of the regulars who complain about it.

I am not qualified to judge the quality of their "Instructions for Funeral Director":

Embalm the entire body, paying special attention to the brain. Use arterial fluid on the brain rather than cavity fluid. If 10% Neutral Buffered Formalin is readily available, that is preferred for the brain. Wait one hour and then pump more fluid through the brain again. Since the body will never be displayed at a funeral service, there is no need to perform any cosmetic procedures at all. Those would simply delay shipment.

In a rare case of actually doing something I said I would, I've started to write a utility for elevating certain words and phrases in the web browser to your attention, by highlighting them and providing a tool-tip explanation for why they were highlighted. It's still in the occasionally-blow-up-your-webpage-and-crash-the-browser phase of development, but is showing promise nonetheless.

I have my own reasons for wanting this utility ("LOOK AT THIS WORD! SUBJECT IT TO SCRUTINY OR IT WILL BE YOUR UNDOING!") but thought I would throw it out to LW to see if there are any use-cases I might not have considered.

On a related note, is there a reason why Less Wrong, and seemingly no other website, would suffer a catastrophic memory leak when I try to append a JSON-P script element to the DOM? It doesn't report any security policy conflicts; it just dies.
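For anyone curious, the core of a highlight-and-tooltip utility like the one described can be prototyped as a pure text transformation before touching the browser at all. This is a sketch of my own (not the commenter's actual code, and deliberately simplified: a real extension would walk DOM text nodes rather than rewrite raw markup, which is one easy way to blow up a page):

```python
import re

def highlight(text, terms):
    """Wrap each watched term in a span carrying a tooltip explanation.

    terms: dict mapping a word or phrase to the explanation shown on hover.
    Matching is case-insensitive; \\g<0> preserves the original casing.
    """
    for term, reason in terms.items():
        pattern = re.compile(r"\b%s\b" % re.escape(term), re.IGNORECASE)
        replacement = '<span class="hl" title="%s">\\g<0></span>' % reason
        text = pattern.sub(replacement, text)
    return text

print(highlight("Obviously this is true.",
                {"obviously": "Scrutinize this word!"}))
# <span class="hl" title="Scrutinize this word!">Obviously</span> this is true.
```

The word-boundary anchors keep "obviously" from firing inside longer words, which matters once the watch list grows.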

I just discovered a very useful way to improve my comfort and posture while sitting in chairs not my own. If you travel a lot, or are constantly changing workstations, or just want to improve your current setup: buy contact shelf lining, the kind with no-slip grip.

The liner adds grip to chairs that either 1. do not adequately recline or 2. recline but tend to let you slide off (slippery leather chairs). Recently I was provided with a stiff non-reclining wood chair and it was killing my back. Every time I relaxed into the back rest I started to slide down; my posture was terrible and my back hurt. I picked mine up at Target.

I can't believe it took me this long to discover this; it has greatly improved my comfort.

Edit: In case directions are necessary, place the liner (cut to appropriate length) on the seat not the back rest.

If you had four months to dedicate to working on a project, what would you work on?

I do not understand - and I mean this respectfully - why anyone would care about Newcomblike problems or UDT or TDT, beyond mathematical interest. An Omega is physically impossible - and if I were ever to find myself in an apparently Newcomblike problem in real life, I'd obviously choose to take both boxes.

An Omega is physically impossible

I don't think it's physically impossible for someone to predict my behavior in some situation with a high degree of accuracy.

An Omega is physically impossible

The idea that we live in a simulation is not a physical impossibility.

At the moment choices can often be predicted 7 seconds in advance by reading brain signals.

Is there any convenient way to promote interesting sub-threads to Discussion-level posts?

LessWrong's focus on the bay-area/software-programmer/secular/transhumanist crowd seems to me unnecessary. I understand that that's how the organization got its start, and it's fine. But when people here tie rationality to being part of that subset, or to high-IQ in general, it seems a bit silly (I also find the near-obsession with IQ a bit unsettling).

If the sequences were being repackaged as a self-help book targeted towards the widest possible audience, what would they look like?

Some of the material is essentially millennia old; self-knowledge, self-awareness, and introspection aren't new inventions. Any decent therapist will also try to get people to see the "outside view" of their actions. Transhumanism and x-risk probably wouldn't belong in this book. Bayesian reasoning and cognitive fallacies have plenty of popular descriptions around them.

Effective altruism doesn't need to be tied to utilitarianism or terms like QALYs. Look at the way the Gates Foundation describes its work, for instance.

The hardline secularism is probably alienating (and frankly, are there not many people for whom at least the outward appearance of belief is rational, when it is what ties them to their communities?) to many people who could still learn a lot. Science can be promoted as an alternative to mysticism in a way that isn't hostile and doesn't provoke instant dismissal by those who most need that alternative.

Am I missing anything here? Is there some large component of rationalism that can't be severed from the way it's packaged on this site and sites like it?

For all the emphasis on Slytherin-style interpersonal competence (not so much on the main site anymore, but it's easy to find in the archive and in Methods), LW's historically had a pretty serious blind spot when it comes to PR and other large-scale social phenomena. There's probably some basic typical-minding in this, but I'm inclined to treat it mostly as a subculture issue; American geek culture has a pretty solid exceptionalist streak to it, and treats outsiders with pity when it isn't treating them with contempt and suspicion. And we're very much tied to geek culture. I've talked to LWers who don't feel comfortable exercising because they feel like it's enemy clothing; if we can't handle something that superficial, how are we supposed to get into Joe Sixpack's head?

Ultimately I think we focus on contrarian technocrat types, consciously or not, because they're the people we know how to reach. I include myself in this, unfortunately.

A very fair assessment.

I would also note that often when people DO think about marketing LW, they speak about the act of marketing with outright contempt. Marketing is just a set of methodologies to draw attention to something. As a rationalist, one should embrace that tool for anything they care about rather than treating it as vulgar.

The hardline secularism is probably alienating (and frankly, are there not many people for whom at least the outward appearance of belief is rational, when it is what ties them to their communities?) to many people who could still learn a lot. Science can be promoted as an alternative to mysticism in a way that isn't hostile and doesn't provoke instant dismissal by those who most need that alternative.

The hardline secularism (which might be better described as a community norm of atheism, given that some of the community favors creating community structures which take on the role of religious participation,) isn't a prerequisite so much as a conclusion, but it's one that's generally held within the community to be pretty basic.

However, so many of the lessons of epistemic rationality bear on religious belief that not addressing the matter at all would probably smack of willful avoidance.

In a sense, rationality might function as an alternative to mysticism. Eliezer has spoken for instance about how he tries to present certain lessons of rationality as deeply wise so that people will not come to it looking for wisdom, find simple "answers," and be tempted to look for deep wisdom elsewhere. But there's another very important sense where, if you treat rationality like mysticism, the result is that you'll completely fuck up at rationality, and get a group that worships some "rational" sounding buzzwords without gaining any useful insight into reasoning.

Keep in mind that insofar as Less Wrong has educational goals, it's not trying to reach as wide an audience as possible, it's trying to teach as many people as possible to get it right. If "reaching" an audience means instilling them with some memes which don't have much use in isolation, while leaving out important components of rationality, that measure has basically failed.

If the sequences were being repackaged as a self-help book targeted towards the widest possible audience, what would they look like?

More simple language, many examples, many exercises.

And then the biggest problem would be that most people would just skip the exercises, remember some keywords, and think that it made them more rational.

By which I mean that making the book more accessible is a good thing, and we definitely should do it. But rationality also requires some effort from the reader, that cannot be completely substituted by the book. We could reach a wider audience, but it would still be just a tiny minority of the population. Most people just wouldn't care enough to really do the rationality stuff.

Which means that the book should start with some motivating examples. But even that has limited effect.

I believe there is a huge space for improvement, but we shouldn't expect magic even with the best materials. There is only so much even the best book can do.

Some of the material is essentially millennia old; self-knowledge, self-awareness, and introspection aren't new inventions.

The problem is, using these millennia-old methods people can generate a lot of nonsense. And they predictably do, most of the time. Otherwise, Freud would have already invented rationality, founded CFAR, become a beisutsukai master, built a Friendly AI, and started the Singularity. (Unless Aristotle or Socrates had done it first.) Instead, he just discovered that everything you dream about is secretly a penis.

The difficult part is to avoid self-deception. These millennia-old materials seem quite bad at it. Maybe they were the best of what was available at the time. But that's not enough. Archimedes may have been the smartest physicist of his time, but he still didn't invent relativity. Being "best" is not enough; you have to do things correctly.

By which I mean that making the book more accessible is a good thing, and we definitely should do it. But rationality also requires some effort from the reader, that cannot be completely substituted by the book. We could reach a wider audience, but it would still be just a tiny minority of the population. Most people just wouldn't care enough to really do the rationality stuff.

Okay, this is true. But LessWrong is currently a set of articles. So the medium is essentially unchanged, and any of these criticisms apply to the current form. And how many people do you think the article on akrasia has actually cured of akrasia?

The problem is, using these millennia-old methods people can generate a lot of nonsense. And they predictably do, most of the time.

First of all, I'm mainly dealing with the subset of material here that deals with self-knowledge. Even if you disagree with "millennia old", do you disagree with "any decent therapist would try to provide many/most of these tools to his/her patients"?

On the more scientific side, the idea of optimal scientific inquiry has been refined over the years, but the core of observation, experimentation and modeling is hardly new either.

Otherwise, Freud would have already invented rationality, founded CFAR, become a beisutsukai master, built a Friendly AI, and started the Singularity. (Unless Aristotle or Socrates had done it first.) Instead, he just discovered that everything you dream about is secretly a penis.

I do not see what you mean here. Nobody at LW has invented rationality, become a beisutsukai master, built a Friendly AI, or started the Singularity. Freud correctly realized the importance the subconscious has in shaping our behavior, and the fact that it is shaped by past experiences in ways not always clear to us. He then failed to separate this knowledge from some personal obsessions. We wouldn't expect any methods of rationality to turn Freud into a superhero; we'd expect them to help people reading him separate the wheat from the chaff.

Effective animal altruism question: I may be getting a dog. Dogs are omnivores who seem to need meat to stay healthy. What's the most ethical way to keep my hypothetical dog happy and healthy?

Edit: Answers Pet Foods appears to satisfice. I'll be going with this pending evidence that there's a better solution.