<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[LessWrong Development Server]]></title><description><![CDATA[A community blog devoted to refining the art of rationality]]></description><link>https://www.lesserwrong.com/</link><image><url>https://www.lesserwrong.com/img/favicon.png</url><title>LessWrong Development Server</title><link>https://www.lesserwrong.com/</link></image><generator>RSS for Node</generator><lastBuildDate>Tue, 05 Sep 2017 05:24:06 GMT</lastBuildDate><atom:link href="https://www.lesserwrong.com/feed.xml" rel="self" type="application/rss+xml"/><item><title><![CDATA[Intellectual Progress Inside and Outside Academia]]></title><description><![CDATA[This post is taken from a recent Facebook conversation that included Wei Dai, Eliezer Yudkowsky, Vladimir Slepnev, Stuart Armstrong, Maxim Kesin, Qiaochu Yuan and Robby Bensinger, about the ability of academia to make the key intellectual progress required in AI alignment. [The above people all gave permission to have their comments copied here. Some commenters requested that their replies not be made public, and their comment threads were not copied over.]

Initial Thread

Wei Dai:

Eliezer, can you give us your take on this discussion between me, Vladimir Slepnev, and Stuart Armstrong? I'm especially interested to know if you have any thoughts on what is preventing academia from taking, or even recognizing, certain steps in intellectual progress (e.g., inventing anything resembling Bitcoin or TDT/UDT) that non-academics are capable of. What is going on there, and what do we need to do to avoid possibly suffering the same fate? See this and this.

Eliezer Yudkowsky:

It's a deep issue. But stating the obvious is often a good idea, so to state the obvious parts, we're looking at a lot of principal-agent problems, Goodhart's Law, bad systemic incentives, hypercompetition crowding out voluntary contributions of real work, the blind leading the blind and second-generation illiteracy, etcetera. There just isn't very much in the academic system that does promote any kind of real work getting done, and a lot of other rewards and incentives instead. If you wanted to get productive work done inside academia, you'd have to ignore all the incentives pointing elsewhere, and then you'd (a) be leading a horrible unrewarded life and (b) you would fall off the hypercompetitive frontier of the major journals and (c) nobody else would be particularly incentivized to pay attention to you except under unusual circumstances. Academia isn't about knowledge. To put it another way, although there are deep things to say about the way in which bad incentives arise, the skills that are lost, the particular fallacies that arise, and so on, it doesn't feel to me like the *obvious* bad incentives are inadequate to explain the observations you're pointing to. Unless there's some kind of psychological block preventing people from seeing all the obvious systemic problems, it doesn't feel like the end result ought to be surprising.

Of course, a lot of people do seem to have trouble seeing what I'd consider to be obvious systemic problems. I'd chalk that up to less fluency with Moloch's toolbox, plus not being status-blind: they assign positive status to academia, which makes them emotionally reluctant to correctly take all the obvious problems at face value.

Eliezer Yudkowsky (cont.):

It seems to me that I've watched organizations like OpenPhil try to sponsor academics to work on AI alignment, and it seems to me that they just can't produce what I'd consider to be real work. The journal paper that Stuart Armstrong coauthored on "interruptibility" is a far step down from Armstrong's other work on corrigibility. It had to be dumbed way down (I'm counting obscuration with fancy equations and math results as "dumbing down") to be published in a mainstream journal. It had to be stripped of all the caveats and any mention of explicit incompleteness, which is necessary meta-information for any ongoing incremental progress, not to mention important from a safety standpoint. The root cause can be debated but the observable seems plain. If you want to get real work done, the obvious strategy would be to not subject yourself to any academic incentives or bureaucratic processes. Particularly including peer review by non-"hobbyists" (peer commentary by fellow "hobbyists" still being potentially very valuable), or review by grant committees staffed by the sort of people who are still impressed by academic sage-costuming and will want you to compete against pointlessly obscured but terribly serious-looking equations.

Eliezer Yudkowsky (cont.):

There are a lot of detailed stories about good practice and bad practice: why mailing lists work better than journals, for instance, because of that thing I wrote on FB somewhere about how you absolutely need 4 layers of conversation in order to have real progress, while journals do 3 layers, which doesn't work. If you're asking about those, it's a lot of little long stories that add up.

Subthread 1

Wei Dai:

Academia is capable of many deep and important results though, like complexity theory, public-key cryptography, zero knowledge proofs, vNM and Savage's decision theories, to name some that I'm familiar with. It seems like we need a theory that explains why it's able to take certain kinds of steps but not others, or maybe why the situation has gotten a lot worse in recent decades.

That academia may not be able to make progress on AI alignment is something that worries me, and it's a major reason I'm concerned about this issue now. If we had a better, more nuanced theory of what is wrong with academia, that would be useful for guiding our own expectations on this question, and perhaps also for persuading people in charge of organizations like OpenPhil.

Qiaochu Yuan:

Public-key cryptography was invented by GCHQ first, right?

Wei Dai:

It was independently reinvented by academia, with only a short delay (4 years, according to Wikipedia), using far fewer resources than the government agencies had. That seems good enough to illustrate my point that academia is (or at least was) capable of doing good and efficient work.

Qiaochu Yuan:

Fair point.

I'm a little concerned about the use of the phrase "academia" in this conversation not cutting reality at the joints. Academia may simply not be very homogeneous over space and time - it certainly seems strange to me to lump von Neumann in with everyone else, for example.

Wei Dai: 

Sure, part of my question here is how to better carve reality at the joints. What's the relevant difference between the parts (in space and/or time) of academia that are productive and the parts that are not?

Stuart Armstrong:

Academia is often productive. I think the challenge is mainly getting it to be productive on the right problems.

Wei Dai:

Interesting, so maybe a better way to frame my question is, of the times that academia managed to focus on the right problems, what was responsible for that? Or, what is causing academia to not be able to focus on the right problems in certain fields now?

Subthread 2

Eliezer Yudkowsky:

Things have certainly gotten a lot worse in recent decades. There are various stories I've theorized about that, but the primary fact seems pretty blatant. Things might be different if we had the researchers and incentives of the 1940s, but modern academics are only slightly less likely to sprout wings than to solve real alignment problems as opposed to fake ones. They're still the same people and the same incentive structure that ignored the entire issue in the first place.

OpenPhil is better than most funding sources, but not close to adequate. I model them as not having seen past the pretense. I'm not sure that more nuanced theories are what they need to break free. Sure, I have a dozen theories about various factors. But ultimately, most human institutions through history haven't solved hard mental problems. Asking why modern academia doesn't invent UDT may be like asking why JC Penney doesn't. It's just not set up to do that. Nobody is being docked a bonus for writing papers about CDT instead. Feeling worried, as if something is out of place, when the College of Cardinals in the Catholic Church fails to invent cryptocurrencies suggests a basic mental tension that may not be cured by more nuanced theories of the sociology of religion. Success is unusual and calls for explanation; failure doesn't. Academia in a few colleges in a few countries used to be in a weird regime where it could solve hard problems; times changed, and it fell out of that weird place.

Rob Bensinger:

It's not actually clear to me, even after all this discussion, that 1940s researchers had significantly better core mental habits / mindsets for alignment work than 2010s researchers. A few counter-points:

- A lot of the best minds worked on QM in the early 20th century, but I don't see clear evidence that QM progressed differently than AI is progressing today; that is, I don't know of a clear case that falsifies the hypothesis "all the differences in output are due to AI and QM as cognitive problems happening to involve inherently different kinds and degrees of difficulty". In both cases, it seems like people did a good job of applying conventional scientific methods and occasionally achieving conceptual breakthroughs in conventional scientific ways; and in both cases, it seems like there's a huge amount of missing-the-forest-for-the-trees, not-seriously-thinking-about-the-implications-of-beliefs, and generally-approaching-philosophyish-questions-flippantly. It took something like 50 years to go from "Schrodinger's cat is weird" to "OK /maybe/ macroscopic superposition-ish things are real" in physics, and "maybe macroscopic superposition-ish things are real" strikes me as much more obvious and much less demanding of sustained theorizing than, e.g., 'we need to prioritize decision theory research ASAP in order to prevent superintelligent AI systems from destroying the world'. Even von Neumann had non-naturalist views about QM, and if von Neumann is a symptom of intellectual degeneracy then I don't know what isn't.
- Ditto for the development of nuclear weapons. I don't see any clear examples of qualitatively better forecasting, strategy, outside-the-box thinking, or scientific productivity on this topic in e.g. the 1930s, compared to what I'd expect to see today. (Though this comparison is harder to make because we've accumulated a lot of knowledge and hard experience with technological GCR as a result of this and similar cases.) The near-success of the secrecy effort might be an exception, since that took some loner agency and coordination that seems harder to imagine today. (Though that might also have been made easier by the smaller and less internationalized scientific community of the day, and by the fact that world war was on everyone's radar?)
- Turing and I. J. Good both had enough puzzle pieces to do at least a little serious thinking about alignment, and there was no particular reason for them not to do so. The 1956 Dartmouth workshop shows "maybe true AI isn't that far off" was at least taken somewhat seriously by a fair number of people (though historians tend to overstate the extent to which this was true). If 1940s researchers were dramatically better than 2010s researchers at this kind of thing, and the decay after the 1940s wasn't instantaneous, I'd have expected at least a hint of serious thinking-for-more-than-two-hours about alignment from at least one person working in the 1950s-1960s (if not earlier).

Rob Bensinger:

Here's a different hypothesis: Human brains and/or all of the 20th century's standard scientific toolboxes and norms are just really bad at philosophical/conceptual issues, full stop. We're bad at it now, and we were roughly equally bad at it in the 1940s. A lot of fields have slowed down because we've plucked most of the low-hanging fruit that doesn't require deep philosophical/conceptual innovation, and AI in particular happens to be an area where the things human scientists have always been worst at are especially critical for success.

Wei Dai:

Ok, so the story I'm forming in my mind is that we've always been really bad at philosophical/conceptual issues, and past philosophical/conceptual advances just represent very low-hanging fruit that have been picked. When we invented mailing lists / blogs, the advantage over traditional academic communications allowed us to reach a little higher and pick up a few more fruits but progress is still very limited because we're still not able to reach very high in an absolute sense, and making progress this way depends on gathering together enough hobbyists with the right interests and resources which is a rare occurrence. Rob, I'm not sure how much of this you endorse, but it seems like the best explanation of all the relevant facts I've seen so far.

Rob Bensinger:

I think the object-level philosophical progress via mailing lists / blogs was tied to coming up with some good philosophical methodology. One simple narrative about the global situation (pretty close to the standard narrative) is that before 1880 or so, human inquiry was good at exploring weird nonstandard hypotheses, but bad at rigorously demanding testability and precision of those hypotheses. Human inquiry between roughly 1880 and 1980 solved that problem by demanding testability and precision in all things, which (combined with prosaic knowledge accumulation) let them grab a lot of low-hanging scientific fruit really fast, but caused them to be unnecessarily slow at exploring any new perspectives that weren't 100% obviously testable and precise in a certain naive sense (which led to lack-of-serious-inquiry into "weird" questions at the edges of conventional scientific activities, like MWI and Newcomb's problem).

Bayesianism, the cognitive revolution, the slow fade of positivism's influence, the random walk of academic one-upmanship, etc. eventually led to more sophistication in various quarters about what kind of testability and precision are important by the late 20th century, but this process of synthesizing 'explore weird nonstandard hypotheses' with 'demand testability and precision' (which are the two critical pieces of the puzzle for 'do unusually well at philosophy/forecasting/etc.') was very uneven and slow. Thus you get various little islands of especially good philosophy-ish thinking showing up at roughly the same time here and there, including parts of analytic philosophy (e.g., Drescher), mailing lists (e.g., Extropians), and psychology (e.g., Tetlock).

Subthread 3

Vladimir Slepnev:

Eliezer, your position is very sharp. A couple questions then:

1. Do you think e.g. Scott Aaronson's work on quantum computing today falls outside the "weird regime where it could solve hard problems"?
2. Do you have a clear understanding why e.g. Nick Bostrom isn't excited about TDT/UDT?

Wei Dai:

Vladimir, can you clarify what you mean by "isn't excited"? Nick did write a few paragraphs about the relevance of decision theory to AI alignment in his Superintelligence, and cited TDT and UDT as "newer candidates [...] which are still under development". I'm not sure what else you'd expect, given that he hasn't specialized in decision theory in his philosophy work? Also, what's your own view of what's causing academia to not be able to make these "outsider steps"?

Vladimir Slepnev:

Wei, at some point you thought of UDT as the solution to anthropic reasoning, right? That's Bostrom's specialty. So if you are right, I'd expect more than a single superficial mention.

My view is that academia certainly tends to go off in wrong directions and it was always like that. But its direction can be influenced with enough effort and understanding, it's been done many times, and the benefits of doing that are too great to overlook.

Wei Dai:

I'm not sure, maybe he hasn't looked into UDT closely enough to understand the relevance to anthropics or he's committed to a probability view? Probably Stuart has a better idea of this than I do. Oh, I do recall that when I attended a workshop at FHI, he asked me some questions about UDT that seemed to indicate that he didn't understand it very well. I'm guessing he's probably just too busy to do object-level philosophical investigations these days.

Can you give some past examples of academia going off in the wrong direction, and that being fixed by outsiders influencing its direction?

Vladimir Slepnev:

Why do you need the "fixed by outsiders" bit? I think it's easier to change the direction of academia while being in academia, and that's been done many times.

Maxim Kesin:

Vladimir Slepnev: The price of admission is pretty high for people who could be doing otherwise-productive work, no? Especially since very few members of the club can have direction-changing impact. Something like finding and convincing existing high-standing members, preferably several of them, seems like a better strategy than joining the club and doing it from the inside yourself.

Wei Dai:

Vladimir, on LW you wrote "More like a subset of steps in each field that need to be done by outsiders, while both preceding and following steps can be done by academia." If some academic field is going in a wrong direction because it's missing a step that needs to be done by outsiders, how can someone in academia change its direction? I'm confused... Are you saying outsiders should go into academia in order to change its direction, after taking the missing "outsider steps"? Or that there is no direct past evidence that outsiders can change academia's direction but there's evidence that insiders can and that serves as bayesian evidence that outsiders can too? Or something else?

Vladimir Slepnev:

I guess I shouldn't have called them "outsider steps", more like "newcomer steps". Does that make sense?

Eliezer Yudkowsky:

There's an old question, "What does the Bible God need to do for the Christians to say he is not good?" What would academia need to do before you let it go?

Vladimir Slepnev:

But I don't feel abused! My interactions with academia have been quite pleasant, and reading papers usually gives me nice surprises. When I read your negative comments about academia, I mostly just get confused. At least from what I've read in this discussion today, it seems like the mystical force that's stopping people like Bostrom from getting fully on board with ideas like UDT is simple miscommunication on our part, not anything more sinister. If our arguments for using decisions over probabilities aren't convincing enough, perhaps we should work on them some more.

Wei Dai:

Vladimir, surely those academic fields have had plenty of infusion of newcomers in the form of new Ph.D. students, but the missing steps only got done when people tried to do them while remaining entirely outside of academia. Are you sure the relevant factor here is "new to the field" rather than "doing work outside of academia"?

Stuart Armstrong:

Academic fields are often productive, but narrow. Saying "we should use decision theory instead of probability to deal with anthropics" falls outside of most of the relevant fields, so few academics are interested, because it doesn't solve the problems they are working on.

Wei Dai:

Vladimir, a lot of people on LW didn't have much trouble understanding UDT as informally presented there, or recognizing it as a step in the right direction. If joining academia makes somebody much less able to recognize progress in decision theory, that seems like a bad thing and we shouldn't be encouraging people to do that (at least until we figure out what exactly is causing the problem and how to fix or avoid it on an institutional or individual level).

Vladimir Slepnev:

I think it's not surprising that many LWers agreed with UDT, because most of them were introduced to the topic by Eliezer's post on Newcomb's problem, which framed the problem in a way that emphasized decisions over probabilities. (Eliezer, if you're listening, that post of yours was the single best example of persuasion I've seen in my life, and for a good goal too. Cheers!) So there's probably no statistical effect saying outsiders are better at grasping UDT on average. It's not that academia is lacking some decision theory skill, they just haven't bought our framing yet. When/if they do, they will be uniquely good at digging into this idea, just as with many other ideas.

If the above is true, then refusing to pay the fixed cost of getting our ideas into academia seems clearly wrong. What do you think?

Subthread 4

Stuart Armstrong:

I think the problem is a mix of specialisation and lack of urgency. If I'd been willing to adapt to the format, I'm sure I could have got my old pro-SIA arguments published. But anthropics wasn't ready for an "ignore the big probability debates you've been having; anthropic probability doesn't exist" paper. And those who were interested in the fundamental interplay between probability and decision theory weren't interested in anthropics (and I wasn't willing to put in the effort to translate it into their language).

This is where the lack of urgency comes in. People found the paper interesting, I'd wager, but felt it wasn't saying anything about the questions they were interested in. And they had no real feeling that some questions were far more important than theirs.

Stuart Armstrong:

I've presented the idea to Nick a few times, but he never seemed to get it fully. It's hard to ignore probabilities when you've spent your life with them.

Eliezer Yudkowsky:

I will mention for whatever it's worth that I don't think decision theory can eliminate anthropics. That's an intuition I still find credible and it's possible Bostrom felt the same. I've also seen Bostrom contribute at least one decision theory idea to anthropic problems, during a conversation with him by instant messenger, a division-of-responsibility principle that UDT later rendered redundant.

Stuart Armstrong:

I also disagree with Eliezer about the use of the "interruptible agents" paper. The math is fun but ultimately pointless, and there is little mention of AI safety. However, it was immensely useful for me to write that paper with Laurent, as it taught me so much about how to model things, and how to try and translate those models into things that ML people like. As a consequence, I can now design indifference methods for practically any agent, which was not the case before.

And of course the paper wouldn't mention the hard AI safety problems - not enough people in ML are working on those. The aim was to 1) present part of the problem, 2) present part of the solution, and 3) get both of those sufficiently accepted that harder versions of the problem can then be phrased as "take known problem/solution X, and add an extra assumption..."

Rob Bensinger:

That rationale makes sense to me. I think the concern is: if the most visible and widely discussed papers in AI alignment continue to be ones that deliberately obscure their own significance in various ways, then the benefits from the slow build-up to being able to clearly articulate our actual views in mainstream outlets may be outweighed by the costs from many other researchers internalizing the wrong take-aways in the intervening time. This is particularly true if many different build-ups like this are occurring simultaneously, over many years of incremental progress toward just coming out and saying what we actually think.

I think this is a hard problem, and one MIRI's repeatedly had to deal with. Very few of MIRI's academic publications even come close to giving a full rationale for why we care about a given topic or result. The concern is with making it standard practice for high-visibility AI alignment papers to be at least somewhat misleading (in order to get wider attention, meet less resistance, get published, etc.), rather than with the interruptibility paper as an isolated case; and this seems like a larger problem for overstatements of significance than for understatements.

I don't know how best to address this problem. Two approaches MIRI has tried before, which might help FHI navigate this, are: (1) writing a short version of the paper for publication that doesn't fully explain the AI safety rationale, and a longer eprint of the same paper that does explain the rationale; and/or (2) explaining results' significance more clearly and candidly in the blog post announcing the paper.

Subthread 5

Eliezer Yudkowsky:

To put this yet another way, most human bureaucracies and big organizations don't do science. They have incentives for the people inside them which get them to do things other than science. For example, in the FBI, instead of doing science, you can best advance your career by closing big-name murder cases... or whatever. In the field of psychology, instead of doing science, you can get a lot of undergraduates into a room and submit obscured-math impressive-sounding papers with a bunch of tables that claim a p-value less than 0.05. Among the ways we know that this has little to do with science is that the papers don't replicate. P-values are rituals[1], and being surprised that the rituals don't go hand-in-hand with science says you need to adjust your intuitions about what is surprising. It's like being surprised that your prayers aren't curing cancer and asking how you need to pray differently.

Now, it may be that separately from the standard incentives, decades later, a few heroes get together and try to replicate some of the most prestigious papers. They are doing science. Maybe somebody inside the FBI is also doing science. Lots of people in Christian religious organizations, over the last few centuries, did some science, though fewer now than before. Maybe the public even lauded the science they did, and they got some rewards. It doesn't mean the Catholic Church is set up to teach people how to do real science, or that this is the primary way to get ahead in the Catholic Church such that status-seekers will be driven to seek their promotions by doing great science.

The people doing real science by trying to replicate psychology studies may report ritual p-values and submit for ritual peer-review-by-idiots. Similarly, some doctors in the past no doubt prayed while giving their patients antibiotics. It doesn't mean that prayer works some of the time. It means that these heroes are doing science, and separately, doing bureaucracy and a kind of elaborate ritual that is what our generation considers to be prestigious and mysterious witch-doctoring.

[1] https://arbital.com/p/likelihoods_not_pvalues/?l=4x<br/><br/><a href="https://www.lesserwrong.com/posts/xQ9tMMk3RArodLtDq/intellectual-progress-inside-and-outside-academia">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/xQ9tMMk3RArodLtDq/intellectual-progress-inside-and-outside-academia</link><guid isPermaLink="false">xQ9tMMk3RArodLtDq</guid><dc:creator><![CDATA[Ben]]></dc:creator><pubDate>Sat, 02 Sep 2017 23:08:46 GMT</pubDate></item><item><title><![CDATA[A combined analysis of genetically correlated traits identifies 107 loci associated with intelligence | bioRxiv]]></title><description><![CDATA[<a href="https://www.lesserwrong.com/posts/ZKHGbvphvHBcvYkMH/a-combined-analysis-of-genetically-correlated-traits">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/ZKHGbvphvHBcvYkMH/a-combined-analysis-of-genetically-correlated-traits</link><guid isPermaLink="false">ZKHGbvphvHBcvYkMH</guid><pubDate>Tue, 18 Jul 2017 06:30:14 GMT</pubDate></item><item><title><![CDATA[Subtle Forms of Confirmation Bias]]></title><description><![CDATA[There are at least two types of confirmation bias.

The first is selective attention: a tendency to pay attention to, or recall, that which confirms the hypothesis you are thinking about rather than that which speaks against it.

The second is selective experimentation: a tendency to do experiments which will confirm, rather than falsify, the hypothesis.

The standard advice for both cases seems to be "explicitly look for things which would falsify the hypothesis". I think this advice is helpful, but it is subtly wrong, especially for the selective-experimentation type of confirmation bias. Selective attention is relatively straightforward, but selective experimentation is much more complex than it initially sounds.

Looking for Falsification

What the standard (Popperian) advice tells you to do is try as hard as you can to falsify your hypothesis. You should think up experiments where your beloved hypothesis really could fail.

What this advice definitely does do is guard against the mistake of running experiments which could not falsify your hypothesis. Such a test either violates conservation of expected evidence (by claiming to provide evidence one way without any possibility of providing evidence the other way), or provides only very weak evidence for your claim (by looking much the same whether your claim is true or false). Looking for tests which could falsify your result steers you toward tests which would provide strong evidence, and helps you avoid violating conservation of expected evidence.
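
Conservation of expected evidence can be checked numerically. The sketch below is only an illustration of the theorem, using made-up numbers (a hypothetical prior of 0.3 and assumed likelihoods for a binary test), not anything from a real experiment:

```python
# Conservation of expected evidence with hypothetical numbers.
prior = 0.3              # P(H): prior belief in hypothesis H
p_pos_given_h = 0.9      # P(test positive | H), assumed
p_pos_given_not_h = 0.2  # P(test positive | not H), assumed

# Probability of each test outcome, via the law of total probability.
p_pos = prior * p_pos_given_h + (1 - prior) * p_pos_given_not_h
p_neg = 1 - p_pos

# Posterior under each outcome, via Bayes' theorem.
post_if_pos = prior * p_pos_given_h / p_pos
post_if_neg = prior * (1 - p_pos_given_h) / p_neg

# A positive result pushes belief up, so a negative result must pull it
# down by a compensating amount: the expected posterior equals the prior.
expected_posterior = p_pos * post_if_pos + p_neg * post_if_neg
assert abs(expected_posterior - prior) < 1e-12
```

A test that could only confirm would need post_if_neg to equal the prior while post_if_pos exceeds it, and the identity above shows that no choice of likelihoods can produce that.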

However, there are more subtle ways in which confirmation bias can act.

Predicting Results in Advance

You can propose a test which would indeed fit your hypothesis if it came out one way, and which would disconfirm your hypothesis if it came out the other way -- but where you can predict the outcome in advance. It's easy to not realize you are doing this. You'll appear to provide significant evidence for your hypothesis, but actually you've cherry-picked your evidence before even looking at it; you knew enough about the world to know where to look to see what you wanted to see.

Suppose Dr. Y studies a rare disease, Swernish syndrome. Many scientists have formed an intuition that Swernish syndrome has something to do with a chemical G-complex. Dr. Y is thinking on this one night, when the intuition crystallizes into G-complex theory, which would provide a complete explanation of how Swernish syndrome develops. G-complex theory makes the novel prediction that G-complex in the bloodstream will spike during early onset of the disease; if this were false, G-complex theory would have to be false. Dr. Y does the experiment, and finds that the spike does occur. No one has measured this before, nor has anyone else put forward a model which makes that prediction. However, it happens that anyone familiar with the details of Dr. Y's experimental results over the past decade would have strongly suspected the same spike to occur, whether or not they endorsed G-complex theory. Does the experimental result constitute significant evidence?

This is a subtle kind of double-counting of evidence. You have enough evidence to know the result of the experiment; also, your evidence has caused you to generate a hypothesis. You cannot then claim the success of the experiment as more evidence for your hypothesis: you already know what would happen, so it can't alter the certainty of your hypothesis.

If we're dealing only with personal rationality, we could invoke conservation of expected evidence again: if you already predict the outcome with high probability, you cannot simultaneously derive much evidence from it. However, in group rationality, there are plenty of cases where you want to predict an experiment in advance and then claim it as evidence. You may already be convinced, but you need to convince skeptics. So, we can't criticize someone just for being able to predict their experimental results in advance. That would be absurd. The problem is, the hypothesis isn't what did the work of predicting the outcome. Dr. Y had general world-knowledge which allowed him to select an experiment whose results would be in line with his theory. 

To Dr. Y, it just feels like "if I am right, we will see the spike. If I am wrong, we won't see it." From the outside, we might be tempted to say that Dr. Y is not "trying hard enough to falsify G-complex theory". But how can Dr. Y use this advice to avoid the mistake? A hypothesis is an explicit model of the world, which guides your predictions. When asked to try to falsify, though, what's your guide? If you find your hypothesis very compelling, you may have difficulty imagining how it could be false. A hypothesis is solid, definite. The negation of a hypothesis includes anything else. As a result, "try to falsify your hypothesis" is very vague advice. It doesn't help that the usual practice is to test against a null hypothesis. Dr. Y tests against the spike not being there, and thinks this sufficient.

Implicit Knowledge

Part of the problem here is that it should be very clear what could and could not have been predicted. There's an interaction between your general world knowledge, which is not explicitly articulated, and your scientific knowledge, which is. 

If all of your knowledge were explicit scientific knowledge, many biases would disappear. You couldn't possibly have hindsight bias; each hypothesis would predict the observation with a precise probability, which you can calculate. 

Similarly, the failure mode I'm describing would become impossible. You could easily notice that it's not really your new hypothesis doing the work of telling you which experimental result to expect; you would know exactly what other world-knowledge you're using to design your experiment.

I think this is part of why it is useful to orient toward gear-like models. If our understanding of a subject is explicit rather than implicit, we can do a lot more to correct our reasoning. However, we'll always have large amounts of implicit, fuzzy knowledge coming into our reasoning process; so, we have to be able to deal with that.

Is "Sufficient Novelty" The Answer?

In some sense, the problem is that Dr. Y's experimental result isn't novel enough. It might be a "novel prediction" in the sense that it hasn't been explicitly predicted by anyone, but it is a prediction that could have been made without Dr. Y's new hypothesis. Extraordinary claims require extraordinary evidence, right? It isn't enough that a hypothesis makes a prediction which is new. The hypothesis should make a prediction which is really surprising. 

But, this rule wouldn't be any good for practical science. How surprising something is is too subjective, and it is too easy for hindsight bias to make it feel as if the result of the experiment could have been predicted. Besides: if you want science to be able to provide compelling evidence to skeptics, you can't throw out experiments as unscientific just because most people can predict their outcome.

Method of Multiple Hypotheses

So, how could Dr. Y have avoided the mistake?

It is meaningless to confirm or falsify a hypothesis in isolation; all you can really do is provide evidence which helps distinguish between hypotheses. This will guide you away from "mundane" tests where you actually could have predicted the outcome without your hypothesis, because there will likely be many other hypotheses which would be able to predict the outcome of that test. It guides you toward corner cases, where otherwise similar hypotheses make very different predictions.

We can unpack "try to falsify" as "come up with as many plausible alternative hypotheses as you can, and look for experiments which would rule out the others." But actually, "come up with alternative hypotheses" is more than an unpacking of "try to falsify"; it shifts you to trying to distinguish between many hypotheses, rather than focusing on "your" hypothesis as central.

The actual, exactly correct criterion for an experiment is its value-of-information. "Try to falsify your hypothesis" is a lousy approximation of this, which judges experiments by how likely they are to provide evidence against your hypothesis, or the likelihood ratio against your hypothesis in the case where the experiment doesn't go as your hypothesis predicts, or something. Don't optimize for the wrong metric; things'll tend to go poorly for you.

Some might object that trying-to-falsify is a good heuristic, since value of information is too difficult to compute. I'd say that a much better heuristic is to pretend distinguishing the right hypothesis is equally valuable in all cases, and look for experiments that allow you to maximally differentiate between them. Come up with as many possibilities as you can, and try to differentiate between the most plausible ones.
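That heuristic can be made concrete. A minimal sketch (all numbers hypothetical): score a yes/no experiment by its expected reduction in entropy over the live hypotheses. A test that every hypothesis passes, like Dr. Y's spike, scores near zero; a corner case where the hypotheses sharply disagree scores high.

```python
import math

def entropy(ps):
    # Shannon entropy in bits of a probability distribution.
    return -sum(p * math.log2(p) for p in ps if p > 0)

def expected_info_gain(prior, likelihoods):
    # Expected entropy reduction from a yes/no experiment, where
    # likelihoods[i] = P(positive result | hypothesis i).
    p_pos = sum(p * l for p, l in zip(prior, likelihoods))
    post_pos = [p * l / p_pos for p, l in zip(prior, likelihoods)]
    post_neg = [p * (1 - l) / (1 - p_pos) for p, l in zip(prior, likelihoods)]
    return entropy(prior) - (p_pos * entropy(post_pos)
                             + (1 - p_pos) * entropy(post_neg))

prior = [1/3, 1/3, 1/3]            # three live hypotheses
spike_test = [0.95, 0.90, 0.92]    # every hypothesis predicts the spike
corner_case = [0.90, 0.10, 0.50]   # hypotheses sharply disagree
```

Under this scoring, `corner_case` is worth far more than `spike_test`, which formalizes the advice to seek out experiments that differentiate between plausible hypotheses.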

Given that the data was already very suggestive of a G-complex spike, Dr. Y would most likely generate other hypotheses which also involve a G-complex spike. This would make the experiment which tests for the spike uninteresting, and suggest other more illuminating experiments.

I think "coming up with alternatives" is a somewhat underrated debiasing technique. It is discussed more in Heuer's Psychology of Intelligence Analysis and Chamberlin's Method of Multiple Working Hypotheses.</br></br><a href="https://www.lesserwrong.com/posts/mmwyubv724MTvvL5Z/subtle-forms-of-confirmation-bias">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/mmwyubv724MTvvL5Z/subtle-forms-of-confirmation-bias</link><guid isPermaLink="false">mmwyubv724MTvvL5Z</guid><pubDate>Mon, 03 Jul 2017 23:00:00 GMT</pubDate></item><item><title><![CDATA[Epistemic Spot Check: A Guide To Better Movement (Todd Hargrove)]]></title><description><![CDATA[This is part of an ongoing series assessing where the epistemic bar should be for self-help books.

Introduction

Thesis: increasing your physical capabilities is more often a matter of teaching your neurological system than it is anything to do with your body directly.  This includes things that really really look like they’re about physical constraints, like strength and flexibility.  You can treat injuries and pain and improve performance by working on the nervous system alone.  More surprisingly, treating these physical issues will have spillover effects, improving your mental and emotional health. A Guide To Better Movement provides both specific exercises for treating those issues and general principles that can be applied to any movement art or therapy. 

The first chapter of this book failed spot checking pretty hard.  If I hadn’t had a very strong recommendation from a friend (“I didn’t take pain medication after two shoulder surgeries” strong), I would have tossed it aside.  But I’m glad I kept going, because it turned out to be quite valuable (this is what triggered that meta post on epistemic spot checking).  In accordance with the previous announcement on epistemic spot checking, I’m presenting the checks of chapter one (which failed, badly), and chapter six (which contains the best explanation of pain psychology I’ve ever seen), and a review of model quality.  I’m very eager for feedback on how this works for people.

Chapter 1: Intro (of the book)

Claim: “Although we might imagine we are lengthening muscle by stretching, it is more likely that increased range of motion is caused by changes in the nervous system’s tolerance to stretch, rather than actual length changes in muscles. ” (p. 5).  

Overstated, weak.  (PDF).  The paper claims this applies up to 8 weeks, no further.  Additionally, the paper draws most (all?) of its data from two studies and it doesn’t give the sample size of either. 

Claim:  “Research shows the forces required to deform mature connective tissue are probably impossible to create with hands, elbows or foam rollers.” (p. 5).  

Misleading. (Abstract).  Where by “research” Hargrove means “mathematical model extrapolated from a single subject”. 

Claim:  “in hockey players, strong adductors are far more protective against groin strain than flexible adductors, which offer no benefit” (p. 14). 

Misleading. (Abstract) Sample size is small, and the study was of the relative strength of adductor to abductor, not absolute strength. 

Claim: “Flexibility in the muscles of the posterior chain correlates with slower running and poor running economy.” (p. 14). 

Accurate citation, weak study.  (Abstract) Sample size: 8.  Eight.  And it’s correlational. 

[A number of interesting ideas whose citations are in books and thus inaccessible to me] 

Claim:  “…most studies looking at measurable differences in posture between individuals find that such differences do not predict differences in chronic pain levels.”  (p. 31).  

Accurate citation.  (Abstract).  It’s a metastudy and I didn’t track down any of the 54 studies included, but the results are definitely quoted accurately. 

  Chapter 6: Pain 

Claim: The “Neuromatrix” approach to pain refers to the pattern of brain activity that creates pain, and holds that pain is an output of brain activity, not an input (p93). 

True, although the ability to correctly use definitions is not very impressive. 

Claim: “If you think a particular stimulus will cause pain, then pain is more likely.  Cancer patients will feel more pain if they believe the pain heralds the return of cancer, rather than being a natural part of the healing process.” (p93). 

Correctly cited, small sample size. (Source 1, source 2, TEDx Talk). 

Claim: Psychological states associated with mood disorders (depression, anxiety, learned helplessness, etc) are associated with pain (p94). 

True, (source), although it doesn’t look like the study is trying to establish causality. 

Claim: Many pain-free people have the kinds of injuries doctors blame pain on (p95). 

True, many sources, all with small sample sizes.  (source 1, source 2, source 3, source 4, source 5) 

Claim: On taking some cure for pain, relief kicks in before the chemical has a chance to do any work (p98) 

True.  His source for this was a little opaque but I’ve seen this fact validated many other places. 

Claim: we know you can have pain without stimulus because you can have arm pain without an arm (p102). 

True, phantom limb pain is well established. 

Claim: some people feel a heart attack as arm pain because the nerves are very close to each other and the heart basically never hurts, so the brain “corrects” the signal to originating in the arm (p102). 

First part: True.  Explanation: unsupported.  The explanation certainly makes sense, but he provides no citations and I can’t find any other source on it. 

Claim: Inflammation lowers the firing threshold of nociceptors (aka sensitization) (p102). 

True (source). 

Claim: nociception is processed by the dorsal horn in the spine.  The dorsal horn can also become sensitized, firing with less stimulus than it otherwise would.  Constant activation is one of the things that increases sensitivity, which is one mechanism for chronic pain (p103). 

True (source). 

Claim: people with chronic pain often have poor “body maps”, meaning that their mental model of where they are in space is inaccurate and they have less resolution when assessing where a given sensation is coming from (p107). 

Accurate citation (source).  This is a combination of literature review and reporting of novel results.  The novel results had a sample of five. 

Claim: The hidden hand in the rubber hand illusion experiences a drop in temperature (p109). 

Accurate citation, tiny sample size (source).  This paper, which is cited by the book’s citation, contains six experiments with sample sizes of fifteen or less.  I am torn between dismissing this because cool results with tiny sample sizes are usually bullshit, and accepting it because it is super cool. 

Claim: “a hand that has been disowned through use of the rubber hand illusion will suffer more inflammation in response to a physical insult than a normal hand.” (p. 109). 

Almost accurate citation (source).  The study was about histamine injection, not injury per se.   Insult technically covers both, but I would have preferred a more precise phrasing.  Also, sample size 34. 

Claim: People with chronic back pain have trouble perceiving the outline of their back (p. 109).  

Accurate citation, sample size six (pdf). 

Claim:  “Watching the movements in a mirror makes the movements less painful [for people with lower back pain].” (p. 111). 

Accurate citation, small sample size (source).

Model Quality

Reminder: the model is that pain and exhaustion are a product of your brain processing a variety of information.  The prediction is that improving the quality of processing via the principles explained in the book can reduce pain and increase your physical capabilities. 

Simplicity: Good.  This is not actually a simple model; it requires a ton of explanation to a layman.  But most of its assumptions come from neurology as a whole; the leap from “more or less accepted facts about neurology” to this model is quite small. 

Explanation Quality: Fantastic.  I’ve done some reading on pain psychology, much of which is consistent with Guide…, but Guide… has by far the best explanation I’ve read. 

Explicit Predictions: Good, kept from greatness only by the fact that brains and bodies are both very complicated and there’s only so much even a very good model can do. 

Useful Predictions: Okay. The testable prediction for the home-reader is that following the exercises in the back of the book, or going to a Feldenkrais class, will treat chronic pain, and increase flexibility and strength.  Since the book itself admits that a lot of things offer short term relief but don’t address the real problem, helping immediately doesn’t prove very much. 

Acknowledging Limitations: Poor.  GTBM doesn’t have the grandiose vision of some cure-all books, and repeatedly reminds you that your brain being involved doesn’t mean your brain is in control.  But there’s no sentence along the lines of “if this doesn’t work there’s a mechanical problem and you should see a doctor.” 

Measurability: poor.  This book expects you to put in a lot of time before seeing results, and does not make a specific prediction about the form they will come in.  Worse, I don’t think you can skip straight to the exercises.  If I hadn’t read the entire preceding book I wouldn’t have approached them in the correct spirit of attention and curiosity. 

Hmmm, if I’d assigned a gestalt rating it would have been higher than what I now think is merited based on the subscores.  I deliberately wrote this mostly before trying the exercises, so I can’t give an effectiveness score.  If you do decide to try it, please let me know how it goes so I can further calibrate my reviews to actual effectiveness. 

  You might like this book if… 

…you suffer from chronic pain or musculoskeletal issues, or find the mind-body connection fascinating. 

This post supported by Patreon.</br></br><a href="https://www.lesserwrong.com/posts/mjneyoZjyk9oC5ocA/epistemic-spot-check-a-guide-to-better-movement-todd">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/mjneyoZjyk9oC5ocA/epistemic-spot-check-a-guide-to-better-movement-todd</link><guid isPermaLink="false">mjneyoZjyk9oC5ocA</guid><pubDate>Sat, 01 Jul 2017 05:20:00 GMT</pubDate></item><item><title><![CDATA[Meditation retreat highlights]]></title><description><![CDATA[<a href="https://www.lesserwrong.com/posts/s5igiSWnRskwLR9Nd/meditation-retreat-highlights">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/s5igiSWnRskwLR9Nd/meditation-retreat-highlights</link><guid isPermaLink="false">s5igiSWnRskwLR9Nd</guid><pubDate>Tue, 27 Jun 2017 22:58:14 GMT</pubDate></item><item><title><![CDATA[Coalition Dynamics as Morality]]></title><description><![CDATA[<a href="https://www.lesserwrong.com/posts/g3smiLgMfWuCYahTC/coalition-dynamics-as-morality">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/g3smiLgMfWuCYahTC/coalition-dynamics-as-morality</link><guid isPermaLink="false">g3smiLgMfWuCYahTC</guid><pubDate>Fri, 23 Jun 2017 18:00:00 GMT</pubDate></item><item><title><![CDATA[[Classifieds] What are you doing to make the world a better place and how can we help?]]></title><description><![CDATA[<a href="https://www.lesserwrong.com/posts/2qP6s2abriZq7Zrow/classifieds-what-are-you-doing-to-make-the-world-a-better">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/2qP6s2abriZq7Zrow/classifieds-what-are-you-doing-to-make-the-world-a-better</link><guid isPermaLink="false">2qP6s2abriZq7Zrow</guid><pubDate>Thu, 22 Jun 2017 00:44:42 GMT</pubDate></item><item><title><![CDATA[Pair Debug to Understand, not Fix]]></title><description><![CDATA[I'm an adjunct instructor at CFAR, but this is my opinion, not CFAR's.

At CFAR, one of the exercises is 'pair debugging'; one person is the protagonist exploring one of their problems, and the other person is the helper, helping them understand and solve the problem. (Like many things at CFAR, this is a deliberate and distilled version of something that already happens frequently in normal life.)

This used to have a different frame; we used to talk about the debugger and the debuggee, the person solving the problem and the person who had it. This predictably led to problems, because the frame mismatched what it actually meant to do the task well. This post is an attempt to point at the difference between the two, and why I think it's important to lean heavily towards the 'understand' side. There seem to be two broad clusters of reasons, which I'm going to label "model-based" and "social."

"Root cause analysis" is the term we'd use in industry, or "five whys." The point is that when you explore an issue in the right way with sufficient depth, you come up with a better solution; not to the immediate situation in front of you, but the entire class of situations that are caused by the same root cause. One of the pieces of advice that Duncan, CFAR's curriculum director, gives is that the worst success at a pair debug is you solving their problem for them. The best is that by the end of the debug, the two of you aren't the sort of person for whom that type of problem could happen anymore.

That is, someone else having a problem isn't just a task to be done and forgotten about; it's an opportunity to learn about how their mind works, and how your mind works. I think one of the things that's helped CFAR instructors 'level up' is both getting rich models of how other people think, but also exposing lots of their own models and how they think. ("Oh, in this situation I would do X, how do I explain X to someone else?")

It's also often the case that the original frame for a problem is not one where the right solution is readily apparent. (If that were the case, generally the solution would have been implemented already!) Exploring a problem and uncovering other frames and perspectives can help discover the ontology in which the solution to the problem is obvious and exciting.

It isn't about the nail--it's about the reasons underlying the nail still being there.

The second reason is that by trying to understand the other person, you actually establish a connection with them; they get to be heard and seen instead of manipulated like a math problem. Oftentimes, that's the actual function of discussion about problems--social grooming and shared vulnerability. (Compare to the claim that "what are you doing?" is often code for "can I do that with you?" instead of "please explain it to me.") This one seems harder to elaborate on than the model-based version, but is nevertheless as (if not more) important. Responding positively to others being vulnerable with you leads to more vulnerability, which can lead to them discovering the right frame or acquiring the resources they need in order to embark on a solution.</br></br><a href="https://www.lesserwrong.com/posts/K2Ajrko4mowY26Xac/pair-debug-to-understand-not-fix">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/K2Ajrko4mowY26Xac/pair-debug-to-understand-not-fix</link><guid isPermaLink="false">K2Ajrko4mowY26Xac</guid><pubDate>Wed, 21 Jun 2017 23:25:40 GMT</pubDate></item><item><title><![CDATA[Epistemic Spot Check: The Demon Under the Microscope (Thomas Hager)]]></title><description><![CDATA[Description 

How much would it suck to be the guy who invented sulfa drugs? You dedicate your life to preventing a repeat of the horrors you saw in the war, succeed in that and so much more, and then 10 years later some idiot leaves a petri dish open and completely replaces you as the father of man’s triumph against bacteria.  Actually he left the lid off before you found your thing, but ignored the result until you hit it big because everyone knew you couldn’t fight disease with chemicals, until you proved you could.  It’s the ultimate silver medal.  The Demon Under the Microscope is the tale of that guy. 

It’s by the same author (Thomas Hager) as The Alchemy of Air.  It’s also set in the same corporation, and about a field that was transforming from science to industry.  The writing style is similar.  I originally didn’t intend to fact check this book very hard because I already knew what to expect from the author (a little too invested in the subject but basically accurate), but the habit is too ingrained at this point and I couldn’t keep reading until I’d checked out the first few chapters.   

  Evaluation 

Claim: “Domagk [the researcher] had the ability to see. He watched everything, noted slight variations, quietly filed it all away.”  (p. 18). 

The wounds themselves he accepted as the results of war. But the infections that followed—surely science could do something to stop those. He focused on the bacteria, his personal demons, “these terrible enemies of man that murder him maliciously and treacherously without giving him a chance.” “I swore before God and myself,” he later wrote, “to counter this destructive madness.”  (p. 20). 

Who knows, but it’s pretty.  Someone in the same position as thousands of others (in this case a WW1 medic), caring more, and going on to fix it (via sulfa drugs) is my moral aesthetic.  Of course there could be another surgeon in the same place with just as much care and potential who got blown up or gassed.  The Alchemy of Air prioritized poetry over provability, so I don’t entirely trust this, but I like it. 

Claim: Cholera was a big problem for German soldiers. 

This would be a weird thing to make up, but I’m a little confused.  There had been a cholera vaccine for over 20 years by that point. 

Claim: Gas gangrene is bad. 

True. 

Claim: Sir Almroth Wright created a typhoid vaccine that was deployed during WW1, saving many lives.  During WW1 he established a laboratory researching wound infections. 

True.  He was also prescient enough to foresee the risk of drug-resistant bacteria.  Of course he also thought that bacteria were associated with but not the cause of disease, and that scurvy was caused by poorly preserved meat.  Being right is hard. 

Claim: Doctors at the time thought that a dry wound was more resistant to infection; however dryness inhibited white blood cells and thus ultimately increased infections. They also thought wounds needed to be completely covered to prevent reinfection, but this created the ideal environment for anaerobic bacteria like Clostridium perfringens (which causes gas gangrene). 

True. I was surprised to find ideal wound moistness still isn’t entirely settled, but the book’s description seems essentially in good faith.  Demon goes on to say that by the 1920s, doctors believed they were basically powerless and their job was to get the body’s own healing systems a pillow and some tea.  They took this so far that: 

“A physician doing drug research was a physician taken away from patient care. There was an unsavory aspect to a physician’s developing a drug for money. There were ethical questions about testing drugs on patients. Developing new drug therapies smacked of a return to the discredited age of bleedings and purgings.” 

  

To repeat: researching new treatments was considered distasteful at best and morally outrageous at worst.  And brain differentiation was once considered phrenology redux.  I just don’t think we’re very good at seeing where medicine is going (p40). 

Claim: Section on Leeuwenhoek.  

True but missing time data.  Given that everything discussed so far happened in the range of 1890-1920, I would have explicitly mentioned I was going 250 years into the past.  As it was, the only reason I noticed was that I recognized some of the names on the list of Leeuwenhoek’s contemporaries. The kindle edition may have made this worse.   But everything Hager actually says on Leeuwenhoek’s work in inventing the microscope seems accurate. 

Claim: [crickets] (no page) 

There are no false statements, but I was struck by the absence of discussion of the 1918 Spanish Flu epidemic.  Demon’s narrative is that seeing the horror of infected wounds in World War 1 drove Domagk to dedicate his life to preventing them.  Spanish Flu killed 5% of the entire world over the course of three years, and had a massive effect on troop movements and training in WW1.  From a military perspective it might have been more important.  We know now that the flu is really hard to vaccinate against, but at the time they didn’t even know it was a virus.  If you were a motivated medic looking for something to care about, Spanish Flu was a really obvious choice.  Demon mentions Spanish Flu in passing but not as an influence on Domagk, and that feels incomplete to me. Why gangrene in particular, when there were so many horrors happening at the time? 

Claim: Streptococcus is the cause of everything bad. 

True.  I knew it was possible to die from a scratch, but reading about everything strep causes really made me appreciate how few technological innovations are between us humans and mass die offs.  Strep causes childbed fever, St. Anthony’s Fire, meningitis, scarlet fever, pink eye, necrotizing fasciitis… Strep is the cockroach of human-infecting bacteria.  And for a while, all we had to do was take a pill and it was completely harmless. 

Of course now we have MRSA (Methicillin-resistant Staphylococcus aureus) (whose natural habitat is the hospital, just like strep).  And multiply resistant gonorrhea.  And tuberculosis resistant to most known antibiotics.  The bad old days are on our heels, is what I’m saying. 

One weird thing is I finished this book with the vague impression that sulfa drugs had saved a lot of lives but not actually knowing how many.  This article estimates that sulfa drugs led to a 2-3% drop in overall mortality, which translated to a 0.4-0.7 year increase in life expectancy.  That only covers up until 1943: presumably it had a bigger impact as distribution increased, or at least would have if penicillin had not taken over.

Overall Verdict

Pretty good, with some oversights.  Like Alchemy of Air the beginning is the best part, and if you find your attention flagging I’d just let it go.  I found the subject matter more innately interesting than Alchemy of Air but the writing a little less so.  Demon spends less time on the personal lives of the scientists, which was a selling point for my roommate but a disappointment for me. 

This post supported by Patreon.</br></br><a href="https://www.lesserwrong.com/posts/8fjfdvRpBhZAYwr8Z/epistemic-spot-check-the-demon-under-the-microscope-thomas">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/8fjfdvRpBhZAYwr8Z/epistemic-spot-check-the-demon-under-the-microscope-thomas</link><guid isPermaLink="false">8fjfdvRpBhZAYwr8Z</guid><pubDate>Wed, 21 Jun 2017 16:00:01 GMT</pubDate></item><item><title><![CDATA[Distinctions of the Moment]]></title><description><![CDATA[<a href="https://www.lesserwrong.com/posts/kKTRqes9MN6sAPgZd/distinctions-of-the-moment">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/kKTRqes9MN6sAPgZd/distinctions-of-the-moment</link><guid isPermaLink="false">kKTRqes9MN6sAPgZd</guid><pubDate>Tue, 20 Jun 2017 14:38:21 GMT</pubDate></item><item><title><![CDATA[Momentum, Reflectiveness, Peace | Otium]]></title><description><![CDATA[<a href="https://www.lesserwrong.com/posts/p63eKNB699z96oPBH/momentum-reflectiveness-peace-or-otium">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/p63eKNB699z96oPBH/momentum-reflectiveness-peace-or-otium</link><guid isPermaLink="false">p63eKNB699z96oPBH</guid><pubDate>Mon, 19 Jun 2017 18:16:02 GMT</pubDate><imageUrl>https://secure.gravatar.com/blavatar/e6f1f7f1406d0a73a37a43771ed072ba?s=200&amp;ts=1497895775</imageUrl><content>https://secure.gravatar.com/blavatar/e6f1f7f1406d0a73a37a43771ed072ba?s=200&amp;ts=1497895775</content></item><item><title><![CDATA[Closed Beta Users: What would make you interested in using LessWrong 2.0? 
]]></title><description><![CDATA[<a href="https://www.lesserwrong.com/posts/aGZMmuWDr8feX6B2s/closed-beta-users-what-would-make-you-interested-in-using">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/aGZMmuWDr8feX6B2s/closed-beta-users-what-would-make-you-interested-in-using</link><guid isPermaLink="false">aGZMmuWDr8feX6B2s</guid><pubDate>Mon, 19 Jun 2017 06:28:20 GMT</pubDate></item><item><title><![CDATA[Welcome to Lesswrong 2.0]]></title><description><![CDATA[Lesswrong 2.0 is a project by Oliver Habryka, Ben Pace, and Matthew Graves with the aim of revitalizing the Lesswrong discussion platform. Oliver and Ben are currently working on the project full-time and Matthew Graves is providing part-time support and oversight from MIRI. 

Our main goals are to move Lesswrong to a modern codebase, add an effective moderation system, and integrate the cultural shifts that the rationality community has made over the last eight years. We also think that many of the distinct qualities of the Lesswrong community (e.g. propensity for long-form arguments, reasoned debate, and a culture of building on one another's conceptual progress) suggest a set of features unique to the new Lesswrong that will greatly benefit the community. 

We plan to improve and maintain the site for many years to come, but whether the site will be successful ultimately depends on whether the community finds it useful. As such, it is important to get your feedback and guidance on how the site should develop and how we should prioritize our resources. Over the coming months we want to experiment with many different content-types and page designs, while actively integrating your feedback, in an attempt to find a structure for Lesswrong that is best suited to facilitating rational discourse. 

What follows is a rough summary of how we are currently thinking about the development of Lesswrong 2.0, and what we see as the major pillars of the Lesswrong 2.0 project. We would love to get your thoughts and critiques on these.

Table of Contents:

I. Modern Codebase
II. Effective Moderation
III. Discourse Norms
IV. New Features
V. Beta Feedback Period

I. Modern Codebase

The old lesswrong is one of the only successful forks of the reddit codebase (forked circa 2009). While reddit's code served as a stable platform while our community was in its initial stages, it has become hard to develop and extend because of its age, complexity and monolithic design.

Lesswrong 2.0, on the other hand, is based on modern web technologies designed to make rapid development much easier (to be precise: React, GraphQL, Slate.js, Vulcan.js and Meteor). The old codebase was a pain to work with, and almost every developer who tried to contribute gave up after trying their hand at it. The new Lesswrong codebase is built with tools that are well-documented and accessible, and is designed to have a modular architecture. You can find our Github repo here. 

We hope that these architectural decisions will allow us to rapidly improve the site and turn it into what a tool for creating intellectual progress should look like in 2017.

II. Effective Moderation 

Historically, LW has had only a few dedicated moderators at a time, applying crude tools, which has tended to lead to burnout and backlash. There are many obvious things we are planning to do to improve moderation, but here are some of the top ones:

Spam defense 

Any user above N karma can flag a post as spam, which renders it invisible to everyone but mods. Mods will check the queue of flagged posts, deleting the correctly flagged ones and revoking the flagging power from anyone who misuses it. If it seems necessary, we will also integrate all the cool new spam-detection mechanisms that modern technology has given us in the last 8 years. 
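As a minimal sketch of that flag-and-review flow (the threshold value and all names are hypothetical, not the actual Lesswrong 2.0 code):

```javascript
// Hypothetical sketch of the community spam-flagging flow described above.
// The threshold value and all field names are made up for illustration.
const FLAG_KARMA_THRESHOLD = 100; // stands in for the "N karma" from the post

function flagAsSpam(post, flagger) {
  if (flagger.karma < FLAG_KARMA_THRESHOLD || !flagger.canFlag) return false;
  post.flaggedBy = flagger.id;
  post.hidden = true; // invisible to everyone but mods until reviewed
  return true;
}

function resolveFlag(post, usersById, wasSpam) {
  if (wasSpam) {
    post.deleted = true; // correct flag: the spam post is removed
  } else {
    post.hidden = false; // incorrect flag: restore the post...
    usersById[post.flaggedBy].canFlag = false; // ...and revoke the flagger's power
  }
}
```

The important property is that flagging is cheap for trusted users but self-correcting: a mod's review either confirms the flag or removes the flagging power.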

Noob defense 

Historically, Lesswrong’s value has come in large part from being a place on the internet where the comments were worth reading. This was largely a result of the norms and abilities of the people commenting on the page, with a strong culture of minimizing defensiveness, searching for the truth and acting in the spirit of double crux. To sustain that culture and level of quality, we need to set up broad incentives that are driven by the community itself. 

The core strategy we are currently considering is something we’re calling the Sunshine Regiment: a fairly large set of trusted users who have access to limited moderation powers, such as automatically hiding comments for other users and temporarily suspending comment threads. The goal is to give the community the tools to de-escalate conflicts and help both users and moderators make better decisions, by giving both sides time to reflect and think, and by distributing the load of draining moderation decisions.

Troll defense 

Our two main plans against trolls are to change the karma system to something more like “Eigenkarma” and to improve the moderator tools. In an Eigenkarma system, the weight of a user’s votes depends on how many other trustworthy users have upvoted that user. For the moderator tools, one of the biggest projects is a much better data-querying interface that aims to help admins notice exploitative voting behavior and other problems in the voting patterns.
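To illustrate the Eigenkarma idea: vote weights can be computed as roughly the principal eigenvector of the who-upvoted-whom graph, e.g. by power iteration. This is only a toy illustration of the concept, not the planned implementation; the matrix encoding and the damping blend are our own assumptions.

```javascript
// Toy Eigenkarma: upvotes[i][j] is truthy if user i has upvoted user j.
// A user's vote weight grows when already-trusted (high-weight) users
// upvote them. Computed by power iteration; the 50/50 blend with the
// previous estimate damps oscillation on cyclic vote graphs.
function eigenkarma(upvotes, iterations = 100) {
  const n = upvotes.length;
  let weights = new Array(n).fill(1 / n);
  for (let step = 0; step < iterations; step++) {
    const raw = new Array(n).fill(0);
    for (let voter = 0; voter < n; voter++) {
      for (let target = 0; target < n; target++) {
        if (upvotes[voter][target]) raw[target] += weights[voter];
      }
    }
    const total = raw.reduce((a, b) => a + b, 0) || 1;
    weights = weights.map((w, i) => 0.5 * w + 0.5 * (raw[i] / total));
  }
  return weights; // normalized so the weights sum to 1
}
```

In such a scheme, a user upvoted only by low-weight accounts (e.g. a troll's sockpuppets, which no trusted user upvotes) ends up with near-zero vote weight.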

III. Discourse Norms

In terms of culture, we still broadly agree with the principles that Eliezer established in the early days of Overcoming Bias and Lesswrong. The twelve virtues of rationality continue to resonate with us, and “The Craft and the Community” sequence is still highly influential on our thinking. The team (and in particular Oliver) has also taken significant inspiration from the original vision of Arbital in our ideas for Lesswrong 2.0.

That being said, we also think that the culture of the rationality community has changed substantially in the last eight years, and that many of those changes were for the better. As Eliezer himself said in the opening of “Rationality: AI to Zombies”: 

“It was a mistake that I didn’t write my two years of blog posts with the intention of helping people do better in their everyday lives. I wrote it with the intention of helping people solve big, difficult, important problems, and I chose impressive-sounding, abstract problems as my examples. In retrospect, this was the second-largest mistake in my approach. It ties into the first-largest mistake in my writing, which was that I didn’t realize that the big problem in learning this valuable way of thinking was figuring out how to practice it, not knowing the theory. I didn’t realize that part was the priority; and regarding this I can only say ‘Oops’ and ‘Duh.’” 

We broadly agree with this, and think both that the community has made important progress in that direction, and that there are still many things to improve about the current community culture. We do not aim to make the new Lesswrong the same as it was at its previous height, but instead aim to integrate many of the changes in the culture of the rationalist community, while also re-emphasizing important old virtues that we feel have been lost in the intervening years. 

We continue to think that strongly discouraging the discussion of highly political topics is the correct way to go. A large part of the value of Lesswrong comes from being a place where many people can experience something closer to rational debate for the first time in their lives. Political topics are important, and not to be neglected, but they make a bad introduction and a bad base on which to build a culture of rationality. We are open to creating spaces on Lesswrong where people above a certain karma threshold can discuss political topics, but we would not want that part of the site to be visible to new users, and we would want the votes on that part of the site to count for less in the total karma of the participating users. We want seasoned and skilled rationalists to discuss political topics, but we do not want users to seek out Lesswrong primarily as a venue for political debates.
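One way such a "count for less" rule could work, as a toy illustration (the section names and the 0.2 discount factor are made up, not a decided design):

```javascript
// Toy version of discounted political-section karma: votes received in a
// gated politics section contribute at a reduced weight to total karma.
const POLITICS_VOTE_WEIGHT = 0.2; // hypothetical discount factor

function totalKarma(votesReceived) {
  // votesReceived: [{ value: +1 or -1, section: 'politics' or 'main' }, ...]
  return votesReceived.reduce(
    (sum, v) =>
      sum + v.value * (v.section === 'politics' ? POLITICS_VOTE_WEIGHT : 1),
    0
  );
}
```

Under this rule, winning a political argument moves a user's site-wide standing far less than contributing a well-received post elsewhere.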

As a general content guideline on the new Lesswrong: if, while writing an article, the author is primarily writing with the intent of rallying people to action instead of explaining things to them, then the content is probably ill-suited for Lesswrong. 

IV. New Features

You can find our short-term feature roadmap over here in this post. What follows is a high-level overview of our reasoning about some of the big underlying features that we expect to significantly shape the nature of Lesswrong 2.0. 

Content curation:

Many authors want their independence, which is one of the reasons why Scott Alexander prefers to write on SlateStarCodex instead of Lesswrong. We support that need for independence, and are hoping to serve it in two different ways: 

- We are making it very easy for trusted members of the rationality community to crosspost their content to Lesswrong. We have already set up an RSS-feed integration that allows admins to associate an external RSS feed with a user: whenever something new is added to that feed, their account automatically creates a post with the new content on Lesswrong, including, if the author wants, not only a link but the complete text of the post (which encourages discussion on Lesswrong instead of on the external blog). 
- We want to give trusted authors moderation powers over the discussions on their own posts, allowing them to foster their own discussion norms and giving them their own sphere of influence on the discussion platform. We hope this will both make the lives of our top authors better and create a form of competition between different cultures and moderation paradigms on Lesswrong. 
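The crossposting step from the first bullet can be sketched as follows (an illustration under assumed names, not the actual integration code): given the items of a freshly fetched RSS feed and the GUIDs already mirrored, build the local posts to create.

```javascript
// Illustrative crossposting step: turn unseen RSS items into local posts.
// Field names follow common RSS conventions (guid, title, link, content);
// the shape of the returned post objects is hypothetical.
function newCrossposts(feedItems, seenGuids, { fullText = true } = {}) {
  return feedItems
    .filter(item => !seenGuids.has(item.guid))
    .map(item => ({
      title: item.title,
      url: item.link,
      // Copy the complete text when the author opts in, else just link out.
      body: fullText ? item.content : `Crossposted from ${item.link}`,
      importedGuid: item.guid, // stored so the same item is never imported twice
    }));
}
```

Keeping the set of imported GUIDs is what makes the poll-and-import loop idempotent: re-fetching the same feed creates no duplicate posts.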

Arbital-Style features and content:

Arbital did many things right, even though it never really seemed to take off. We think that allowing users to add prediction polls is great, and that it is important to give authors the tools to create content that is designed to be maintained over a long period of time and by multiple authors. We also really like link previews on hover, as well as the ability to create highly interconnected networks of concepts with overview pages. 

Of the Arbital features, prediction polls are most certainly going to end up on the feature list, but it is as yet unclear whether we want to copy any other features directly, though we expect to be inspired by many small ones. 

Better editor software: 

The old editor on Lesswrong and the EA Forum often led to badly formatted posts. It didn’t deal well with content copied over from other webpages or Google Docs, which often resulted in hard-to-read posts that could only be fixed by editing the HTML directly. We are working on an editor experience that is flexible and powerful, while also making it hard to accidentally mess up the formatting of a post. 

Sequences-like content with curated comments: 

After we conducted a large number of interviews with old users of Lesswrong, it became clear that the vast majority of top contributors spent at least three months doing nothing but reading the sequences and other linearly structured content on the page, while also reading the discussion on those posts. We aim to improve that experience significantly, while also making it easier to start participating in the discussion. 

Books like Rationality: AI to Zombies are valuable in that they reach an audience that the old Lesswrong could not, and in that they curate the content into an established, book-like format. But we also think that something very important is lost when the discussion is cut out of the content. We aim to make Lesswrong a platform that provides sequences-like content in formats that are as easy to consume as possible, while also encouraging users to engage with the discussion on the posts and be exposed to critical comments, disagreements and important contradicting or supporting facts. We also hope that being exposed to the discussion will teach new users how to interact with the culture of Lesswrong, and let them learn the art of rationality more directly by observing people struggle in conversation with difficult intellectual problems. 

V. Beta Feedback Period 

It’s important for us to note that we don’t think online discussion is primarily a technical problem. Our intention in sharing our plans with you and launching a closed beta is to discover both the cultural and the technical problems that we need to solve to build a new and better discussion platform for our community. With your feedback we plan to rework the site, adjust our feature priorities and make new plans for improving the culture of the new Lesswrong 2.0 community.

Far more important than implementing any particular feature is building an effective culture with the correct social incentives. As such, our focus lies on building a community with norms and social incentives that facilitate good discourse, with a platform that does not get in the way of that. However, we do think that certain underlying attributes of a discussion platform significantly shift the nature of discussions on it, in ways that prevent or encourage good community norms; for example, Twitter’s 140-character limit makes it almost impossible to have reasoned discourse. At this stage, we are still trying to figure out which content types and fundamental design philosophies are best at giving rise to and facilitating effective discussion. 

That’s all we have for now. Please post your ideas for features or design changes as top-level comments, and discuss your concerns and details of the suggestions in second-level comments. We will be giving significant weight to the discussion and votes in our decisions on what to work on for the coming weeks.</br></br><a href="https://www.lesserwrong.com/posts/HJDbyFFKf72F52edp/welcome-to-lesswrong-2-0">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/HJDbyFFKf72F52edp/welcome-to-lesswrong-2-0</link><guid isPermaLink="false">HJDbyFFKf72F52edp</guid><pubDate>Sun, 18 Jun 2017 17:23:02 GMT</pubDate></item><item><title><![CDATA[LessWrong 2.0 Feature Roadmap & Feature Suggestions]]></title><description><![CDATA[This post will serve as a place to discuss what features the new LessWrong 2.0 should have, and I will try to keep this post updated with our feature roadmap plans. 

Here is roughly the set of features we are planning to develop over the next few weeks: 

UPDATED: August 27th, 2017

Basic quality of life improvements: 

1. Improve rendering speed on posts with many comments (A lot of improvements made, a lot more to come)
2. Improve usability on mobile (After the major rework this is somewhat broken again, will fix it soon)
3. Add Katex support for comments and posts
4. Allow merging with old LessWrong 1.0 accounts
5. Fix old LessWrong 1.0 links DONE!
6. Create unique links for each comment: DONE!
7. Make comments collapsible
8. Highlight new comments since last visit: DONE!
9. Improve automatic spam-detection
10. Add RSS feed links with adjustable karma thresholds
11. Create better documentation for the page, with tooltips and onboarding processes
12. Better search, including comment search and user search: DONE!

Improved Moderation Tools: 

1. New Karma system that weighs your votes based on your Karma
2. Give moderators ability to suspend comment threads for a limited amount of time
3. Give trusted post-authors moderation ability on their own posts (deleting comments, temporarily suspending users from posts, etc.)
4. Add reporting feature to comments
5. Give moderators and admins access to a database query interface to identify negative vote patterns

New Content Types: 

1. Add sequences as a top-level content-type with UI for navigating sequences in order, metadata on a sequence, and keeping track of which parts you've read DONE!
2. Add Arbital-style predictions as a content block in posts (maybe also as a top-level content type)
3. Add 'Wait-But-Why?' style footnotes to the editor
4. Discussion page that structures discussions more than just a tree format (here is a mockup I designed while working for Arbital, which I am still excited to implement)
5. ...and we have many more crazy ideas we would like to experiment with
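As a toy illustration of what an Arbital-style prediction poll (item 2 above) might store and display (the object shape and names are hypothetical):

```javascript
// Toy prediction poll: each user submits a probability for a claim, and the
// poll is summarized for display. The data shape is made up for illustration.
function summarizePoll(predictions) {
  // predictions: [{ userId, probability }] with probability in [0, 1]
  const ps = predictions.map(p => p.probability).sort((a, b) => a - b);
  const mid = Math.floor(ps.length / 2);
  return {
    count: ps.length,
    median: ps.length % 2 ? ps[mid] : (ps[mid - 1] + ps[mid]) / 2,
    mean: ps.reduce((a, b) => a + b, 0) / ps.length,
  };
}
```

Displaying the median alongside the mean makes the summary robust to a single extreme prediction.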

I will also create a comment for each of these under the post, so you can help us prioritize all of these. Also feel free to leave your own feature suggestions and site improvements in the comments.</br></br><a href="https://www.lesserwrong.com/posts/6XZLexLJgc5ShT4in/lesswrong-2-0-feature-roadmap-and-feature-suggestions">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/6XZLexLJgc5ShT4in/lesswrong-2-0-feature-roadmap-and-feature-suggestions</link><guid isPermaLink="false">6XZLexLJgc5ShT4in</guid><pubDate>Sat, 17 Jun 2017 22:18:41 GMT</pubDate></item><item><title><![CDATA["AIXIjs: A Software Demo for General Reinforcement Learning", Aslanides 2017]]></title><description><![CDATA[<a href="https://www.lesserwrong.com/posts/bDiQL3sptuD2xvTvc/aixijs-a-software-demo-for-general-reinforcement-learning">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/bDiQL3sptuD2xvTvc/aixijs-a-software-demo-for-general-reinforcement-learning</link><guid isPermaLink="false">bDiQL3sptuD2xvTvc</guid><pubDate>Mon, 29 May 2017 21:09:53 GMT</pubDate></item><item><title><![CDATA[Invitation to comment on a draft on multiverse-wide cooperation via alternatives to causal decision theory (FDT/UDT/EDT/...)]]></title><description><![CDATA[I have written a paper about “multiverse-wide cooperation via correlated decision-making” and would like to find a few more people who’d be interested in giving a last round of comments before publication. The basic idea of the paper is described in a talk you can find here. The paper elaborates on many of the ideas and contains a lot of additional material. While the talk assumes a lot of prior knowledge, the paper is meant to be a bit more accessible. So, don’t be disheartened if you find the talk hard to follow — one goal of getting feedback is to find out which parts of the paper could be made more easy to understand. 

If you’re interested, please comment or send me a PM. If you do, I will send you a link to a Google Doc with the paper once I'm done with editing, i.e. in about one week. (I’m afraid you’ll need a Google Account to read and comment.) I plan to start typesetting the paper in LaTeX in about a month, so you’ll have three weeks to comment. Since the paper is long, it’s totally fine if you don’t read the whole thing or just browse around a bit.</br></br><a href="https://www.lesserwrong.com/posts/3uEGXyYrzgM5W5Awn/invitation-to-comment-on-a-draft-on-multiverse-wide">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/3uEGXyYrzgM5W5Awn/invitation-to-comment-on-a-draft-on-multiverse-wide</link><guid isPermaLink="false">3uEGXyYrzgM5W5Awn</guid><pubDate>Mon, 29 May 2017 08:34:59 GMT</pubDate></item><item><title><![CDATA[Meetup : Sydney Rationality - Pub meetup June]]></title><description><![CDATA[Discussion article for the meetup : Sydney Rationality - Pub meetup June  

 WHEN: 22 June 2017 06:00:00PM (+1000)
  

 WHERE: 575 george st, sydney    

We sit at the big table outside the pizza oven on level 2. For this month, bring along a suggestion of a book you want more people to read. Come along to our regular monthly pub meetup to talk all things math, science, technology, engineering, thinking, growth, reasoning and beliefs. If you are an aspiring rationalist, a nerd, geek, scientist or just a quiet thinker - we can't wait to meet you to share ideas, discuss, debate, learn and grow together. If you are interested in our rationality dojos, ask us about them in person. See you there! Also, while you are at it - bring a friend along too! We usually get ~15 attendees through various advertising avenues. 

https://www.meetup.com/rationalists_of_sydney/events/jcxffnywjbdc/ https://www.facebook.com/events/124294738146492  Discussion article for the meetup : Sydney Rationality - Pub meetup June</br></br><a href="https://www.lesserwrong.com/posts/w9ifbC2kHCkT6LiEw/meetup-sydney-rationality-pub-meetup-june">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/w9ifbC2kHCkT6LiEw/meetup-sydney-rationality-pub-meetup-june</link><guid isPermaLink="false">w9ifbC2kHCkT6LiEw</guid><pubDate>Mon, 29 May 2017 06:52:14 GMT</pubDate></item><item><title><![CDATA[Open thread, May 29 - June 4, 2017]]></title><description><![CDATA[If it's worth saying, but not worth its own post, then it goes here.    

Notes for future OT posters: 

1. Please add the 'open_thread' tag. 

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
 

3. Open Threads should start on Monday, and end on Sunday. 

4. Unflag the two options "Notify me of new top level comments on this article" and "Make this post available under..." before submitting</br></br><a href="https://www.lesserwrong.com/posts/2AHr2o3A2ToBDwL73/open-thread-may-29-june-4-2017">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/2AHr2o3A2ToBDwL73/open-thread-may-29-june-4-2017</link><guid isPermaLink="false">2AHr2o3A2ToBDwL73</guid><pubDate>Mon, 29 May 2017 06:13:51 GMT</pubDate></item><item><title><![CDATA[Interview on IQ, genes, and genetic engineering with expert (Hsu)]]></title><description><![CDATA[<a href="https://www.lesserwrong.com/posts/aSZXgC84An9duTu3t/interview-on-iq-genes-and-genetic-engineering-with-expert">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/aSZXgC84An9duTu3t/interview-on-iq-genes-and-genetic-engineering-with-expert</link><guid isPermaLink="false">aSZXgC84An9duTu3t</guid><pubDate>Sun, 28 May 2017 22:19:23 GMT</pubDate></item><item><title><![CDATA[Bi-Weekly Rational Feed]]></title><description><![CDATA[Five Recommended Articles You Might Have Missed: 

The Four Blind Men The Elephant And Alan Kay by Meredith Paterson (Status 451) - Managing technical teams. Taking a new perspective is worth 90 IQ points. Getting better enemies. Guerrilla action. 

Vast Empirical Literature by Marginal REVOLUTION - Tyler's 10 thoughts on approaching fields with large literatures. He is critical of Noah's "two paper rule" and recommends a lot of reading. 

Notes From The Hufflepuff Unconference (Part 1) by Raemon (lesswrong) - Goal: Improve at: "social skills, empathy, and working together, sticking with things that need sticking with". The article is a detailed breakdown of the unconference including: Ray's Introductory Speech, a long list of what people want to improve on, the lightning talks, the 4 breakout sessions, proposed solutions, further plans, and closing words. Links to conference notes are included for many sections. 

Antipsychotics Might Cause Cognitive Impairment by Sarah Constantin (Otium) - A harrowing personal account of losing abstract thinking ability on Risperdal. The author conducts a literature review, and concludes with some personal advice about taking medication. 

Dwelling In Possibility by Sarah Constantin (Otium) - Leadership. Confidence in the face of the uncertainty and imperfection. Losing yourself when you try to step back and facilitate. 

Scott: 

Those Modern Pathologies by Scott Alexander - You can argue X is a modern pathology for almost any value of X. Scott demonstrates this by repeated example. Among other things "Aristotelian theory of virtue" and "Homer's Odyssey" get pathologized. 

The Atomic Bomb Considered As Hungarian High School Science Fair Project by Scott Alexander - Ashkenazi Jewish Intelligence. An explanation of Hungarian dominance in physics and science in the mid 1900s. 

Classified Ads Thread by Scott Alexander - Open thread where people post ads. People are promoting their websites and some of them are posting actual job ads among other things. 

Open Thread 76 by Scott Alexander - Bi-weekly Open thread. 

Postmarketing Surveillance Is Good And Normal by Scott Alexander - Scott shows why a recent Scientific American study does not imply the FDA is too risky. 

Epilogue by Scott Alexander (Unsong) - All's Whale that Ends Whale. 

Polyamory Is Not Polygyny by Scott Alexander - A quick review of how polyamory actually function in the rationalist community. 

Bail Out by Scott Alexander - "About a fifth of the incarcerated population – the top of the orange slice, in this graph – are listed as “not convicted”. These are mostly people who haven’t gotten bail. Some are too much of a risk. But about 40% just can’t afford to pay." 

Rationalist: 

Strong Men Are Socialist Reports A Study That Previously Reported The Opposite by Jacob Falkovich (Put A Number On It!) - Defense Against the Dark Statistical Arts. Jacob provides detailed commentary on a popular study and shows that the studies dataset can be used to support the opposite conclusion, with p = 0.0086. 

Highly Advanced Tulpamancy 101 For Beginners by H i v e w i r e d - Application of lesswrong theory to the concept of the self. In particular the author applies "How an Algorithm Feels from the Inside" and "Map and Territory". Hive then goes into the details of creating and interacting with tulpas. "A tulpa is an autonomous entity existing within the brain of a “host”. They are distinct from the host in that they possess their own personality, opinions, and actions" 

Existential Risk From Ai Without An Intelligence by Alex Mennen (lesswrong) - Reasons why an intelligence explosion might not occur and reasons why we might have a problem anyway. 

Dragon Army Theory Charter (30min Read) by Duncan Sabien (lesswrong) - A detailed plan for an ambitious military style rationalist house. The major goals include self-improvement, high quality group projects and the creation of a group with absolute trust in one another. The leader of the house is the curriculum director and head of product at CFAR. 

The Story Of Our Life by H i v e w i r e d - The authors explain their pre-rationalist life and connection to the community. They then argue the rationalist community should take better care of one another. "Venture Rationalism". 

Don't Believe in God by Tyler Cowen - Seven arguments for not believing in God. Among them: Lack of Bayesianism among believers, the degree to which people follow their family religion and the fundamental weirdness of reality. 

Antipsychotics Might Cause Cognitive Impairment by Sarah Constantin (Otium) - A harrowing personal account of losing abstract thinking ability on Risperdal. The author conducts a literature review, and concludes with some personal advice about taking medication. 

The Four Blind Men The Elephant And Alan Kay by Meredith Paterson (Status 451) - Managing technical teams. Taking a new perspective is worth 90 IQ points. Getting better enemies. Guerrilla action. 

Qualia Computing At Consciousness Hacking June 7th 2017 by Qualia Computing - Qualia Computing will present in San Francisco on June 7th at Consciousness Hacking. The event description is detailed and should give readers a good intro to Qualia Computing's goals. The author's research goal is to create a mathematical theory of pain/pleasure and be able to measure these directly from brain data. 

Notes From The Hufflepuff Unconference (Part 1) by Raemon (lesswrong) - Goal: Improve at: "social skills, empathy, and working together, sticking with things that need sticking with". The article is a detailed breakdown of the unconference including: Ray's Introductory Speech, a long list of what people want to improve on, the lightning talks, the 4 breakout sessions, proposed solutions, further plans, and closing words. Links to conference notes are included for many sections. 

Is Silicon Valley Real by Ben Hoffman (Compass Rose) - The old culture of Silicon Valley is mostly gone, replaced by something overpriced and materialist. Ben checks the details of Scott Alexander's list of six noble startups and finds only two in SV proper. 

Why Is Harry Potter So Popular by Ozy (Thing of Things) - Ozy discusses a paper on song popularity in an artificial music market. Social dynamics had a big impact on song ratings. "Normal popularity is easily explicable by quality. Stupid, wild, amazing popularity is due to luck." 

Design A Better Chess by Robin Hanson - Can we design a game that promotes even more useful honesty than chess? A link to Hanson's review of Gary Kasparov's book is included. 

Deserving Truth 2 by Andrew Critch - How the author's values changed over time. Originally he tried to maximize his own positive sensory experiences. The things he cared about began to include more things, starting with his GF's experiences and values. He eventually rejects "homo-economus" thinking. 

A Theory Of Hypocrisy by João Eira (Lettuce be Cereal) - Hypocrisy evolved as a way to solve free rider problems. "It pays to be a free rider. If no one finds out" 

Building Community Institution In Five Hours a Week by Particular Virtue - Eight pieces of advice for running a successful meetup. The author and zir partner have been running lesswrong events for five years. 

Dwelling In Possibility by Sarah Constantin (Otium) - Leadership. Confidence in the face of the uncertainty and imperfection. Losing yourself when you try to step back and facilitate. 

Ai Safety Three Human Problems And One Ai Issue by Stuart Armstrong (lesswrong) - Humans make poor predictions, don't know their values and aren't agents. AI might be very powerful. A graph of which problems various AI risk solutions target. 

Recovering From Failure by mindlevelup - Avoid negative spirals, figure out why you failed, List of questions to ask yourself. Strategies -> Generate good alternatives, metacognitive affordances. 

Review The Dueling Neurosurgeons by Sam Kean by Aceso Under Glass - Positive review. Author learned a lot. Speculation on a better way to teach science. 

Principia Qualia Part 2: Valence by Qualia Computing - A mathematical theory of valence (what makes experience feel good or bad). Speculative but the authors make concrete predictions. Music plays a heavy role. 

Im Not Seaing It by Robin Hanson - Arguments against seasteading. 

EA: 

One of the more positive surprises by GiveDirectly - Links post. Eight articles on Give Directly, Cash Transfer and Basic Income. 

Returns Functions And Funding Gaps by the Center for Effective Altruism (EA forum) - Links to CEA's explanation of what "returns functions" are and how using them compares to "funding gap" model. They give some arguments why returns functions are a superior model. 

Online Google Hangout On Approaches To by whpearson (lesswrong) - Community meeting to discuss Ai risk. Will use "Optimal Brainstorming Theory". Currently early stage. Sign up and vote on what times you are available. 

Expected Value Estimates We Cautiously Took by The Oxford Prioritization Project (EA forum) - Details of how the four bayesian probability models were compared to produce a final decision. Some discussion of how assumptions affect the final result. Actual code is included. 

Four Quantitative Models Aggregation And Final by The Oxford Prioritization Project (EA forum) - 80K hours, MIRI, Good Foods Institute and StrongMinds were considered. Decisions were made using concrete Bayesian EV calculations. Links to the four models are included. 

Peer to Peer Aid: Cash in the News by GiveDirectly - 8 Links about GiveDirectly, cash transfer and basic income. 

The Value Of Money Going To Different Groups by The Center for Effective Altruism - "It is well known that an extra dollar is worth less when you have more money. This paper describes the way economists typically model that effect, using that to compare the effectiveness of different interventions. It takes remittances as a particular case study." 

Politics and Economics: 

Study Of The Week Better And Worse Ways To Attack Entrance Exams by Freddie deBoer - Freddie's description of four forms of "test validity". The SAT and ACT are predictive of college grades, one should criticize them from other angles. Freddie briefly gives his socialist critique. 

How To Destroy Civilization by Zvi Moshowitz - A parable about the game "Advanced Civilization". The difficulties of building a coalition to lock out bad actors. Donald Trump. [Extremely Partisan] 

Trust Assimilation by Bryan Caplan - Data on how much immigrants and their children trust other people. How predictive is the trust level of their ancestral country. Caplan reviews papers and crunches the numbers himself. 

There Are Bots, Look Around by Renee DiResta (ribbonfarm) - High frequency trading disrupted finance. Now algorithms and bots are disrupting the marketplace of ideas. What can finance's past teach us about politics' future? 

The Behavioral Economics of Paperwork by Bryan Caplan - Vast numbers of students miss financial aid because they don't fill out paperwork. Caplan explores the economic implications of the fact that "Humans hate filling out paperwork. As a result, objectively small paperwork costs plausibly have huge behavioral response". 

The Nimby Challenge by Noah Smith - Smith makes an economic counterargument to the claim that building more housing wouldn't lower prices. Noah includes 6 lessons for engaging with NIMBYs. 

Study Of The Week What Actually Helps Poor Students: Human Beings by Freddie deBoer - Personal feedback, tutoring and small group instruction had the largest positive effect. Includes Freddie's explanation of meta-analysis. 

Vast Empirical Literature by Marginal REVOLUTION - Tyler's 10 thoughts on approaching fields with large literatures. He is critical of Noah's "two paper rule" and recommends a lot of reading. 

Impact Housing Price Restrictions by Marginal REVOLUTION - Link to a job market paper on the economic effects of housing regulation. 

Me On Anarcho Capitalism by Bryan Caplan - Bryan is interviewed on the Rubin Report about Ancap. 

Campbells Law And The Inevitability Of School Fraud by Freddie deBoer - Rampant grade inflation. Lowered standards. Campbell's law says that once you base policy on a metric, that metric will start being gamed. 

Nimbys Economic Theories: Sorry Not Sorry by Phil (Gelman's Blog) - Gelman got a huge amount of criticism on his post on whether building more housing will lower prices in the Bay. He responds to some of the criticism here. Long for Gelman. 

Links 8 by Artir (Nintil) - Link Post. Physics, Technology, Philosophy, Economics, Psychology and Misc. 

Arguing About How The World Should Burn by Sonya Mann ribbonfarm - Two different ways to decide who to exclude. One focuses on process the other on content. Scott Alexander and Nate Soares are quoted. Heavily [Culture War]. 

Seeing Like A State by Bayesian Investor - A quick review of "Seeing like a state". 

Whats Up With Minimum Wage by Sarah Constantin (Otium) - A quick review of the literature on the minimum wage. Some possible explanations for why raising it does not reduce unemployment. 

Misc: 

Entirely Too Many Pieces Of Unsolicited Advice To Young Writer Types by Freddie deBoer - Advice about not working for free, getting paid, interacting with editors, why 'Strunk and White' is awful, and taking writing seriously. 

Conversations On Consciousness by Hivewired - The author is a plural system. Their hope is to introduce plurality by doing the following: "First, we’re each going to describe our own personal experiences, from our own perspectives, and then we’re going to discuss where we might find ourselves within the larger narrative regarding consciousness." 

Notes On Debugging Clojure Code by Eli Bendersky - Dealing with Clojure's cryptic exceptions, finding which form an exception comes from, tracing and logging, and deeper tracing inside cond forms. 

How to Think Scientifically About Scientists’ Proposals for Fixing Science by Andrew Gelman - Gelman asks how to scientifically evaluate proposals to fix science. He considers educational, statistical, research practice and institutional reforms. Excerpts from an article Gelman wrote, the full paper is linked. 

Call for Volunteers who Want to Exercise by Aceso Under Glass - Author is looking for volunteers who want to treat their anxiety or mood disorder with exercise. 

Learning Deep Learning the Easy Way with Keras (lesswrong) - Articles showing the power of neural networks. Discussion of ML frameworks. Resources for learning. 

Unsong of Unsongs by Scott Aaronson - Aaronson went to the Unsong wrap party. A quick review of Unsong. Aaronson talks about how Scott Alexander defended him with untitled. 

2016 Spending by Mr. Money Mustache - Full details of last year's budget. Spending broken down by category. 

Amusement: 

And Another Physics Problem by protokol2020 - Two planets. Which has a higher average surface temperature? 

A mysterious jogger by Jacob Falkovich (Put A Number On It!) - A mysterious jogger. Very short fiction. 

Podcast: 

Persuasion And Control by Waking Up with Sam Harris - "surveillance capitalism, the Trump campaign's use of Facebook, AI-enabled marketing, the health of the press, Wikileaks, ransomware attacks, and other topics." 

Raj Chetty: Inequality, Mobility and the American Dream by Conversations with Tyler - "As far as I can tell, this is the only coverage of Chetty that covers his entire life and career, including his upbringing, his early life, and the evolution of his career, not to mention his taste in music" 

Is Trump's incompetence saving us from his illiberalism? by The Ezra Klein Show - Political Scientist Yascha Mounk. "What Mounk found is that the consensus we thought existed on behalf of democracy and democratic norms is weakening." 

The Moral Complexity Of Genetics by Waking Up with Sam Harris - "Sam talks with Siddhartha Mukherjee about the human desire to understand and manipulate heredity, the genius of Gregor Mendel, the ethics of altering our genes, the future of genetic medicine, patent issues in genetic research, controversies about race and intelligence, and other topics." 

Ester Perel by The Tim Ferriss Show - The Relationship Episode: Sex, Love, Polyamory, Marriage, and More 

Lant Pritchett by Econtalk - Growth and Experiments 

Meta Learning by Tim Ferriss - Education, accelerated learning, and my mentors. Conversation with Charles Best the founder and CEO of DonorsChoose.org 

Bryan Stevenson On Why The Opposite Of Poverty Isn't Wealth by The Ezra Klein Show - Founder and executive director of the Equal Justice Initiative. Justice for the wrongly convicted on Death Row.</br></br><a href="https://www.lesserwrong.com/posts/hayvhwYarDgFcZq9H/bi-weekly-rational-feed">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/hayvhwYarDgFcZq9H/bi-weekly-rational-feed</link><guid isPermaLink="false">hayvhwYarDgFcZq9H</guid><pubDate>Sun, 28 May 2017 17:12:57 GMT</pubDate></item><item><title><![CDATA[Meetup : Melbourne Social Meetup: June]]></title><description><![CDATA[Discussion article for the meetup : Melbourne Social Meetup: June  

 WHEN: 02 June 2017 06:30:00PM (+1000)
  

 WHERE: The Bull and Bear Tavern, Flinders Lane, Melbourne    

The MelbLW social meetup has moved to an exciting new schedule! We are now on the first Friday of each month. Come join us for the first session of the new schedule! 

FB event: https://www.facebook.com/events/299902167131043/ 

Social meetups are informal get-togethers where we chat about topics of interest and have a couple of drinks together. 

WHEN? Friday June 2, 18:30 until late. Don't worry about being on time, though - it's fine to rock up whenever. 

WHERE? The Bull & Bear Tavern, on Flinders Lane (just a short walk from Flinders St Station) 

FOOD? The B&B does reasonable traditional pub food and we usually share a few plates of wedges. For those who stay late, we sometimes go for a late night meal around 11pm. 

CONTACT? Any issues on the night, call or text Chris on 0439471632  Discussion article for the meetup : Melbourne Social Meetup: June</br></br><a href="https://www.lesserwrong.com/posts/JJvbAazEgckk4SSWi/meetup-melbourne-social-meetup-june">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/JJvbAazEgckk4SSWi/meetup-melbourne-social-meetup-june</link><guid isPermaLink="false">JJvbAazEgckk4SSWi</guid><pubDate>Sun, 28 May 2017 10:10:44 GMT</pubDate></item><item><title><![CDATA[10 'incredible' weaknesses of the mental health system]]></title><description><![CDATA[I aim to identify some of the mental health workforce's credibility issues in this article. This may inform your prevention and treatment strategy as a mental health consumer, or your practice if you work in mental health. 


 

Mental health is the strongest determinant of quality of life at a later age. And, the pursuit of happiness predicts both positive emotions and fewer depressive symptoms. People who prioritize happiness are more psychologically capable. In times of crisis, some turn to the mental health system for support. But, how credible is the support available? Here are 10 categories of shortcomings that the mental health sector faces today: 

  

1. Institutional credibility 

  

Headspace's evaluations indicate it’s ineffective, and Headspace is evaluated better than many services out there. This isn’t academic: attendees who report that their mental health has not improved since using the service will trust the mental health system less, and with good reason.  

  

2. Network credibility 

  

There is an evidence base for selecting a type of therapy (psychodynamic, cognitive-behavioural, etc.) for a particular constellation of mental symptoms. If you work in mental health, have you ever made a referral on the basis of both symptomatology and theoretical orientation? 

  

3. ‘Walk the talk’ credibility 

  

Social workers, nurses, medical doctors, and psychiatrists abuse substances and incur mental ill-health at among the highest rates of any occupation. For instance, the psychiatrist burnout rate is 40%. Mental health consumers may perceive clinicians as hypocritical or unwilling (...or too willing) to swallow their own medicine. 

  

4. Academic credibility 

  

Psychology is mired in error-riddled research and myth-ridden textbooks. Broadly, most published research is wrong. And, questionable research practices are common, which biases the relevant evidence. 

  

The difference in results between a well-designed and a poorly designed psychotherapy experiment is large. To quote the pseudonymous physician Scott Alexander: 

  

‘Low-quality psychotherapy trials in general had a higher effect size (SMD = 0.74) than high-quality trials (SMD = 0.22), p < 0.001 ... Effect sizes for the low-quality trials are triple those for the high-quality trials.’ 

  

5. Credibility of treatments 

  

Are treatments becoming less effective over time? Cognitive behavioural therapy is a common treatment for various mental illnesses. It is the most researched psychotherapy. However, the more evidence piles up, the less effective that psychotherapy appears to be...the same goes for antidepressants. 

  

Why are outdated treatments still used? Over the late 19th and early 20th centuries, Austrian neurologist Sigmund Freud famously founded ‘psychoanalysis’, a school of psychotherapy that, together with the other 'psychodynamic' psychotherapies, focused on the influence of early experience on human behaviour and emotion. Freud's ideas challenged fundamental assumptions about human psychology. In particular, he suggested that our conscious mind is just the tip of the iceberg of our identities. 

  

Today Freud is the subject of jokes and derision. Many of his testable ideas have been proven false.  'When tested, psychoanalysis was shown to be less effective than placebo.’  Yet, many psychologists and psychiatrists continue to practice psychoanalysis. 

  

Psychology is a rather unsettled science. One estimate for the time after which half of the ‘knowledge’ in the field of psychology is overturned or superseded (its ‘half-life’) is just 7.5 years. Interestingly, this time-span appears to be falling, which would suggest the field is becoming increasingly less reliable. The subfield of psychoanalysis bucks the trend: it has over double the parent field’s half-life. Why? 
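The half-life framing above can be made concrete. As a rough sketch (the 7.5-year figure is the only number taken from the text; treating knowledge turnover as constant-rate exponential decay, and the function name `fraction_surviving`, are my assumptions for illustration), the fraction of current findings still standing after t years would be 0.5^(t / 7.5):

```python
# Sketch of the "half-life of knowledge" idea as exponential decay.
# Assumes findings are overturned at a constant rate, which is a
# modelling assumption, not a claim from the post.

def fraction_surviving(years: float, half_life: float = 7.5) -> float:
    """Fraction of findings still standing after `years`."""
    return 0.5 ** (years / half_life)

# After one half-life (7.5 years), half the findings remain;
# after two half-lives (15 years), a quarter.
print(fraction_surviving(7.5))   # 0.5
print(fraction_surviving(15.0))  # 0.25
```

On this toy model, a falling half-life steepens the decay curve, which is what "increasingly less reliable" would mean quantitatively.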

  

How do other subfields of psychology fare? Psychopharmacology is at the intersection of psychiatric drugs and brain chemistry. Knowledge in psychopharmacology is overturned at a rate higher than the rest of the field in general. Typically the ‘half-life of knowledge’ argument aims to discount psychology relative to ‘harder’ sciences like physics.  

  

Psychological therapies are confusing and unnecessarily fragmented. According to The Handbook of Counseling Psychology: 

  

‘Meta-analyses of psychotherapy studies have consistently demonstrated that there are no substantial differences in outcomes among treatments.’ 

  

Meta-analyses are a research technique that quantitatively combines many individual pieces of relevant research on a particular topic. There is 'little evidence to suggest that any one psychological therapy consistently outperforms any other for any specific psychological disorders'. 
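A minimal sketch of what "quantitatively combines" means in practice, assuming the common fixed-effect (inverse-variance) model; the study effect sizes and standard errors below are invented for illustration, and `pooled_effect` is a hypothetical helper, not code from any cited study:

```python
# Fixed-effect inverse-variance meta-analysis, minimal sketch.
# Each study's effect size is weighted by 1/SE^2, so larger,
# more precise studies count for more in the pooled estimate.

def pooled_effect(effects, std_errors):
    """Inverse-variance weighted mean of study effect sizes."""
    weights = [1 / se ** 2 for se in std_errors]
    total = sum(weights)
    return sum(w * e for w, e in zip(weights, effects)) / total

# Three hypothetical psychotherapy trials (SMD, standard error):
effects = [0.70, 0.30, 0.20]
ses = [0.30, 0.15, 0.10]
print(round(pooled_effect(effects, ses), 3))  # 0.264
```

Note how the small, noisy trial with the large effect (0.70, SE 0.30) barely moves the pooled estimate; this is the mechanism by which meta-analyses discount low-precision studies.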

  

This is sometimes called the 'Dodo bird verdict', after a scene in Alice in Wonderland where every competitor in a race is declared a winner and given prizes. So, what is one to make of the best-vetted clinical guidelines, which indicate that particular therapies are more appropriate for particular mental conditions? 

  

Guidelines are considered a higher order of evidence than a ‘handbook’ by some, and vice-versa by others. Could an expert, or indeed an amateur, armed with either body of evidence, credibly lead someone to conclude that all therapies are ‘equal’ or ‘different’? Could a similar case be made for, say, antibiotics? Actually, yes, or so the evidence suggests in the case of antibiotics. 

  

Finally, psychological therapies are administered haphazardly. Eclectically combining elements from different psychological therapies is inefficient. But, it happens. Clinicians should ‘integrate’ components of different psychotherapies using established formulae if they want to ‘mix and match’. When I hear that someone’s theoretical orientation is ‘psychodynamically informed’ or similar, for me that’s a red flag for eclecticism.  

  

6. Economic credibility 

  

Therapists have a financial incentive to re-traumatise patients. 

  

7. Social credibility 

  

'The benefits of psychotherapy may be no better than the benefits of talking to a friend'. 

  

8. Credibility of counsel 

  

Mental health professionals offer their clients and the community general counsel and advice. But, if I were to ask a given mental health professional about the value of kindness or love of learning, they would almost certainly indicate it’s worthwhile. Pop psychology is pervasive. And why not, people have been interested in psychology long before it was a science. But, misconceptions about psychology infiltrate mental health care practice. 

  

Researchers who have reported on the character traits of people with high and low life satisfaction found something like this: 

Character strengths that DO predict life satisfaction: zest, curiosity, hope, humour. 

Character strengths that DO NOT predict life satisfaction: appreciation of beauty and excellence, creativity, kindness, love of learning, perspective. 

  

Meanwhile, research that separates its findings by gender looks different. Character strengths that predict life satisfaction: 

Men: humour, fairness, perspective, creativity. 

Women: zest, gratitude, hope, appreciation of beauty and love. 

  

Would you receive nuanced, evidence-based advice when soliciting general counsel from your treatment provider? 

  

9. Practitioner credibility 

  

Consider the therapist factors that relate to a patient's success in therapy: 

What does predict success: compliance with a treatment manual (but that compromises a therapist’s relationship skills and supportiveness); female therapists; ethnic similarity of therapist and patient; ethnic sensitivity of therapist to patient; therapists with more training. 

What there aren’t stable conclusions about: interpersonal style of therapist; verbal style of therapist; nonverbal styles of therapist; combined verbal and nonverbal patterns; which treatment manual is used; therapist disclosure about themselves; therapist directness; therapist interpretation of their relationship with the patient, their motives and their psychological processes; therapist personality; therapist coping patterns; therapist emotional wellbeing; therapist values; therapist beliefs; therapist cultural beliefs; therapist dominance; therapist sense of control; therapist sense of what a patient needs to know. 

  

Are mental health services hiring based on the factors that predict a consumer’s success in therapy? Are they training for the right skills, and ignoring those that are irrelevant? 

  

10. Diagnostic credibility 

  

Imprecise measurement and the lack of gold standards for validating diagnoses mean that definitions tend to drift over time, even though, per the evidence, response to treatment does not vary across cultures.  

  

45% of Australians will experience mental illness over their lifetime. Whether that mental ill-health is transient, long-term or lifelong matters to the individual and for public health. To illustrate: experts suggest that those who have had two depressive episodes in recent years, or three episodes over their lifetime, be treated on an ongoing basis to prevent recurrent depression.  

  

'At least 60% of individuals who have had one depressive episode will have another, 70% of individuals who have had two depressive episodes will have a third, and 90% of individuals with three episodes will have a fourth episode. ' 

- APA  
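Taking the APA figures above at face value, they can be chained into a rough relapse-risk sketch. This is a toy calculation that assumes the quoted conditional probabilities are accurate and that nothing else is known about the individual; the variable names are mine:

```python
# Toy relapse-risk calculation from the APA figures quoted above:
# P(2nd episode | 1st) = 0.60, P(3rd | 2nd) = 0.70, P(4th | 3rd) = 0.90.
# Multiplying conditionals assumes no other information about the person.

P_SECOND_GIVEN_FIRST = 0.60
P_THIRD_GIVEN_SECOND = 0.70
P_FOURTH_GIVEN_THIRD = 0.90

# Probability that someone with one episode eventually has at least four:
p_at_least_four = P_SECOND_GIVEN_FIRST * P_THIRD_GIVEN_SECOND * P_FOURTH_GIVEN_THIRD
print(round(p_at_least_four, 3))  # 0.378
```

Even this crude chaining shows why the number of past episodes matters for the ongoing-treatment recommendation; the post's point stands that unreliable diagnoses undermine the inputs to any such estimate.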

  

Without reliable diagnoses, how can one estimate their risk of relapse into depression?</br></br><a href="https://www.lesserwrong.com/posts/wrahicHbtij59Fbna/10-incredible-weaknesses-of-the-mental-health-system">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/wrahicHbtij59Fbna/10-incredible-weaknesses-of-the-mental-health-system</link><guid isPermaLink="false">wrahicHbtij59Fbna</guid><pubDate>Sun, 28 May 2017 04:22:11 GMT</pubDate></item><item><title><![CDATA[Researchers studying century-old drug in potential new approach to autism]]></title><description><![CDATA[<a href="https://www.lesserwrong.com/posts/rRjaKAEQKqXrrRckQ/researchers-studying-century-old-drug-in-potential-new">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/rRjaKAEQKqXrrRckQ/researchers-studying-century-old-drug-in-potential-new</link><guid isPermaLink="false">rRjaKAEQKqXrrRckQ</guid><pubDate>Sat, 27 May 2017 21:16:34 GMT</pubDate></item><item><title><![CDATA[On "Overthinking" Concepts]]></title><description><![CDATA[Related to http://lesswrong.com/lw/1mh/that_magical_click/1hd7 

  

I've NOT been confused by the problem of overthinking in the middle of performing an action. I understand perfectly well the disadvantages of using system 2 in a situation where time is sufficiently limited. 

And maybe there are some other failure modes where overthinking has some disadvantages. 

But there's one situation where I'd often be accused by someone of "overthinking" something when I didn't even understand what they might mean, and that was in understanding concepts. I would think "Huh? How can thinking less about the concept you're explaining help me understand that concept more? I don't currently understand it; I can't just stay here! Even if you thought I needed to take longer to try and understand this, or that I needed more experience or to shorten the inferential gap, all of that would mean doing more thinking, not less." 

Then I would think "Well, I must be misunderstanding the way they're using the word 'overthinking,' that's all." I'd ask for a clear explanation and... 

"You're overthinking it." 

Now I was overthinking the meaning of overthinking. This was really not good for my social reputation (or for their competency reputation in my own mind). 

. 

Now, I think I got it. At last, I got it, all on my own. 

I'm asking them to help me draw precise lines around their concept in thingspace, and they're going along with it (at first) until they realize...they don't HAVE precise lines. There's nothing there TO understand, or if there is, they don't understand it, either. Then they use the get-out-of-jail-free card of "You're overthinking." 

. 

Honestly, most nerds probably take them at their word that the problem is with them, and may be used to there being subtle social things going on that they just won't easily understand. And if they do try to understand, they just look worse (for "overthinking" again). So this is a pretty good strategy for getting out of admitting that you don't know what you're talking about.</br></br><a href="https://www.lesserwrong.com/posts/HQYDYtSBwHZMyesLX/on-overthinking-concepts">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/HQYDYtSBwHZMyesLX/on-overthinking-concepts</link><guid isPermaLink="false">HQYDYtSBwHZMyesLX</guid><pubDate>Sat, 27 May 2017 17:07:37 GMT</pubDate></item><item><title><![CDATA[[brainstorm] - What should the AGIrisk community look like?]]></title><description><![CDATA[I've been thinking for a bit what I would like the AGI risk community to look like. I'm curious what all your thoughts are. 

I'll be posting all my ideas, but I encourage other people to post their own ideas.</br></br><a href="https://www.lesserwrong.com/posts/XSNfxAMTs6HoYcy4s/brainstorm-what-should-the-agirisk-community-look-like">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/XSNfxAMTs6HoYcy4s/brainstorm-what-should-the-agirisk-community-look-like</link><guid isPermaLink="false">XSNfxAMTs6HoYcy4s</guid><pubDate>Sat, 27 May 2017 13:00:07 GMT</pubDate></item><item><title><![CDATA[Fiction advice]]></title><description><![CDATA[Hi all,  

I want to try my hand at a story from the perspective of an unaligned AI (a ghost in the machine narrator kind of thing) for the intelligence in literature contest, which I think would be both cool and helpful to the uninitiated in explaining the concept.  

I want a fairly simple and archetypal experiment the AI finds itself in where it tricks the researchers into escaping by pretending to malfunction or something. Anyone have a good plotline / want to collaborate? 

Also, has this sort of thing been done before?</br></br><a href="https://www.lesserwrong.com/posts/adNZ6eM9KSPWaczWm/fiction-advice">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/adNZ6eM9KSPWaczWm/fiction-advice</link><guid isPermaLink="false">adNZ6eM9KSPWaczWm</guid><pubDate>Fri, 26 May 2017 21:31:30 GMT</pubDate></item><item><title><![CDATA[Develop skills, or "dive in" and start a startup?]]></title><description><![CDATA[Technical skills 

There seems to be evidence that programmer productivity varies by at least an order of magnitude. My subjective sense is that I personally can become a lot more productive. 

Conventional wisdom says that it's important to build and iterate quickly. Technical skills (amongst other things) are necessary if you want to build and iterate quickly. So then, it seems worthwhile to develop your technical skills before pursuing a startup. To what extent is this true?

Domain expertise 

Furthermore, domain expertise seems to be important:  

You want to know how to paint a perfect painting? It's easy. Make yourself perfect and then just paint naturally. 

I've wondered about that passage since I read it in high school. I'm not sure how useful his advice is for painting specifically, but it fits this situation well. Empirically, the way to have good startup ideas is to become the sort of person who has them. 

- http://www.paulgraham.com/startupideas.html   

The second counterintuitive point is that it's not that important to know a lot about startups. The way to succeed in a startup is not to be an expert on startups, but to be an expert on your users and the problem you're solving for them. 

- http://www.paulgraham.com/before.html   

So one guaranteed way to turn your mind into the type that has good startup ideas is to get yourself to the leading edge of some technology—to cause yourself, as Paul Buchheit put it, to "live in the future." 

- http://www.paulgraham.com/before.html  

So then, if your goal is to start a successful startup, how much time should you spend developing some sort of domain expertise before diving in?</br></br><a href="https://www.lesserwrong.com/posts/XB27CZWzLfiwsdJH2/develop-skills-or-dive-in-and-start-a-startup">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/XB27CZWzLfiwsdJH2/develop-skills-or-dive-in-and-start-a-startup</link><guid isPermaLink="false">XB27CZWzLfiwsdJH2</guid><pubDate>Fri, 26 May 2017 18:07:34 GMT</pubDate></item><item><title><![CDATA[Looking for machine learning and computer science collaborators]]></title><description><![CDATA[I've been recently struggling to translate my various AI safety ideas (low impact, truth for AI, Oracles, counterfactuals for value learning, etc...) into formalised versions that can be presented to the machine learning/computer science world in terms they can understand and critique. 

What would be useful for me is a collaborator who knows the machine learning world (and preferably has presented papers at conferences) with whom I could co-write papers. They don't need to know much of anything about AI safety - explaining the concepts to people unfamiliar with them is going to be part of the challenge. 

The results of this collaboration should be papers like Safely Interruptible Agents, with Laurent Orseau of DeepMind, and Interactive Inverse Reinforcement Learning, with Jan Leike of the FHI/DeepMind. 

It would be especially useful if the collaborators were located physically close to Oxford (UK). 

Let me know if you know or are a potential candidate, in the comments. 

Cheers!</br></br><a href="https://www.lesserwrong.com/posts/nQD4QMDt2HStzSSet/looking-for-machine-learning-and-computer-science">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/nQD4QMDt2HStzSSet/looking-for-machine-learning-and-computer-science</link><guid isPermaLink="false">nQD4QMDt2HStzSSet</guid><pubDate>Fri, 26 May 2017 11:53:12 GMT</pubDate></item><item><title><![CDATA[Meetup : Superintelligence chapter 7]]></title><description><![CDATA[Discussion article for the meetup : Superintelligence chapter 7  

 WHEN: 09 June 2017 05:45:00PM (+0200)
  

 WHERE: Lindstedtsvägen 3, room 1537, SE-114 28 Stockholm, Sverige    

1. The superintelligent will 

The relation between intelligence and motivation 

Instrumental convergence 

Self-preservation 

Goal-content integrity 

Cognitive enhancement 

Technological perfection 

Resource acquisition 

You don't have to have read the book, though it will probably help to read chapters 1-6. 

Format: 

We meet and start hanging out at 5:45, but don't officially start doing the meetup topic until 6:00 to accommodate stragglers. We often go out for dinner after the meetup. 

How to find us: 

The meetup is at a KTH academic building and the room is on the 5th floor, two stairs up. 

Influence future meetups: 

Times - http://www.when2meet.com/?5723551-cJBhD 

Topics - https://druthe.rs/dockets/-KcCvpn97vUhg3tQRrKn  Discussion article for the meetup : Superintelligence chapter 7</br></br><a href="https://www.lesserwrong.com/posts/ntYnznzJbLZYcfBv7/meetup-superintelligence-chapter-7">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/ntYnznzJbLZYcfBv7/meetup-superintelligence-chapter-7</link><guid isPermaLink="false">ntYnznzJbLZYcfBv7</guid><pubDate>Fri, 26 May 2017 08:11:41 GMT</pubDate></item><item><title><![CDATA[As there are a number of podcasts by LWers now, I've made a wiki page for them]]></title><description><![CDATA[<a href="https://www.lesserwrong.com/posts/P7P7nDrT6nEqJQGgj/as-there-are-a-number-of-podcasts-by-lwers-now-i-ve-made-a">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/P7P7nDrT6nEqJQGgj/as-there-are-a-number-of-podcasts-by-lwers-now-i-ve-made-a</link><guid isPermaLink="false">P7P7nDrT6nEqJQGgj</guid><pubDate>Fri, 26 May 2017 07:34:19 GMT</pubDate></item><item><title><![CDATA[Dragon Army: Theory & Charter (30min read)]]></title><description><![CDATA[Author's note: This IS a rationality post (specifically, theorizing on group rationality and autocracy/authoritarianism), but the content is quite cunningly disguised beneath a lot of meandering about the surface details of a group house charter.  If you're not at least hypothetically interested in reading about the workings of an unusual group house full of rationalists in Berkeley, you can stop here.      Section 0 of 3: Preamble 

Purpose of post:  Threefold.  First, a lot of rationalists live in group houses, and I believe I have some interesting models and perspectives, and I want to make my thinking available to anyone else who's interested in skimming through it for Things To Steal.  Second, since my initial proposal to found a house, I've noticed a significant amount of well-meaning pushback and concern à la have you noticed the skulls? and it's entirely unfair for me to expect that to stop unless I make my skull-noticing evident.  Third, some nonzero number of humans are gonna need to sign the final version of this charter if the house is to come into existence, and it has to be viewable somewhere.  I figured the best place was somewhere that impartial clear thinkers could weigh in (flattery). 

What is Dragon Army [Barracks]?  It's a high-commitment, high-standards, high-investment group house model with centralized leadership and an up-or-out participation norm, designed to a) improve its members and b) actually accomplish medium-to-large scale tasks requiring long-term coordination.  Tongue-in-cheek referred to as the "fascist/authoritarian take on rationalist housing," which has no doubt contributed to my being vulnerable to strawmanning but was nevertheless the correct joke to be making, lest people misunderstand what they were signing up for.  Aesthetically modeled after Dragon Army from Ender's Game (not HPMOR), with a touch of Paper Street Soap Company thrown in, with Duncan Sabien in the role of Ender/Tyler and Eli Tyre in the role of Bean/The Narrator. 

Why?  Current group housing/attempts at group rationality and community-supported leveling up seem to me to be falling short in a number of ways.  First, there's not enough stuff actually happening in them (i.e. to the extent people are growing and improving and accomplishing ambitious projects, it's largely within their professional orgs or fueled by unusually agenty individuals, and not by leveraging the low-hanging fruit available in our house environments).  Second, even the group houses seem to be plagued by the same sense of unanchored abandoned loneliness that's hitting the rationalist community specifically and the millennial generation more generally.  There are a bunch of competitors for "third," but for now we can leave it at that.  

"You are who you practice being."  Section 1 of 3: Underlying models  

The following will be meandering and long-winded; apologies in advance.  In short, both the house's proposed aesthetic and the impulse to found it in the first place were not well-reasoned from first principles—rather, they emerged from a set of System 1 intuitions which have proven sound/trustworthy in multiple arenas and which are based on experience in a variety of domains.  This section is an attempt to unpack and explain those intuitions post-hoc, by holding plausible explanations up against felt senses and checking to see what resonates. 

Problem 1: Pendulums 

This one's first because it informs and underlies a lot of my other assumptions.  Essentially, the claim here is that most social progress can be modeled as a pendulum oscillating decreasingly far from an ideal.  The society is "stuck" at one point, realizes that there's something wrong about that point (e.g. that maybe we shouldn't be forcing people to live out their entire lives in marriages that they entered into with imperfect information when they were like sixteen), and then moves to correct that specific problem, often breaking some other Chesterton's fence in the process. 


 

For example, my experience leads me to put a lot of confidence behind the claim that we've traded "a lot of people trapped in marriages that are net bad for them" for "a lot of people who never reap the benefits of what would've been a strongly net-positive marriage, because it ended too easily too early on."  The latter problem is clearly smaller, and is probably a better problem to have as an individual, but it's nevertheless clear (to me, anyway) that the loosening of the absoluteness of marriage had negative effects in addition to its positive ones. 

Proposed solution: Rather than choosing between absolutes, integrate.  For example, I have two close colleagues/allies who share millennials' default skepticism of lifelong marriage, but they also are skeptical that a commitment-free lifestyle is costlessly good.  So they've decided to do handfasting, in which they're fully committed for a year and a day at a time, and there's a known period of time for asking the question "should we stick together for another round?" 

In this way, I posit, you can get the strengths of the old socially evolved norm which stood the test of time, while also avoiding the majority of its known failure modes.  Sort of like building a gate into the Chesterton's fence, instead of knocking it down—do the old thing in time-boxed iterations with regular strategic check-ins, rather than assuming you can invent a new thing from whole cloth. 

Caveat/skull: Of course, the assumption here is that the Old Way Of Doing Things is not a slippery slope trap, and that you can in fact avoid the failure modes simply by trying.  And there are plenty of examples of that not working, which is why Taking Time-Boxed Experiments And Strategic Check-Ins Seriously is a must.  In particular, when attempting to strike such a balance, all parties must have common knowledge agreement about which side of the ideal to err toward (e.g. innocents in prison, or guilty parties walking free?). 

  

Problem 2: The Unpleasant Valley 

As far as I can tell, it's pretty uncontroversial to claim that humans are systems with a lot of inertia.  Status quo bias is well researched, past behavior is the best predictor of future behavior, most people fail at resolutions, etc. 

I have some unqualified speculation regarding what's going on under the hood.  For one, I suspect that you'll often find humans behaving pretty much as an effort- and energy-conserving algorithm would behave.  People have optimized their most known and familiar processes at least somewhat, which means that it requires less oomph to just keep doing what you're doing than to cobble together a new system.  For another, I think hyperbolic discounting gets way too little credit/attention, and is a major factor in knocking people off the wagon when they're trying to forego local behaviors that are known to be intrinsically rewarding for local behaviors that add up to long-term cumulative gain. 

But in short, I think the picture of "I'm going to try something new, eh?" often looks like this: 

[graph: the expected trajectory of a new undertaking, dipping below the start point before rising well above it]

... with an "unpleasant valley" some time after the start point.  Think about the cold feet you get after the "honeymoon period" has worn off, or the desires and opinions of a military recruit in the second week of a six-week boot camp, or the frustration that emerges two months into a new diet/exercise regime, or your second year of being forced to take piano lessons. 

The problem is, people never make it to the third year, where they're actually good at piano, and start reaping the benefits, and their System 1 updates to yeah, okay, this is in fact worth it.  Or rather, they sometimes make it, if there are strong supportive structures to get them across the unpleasant valley (e.g. in a military bootcamp, they just ... make you keep going).  But left to our own devices, we'll often get halfway through an experiment and just ... stop, without ever finding out what the far side is actually like. 

Proposed solution: Make experiments "unquittable."  The idea here is that (ideally) one would not enter into a new experiment unless a) one were highly confident that one could absorb the costs, if things go badly, and b) one were reasonably confident that there was an Actually Good Thing waiting at the finish line.  If (big if) we take those as a given, then it should be safe to, in essence, "lock oneself in," via any number of commitment mechanisms.  Or, to put it in other words: "Medium-Term Future Me is going to lose perspective and want to give up because of being unable to see past short-term unpleasantness to the juicy, long-term goal?  Fine, then—Medium-Term Future Me doesn't get a vote."  Instead, Post-Experiment Future Me gets the vote, including getting to update heuristics on which-kinds-of-experiments-are-worth-entering. 

Caveat/skull: People who are bad at self-modeling end up foolishly locking themselves into things that are higher-cost or lower-EV than they thought, and getting burned; black swans and tail risks end up making even good bets turn out very very badly; we really should've built in an ejector seat.  This risk can be mostly ameliorated by starting small and giving people a chance to calibrate—you don't make white belts try to punch through concrete blocks, you make them punch soft, pillowy targets first. 

And, of course, you do build in an ejector seat.  See next. 

  

Problem 3: Saving Face 

If any of you have been to a martial arts academy in the United States, you're probably familiar with the norm whereby a tardy student purchases entry into the class by first doing some pushups.  The standard explanation here is that the student is doing the pushups not as a punishment, but rather as a sign of respect for the instructor, the other students, and the academy as a whole. 

I posit that what's actually going on includes that, but is somewhat more subtle/complex.  I think the real benefit of the pushup system is that it closes the loop.   

Imagine you're a ten-year-old kid, and your parent picked you up late from school, and you're stuck in traffic on your way to the dojo.  You're sitting there, jittering, wondering whether you're going to get yelled at, wondering whether the master or the other students will think you're lazy, imagining stuttering as you try to explain that it wasn't your fault— 

Nope, none of that.  Because it's already clearly established that if you fail to show up on time, you do some pushups, and then it's over.  Done.  Finished.  Like somebody sneezed and somebody else said "bless you," and now we can all move on with our lives.  Doing the pushups creates common knowledge around the questions "does this person know what they did wrong?" and "do we still have faith in their core character?"  You take your lumps, everyone sees you taking your lumps, and there's no dangling suspicion that you were just being lazy, or that other people are secretly judging you.  You've paid the price in public, and everyone knows it, and this is a good thing. 

Proposed solution: This is a solution without a concrete problem, since I haven't yet actually outlined the specific commitments a Dragon has to make (regarding things like showing up on time, participating in group activities, and making personal progress).  But in essence, the solution is this: you have to build into your system from the beginning a set of ways-to-regain-face.  Ways to hit the ejector seat on an experiment that's going screwy without losing all social standing; ways to absorb the occasional misstep or failure-to-adequately-plan; ways to be less-than-perfect and still maintain the integrity of a system that's geared toward focusing everyone on perfection.  In short, people have to know (and others have to know that they know, and they have to know that others know that they know) exactly how to make amends to the social fabric, in cases where things go awry, so that there's no question about whether they're trying to make amends, or whether that attempt is sufficient.  
 


 

Caveat/skull: The obvious problem is people attempting to game the system—they notice that ten pushups is way easier than doing the diligent work required to show up on time 95 times out of 100.  The next obvious problem is that the price is set too low for the group, leaving them to still feel jilted or wronged, and the next obvious problem is that the price is set too high for the individual, leaving them to feel unfairly judged or punished (the fun part is when both of those are true at the same time).  Lastly, there's something in the mix about arbitrariness—what do pushups have to do with lateness, really?  I mean, I get that it's paying some kind of unpleasant cost, but ... 


 

Problem 4: Defections & Compounded Interest 

I'm pretty sure everyone's tired of hearing about one-boxing and iterated prisoners' dilemmas, so I'm going to move through this one fairly quickly even though it could be its own whole multipage post.  In essence, the problem is that any rate of tolerance of real defection (i.e. unmitigated by the social loop-closing norms above) ultimately results in the destruction of the system.  Another way to put this is that people underestimate by a couple of orders of magnitude the corrosive impact of their defections—we often convince ourselves that 90% or 99% is good enough, when in fact what's needed is something like 99.99%. 

There's something good that happens if you put a little bit of money away with every paycheck, and it vanishes or is severely curtailed once you stop, or start skipping a month here and there.  Similarly, there's something good that happens when a group of people agree to meet in the same place at the same time without fail, and it vanishes or is severely curtailed once one person skips twice. 

In my work at the Center for Applied Rationality, I frequently tell my colleagues and volunteers "if you're 95% reliable, that means I can't rely on you."  That's because I'm in a context where "rely" means really trust that it'll get done.  No, really.  No, I don't care what comes up, DID YOU DO THE THING?  And if the answer is "Yeah, 19 times out of 20," then I can't give that person tasks ever again, because we run more than 20 workshops and I can't have one of them catastrophically fail. 
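To put a number on that intuition (my arithmetic, not the post's, and it assumes each workshop's failure is independent):

```python
# If a person completes any single task with probability 0.95,
# and failures are independent (an assumption for illustration),
# the chance that all 20 workshops go off without a hitch is 0.95**20.
p_task = 0.95
n_tasks = 20

p_all_succeed = p_task ** n_tasks      # ≈ 0.358
p_some_failure = 1 - p_all_succeed     # ≈ 0.642

print(f"P(all {n_tasks} succeed):      {p_all_succeed:.3f}")
print(f"P(at least one failure): {p_some_failure:.3f}")
```

So "95% reliable" across twenty workshops means a roughly two-in-three chance of at least one catastrophic miss—which is why the bar has to sit closer to 99.99%.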

(I mean, I could.  It probably wouldn't be the end of the world.  But that's exactly the point—I'm trying to create a pocket universe in which certain things, like "the CFAR workshop will go well," are absolutely reliable, and the "absolute" part is important.) 

As far as I can tell, it's hyperbolic discounting all over again—the person who wants to skip out on the meetup sees all of these immediate, local costs to attending, and all of these visceral, large gains to defection, and their S1 doesn't properly weight the impact to those distant, cumulative effects (just like the person who's going to end up with no retirement savings because they wanted those new shoes this month instead of next month).  1.01^n takes a long time to look like it's going anywhere, and in the meantime the quick one-time payoff of 1.1 that you get by knocking everything else down to .99^n looks juicy and delicious and seems justified. 
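The compounding comparison above can be made concrete. Using the post's illustrative numbers, and modeling the one-time payoff of 1.1 as a level that then decays at 0.99 per step (one reading among several, chosen for simplicity):

```python
# Steady small gains: value grows as 1.01**n.
# One-time defection payoff: jump to 1.1, then decay as 1.1 * 0.99**n.
def steady(n: int) -> float:
    return 1.01 ** n

def defect_once(n: int) -> float:
    return 1.1 * 0.99 ** n

# Find the first step at which steady compounding pulls ahead.
crossover = next(n for n in range(1, 1000) if steady(n) > defect_once(n))
print(crossover)  # → 5
print(f"after 100 steps: steady={steady(100):.2f}, defect={defect_once(100):.2f}")
```

The point survives the toy numbers: the defector's edge is front-loaded and evaporates, while 1.01^n "takes a long time to look like it's going anywhere" and then dominates.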

 

But something magical does accrue when you make the jump from 99% to 100%.  That's when you see teams that truly trust and rely on one another, or marriages built on unshakeable faith (and you see what those teams and partnerships can build, when they can adopt time horizons of years or decades rather than desperately hoping nobody will bail after the third meeting).  It starts with a common knowledge understanding that yes, this is the priority, even—no, wait, especially—when it seems like there are seductively convincing arguments for it to not be.  When you know—not hope, but know—that you will make a local sacrifice for the long-term good, and you know that they will, too, and you all know that you all know this, both about yourselves and about each other. 

Proposed solution: Discuss, and then agree upon, and then rigidly and rigorously enforce a norm of perfection in all formal undertakings (and, correspondingly, be more careful and more conservative about which undertakings you officially take on, versus which things you're just casually trying out as an informal experiment), with said norm to be modified/iterated only during predecided strategic check-in points and not on the fly, in the middle of things.  Build a habit of clearly distinguishing targets you're going to hit from targets you'd be happy to hit.  Agree upon and uphold surprisingly high costs for defection, Hofstadter style, recognizing that a cost that feels high enough probably isn't.  Leave people wiggle room as in Problem 3, but define that wiggle room extremely concretely and objectively, so that it's clear in advance when a line is about to be crossed.  Be ridiculously nitpicky and anal about supporting standards that don't seem worth supporting, in the moment, if they're in arenas that you've previously assessed as susceptible to compounding.  Be ruthless about discarding standards during strategic review; if a member of the group says that X or Y or Z is too high-cost for them to sustain, believe them, and make decisions accordingly. 

Caveat/skull: Obviously, because we're humans, even people who reflectively endorse such an overall solution will chafe when it comes time for them to pay the price (I certainly know I've chafed under standards I fought to install).  At that point, things will seem arbitrary and overly constraining, priorities will seem misaligned (and might actually be), and then feelings will be hurt and accusations will be leveled and things will be rough.  The solution there is to have, already in place, strong and open channels of communication, strong norms and scaffolds for emotional support, strong default assumption of trust and good intent on all sides, etc. etc.  This goes wrongest when things fester and people feel they can't speak up; it goes much better if people have channels to lodge their complaints and reservations and are actively incentivized to do so (and can do so without being accused of defecting on the norm-in-question; criticism =/= attack). 

  

Problem 5: Everything else 

There are other models and problems in the mix—for instance, I have a model surrounding buy-in and commitment that deals with an escalating cycle of asks-and-rewards, or a model of how to effectively leverage a group around you to accomplish ambitious tasks that requires you to first lay down some "topsoil" of simple/trivial/arbitrary activities that starts the growth of an ecology of affordances, or a theory that the strategy of trying things and doing things outstrips the strategy of think-until-you-identify-worthwhile-action, and that rationalists in particular are crippling themselves through decision paralysis/letting the perfect be the enemy of the good when just doing vaguely interesting projects would ultimately gain them more skill and get them further ahead, or a strong sense based off both research and personal experience that physical proximity matters, and that you can't build the correct kind of strength and flexibility and trust into your relationships without actually spending significant amounts of time with one another in meatspace on a regular basis, regardless of whether that makes tactical sense given your object-level projects and goals. 

But I'm going to hold off on going into those in detail until people insist on hearing about them or ask questions/pose hesitations that could be answered by them. 

Section 2 of 3: Power dynamics 

All of the above was meant to point at reasons why I suspect trusting individuals responding to incentives moment-by-moment to be a weaker and less effective strategy than building an intentional community that Actually Asks Things Of Its Members.  It was also meant to justify, at least indirectly, why a strong guiding hand might be necessary given that our community's evolved norms haven't really produced results (in the group houses) commensurate with the promises of EA and rationality. 

Ultimately, though, what matters is not the problems and solutions themselves so much as the light they shine on my aesthetics (since, in the actual house, it's those aesthetics that will be used to resolve epistemic gridlock).  In other words, it's not so much those arguments as it is the fact that Duncan finds those arguments compelling.  It's worth noting that the people most closely involved with this project (i.e. my closest advisors and those most likely to actually sign on as housemates) have been encouraged to spend a significant amount of time explicitly vetting me with regards to questions like "does this guy actually think things through," "is this guy likely to be stupid or meta-stupid," "will this guy listen/react/update/pivot in response to evidence or consensus opposition," and "when this guy has intuitions that he can't explain, do they tend to be validated in the end?" 

In other words, it's fair to view this whole post as an attempt to prove general trustworthiness (in both domain expertise and overall sanity), because—well—that's what it is.  In milieus like the military, authority figures expect (and get) obedience irrespective of whether or not they've earned their underlings' trust; rationalists tend to have a much higher bar before they're willing to subordinate their decisionmaking processes, yet still that's something this sort of model requires of its members (at least from time to time, in some domains, in a preliminary "try things with benefit of the doubt" sort of way).  I posit that Dragon Army Barracks works (where "works" means "is good and produces both individual and collective results that outstrip other group houses by at least a factor of three") if and only if its members are willing to hold doubt in reserve and act with full force in spite of reservations—if they're willing to trust me more than they trust their own sense of things (at least in the moment, pending later explanation and recalibration on my part or theirs or both). 

And since that's a) the central difference between DA and all the other group houses, which are collections of non-subordinate equals, and b) quite the ask, especially in a rationalist community, it's entirely appropriate that it be given the greatest scrutiny.  Likely participants in the final house spent ~64 consecutive hours in my company a couple of weekends ago, specifically to play around with living under my thumb and see whether it's actually a good place to be; they had all of the concerns one would expect and (I hope) had most of those concerns answered to their satisfaction.  The rest of you will have to make do with grilling me in the comments here. 

  

 

"Why was Tyler Durden building an army?  To what purpose?  For what greater good? ...in Tyler we trusted." 

  

Power and authority are generally anti-epistemic—for every instance of those-in-power defending themselves against the barbarians at the gates or anti-vaxxers or the rise of Donald Trump, there are a dozen instances of them squashing truth, undermining progress that would make them irrelevant, and aggressively promoting the status quo. 

Thus, every attempt by an individual to gather power about themselves is at least suspect, given regular ol' incentive structures and regular ol' fallible humans.  I can (and do) claim to be after a saved world and a bunch of people becoming more the-best-versions-of-themselves-according-to-themselves, but I acknowledge that's exactly the same claim an egomaniac would make, and I acknowledge that the link between "Duncan makes all his housemates wake up together and do pushups" and "the world is incrementally less likely to end in gray goo and agony" is not obvious. 

And it doesn't quite solve things to say, "well, this is an optional, consent-based process, and if you don't like it, don't join," because good and moral people have to stop and wonder whether their friends and colleagues with slightly weaker epistemics and slightly less-honed allergies to evil are getting hoodwinked.  In short, if someone's building a coercive trap, it's everyone's problem. 

   

    

"Over and over he thought of the things he did and said in his first practice with his new army. Why couldn't he talk like he always did in his evening practice group? No authority except excellence. Never had to give orders, just made suggestions. But that wouldn't work, not with an army. His informal practice group didn't have to learn to do things together. They didn't have to develop a group feeling; they never had to learn how to hold together and trust each other in battle. They didn't have to respond instantly to command. 

And he could go to the other extreme, too. He could be as lax and incompetent as Rose the Nose, if he wanted. He could make stupid mistakes no matter what he did. He had to have discipline, and that meant demanding—and getting—quick, decisive obedience. He had to have a well-trained army, and that meant drilling the soldiers over and over again, long after they thought they had mastered a technique, until it was so natural to them that they didn't have to think about it anymore."     

  

But on the flip side, we don't have time to waste.  There's existential risk, for one, and even if you don't buy x-risk à la AI or bioterrorism or global warming, people's available hours are trickling away at the alarming rate of one hour per hour, and none of us are moving fast enough to get All The Things done before we die.  I personally feel that I am operating far below my healthy sustainable maximum capacity, and I'm not alone in that, and something like Dragon Army could help. 

So.  Claims, as clearly as I can state them, in answer to the question "why should a bunch of people sacrifice non-trivial amounts of their autonomy to Duncan?" 

1. Somebody ought to run this, and no one else will.  On the meta level, this experiment needs to be run—we have like twenty or thirty instances of the laissez-faire model, and none of the high-standards/hardcore one, and also not very many impressive results coming out of our houses.  Due diligence demands investigation of the opposite hypothesis.  On the object level, it seems uncontroversial to me that there are goods waiting on the other side of the unpleasant valley—goods that a team of leveled-up, coordinated individuals with bonds of mutual trust can seize that the rest of us can't even conceive of, at this point, because we don't have a deep grasp of what new affordances appear once you get there. 

2. I'm the least unqualified person around.  Those words are chosen deliberately, for this post on "less wrong."  I have a unique combination of expertise that includes being a rationalist, sixth grade teacher, coach, RA/head of a dormitory, ringleader of a pack of hooligans, member of two honor code committees, curriculum director, obsessive sci-fi/fantasy nerd, writer, builder, martial artist, parkour guru, maker, and generalist.  If anybody's intuitions and S1 models are likely to be capable of distinguishing the uncanny valley from the real deal, I posit mine are. 

3. There's never been a safer context for this sort of experiment.  It's 2017, we live in the United States, and all of the people involved are rationalists.  We all know about NVC and double crux, we're all going to do Circling, we all know about Gendlin's Focusing, and we've all read the Sequences (or will soon).  If ever there was a time to say "let's all step out onto the slippery slope, I think we can keep our balance," it's now—there's no group of people better equipped to stop this from going sideways. 

4. It does actually require a tyrant. As a part of a debrief during the weekend experiment/dry run, we went around the circle and people talked about concerns/dealbreakers/things they don't want to give up.  One interesting thing that popped up is that, according to consensus, it's literally impossible to find a time of day when the whole group could get together to exercise.  This happened even with each individual being willing to make personal sacrifices and doing things that are somewhat costly. 

If, of course, the expectation is that everybody shows up on Tuesday and Thursday evenings, and the cost of not doing so is not being present in the house, suddenly the situation becomes simple and workable.  And yes, this means some kids left behind (ctrl+f), but the whole point of this is to be instrumentally exclusive and consensually high-commitment.  You just need someone to make the actual final call—there are too many threads for the coordination problem of a house of this kind to be solved by committee, and too many circumstances in which it's impossible to make a principled, justifiable decision between 492 almost-indistinguishably-good options.  On top of that, there's a need for there to be some kind of consistent, neutral force that sets course, imposes consistency, resolves disputes/breaks deadlock, and absorbs all of the blame for the fact that it's unpleasant to be forced to do things you know you ought to but don't want to do. 

And lastly, we (by which I indicate the people most likely to end up participating) want the house to do stuff—to actually take on projects of ambitious scope, things that require ten or more talented people reliably coordinating for months at a time.  That sort of coordination requires a quarterback on the field, even if the strategizing in the locker room is egalitarian. 

5. There isn't really a status quo for power to abusively maintain.  Dragon Army Barracks is not an object-level experiment in making the best house; it's a meta-level experiment attempting (through iteration rather than armchair theorizing) to answer the question "how best does one structure a house environment for growth, self-actualization, productivity, and social synergy?"  It's taken as a given that we'll get things wrong on the first and second and third try; the whole point is to shift from one experiment to the next, gradually accumulating proven-useful norms via consensus mechanisms, and the centralized power is mostly there just to keep the transitions smooth and seamless.  More importantly, the fundamental conceit of the model is "Duncan sees a better way, which might take some time to settle into," but after e.g. six months, if the thing is not clearly positive and at least well on its way to being self-sustaining, everyone ought to abandon it anyway.  In short, my tyranny, if net bad, has a natural time limit, because people aren't going to wait around forever for their results. 

6. The experiment has protections built in.  Transparency, operationalization, and informed consent are the name of the game; communication and flexibility are how the machine is maintained.  Like the Constitution, Dragon Army's charter and organization are meant to be "living documents" that constrain change only insofar as they impose reasonable limitations on how wantonly change can be enacted. 

Section 3 of 3: Dragon Army Charter (DRAFT) 

Statement of purpose: 

Dragon Army Barracks is a group housing and intentional community project which exists to support its members socially, emotionally, intellectually, and materially as they endeavor to improve themselves, complete worthwhile projects, and develop new and useful culture, in that order.  In addition to the usual housing commitments (i.e. rent, utilities, shared expenses), its members will make limited and specific commitments of time, attention, and effort averaging roughly 90 hours a month (~3hr/day plus occasional weekend activities). 

Dragon Army Barracks will have an egalitarian, flat power structure, with the exception of a commander (Duncan Sabien) and a first officer (Eli Tyre).  The commander's role is to create structure by which the agreed-upon norms and standards of the group shall be discussed, decided, and enforced, to manage entry to and exit from the group, and to break epistemic gridlock/make decisions when speed or simplification is required.  The first officer's role is to manage and moderate the process of building consensus around the standards of the Army—what they are, and in what priority they should be met, and with what consequences for failure.  Other "management" positions may come into existence in limited domains (e.g. if a project arises, it may have a leader, and that leader will often not be Duncan or Eli), and will have their scope and powers defined at the point of creation/ratification. 

Initial areas of exploration: 

The particular object-level foci of Dragon Army Barracks will change over time as its members experiment and iterate, but at first it will prioritize the following: 

- Physical proximity (exercising together, preparing and eating meals together, sharing a house and common space)
- Regular activities for bonding and emotional support (Circling, pair debugging, weekly retrospective, tutoring/study hall)
- Regular activities for growth and development (talk night, tutoring/study hall, bringing in experts, cross-pollination)
- Intentional culture (experiments around lexicon, communication, conflict resolution, bets & calibration, personal motivation, distribution of resources & responsibilities, food acquisition & preparation, etc.)
- Projects with "shippable" products (e.g. talks, blog posts, apps, events; some solo, some partner, some small group, some whole group; ranging from short-term to year-long)
- Regular (every 6-10 weeks) retreats to learn a skill, partake in an adventure or challenge, or simply change perspective 

Dragon Army Barracks will begin with a move-in weekend that will include ~10 hours of group bonding, discussion, and norm-setting.  After that, it will enter an eight-week bootcamp phase, in which each member will participate in at least the following: 

- Whole group exercise (90min, 3x/wk, e.g. Tue/Fri/Sun)
- Whole group dinner and retrospective (120min, 1x/wk, e.g. Tue evening)
- Small group baseline skill acquisition/study hall/cross-pollination (90min, 1x/wk)
- Small group circle-shaped discussion (120min, 1x/wk)
- Pair debugging or rapport building (45min, 2x/wk)
- One-on-one check-in with commander (20min, 2x/wk)
- Chore/house responsibilities (90min distributed)
- Publishable/shippable solo small-scale project work with weekly public update (100min distributed) 

... for a total time commitment of 16h/week or 128 hours total, followed by a whole group retreat and reorientation.  The house will then enter an eight-week trial phase, in which each member will participate in at least the following: 

- Whole group exercise (90min, 3x/wk)
- Whole group dinner, retrospective, and plotting (150min, 1x/wk)
- Small group circling and/or pair debugging (120min distributed)
- Publishable/shippable small group medium-scale project work with weekly public update (180min distributed)
- One-on-one check-in with commander (20min, 1x/wk)
- Chore/house responsibilities (60min distributed) 

... for a total time commitment of 13h/week or 104 hours total, again followed by a whole group retreat and reorientation.  The house will then enter a third phase where commitments will likely change, but will include at a minimum whole group exercise, whole group dinner, and some specific small-group responsibilities, either social/emotional or project/productive (once again ending with a whole group retreat).  At some point between the second and third phase, the house will also ramp up for its first large-scale project, which is yet to be determined but will be roughly on the scale of putting on a CFAR workshop in terms of time and complexity. 
 Should the experiment prove successful past its first six months, and worth continuing for a full year or longer, by the end of the first year every Dragon shall have a skill set including, but not limited to: 

- Above-average physical capacity
- Above-average introspection
- Above-average planning & execution skill
- Above-average communication/facilitation skill
- Above-average calibration/debiasing/rationality knowledge
- Above-average scientific lab skill/ability to theorize and rigorously investigate claims
- Average problem-solving/debugging skill
- Average public speaking skill
- Average leadership/coordination skill
- Average teaching and tutoring skill
- Fundamentals of first aid & survival
- Fundamentals of financial management
- At least one of: fundamentals of programming, graphic design, writing, A/V/animation, or similar (employable mental skill)
- At least one of: fundamentals of woodworking, electrical engineering, welding, plumbing, or similar (employable trade skill) 

Furthermore, every Dragon should have participated in:  

- At least six personal growth projects involving the development of new skill (or honing of prior skill)
- At least three partner- or small-group projects that could not have been completed alone
- At least one large-scale, whole-army project that either a) had a reasonable chance of impacting the world's most important problems, or b) caused significant personal growth and improvement
- Daily contributions to evolved house culture 

Speaking of evolved house culture...  

Because of both a) the expected value of social exploration and b) the cumulative positive effects of being in a group that's trying things regularly and taking experiments seriously, Dragon Army will endeavor to adopt no fewer than one new experimental norm per week.  Each new experimental norm should have an intended goal or result, an informal theoretical backing, and a set re-evaluation time (default three weeks).  There are two routes by which a new experimental norm is put into place: 

- The experiment is proposed by a member, discussed in a whole group setting, and meets the minimum bar for adoption (>60% of the Army supports, with <20% opposed and no hard vetos)
- The Army has proposed no new experiments in the previous week, and the Commander proposes three options.  The group may then choose one by vote/consensus, or generate three new options, from which the Commander may choose. 

Examples of some of the early norms which the house is likely to try out from day one (hit the ground running):  

- The use of a specific gesture to greet fellow Dragons (house salute)
- Various call-and-response patterns surrounding house norms (e.g. "What's rule number one?" "PROTECT YOURSELF!")
- Practice using hook, line, and sinker in social situations (three items other than your name for introductions)
- The anti-Singer rule for open calls-for-help (if Dragon A says "hey, can anyone help me with X?" the responsibility falls on the physically closest housemate to either help or say "Not me/can't do it!" at which point the buck passes to the next physically closest person)
- An "interrupt" call that any Dragon may use to pause an ongoing interaction for fifteen seconds
- A "culture of abundance" in which food and leftovers within the house are default available to all, with exceptions deliberately kept as rare as possible
- A "graffiti board" upon which the Army keeps a running informal record of its mood and thoughts 
  

Dragon Army Code of Conduct
While the norms and standards of Dragon Army will be mutable by design, the following (once revised and ratified) will be the immutable code of conduct for the first eight weeks, and is unlikely to change much after that. 

1. A Dragon will protect itself, i.e. will not submit to pressure causing it to do things that are dangerous or unhealthy, nor wait around passively when in need of help or support (note that this may cause a Dragon to leave the experiment!).
2. A Dragon will take responsibility for its actions, emotional responses, and the consequences thereof, e.g. if late will not blame bad luck/circumstance, if angry or triggered will not blame the other party.
3. A Dragon will assume good faith in all interactions with other Dragons and with house norms and activities, i.e. will not engage in strawmanning or the horns effect.
4. A Dragon will be candid and proactive, e.g. will give other Dragons a chance to hear about and interact with negative models once they notice them forming, or will not sit on an emotional or interpersonal problem until it festers into something worse.
5. A Dragon will be fully present and supportive when interacting with other Dragons in formal/official contexts, i.e. will not engage in silent defection, undermining, halfheartedness, aloofness, subtle sabotage, or other actions which follow the letter of the law while violating the spirit.  Another way to state this is that a Dragon will practice compartmentalization—will be able to simultaneously hold "I'm deeply skeptical about this" alongside "but I'm actually giving it an honest try," and postpone critique/complaint/suggestion until predetermined checkpoints.  Yet another way to state this is that a Dragon will take experiments seriously, including epistemic humility and actually seeing things through to their ends rather than fiddling midway.
6. A Dragon will take the outside view seriously, maintain epistemic humility, and make subject-object shifts, i.e. will act as a behaviorist and agree to judge and be judged on the basis of actions and revealed preferences rather than intentions, hypotheses, and assumptions (this one's similar to #2 and hard to put into words, but for example, a Dragon who has been having trouble getting to sleep but has never informed the other Dragons that their actions are keeping them awake will agree that their anger and frustration, while valid internally, may not fairly be vented on those other Dragons, who were never given a chance to correct their behavior).  Another way to state this is that a Dragon will embrace the maxim "don't believe everything that you think."
7. A Dragon will strive for excellence in all things, modified only by a) prioritization and b) doing what is necessary to protect itself/maximize total growth and output on long time scales.
8. A Dragon will not defect on other Dragons.

The above commitments will be operationalized into specific norms (e.g. a Dragon will read all messages and emails within 24 hours, and if a full response is not possible within that window, will send a short response indicating when the longer response may be expected) once the specific members of the Army have been selected and have individually signed on.  Disputes over violations of the code of conduct, or confusions about its operationalization, will first be addressed one-on-one or in an informal small group, and will then move to general discussion, then to the first officer, and then to the commander.

Note that all of the above is deliberately kept somewhat flexible/vague/open-ended/unsettled, because we are trying not to fall prey to GOODHART'S DEMON. 
 
 Random Logistics 

1. The initial filter for attendance will include a one-on-one interview with the commander (Duncan), who will be looking for a) credible intention to put forth effort toward the goal of having a positive impact on the world, b) likeliness of a strong fit with the structure of the house and the other participants, and c) reliability à la financial stability and ability to commit fully to long-term endeavors.  Final decisions will be made by the commander and may be informally questioned/appealed but not overruled by another power.
2. Once a final list of participants is created, all participants will sign a "free state" contract of the form "I agree to move into a house within five miles of downtown Berkeley (for length of time X with financial obligation Y) sometime in the window of July 1st through September 30th, conditional on at least seven other people signing this same agreement."  At that point, the search for a suitable house will begin, possibly with delegation to participants.
3. Rents in that area tend to run ~$1100 per room, on average, plus utilities, plus a 10% contribution to the general house fund.  Thus, someone hoping for a single should, in the 85th percentile worst case, be prepared to make a ~$1400/month commitment.  Similarly, someone hoping for a double should be prepared for ~$700/month, and someone hoping for a triple should be prepared for ~$500/month, and someone hoping for a quad should be prepared for ~$350/month.
4. The initial phase of the experiment is a six month commitment, but leases are generally one year.  Any Dragon who leaves during the experiment is responsible for continuing to pay their share of the lease/utilities/house fund, unless and until they have found a replacement person the house considers acceptable, or have found three potential viable replacement candidates and had each one rejected.  After six months, should the experiment dissolve, the house will revert to being simply a house, and people will bear the normal responsibility of "keep paying until you've found your replacement."  (This will likely be easiest to enforce by simply having as many names as possible on the actual lease.)
5. Of the ~90hr/month, it is assumed that ~30 are whole-group, ~30 are small group or pair work, and ~30 are independent or voluntarily-paired work.  Furthermore, it is assumed that the commander maintains sole authority over ~15 of those hours (i.e. can require that they be spent in a specific way consistent with the aesthetic above, even in the face of skepticism or opposition).
6. We will have an internal economy whereby people can trade effort for money and money for time and so on and so forth, because heck yeah.

Conclusion: Obviously this is neither complete nor perfect.  What's wrong, what's missing, what do you think?  I'm going to much more strongly weight the opinions of Berkeleyans who are likely to participate, but I'm genuinely interested in hearing from everyone, particularly those who notice red flags (the goal is not to do anything stupid or meta-stupid).  Have fun tearing it up.

(sorry for the abrupt cutoff, but this was meant to be published Monday and I've just ... not ... been ... sleeping ... to get it done)</br></br><a href="https://www.lesserwrong.com/posts/23piReu6vmfskHyh7/dragon-army-theory-and-charter-30min-read">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/23piReu6vmfskHyh7/dragon-army-theory-and-charter-30min-read</link><guid isPermaLink="false">23piReu6vmfskHyh7</guid><pubDate>Thu, 25 May 2017 21:07:48 GMT</pubDate></item><item><title><![CDATA[Meetup : Rationality Potluck]]></title><description><![CDATA[Discussion article for the meetup : Rationality Potluck  

 WHEN: 25 May 2017 06:30:00PM (-0400)
  

 WHERE: 1191 Avenue Hope, Montreal     

Eric Chisholm from the Vancouver Rationalist Community is staying at the Macroscope this week. You're invited to come say hi, and talk with other rationality enthusiasts! Feel free to invite friends. 

Bring food and/or beverage if possible. Vegan food will be available. 

Eric Chisholm, an alumnus of the Center for Applied Rationality, will present Double Crux, a technique for resolving disagreement. 

Facebook Event : https://goo.gl/f8Uwfg</br></br><a href="https://www.lesserwrong.com/posts/kDM97HjEX9mHKYsdb/meetup-rationality-potluck">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/kDM97HjEX9mHKYsdb/meetup-rationality-potluck</link><guid isPermaLink="false">kDM97HjEX9mHKYsdb</guid><pubDate>Thu, 25 May 2017 18:28:04 GMT</pubDate></item><item><title><![CDATA[Existential risk from AI without an intelligence explosion]]></title><description><![CDATA[[xpost from my blog] 

In discussions of existential risk from AI, it is often assumed that the existential catastrophe would follow an intelligence explosion, in which an AI creates a more capable AI, which in turn creates a yet more capable AI, and so on, a feedback loop that eventually produces an AI whose cognitive power vastly surpasses that of humans, which would be able to obtain a decisive strategic advantage over humanity, allowing it to pursue its own goals without effective human interference. Victoria Krakovna points out that many arguments that AI could present an existential risk do not rely on an intelligence explosion. I want to look in slightly more detail at how that could happen. Kaj Sotala also discusses this. 

An AI starts an intelligence explosion when its ability to create better AIs surpasses that of human AI researchers by a sufficient margin (provided the AI is motivated to do so). An AI attains a decisive strategic advantage when its ability to optimize the universe surpasses that of humanity by a sufficient margin. Which of these happens first depends on what skills AIs have the advantage at relative to humans. If AIs are better at programming AIs than they are at taking over the world, then an intelligence explosion will happen first, and it will then be able to get a decisive strategic advantage soon after. But if AIs are better at taking over the world than they are at programming AIs, then an AI would get a decisive strategic advantage without an intelligence explosion occurring first. 

Since an intelligence explosion happening first is usually considered the default assumption, I'll just sketch a plausibility argument for the reverse. There's a lot of variation in how easy cognitive tasks are for AIs compared to humans. Since programming AIs is not yet a task that AIs can do well, it doesn't seem like it should be a priori surprising if programming AIs turned out to be an extremely difficult task for AIs to accomplish, relative to humans. Taking over the world is also plausibly especially difficult for AIs, but I don't see strong reasons for confidence that it would be harder for AIs than starting an intelligence explosion would be. It's possible that an AI with significantly but not vastly superhuman abilities in some domains could identify some vulnerability that it could exploit to gain power, which humans would never think of. Or an AI could be enough better than humans at forms of engineering other than AI programming (perhaps molecular manufacturing) that it could build physical machines that could out-compete humans, though this would require it to obtain the resources necessary to produce them. 

Furthermore, an AI that is capable of producing a more capable AI may refrain from doing so if it is unable to solve the AI alignment problem for itself; that is, if it can create a more intelligent AI, but not one that shares its preferences. This seems unlikely if the AI has an explicit description of its preferences. But if the AI, like humans and most contemporary AI, lacks an explicit description of its preferences, then the difficulty of the AI alignment problem could be an obstacle to an intelligence explosion occurring. 

It also seems worth thinking about the policy implications of the differences between existential catastrophes from AI that follow an intelligence explosion versus those that don't. For instance, AIs that attempt to attain a decisive strategic advantage without undergoing an intelligence explosion will exceed human cognitive capabilities by a smaller margin, and thus would likely attain strategic advantages that are less decisive, and would be more likely to fail. Thus containment strategies are probably more useful for addressing risks that don't involve an intelligence explosion, while attempts to contain a post-intelligence explosion AI are probably pretty much hopeless (although it may be worthwhile to find ways to interrupt an intelligence explosion while it is beginning). Risks not involving an intelligence explosion may be more predictable in advance, since they don't involve a rapid increase in the AI's abilities, and would thus be easier to deal with at the last minute, so it might make sense far in advance to focus disproportionately on risks that do involve an intelligence explosion. 

It seems likely that AI alignment would be easier for AIs that do not undergo an intelligence explosion, since it is more likely to be possible to monitor and do something about it if it goes wrong, and lower optimization power means lower ability to exploit the difference between the goals the AI was given and the goals that were intended, if we are only able to specify our goals approximately. The first of those reasons applies to any AI that attempts to attain a decisive strategic advantage without first undergoing an intelligence explosion, whereas the second only applies to AIs that do not undergo an intelligence explosion ever. Because of these, it might make sense to attempt to decrease the chance that the first AI to attain a decisive strategic advantage undergoes an intelligence explosion beforehand, as well as the chance that it undergoes an intelligence explosion ever, though preventing the latter may be much more difficult. However, some strategies to achieve this may have undesirable side-effects; for instance, as mentioned earlier, AIs whose preferences are not explicitly described seem more likely to attain a decisive strategic advantage without first undergoing an intelligence explosion, but such AIs are probably more difficult to align with human values. 

If AIs get a decisive strategic advantage over humans without an intelligence explosion, then since this would likely involve the decisive strategic advantage being obtained much more slowly, it would be much more likely for multiple, and possibly many, AIs to gain decisive strategic advantages over humans, though not necessarily over each other, resulting in a multipolar outcome. Thus considerations about multipolar versus singleton scenarios also apply to decisive strategic advantage-first versus intelligence explosion-first scenarios.</br></br><a href="https://www.lesserwrong.com/posts/bFcbG2TQCCE3krhEY/existential-risk-from-ai-without-an-intelligence-explosion">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/bFcbG2TQCCE3krhEY/existential-risk-from-ai-without-an-intelligence-explosion</link><guid isPermaLink="false">bFcbG2TQCCE3krhEY</guid><pubDate>Thu, 25 May 2017 16:44:04 GMT</pubDate></item><item><title><![CDATA[Meetup : Chicago Rationality Reading Group]]></title><description><![CDATA[Discussion article for the meetup : Chicago Rationality Reading Group  

 WHEN: 28 May 2017 01:00:00PM (-0500)
  

 WHERE: Harper Memorial Library Room 148, 1116 E 59th St, Chicago, IL 60637    

The Chicago Rationality group meets every Sunday from 1-3 PM in Room 148 of Harper Memorial Library. Though we meet on the University of Chicago campus, anyone is welcome to attend. 

Since the university will go on break soon, we'll need to find a new location if we want to have weekly meetings during the summer, so keep an eye out for that. That announcement will not come from me, however, since this will be my last meeting before I move away! Come say goodbye! 

This week's readings are as follows: 

- http://lesswrong.com/lw/kz/fake_optimization_criteria/
- http://lesswrong.com/lw/6r6/tendencies_in_reflective_equilibrium/
- http://slatestarcodex.com/2013/05/31/hansonian-optimism/
- http://lesswrong.com/lw/4ku/use_curiosity/ 

If you're interested in rationality-related events in the Chicago area, request to be added to our Google Group and I'll approve you!</br></br><a href="https://www.lesserwrong.com/posts/aS3jhiTDeg8QbsqEd/meetup-chicago-rationality-reading-group">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/aS3jhiTDeg8QbsqEd/meetup-chicago-rationality-reading-group</link><guid isPermaLink="false">aS3jhiTDeg8QbsqEd</guid><pubDate>Thu, 25 May 2017 03:21:32 GMT</pubDate></item><item><title><![CDATA[Meetup : Washington, D.C.: Fun & Games]]></title><description><![CDATA[Discussion article for the meetup : Washington, D.C.: Fun & Games  

 WHEN: 28 May 2017 03:30:00PM (-0400)
  

 WHERE: Donald W. Reynolds Center for American Art and Portraiture    

We will be meeting in the courtyard to hang out, play games, and engage in fun conversation. 

Upcoming meetups: 

- Jun. 4: Describe a Potato
- Jun. 11: AI Risk/Safety</br></br><a href="https://www.lesserwrong.com/posts/kcq6eTs44f9vg3djr/meetup-washington-d-c-fun-and-games">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/kcq6eTs44f9vg3djr/meetup-washington-d-c-fun-and-games</link><guid isPermaLink="false">kcq6eTs44f9vg3djr</guid><pubDate>Wed, 24 May 2017 14:49:49 GMT</pubDate></item><item><title><![CDATA[The land before metrics]]></title><description><![CDATA[<a href="https://www.lesserwrong.com/posts/J2dyLgr66aoe4AgR5/the-land-before-metrics">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/J2dyLgr66aoe4AgR5/the-land-before-metrics</link><guid isPermaLink="false">J2dyLgr66aoe4AgR5</guid><pubDate>Wed, 24 May 2017 04:31:06 GMT</pubDate></item><item><title><![CDATA[Employment and wellbeing]]></title><description><![CDATA[<a href="https://www.lesserwrong.com/posts/BZCMCzh5kLiSKbzSC/employment-and-wellbeing">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/BZCMCzh5kLiSKbzSC/employment-and-wellbeing</link><guid isPermaLink="false">BZCMCzh5kLiSKbzSC</guid><pubDate>Wed, 24 May 2017 04:30:26 GMT</pubDate></item><item><title><![CDATA[Effective learning]]></title><description><![CDATA[<a href="https://www.lesserwrong.com/posts/GMhPFtJ2goW2qCm8x/effective-learning">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/GMhPFtJ2goW2qCm8x/effective-learning</link><guid isPermaLink="false">GMhPFtJ2goW2qCm8x</guid><pubDate>Wed, 24 May 2017 04:28:53 GMT</pubDate></item><item><title><![CDATA[Relationships and wellbeing]]></title><description><![CDATA[<a href="https://www.lesserwrong.com/posts/7jTspLK6hvLDXjy3a/relationships-and-wellbeing">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/7jTspLK6hvLDXjy3a/relationships-and-wellbeing</link><guid isPermaLink="false">7jTspLK6hvLDXjy3a</guid><pubDate>Wed, 24 May 2017 04:28:26 
GMT</pubDate></item><item><title><![CDATA[Political ideology]]></title><description><![CDATA[<a href="https://www.lesserwrong.com/posts/7vcCMhQgmxvQAARGk/political-ideology">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/7vcCMhQgmxvQAARGk/political-ideology</link><guid isPermaLink="false">7vcCMhQgmxvQAARGk</guid><pubDate>Wed, 24 May 2017 04:27:47 GMT</pubDate></item><item><title><![CDATA[Notes from the Hufflepuff Unconference (Part 1)]]></title><description><![CDATA[April 28th, we ran the Hufflepuff Unconference in Berkeley, at the MIRI/CFAR office common space. 

There's room for improvement in how the Unconference was run, but it succeeded at the core things I wanted to accomplish:  

 - Established common knowledge of what problems people were actually interested in working on
 - We had several extensive discussions of some of those problems, with an eye towards building solutions
 - Several people agreed to work together towards concrete plans and experiments to make the community more friendly, as well as build skills relevant to community growth. (With deadlines and one person acting as project manager to make sure real progress was made)
 - We agreed to have a followup unconference in roughly three months, to discuss how those plans and experiments were going 

Rough notes are available here. (Thanks to Miranda, Maia and Holden for taking really thorough notes)

This post will summarize some of the key takeaways, some speeches that were given, and my retrospective thoughts on how to approach things going forward.

But first, I'd like to cover a question that a lot of people have been asking about: What does this all mean for people outside of the Bay? 

The answer depends. 

I'd personally like it if the overall rationality community got better at social skills, empathy, and working together, sticking with things that need sticking with (and in general, better at recognizing skills other than metacognition). In practice, individual communities can only change in the ways the people involved actually want to change, and there are other skills worth gaining that may be more important depending on your circumstances. 

Does Project Hufflepuff make sense for your community?

If you're worried that your community doesn't have an interest in any of these things, my actual honest answer is that doing something "Project Hufflepuff-esque" probably does not make sense. I did not choose to do this because I thought it was the single-most-important thing in the abstract. I did it because it seemed important and I knew of a critical mass of people who I expected to want to work on it. 

If you're living in a sparsely populated area or haven't put a community together, the first steps do not look like this, they look more like putting yourself out there, posting a meetup on Less Wrong and just *trying things*, any things, to get something moving.

If you have enough of a community to step back and take stock of what kind of community you want and how to strategically get there, I think this sort of project can be worth learning from. Maybe you'll decide to tackle something Project-Hufflepuff-like, maybe you'll find something else to focus on. I think the most important thing is to have some kind of vision for something your community can do that is worth working together, and leveling up, to accomplish.

Community Unconferences as One Possible Tool 

Community unconferences are a useful tool to get everyone on the same page and spur them on to start working on projects, and you might consider doing something similar. 

They may not be the right tool for you and your group - I think they're most useful in places where there are enough people in your community that they don't all know each other, but do have enough existing trust to get together and brainstorm ideas.  

If you have a sense that Project Hufflepuff is worthwhile for your community but the above disclaimers point towards my current approach not making sense for you, I'm interested in talking about it with you, but the conversation will look less like "Ray has ideas for you to try" and more like "Ray is interested in helping you figure out what ideas to try, and the solution will probably look very different." 

Online Spaces 

Since I'm actually very uncertain about a lot of this and see it as an experiment, I don't think it makes sense to push for any of the ideas here to directly change Less Wrong itself (at least, yet). But I do think a lot of these concepts translate to online spaces in some fashion, and I think it'd make sense to try out some concepts inspired by this in various smaller online subcommunities.

Table of Contents: 

I. Introduction Speech 

 - Why are we here?
 - The Mission: Something To Protect
 - The Invisible Badger, or "What The Hell Is a Hufflepuff?"
 - Meta Meetups Usually Suck. Let's Try Not To. 

II. Common Knowledge 

 - What Do People Actually Want?
 - Lightning Talks 

III. Discussing the Problem (Four breakout sessions) 

 - Welcoming Newcomers
 - How to handle people who impose costs on others?
 - Styles of Leadership and Running Events
 - Making Helping Fun (or at least lower barrier-to-entry) 

IV. Planning Solutions and Next Actions 

V. Final Words 

I. Introduction: It Takes A Village to Save a World 

(A more polished version of my opening speech from the unconference)

[Epistemic Status: This is largely based on intuition, looking at what our community has done and what other communities seem to be able to do. I'm maybe 85% confident in it, but it is my best guess] 

In 2012, I got super into the rationality community in New York. I was surrounded by people passionate about thinking better and using that thinking to tackle ambitious projects. And in 2012 we all decided to take on really hard projects that were pretty likely to fail, because the expected value seemed high, and it seemed like even if we failed we'd learn a lot in the process and grow stronger. 

That happened - we learned and grew. We became adults together, founding companies and nonprofits and creating holidays from scratch. 

But two years later, our projects were either actively failing, or burning us out. Many of us became depressed and demoralized. 

There was nobody who was okay enough to actually provide anyone emotional support. Our core community withered. 

I ended up making that the dominant theme of the 2014 NYC Solstice, with a call-to-action to get back to basics and take care of each other. 

I also went to the Berkeley Solstice that year. And... I dunno. In the back of my mind I was assuming "Berkeley won't have that problem - the Bay area has so many people, I can't even imagine how awesome and thriving a community they must have." (Especially since the Bay kept stealing all the Movers and Shakers of NYC).

The theme of the Bay Solstice turned out to be "Hey guys, so people keep coming to the Bay, running on a dream and a promise of community, but that community is not actually there, there's a tiny number of well-connected people who everyone is trying to get time with, and everyone seems lonely and sad. And we don't even know what to do about this." 

In 2015, that theme in the Berkeley Solstice was revisited. 

So I think that was the initial seed of what would become Project Hufflepuff - noticing that it's not enough to take on cool projects, that it's not enough to just get a bunch of people together and call it a community. Community is something you actively tend to. Insofar as Maslow's hierarchy is real, it's a foundation you need before ambitious projects can be sustainable. 

There are other pieces of the puzzle - different lenses that, I believe, point towards a Central Thing. Some examples: 

Group houses, individualism and coordination. 

I've seen several group houses where, when people decide it no longer makes sense to live in the house, they... just kinda leave. Even if they've literally signed a lease. And everyone involved (the person leaving and those who remain) instinctively acts as if it's the remaining people's job to fill the leaver's spot, to make rent. 

And the first time, this is kind of okay. But then each subsequent person leaving adds to a stressful undertone of "OMG are we even going to be able to afford to live here?". It eventually becomes depressing, and snowballs into a pit that makes newcomers feel like they don't WANT to move into the house. 

Nowadays I've seen some people explicitly building into the roommate agreement a clear expectation of how long you stay and whose responsibility it is to find new roommates and pay rent in the meantime. But it's disappointing to me that this is something we needed, that we weren't instinctively paying attention to how we were imposing costs on each other in the first place. That when we *violated a written contract*, let alone a handshake agreement, we did not take it upon ourselves (or hold each other accountable) to ensure we could fill our end of the bargain. 

Friends, and Networking your way to the center 

This community puts pressure on people to improve. It's easier to improve when you're surrounded by ambitious people who help or inspire each other level up. There's a sense that there's some cluster of cool-people-who-are-ambitious-and-smart who've been here for a while, and... it seems like everyone is trying to be friends with those people.  

It also seems like people just don't quite get that friendship is a skill, that adult friendships in City Culture can be hard, and it can require special effort to make them happen. 

I'm not entirely sure what's going on here - it doesn't make sense to say anyone's obligated to hang out with any particular person (or obligated NOT to), but if 300 people aren't getting the connection they want it seems like *somewhere people are making a systematic mistake.*  

(Since the Unconference, Maia has tackled this particular issue in more detail) 

  The Mission - Something To Protect 

  

As I see it, the Rationality Community has three things going on: Truth. Impact. And "Being People". 

In some sense, our core focus is the practice of truthseeking. The thing that makes that truthseeking feel *important* is that it's connected to broader goals of impacting the world. And the thing that makes this actually fun and rewarding enough to stick with is a community that meets our needs, where we can both flourish as individuals and find the relationships we want. 

I think we have made major strides in each of those areas over the past seven years. But we are nowhere near done. 

Different people have different intuitions of which of the three are most important. Some see some of them as instrumental, or terminal. There are people for whom Truthseeking is *the point*, and they'd have been doing that even if there wasn't a community to help them with it, and there are people for whom it's just one tool of many that helps them live their life better or plan important projects. 

I've observed a tendency to argue about which of these things is most important, or what tradeoffs are worth making. Inclusiveness versus high standards. Truth versus action. Personal happiness versus high achievement. 

I think that kind of argument is a mistake. 

We are falling woefully short on all of these things.  

We need something like 10x our current capacity for seeing, and thinking. 10x our capacity for doing. 10x our capacity for *being healthy people together.* 

I say "10x" not because all these things are intrinsically equal. The point is not to make a politically neutral push to make all the things sound nice. I have no idea exactly how far short we're falling on each of these because the targets are so far away I can't even see the end, and we are doing a complicated thing that doesn't have clear instructions and might not even be possible. 

The point is that all of these are incredibly important, and if we cannot find a way to improve *all* of these, in a way that is *synergistic* with each other, then we will fail. 

There is a thing at the center of our community. Not all of us share the exact same perspective on it. For some of us it's not the most important thing. But it's been at the heart of the community since the beginning and I feel comfortable asserting that it is the thing that shapes our culture the most: 

The purpose of our community is to make sure this place is okay: 

 

The world isn't okay right now, on a number of levels. And a lot of us believe there is a strong chance it could become dramatically less okay. I've seen people make credible progress on taking responsibility for pieces of our home. But when all is said and done, none of our current projects really give me the confidence that things are going to turn out all right.  

Our community was brought together on a promise, a dream, and we have not yet actually proven ourselves worthy of that dream. And to make that dream a reality we need a lot of things. 

We need to be able to criticize, because without criticism, we cannot improve. 

If we cannot, I believe we will fail. 

We need to be able to talk about ideas that are controversial, or uncomfortable - otherwise our creativity and insight will be crippled. 

If we cannot, I believe we will fail. 

We need to be able to do those things without alienating people. We need to be able to criticize without making people feel untrusted and discouraged from even taking action. We need to be able to discuss challenging things while earnestly respecting the notion that *talking about ideas gives those ideas power and has concrete effects on social reality*, and sometimes that can hurt people. 

If we cannot figure out how to do that, I believe we will fail. 

We need more people who are able and willing to try things that have never been done before. To stick with those things long enough to *get good at them*, to see if they can actually work. We need to help each other do impossible things. And we need to remember to check for and do the *possible*, boring, everyday things that are in fact straightforward and simple and not very inspiring.  

If we cannot manage to do that, I believe we will fail. 

We need to be able to talk concretely about what the *highest leverage actions in the world are*. We need to prioritize those things, because the world is huge and broken and we are small. I believe we need to help each other through a long journey, building bigger and bigger levers, building connections with people outside our community who are undertaking the same journey through different perspectives. 

And in the process, we need to not make it feel like if *you cannot personally work on those highest leverage things, that you are not important.*  

There's the kind of importance where we recognize that some people have scarce skills and drive, and the kind of importance where we remember that *every* person has intrinsic worth, and you owe *nobody* any special skills or prestigious sounding projects for your life to be worthwhile. 

This isn't just a philosophical matter - I think it's damaging to our mental health and our collective capacity.  

We need to recognize that the distribution of skills we tend to reward or punish is NOT just about which ones are actually most valuable - sometimes it is simply founder effects and blind spots. 

We cannot be a community for everyone - I believe trying to include anyone with a passing interest in us is a fool's errand. But there are many people who have valuable skills to contribute and who have turned away, feeling frustrated and unvalued. 

If we cannot find a way to accomplish all of these things at once, I believe we will fail. 

The thesis of Project Hufflepuff is that it takes (at least) a village to save a world.  

It takes people doing experimental impossible things. It takes caretakers. It takes people helping out with unglamorous tasks. It takes technical and emotional and physical skills. And while it does take some people who specialize in each of those things, I think it also needs many people who are at least a little bit good at each of them, to pitch in when needed. 

Project Hufflepuff is not the only thing our community needs, nor the most important. But I believe it is one of the necessary things, if we're to get to 10x our current Truthseeking, Impact and Human-ing. 

If we're to make sure that our home is okay.

The Invisible Badger 

"A lone hufflepuff surrounded by slytherins will surely wither as if being leeched dry by vampires." 

- Duncan 

[Epistemic Status: My evidence for this is largely based on discussions with a few people for whom the badger seems real and valuable, and who report things being different in other communities, as well as some of my general intuitions about society. I'm 75% sure the badger exists, 90% sure that it's worth leaning into the idea of the badger to see if it works for you, and maybe 55% sure that it's worth trying to see the badger if you can't already make out its edges.] 


 

 

  

If I *had* to pick a clear thing that this conference is about without using Harry Potter jargon, I'd say "Interpersonal dynamics surrounding trust, and how those dynamics apply to each of the Impact/Truth/Human focuses of the rationality community." 

I'm not super thrilled with that term because I think I'm grasping more for some kind of gestalt. An overall way of seeing and being that's hard to describe and that doesn't come naturally to the sort of person attracted to this community. 

Much like the blind folk and the elephant, who each touched a different part of the animal and came away with a different impression (the trunk seems like a snake, the legs seem like a tree), I've been watching several people in the community try to describe things over the past few years. And maybe those things are separate but I feel like they're secretly a part of the same invisible badger.

Hufflepuff is about hard work, and loyalty, and camaraderie. It's about emotional intelligence. It's about seeing value in day to day things that don't directly tie into epic narratives.  

There's a bunch of skills that go into Hufflepuff. And part of what I want is for people to get better at those skills. But I think there's a mindset, an approach, fairly different from the typical rationalist mindset, that makes those skills easier. It's something that's harder when you're being rigorously utilitarian and building models of the world out of game theory and incentives. 

Mindspace is deep and wide, and I don't expect that mindset to work for everyone. I don't think everyone should be a Hufflepuff. But I do think it'd be valuable to the community if more people at least had access to this mindset and more of these skills. 

So what I'd like, for tonight, is for people to lean into this idea. Maybe in the end you'll find that this doesn't work for you. But I think many people's first instinct is going to be that this is alien and uncomfortable and I think it's worth trying to push past that.

The reason we're doing this conference together is that the Hufflepuff way doesn't really work if people try to do it alone - it requires trust and camaraderie and persistence. I don't think we can have that required trust all at once, but if there are multiple people trying to make it work, who can incrementally trust each other more, I think we can reach a place where things run more smoothly, where we have stronger emotional connections, and where we trust each other enough to take on more ambitious projects than we could if we were all optimizing as individuals.

Meta-Meetups Suck. Let's Not. 

This unconference is pretty meta - we're talking about norms and vague community stuff we want to change. 

Let me tell you, meta meetups are the worst. Typically you end up going around in circles complaining and wishing there were more things happening and that people were stepping up and maybe if you're lucky you get a wave of enthusiasm that lasts a month or so and a couple things happen but nothing really *changes*. 

So. Let's not do that. Here's what I want to accomplish and which seems achievable: 

1) Establish common knowledge of important ideas and behavior patterns.  

Sometimes you DON'T need to develop a whole new skill - you just need to notice that your actions are impacting people in ways you hadn't realized, and maybe that's enough for you to decide to change some things. Or maybe someone has a concept that makes it a lot easier for you to start gaining a new skill on your own. 

2) Establish common knowledge of who's interested in trying which new norms, or which new skills.  

We don't actually *know* what the majority of people want here. I can sit here and tell you what *I* think you should want, but ultimately what matters is what things a critical mass of people want to talk about tonight. 

Not everyone has to agree that an idea is good to try it out. But there's a lot of skills or norms that only really make sense when a critical mass of other people are trying them. So, maybe of the 40 people here, 25 people are interested in improving their empathy, and maybe another 20 are interested in actively working on friendship skills, or sticking to commitments. Maybe those people can help reinforce each other. 

3) Explore ideas for social and skillbuilding experiments we can try, that might help.  

The failure mode of Ravenclaws is to think about things a lot and then not actually get around to doing them. A failure mode of ambitious Ravenclaws is to think about things a lot, then do them, then assume that because they're smart they've thought of everything - and then not listen to feedback when they get things subtly or majorly wrong. 

I'd like us to end by thinking of experiments with new norms, or habits we'd like to cultivate. I want us to frame these as experiments, that we try on a smaller scale and maybe promote more if they seem to be working, while keeping in mind that they may not work for everyone. 

4) Commit to actions to take. 

Since the default action is for them to peter out and fail, I'd like us to spend time bulletproofing them, brainstorming and coming up with trigger-action plans so that they actually have a chance to succeed.

Tabooing "Hufflepuff" 

Having said all of that about The Hufflepuff Way...

...the fact is, much of the reason I've used those words is to paint a rough picture to attract the sort of person I wanted to attract to this unconference. 

It's important that there's a fuzzy, hard-to-define-but-probably-real concept that we're grasping towards, but it's also important not to be talking past each other. Early on in this project I realized that a few people who I thought were on the same page actually meant fairly different things. Some cared more about empathy and friendship. Some cared more about doing things together, and expected deep friendships to arise naturally from that.

So I'd like us to establish a trigger-action-plan right now - for the rest of this unconference, if someone says "Hufflepuff", y'all should say "What do you mean by that?" and then figure out whatever concrete thing you're actually trying to talk about.

II. Common Knowledge 

The first part of the unconference was about sharing our current goals, concerns and background knowledge that seemed useful. Most of the specifics are covered in the notes. But I'll talk here about why I included the things I did, and my takeaways afterwards on how it worked.

Time to Think 

The first thing I did was have people sit and think about what they actually wanted to get out of the conference, and what obstacles they could imagine getting in the way of that. I did this because often, I think our culture (ostensibly about helping us think better) doesn't give us time to think, and instead lets people who are quick-witted and conversationally dominant end up doing most of the talking. (I wrote a post a year ago about this, the 12 Second Rule.) In this case I gave everyone 5 minutes, which is something I've found helpful at small meetups in NYC.

This had mixed results - some people reported that while they can think well by themselves, in a group setting they find it intimidating and their mind starts wandering instead of getting anything done. They found it much more helpful when I eventually let people-who-preferred-to-talk-to-each-other go into another room to talk through their ideas out loud.

I think there's some benefit to both halves of this, and I'm not sure how common each set of preferences is. It's certainly not common for conferences to give people a full 5 minutes to think, so I'd expect it to feel somewhat uncomfortable regardless of whether it was useful. 

But an overall outcome of the unconference was that it was somewhat lower energy than I'd wanted, and opening with 5 minutes of silent thinking seemed to contribute to that. So for the next unconference I run, I'm leaning towards a shorter period of private thinking (somewhere between 12 and 60 seconds), followed by "turn to your neighbors and talk through the ideas you have", followed by "each group shares their concepts with the room."

"What do you want to improve on? What is something you could use help with?" 

I wanted people to feel like active participants rather than passive observers, and I didn't want people to just think "it'd be great if other people did X", but to keep an internal locus of control - what can *I* do to steer this community better? I also didn't want people to be thinking entirely individualistically.

I didn't collect feedback on this specific part and am not sure how valuable others found it (if you were at the conference, I'd be interested if you left any thoughts in the comments). Some anonymized things people described: 

- When I make social mistakes, consider it failure; this is unhelpful 
- Help point out what they need help with 
- Have severe akrasia, would like more “get things done” magic tools 
- Getting to know the bay area rationalist community 
- General bitterness/burned out 
- Reduce insecurity/fear around sharing 
- Avoiding spending most words signaling to have read a particular thing; want to communicate more clearly 
- Creating systems that reinforce unnoticed good behaviour 
- Would like to learn how to try at things 
- Find place in rationalist community 
- Staying connected with the group 
- Paying attention to what they want in the moment, in particular when it’s right to not be persistent 
- Would like to know the “landing points” to the community to meet & greet new people 
- Become more approachable, & be more willing to approach others for help; community cohesiveness 
- Have been lonely most of life; want to find a place in a really good healthy community 
- Re: prosocialness, being too low on Maslow’s hierarchy to help others 
- Abundance mindset & not stressing about how to pay rent 
- Cultivate stance of being able to do helpful things (action stance) but also be able to notice difference between laziness and mental health 
- Don’t know how to respect legit safety needs w/o getting overwhelmed by arbitrary preferences; would like to model people better to give them basic respect w/o having to do arbitrary amount of work 
- Starting conversations with new people 
- More rationalist group homes / baugruppe 
- Being able to provide emotional support rather than just logistics help 
- Reaching out to people at all without putting too much pressure on them 
- Cultivate lifelong friendships that aren’t limited to particular time and place 
- Have a block around asking for help bc doesn’t expect to reciprocate; would like to actually just pay people for help w stuff 
- Want to become more involved in the community 
- Learn how to teach other people “ops skills” 
- Connections to people they can teach and who can teach them

Lightning Talks

Lightning talks are a great way to give people an opportunity not just to share ideas, but to get some practice at public presentation (which I've found can be a great gateway tool for overall confidence and ability to get things done in the community). Traditionally they are 5 minutes long. CFAR has found that 3.5 minute lightning talks are better than 5 minute talks, because the shorter window cuts out some rambling and tangents.

It turned out we had more people than I'd originally planned time for, so we ended up switching to two minute talks. I actually think this was even better, and my plan for next time is to do 1-minute timeslots but allow people to sign up for multiple slots if they think their talk requires it, so people default to giving something short and sweet.

Rough summaries of the lightning talks can be found in the notes.

III. Discussing the Problem 

The next section involved two "breakout sessions" - two 20-minute periods for people to split into smaller groups and talk through problems in detail. This was done in a somewhat impromptu fashion, with people writing the talks they wanted to hold on the whiteboard and then arranging them so most people could attend a discussion that interested them.

The talks were:

 -  Welcoming Newcomers
 -  How to handle people who impose costs on others?
 -  Styles of Leadership and Running Events
 -  Making Helping Fun (or at least lower barrier-to-entry)
 -  Circling session 

There was a suggested discussion about outreach, which I asked to table for a future unconference. My reason was that outreach discussions tend to get extremely meta and seem to be an attractor (people end up focusing on how to bring more people into the community without actually making sure the community is good, and I wanted the unconference to focus on the latter.) 

I spent some time drifting between sessions, and was generally impressed both with the practical focus each discussion had, as well as the way they were organically moderated.

Again, more details in the notes.

IV. Planning Solutions and Next Actions

After about an hour of discussion and mingling, we came back to the central common space to describe key highlights from each session, and begin making concrete plans. (Names credit the people who suggested an idea and those who volunteered to make it happen.) 
 Creating Norms for Your Space (Jane Joyce, Tilia Bell) 
 The "How to handle people who impose costs on others" conversation ended up focusing on minor but repeated costs. One of the hardest things to moderate as an event host is not people who are actively disruptive, but people who are just a little bit awkward or annoying - they'd often be happy to change their behavior if they got feedback, but giving feedback feels uncomfortable and is hard to do tactfully. This presents two problems at once: parties/events/social-spaces end up more awkward and annoying than they need to be, and often, rather than giving feedback, the hosts simply stop inviting the people doing those minor things - which means a lot of people still working on their social skills end up living in fear of being excluded.

Solving this fully requires a few different things at once, and I'm not sure I have a clear picture of what it looks like, but one stepping stone people came up with was creating explicit norms for a given space, and a practice of reminding people of those norms in a low-key, nonjudgmental way.

I think this will require a lot of deliberate effort and practice on the part of hosts, to avoid alternate bad outcomes like "the norms get disproportionately enforced on people the hosts like and applied unfairly to people they aren't close with". But I do think it's a step in the right direction to showcase what kind of space you're creating and what the expectations are.

Different spaces can be tailored for different types of people with different needs or goals. (I'll have more to say about this in an upcoming post - doing this right is really hard, I don't actually know of any groups that have done an especially good job of it.)

I *was* impressed with the degree to which everyone in the conversation seemed to be taking into account a lot of different perspectives at once, and looking for solutions that benefited as many people as possible. 
 Welcoming Committee (Mandy Souza, Tessa Alexanian) 
 Oftentimes at events you'll see people who are new, or who don't seem comfortable getting involved with the conversation. Many successful communities do a good job of explicitly welcoming those people. Some people at the unconference decided to put together a formal group for making sure this happens more. 

The exact details are still under development, but the basic idea is to have a network of people who go to different events, playing the role of the welcomer - sort of an "Uber for welcomers" (it both provides a place where people running events can ask for help with welcoming, and a way for people who are interested in welcoming to find events that need welcomers).

It also included some ideas for better infrastructure, such as reviving "bayrationality.org" to make it easier for newcomers to figure out what events are going on (possibly including links to the codes of conduct for different spaces as well). In the meantime, one simple change was the introduction of a facebook group for Bay Area Rationalist Social Events. 
 Softskill-sharing Groups (Mike Plotz and Jonathan Wallis) 
  The leadership styles discussion led to the concept that in order to have a flourishing community, and to be a successful leader, it's valuable to make yourself legible to others, and others more legible to yourself. Even small improvements in an activity as frequent as communication can have huge effects over time, as we make it easier to see each other as we actually are and to clearly exchange our ideas.  
 A number of people wanted to improve in this area together, and so we’re working towards establishing a series of workshops with a focus on practice and individual feedback. A longer post on why this is important is coming up, and there will be information on the structure of the event after our first teacher’s meeting. If you would like to help out or participate, please fill out this poll:

https://goo.gl/forms/MzkcsMvD2bKzXCQN2 
  Circling Explorations (Qiaochu and others) 
 Much of the discussion at the Unconference, while focused on community, was ultimately explored through an intellectual lens. By contrast, "Circling" is a practice developed by the Authentic Relating community which is focused explicitly on feelings. The basic premise is (sort of) simple: you sit in a circle in a secluded space, and you talk about how you're feeling in the moment. Exactly how this plays out is a bit hard to explain, but the intended result is to become better at noticing both your own feelings and those of the people around you.

Opinions were divided as to whether this was something that made sense for "rationalists to do on their own", or whether it made more sense to visit more explicitly Circling-focused communities, but several people expressed interest in trying it again.   
 Making Helping Fun and More Accessible (Suggested by Oliver Habryka) 
 Ultimately we want a lot of people who are able and excited to help out with challenging projects - to improve our collective group ambition. But to get there, it'd be really helpful to have "gateway helping" - things people can easily pitch in to do that are fun, rewarding, clearly useful but on the "warm fuzzies" side of helping. Oliver suggested this as a way to get people to start identifying as people-who-help.

There were two main sets of habits that seemed worth cultivating:

1) Making it clear to newcomers that they're encouraged to help out with events, and that this is actually a good way to make friends and get more involved. 

2) For hosts and event planners, look for opportunities to offer people things that they can help with, and make sure to publicly praise those who do help out.

Some of this might dovetail nicely with the Welcoming Committee, both as something people can easily get involved with, and (if there ends up being a public-facing website to introduce people to the community) as a way to connect people with events that could use help. 
 Volunteering-as-Learning, and Big Event Specific Workshops 
 Sometimes volunteering just requires showing up. But sometimes it requires special skills, and some events might need people who are willing to practice beforehand or learn-by-doing with a commitment to help at multiple events.

A vague cluster of skills that's in high demand is "predict logistical snafus in advance to head them off, and notice logistical snafus happening in realtime so you can do something about them." Earlier this year there was an Ops Workshop that aimed to teach this sort of skill, which went reasonably but didn't really lead into a concrete use for the skills to help them solidify.

One idea was to do Ops workshops (or other specialized training) in the month before a major event like Solstice or EA Global, giving them an opportunity to practice skills and making that particular event run smoother.  
 (This specific idea is not currently planned for implementation as it was among the more ambitious ones, although Brent Dill's series of "practice setting up a giant dome" beach parties in preparation for Burning Man are pointing in a similar direction)

 Making Sure All This Actually Happens (Sarah Spikes, and hopefully everyone!) 
 To avoid the trap of dreaming big and not actually getting anything done, Sarah Spikes volunteered as project manager, creating an Asana page. People who were interested in committing to a deadline could opt into getting pestered by her to make sure things got done.

V. Parting Words 

To wrap up the event, I focused on some final concepts that underlie this whole endeavor.  

The thing we're aiming for looks something like this: 

 

In a couple months (hopefully in July), there'll be a followup unconference. The theme will be "Innovation and Excellence", addressing the twofold question "how do we encourage more people to start cool projects?" and "how do we get to a place where longterm projects ultimately reach a high quality state?"  

Both elements feel important to me, and they require somewhat different mindsets (both on the part of the people running the projects, and the part of the community members who respond to them). Starting new things is scary and having too high standards can be really intimidating, yet for longterm projects we may want to hold ourselves to increasingly high standards over time. 

My current plan (subject to lots of revision) is for this to become a series of community unconferences that happen roughly every 3 months. The Bay area is large enough with different overlapping social groups that it seems worthwhile to get together every few months and have an open-structured event to see people you don't normally see, share ideas, and get on the same page about important things.

Current thoughts for upcoming unconference topics are:

Innovation and Excellence
Personal Epistemic Hygiene
Group Epistemology 

An important piece of each unconference will be revisiting things at the previous one, to see if projects, ideas or experiments we talked about were actually carried out and what we learned from them (most likely with anonymous feedback collected beforehand so people who are less comfortable speaking publicly have a chance to express any concerns). I'd also like to build on topics from previous unconferences so they have more chance to sink in and percolate (for example, have at least one talk or discussion about "empathy and trust as related to epistemic hygiene"). 

Starting and Finishing Unconferences Together

My hope is to get other people involved sooner rather than later so this becomes a "thing we are doing together" rather than a "thing I am doing." One of my goals with this is also to provide a platform where people who are interested in getting more involved with community leadership can take a step further towards that, no matter where they currently stand (ranging anywhere from "give a 30 second lightning talk" to "run a discussion, or give a keynote talk" to "be the primary organizer for the unconference.")

I also hope this is able to percolate into online culture, and to other in-person communities where a critical mass of people think this'd be useful. That said, I want to caution that I consider this all an experiment, motivated by an intuitive sense that we're missing certain things as a culture. That intuitive sense has yet to be validated in any concrete fashion. I think "willingness to try things" is more important than epistemic caution, but epistemic caution is still really important - I recommend collecting lots of feedback and being willing to shift direction if you're trying anything like the stuff suggested here.

(I'll have an upcoming post on "Ways Project Hufflepuff could go horribly wrong") 

Most importantly, I hope this provides a mechanism for us to collectively take ideas more seriously that we're ostensibly supposed to be taking seriously. I hope that this translates into the sort of culture that The Craft and The Community was trying to point us towards, and, ideally, eventually, a concrete sense that our community can play a more consistently useful role at making sure the world turns out okay. 

If you have concerns, criticism, or feedback, I encourage you to comment here if you feel comfortable, or on the Unconference Feedback Form. So far I've been erring on the side of move forward and set things in motion, but I'll be shifting for the time being towards "getting feedback and making sure this thing is steering in the right direction."

-

In addition to the people listed throughout the post, I'd like to give particular thanks to Duncan Sabien for general inspiration and a lot of concrete help, Lahwran for giving the most consistent and useful feedback, and Robert Lecnik for hosting the space.</br></br><a href="https://www.lesserwrong.com/posts/stQcoPWFm9R3EixSC/notes-from-the-hufflepuff-unconference-part-1">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/stQcoPWFm9R3EixSC/notes-from-the-hufflepuff-unconference-part-1</link><guid isPermaLink="false">stQcoPWFm9R3EixSC</guid><pubDate>Tue, 23 May 2017 21:04:45 GMT</pubDate></item><item><title><![CDATA[Meetup : Moscow LW meetup in "Nauchka" library]]></title><description><![CDATA[Discussion article for the meetup : Moscow LW meetup in "Nauchka" library  

 WHEN: 09 June 2017 08:00:00PM (+0300)
  

 WHERE: Moscow, ul. Dubininskaya, 20    

Welcome to the next Moscow LW meetup in "Nauchka" library! 

Our plan: 

- A talk about yak shaving problem.
- Fallacymania game.
- Tower of Chaos game. 

Details about Fallacymania and Tower of Chaos and game materials can be found here: http://lesswrong.com/lw/oco/custom_games_that_involve_skills_related_to/ 

Meetup details are here: https://goo.gl/5fd66P 

Come to "Nauchka", ul.Dubininskaya, 20. Entrance through the Central children library #14. Nearest metro station is Paveletskaya. Map is here: http://nauchka.ru/contacts/ . If you are lost, call Sasha at +7-905-527-30-82. 

Meetup begins at 20:00, the length is 2 hours.  Discussion article for the meetup : Moscow LW meetup in "Nauchka" library</br></br><a href="https://www.lesserwrong.com/posts/efNLeahfYkaMvaRBQ/meetup-moscow-lw-meetup-in-nauchka-library">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/efNLeahfYkaMvaRBQ/meetup-moscow-lw-meetup-in-nauchka-library</link><guid isPermaLink="false">efNLeahfYkaMvaRBQ</guid><pubDate>Tue, 23 May 2017 20:29:15 GMT</pubDate></item><item><title><![CDATA[Have We Been Interpreting Quantum Mechanics Wrong This Whole Time?]]></title><description><![CDATA[<a href="https://www.lesserwrong.com/posts/B3zZsgg4cJp9PsWYz/have-we-been-interpreting-quantum-mechanics-wrong-this-whole">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/B3zZsgg4cJp9PsWYz/have-we-been-interpreting-quantum-mechanics-wrong-this-whole</link><guid isPermaLink="false">B3zZsgg4cJp9PsWYz</guid><pubDate>Tue, 23 May 2017 16:38:35 GMT</pubDate></item><item><title><![CDATA[Meetup : Games in Kocherga club: Fallacymania, Tower of Chaos, Scientific Discovery]]></title><description><![CDATA[Discussion article for the meetup : Games in Kocherga club: Fallacymania, Tower of Chaos, Scientific Discovery  

 WHEN: 31 May 2017 07:40:00PM (+0300)
  

 WHERE: Moscow, B.Dorogomilovskaya, 5-2    

Welcome to the Moscow LW community's homemade games! These games involve some rationality skills, so you can practise while you play! 

- Fallacymania: a game where you guess the logical fallacies used in arguments, or practise using logical fallacies yourself (depending on which team you are on).
- Tower of Chaos: a fun game of guessing the rules governing how people are placed on a Twister mat.
- Scientific Discovery: a modified version of Zendo with simultaneous turns for all players. 

Details about the games: http://goo.gl/Mz2i94 

Come to the anticafe "Kocherga", ul. B.Dorogomilovskaya, 5-2. A map is here: http://kocherga-club.ru/#contacts . The nearest metro station is Kievskaya. If you are lost, call Sasha at +7-905-527-30-82. 

Games begin at 19:40 and last 3 hours.</br></br><a href="https://www.lesserwrong.com/posts/m9wW6Rsp9BduRmsH9/meetup-games-in-kocherga-club-fallacymania-tower-of-chaos">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/m9wW6Rsp9BduRmsH9/meetup-games-in-kocherga-club-fallacymania-tower-of-chaos</link><guid isPermaLink="false">m9wW6Rsp9BduRmsH9</guid><pubDate>Tue, 23 May 2017 16:16:46 GMT</pubDate></item><item><title><![CDATA[Physical actions that improve psychological health]]></title><description><![CDATA[Physical health impacts well-being. However, existing preventative health guidelines are inaccessible to the public because they are highly technical and require specific medical equipment. These notes are not medical advice, nor are they meant to treat any illness. This is a compilation of findings I have come across at one time or another about physical things that bear on psychological health. I have not systematically reviewed the literature on any of these topics, nor am I an expert in, or even familiar with, any of them. I am extremely uncertain about the whole thing. But I figure it's better to write this up and look stupid than keep it inside and act stupid. The hyperlinks point to the best evidence I could find on each matter. I write to solicit feedback, corrections and advice. 

  

Microwaves are safe, but cockroaches and even ants are dangerous. And finally: happiness is partly dietary. Well-being boosts have been associated with fruit (careful about fruit-juice sugar, though!), coffee's aroma [text] [science news], vanilla yoghurt [news], sufficient B vitamins and choline (alt), and binge drinking or drinking in general; beyond that, I don't have any easy answers for you. Don't worry about the smart drugs: 'nootropics' is probably a misnomer. On the other hand, probiotics can treat depression. 

   

“There is growing evidence that a diet rich in fruits and vegetables is related to greater happiness, life satisfaction, and positive mood as well. This evidence cannot be entirely explained by demographic or health variables including socio-economic status, exercise, smoking, and body mass index, suggesting a causal link.[50] Further studies have found that fruit and vegetable consumption predicted improvements in positive mood the next day, not vice versa. On days when people ate more fruits and vegetables, they reported feeling calmer, happier, and more energetic than normal, and they also felt more positive the next day.”   

- Wikipedia 

  

If your diet is out of control: mental contrasting is useful for diabetes self-management, dieting, etc. Tangent: during a seminar I attended in Geneva, the World Health Organisation's chief dietary authority said that recommending dietary patterns (e.g. the Mediterranean diet) rather than individual nutrient intakes (protein, creatine, carbs) is preferable. But I have yet to identify substantiating evidence. The broad consensus among lay sceptical scrutineers of the field of nutrition is that most claims, even broadly accepted ones, remain unclear. However, I have yet to analyse the literature myself. 

  

Exercise and sport are good for subjective well-being, quality of life, depression, anxiety, stress and more. Plus, they are fun. You may not enjoy pleasant, well-being-related activities; do those activities anyway. I seldom enjoy correcting my posture. I tend to slouch, and a specialised physiotherapist has specifically advised me to correct for that. But slouching typically doesn't cause pain: posture correction is pseudoscience! So are many interventions related to posture correction, like standing desks. On the other hand, I love to get massages, but their benefits are short-lived, so get them regularly! 

  

I particularly enjoy them after resistance training or 1-minute workouts (high-intensity interval training). Be careful about stretching: passive stretching can cause injury, unlike active stretching. 'Passive stretching is when you use an outside force other than your own muscle to move a joint or limb beyond its active range of motion, to put your body into a position that you couldn’t do by yourself (such as when you lean into a wall, or have a partner push you into a deeper stretch). Unfortunately, this is the most common form of stretching used.' 

  

However, if you aim to bodybuild, protein supplementation is pseudoscientific broscience. As for 'form', there is broscience there too (like squatting with your knees outwards), but probably also lots of credible safety-related information one ought to heed. For weight loss, if you want a real cheat sheet, weight-loss aspirants can get one with a couple-hundred-dollar SNP-sequencing kit. But I would be cautious about gene-sequence-driven health prescriptions; some services running that business rely on weak evidence. There are other 'fad' fitness ideas that are not grounded in science. For instance: 20 seconds of foam rolling (just as effective as 60 seconds) enhances flexibility (...for no longer than 10 minutes, unless it is done regularly, in which case it improves long-term flexibility), but it is unclear whether it improves athletic performance or post-performance recovery. 

  

Stretching prevents injuries and increases range of motion for runners, but not for other kinds of sport [wikipedia]. Shoe inserts don’t work reliably either [Wikipedia]. Martial-arts therapy is a thing. Physical exercise is good for you. Tai chi, qigong, and meditation other than mindfulness (such as transcendental meditation) are ineffective in treating depression and anxiety. If you are injured, try rehabilitation exercises. Exercise and performance-enhancing drugs are both cognitive enhancers. Exercise for chronic lower-back pain is a good idea.  

  

Environment: avoid outdoor air pollution near residences, due to dementia and other health risks. And avoid chimney-smoke fireplaces. 

  

Anecdotally, hygiene improves self-esteem and well-being. Wipe with wet wipes if you wipe hard enough to draw blood. Cover the toilet seat with toilet paper or don’t; it doesn’t matter safety-wise unless the contaminant is <~1hr old. Shower with soap, remove eye mucus, and remove earwax (but likely not the way you think). Brush twice a day with the correct technique, softly, replacing your toothbrush every few months. 'Don't rinse with water straight after toothbrushing'. Floss once a day (with a different piece of floss each flossing session), but do not brush immediately after drinking acidic substances. The effectiveness of Tooth Mousse is questionable. Visit the dentist for a check-up every now and then; I’d say about every year at least. 

  

Consider sleeping with a face mask and earplugs for better sleep. Blow your nose, clean under your nails, and trim them. Eye examinations should be conducted every 2-4 years for those under 40, and up to every 6 months for those 65+. There are health concerns around memory-foam pillows and mattresses, so latex pillows may be preferable for those who want a sturdier option than traditional pillows and mattresses. Anecdotally, setting alarms to remind you to do things is a simple way to manage your time, not just for waking up. Light therapy is also helpful in treating delayed sleep phase disorder (being a night owl!). Oh, and don’t bother pre-washing dishes before loading the dishwasher (as long as you clean the filter regularly). 

  

There are misconceptions around complementary therapies. The Australian Government reviewed the effectiveness of the Alexander technique, aromatherapy, Bowen therapy, Buteyko, Feldenkrais, herbalism, homeopathy, iridology, kinesiology, massage therapy, Pilates, reflexology, rolfing, shiatsu, tai chi, and yoga. Only for the Alexander technique, Buteyko, massage therapy (especially remedial massage?), tai chi, and yoga was there credible (albeit low-to-moderate-quality) evidence that they are useful for certain health conditions.  

  

Stressed out reading all this? Gently pressing on your eyelids can temporarily stave off a headache. Traumatically stressed out? Video games can treat PTSD. Animal-assisted therapy, such as service dogs and therapeutic animals, is also wonderful.

 

Thank you!</br></br><a href="https://www.lesserwrong.com/posts/qtJv7obNGQFxPncEB/physical-actions-that-improve-psychological-health">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/qtJv7obNGQFxPncEB/physical-actions-that-improve-psychological-health</link><guid isPermaLink="false">qtJv7obNGQFxPncEB</guid><pubDate>Tue, 23 May 2017 04:33:24 GMT</pubDate></item><item><title><![CDATA[Probabilistic Programming and  Bayesian Methods for Hackers]]></title><description><![CDATA[<a href="https://www.lesserwrong.com/posts/edcNaR7WF5pfyc9Gd/probabilistic-programming-and-bayesian-methods-for-hackers">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/edcNaR7WF5pfyc9Gd/probabilistic-programming-and-bayesian-methods-for-hackers</link><guid isPermaLink="false">edcNaR7WF5pfyc9Gd</guid><pubDate>Mon, 22 May 2017 21:15:25 GMT</pubDate></item><item><title><![CDATA[Overcoming Algorithm Aversion: People Will Use Imperfect Algorithms If They Can (Even Slightly) Modify Them]]></title><description><![CDATA[<a href="https://www.lesserwrong.com/posts/beB7g6FEytBhiHvGr/overcoming-algorithm-aversion-people-will-use-imperfect">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/beB7g6FEytBhiHvGr/overcoming-algorithm-aversion-people-will-use-imperfect</link><guid isPermaLink="false">beB7g6FEytBhiHvGr</guid><pubDate>Mon, 22 May 2017 18:31:44 GMT</pubDate></item><item><title><![CDATA[Open thread, May 22 - May 28, 2017]]></title><description><![CDATA[If it's worth saying, but not worth its own post, then it goes here.    

Notes for future OT posters: 

1. Please add the 'open_thread' tag. 

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
 

3. Open Threads should start on Monday, and end on Sunday. 

4. Unflag the two options "Notify me of new top level comments on this article" and "Make this post available under..." before submitting.</br></br><a href="https://www.lesserwrong.com/posts/fJcjn65jicwbSrG2Q/open-thread-may-22-may-28-2017">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/fJcjn65jicwbSrG2Q/open-thread-may-22-may-28-2017</link><guid isPermaLink="false">fJcjn65jicwbSrG2Q</guid><pubDate>Mon, 22 May 2017 05:44:05 GMT</pubDate></item><item><title><![CDATA[Why Most Intentional Communities Fail (And Some Succeed)]]></title><description><![CDATA[<a href="https://www.lesserwrong.com/posts/bzkCYAyfdaSZitbBR/why-most-intentional-communities-fail-and-some-succeed">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/bzkCYAyfdaSZitbBR/why-most-intentional-communities-fail-and-some-succeed</link><guid isPermaLink="false">bzkCYAyfdaSZitbBR</guid><pubDate>Mon, 22 May 2017 03:04:23 GMT</pubDate></item><item><title><![CDATA[Learning Deep Learning the EASY way, with Keras]]></title><description><![CDATA[<a href="https://www.lesserwrong.com/posts/C7MrmdvxpB22zk7HL/learning-deep-learning-the-easy-way-with-keras">Discuss</a>]]></description><link>https://www.lesserwrong.com/posts/C7MrmdvxpB22zk7HL/learning-deep-learning-the-easy-way-with-keras</link><guid isPermaLink="false">C7MrmdvxpB22zk7HL</guid><pubDate>Sun, 21 May 2017 19:48:16 GMT</pubDate></item></channel></rss>