The ideas you're not ready to post

I've often had half-finished LW post ideas and crossed them off for a number of reasons, mostly that they were too rough or undeveloped and I didn't feel expert enough. Other people might worry their post would be judged harshly, feel overwhelmed, worry about topicality, or just want some community input before posting.

So: this is a special sort of open thread. Please post your unfinished ideas and sketches for LW posts here as comments, if you would like constructive critique, assistance and checking from people with more expertise, etc. Just pile them in without worrying too much. Ideas can be as short as a single sentence or as long as a finished post. Both subject and presentation are on topic in replies. Bad ideas should be mined for whatever good can be found in them. Good ideas should be poked with challenges to make them stronger. No being nasty!

Comments


The Dilbert Challenge: you are working in a company in the world of Dilbert. Your pointy-haired boss comes to you with the following demand:

"One year from today, our most important customer will deliver us a request for a high-quality reliable software system. Your job and the fate of the company depends on being able to develop and deploy that software system within two weeks of receipt of the specifications. Unfortunately we don't currently know any of the requirements. Get started now."

I submit that this preposterous demand is really a deep intellectual challenge, the basic form of which arises in many different endeavors. For example, it's reasonable to believe that at some point in the future, humanity will face an existential threat. Given that we will not know the exact nature of that threat until it's almost upon us, how can we prepare for it today?

Wow. I'm a relatively long-time participant, but never really "got" the reasons why we need something like rationality until I read your comment. Here's thanks and an upvote.

On the Care and Feeding of Rationalist Hardware

Many words have been spent here in improving rationalist software -- training patterns of thought which will help us to achieve truth, and reliably reach our goals.

Assuming we can still remember so far back, Eliezer once wrote:

But if you have a brain, with cortical and subcortical areas in the appropriate places, you might be able to learn to use it properly. If you're a fast learner, you might learn faster - but the art of rationality isn't about that; it's about training brain machinery we all have in common.

Rationality does not require big impressive brains any more than the martial arts require big bulging muscles. Nonetheless, I think it would be rare indeed to see a master of the martial arts willfully neglecting the care of his body. Martial artists of the wisest schools strive to improve their bodies. They jog, or lift weights. They probably do not smoke, or eat unhealthily. They take care of their hardware so that the things they do will be as easy as possible.

So, what hacks exist which enable us to improve and secure the condition of our mental hardware? Some important areas that come to mind are:

  • sleep
  • diet
  • practice

I'd definitely want to read about a good brain-improving diet (I have no problems with weight, so I'd prefer not to mix these two issues).

I agree. LW doesn't have many posts about maintaining and improving the brain.

I would also add aerobic exercise to your list, and possibly drugs. For example, caffeine or modafinil can help improve concentration and motivation. Unfortunately they're habit-forming and have various health effects, so it's not a simple decision.

Incommensurate thoughts: People with different life-experiences are literally incapable of understanding each other, because they compress information differently.

Analogy: Take some problem domain in which each data point is a 500-dimensional vector. Take a big set of 500D vectors and apply PCA to them to get a new reduced space of 25 dimensions. Store all data in the 25D space, and operate on it in that space.

Two programs exposed to different sets of 500D vectors, which differ in a biased way, will construct different basis vectors during PCA, and so will reduce all future vectors into different 25D spaces.
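A toy sketch of the analogy (dimensions shrunk from 500/25 to 50/10 so it runs instantly; the data, scales, and names are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 50, 10  # full and reduced dimensionality (stand-ins for 500 and 25)

# Two "life experiences": the same kind of data, but with variance
# concentrated on different (overlapping) subsets of coordinates.
scale_a = np.ones(d); scale_a[:10] = 10.0    # agent A's biased experience
scale_b = np.ones(d); scale_b[5:15] = 10.0   # agent B's biased experience
data_a = rng.normal(size=(1000, d)) * scale_a
data_b = rng.normal(size=(1000, d)) * scale_b

def pca_basis(data, k):
    """Top-k principal directions via SVD of the centered data."""
    centered = data - data.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k]  # shape (k, d)

basis_a = pca_basis(data_a, k)
basis_b = pca_basis(data_b, k)

# The same new "text" gets compressed into incommensurate codes:
# projecting and reconstructing loses different information for each agent.
x = rng.normal(size=d) * scale_a  # an experience typical of agent A's world
err_a = np.linalg.norm(x - basis_a.T @ (basis_a @ x))
err_b = np.linalg.norm(x - basis_b.T @ (basis_b @ x))
print(f"reconstruction error through A's compression: {err_a:.2f}")
print(f"reconstruction error through B's compression: {err_b:.2f}")
```

With this setup B's basis misses much of the variance that matters in A's world, so the same input comes out distorted differently for each agent.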

In just this way, two people with life experiences that differ in a biased way (due to e.g. socioeconomic status, country of birth, culture) will construct different underlying compression schemes. You can give them each a text with the same words in it, but the representations that each constructs internally are incommensurate; they exist in different spaces, which introduce different errors. When they reason on their compressed data, they will reach different conclusions, even if they are using the same reasoning algorithms and are executing them flawlessly. Furthermore, it would be very hard for them to discover this, since the compression scheme is unconscious. They would be more likely to believe that the other person is lying, nefarious, or stupid.

If you're going to write about this, be sure to account for the fact that many people report successful communication in many different ways. People say that they have found their soul-mate, many of us have similar reactions to particular works of literature and art, etc. People often claim that someone else's writing expresses an experience or an emotion in fine detail.

Yeah. I thought about this a lot in the context of the Hanson/Yudkowsky debate about the unmentionable event. As was frequently pointed out, both parties aspired to rationality and were debating in good faith, with the goal of getting closer to the truth.

Their belief was that two rationalists should be able to assign roughly the same probability to the same sequence of events X. That is, if the event X is objectively defined, then the problem of estimating p(X) is an objective one and all rational persons should obtain roughly the same value.

The problem is that we don't - maybe can't - estimate probabilities in isolation of other data. All estimates we make are really of conditional probabilities p(X|D), where D is a person's unique huge background dataset. The background dataset primes our compression/inference system. To use the Solomonoff idea, our brains construct a reasonably short code for D, and then use the same set of modules that were helpful in compressing D to compress X.
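A minimal illustration of the p(X|D) point (all the numbers are invented): two flawless Bayesians share the same model of a coin and the same exact conjugate update rule, but their different background data D leave them with different estimates even after seeing identical new evidence X.

```python
def beta_update(alpha, beta, heads, tails):
    """Exact Bayesian update for a Beta(alpha, beta) prior on a coin's bias."""
    return alpha + heads, beta + tails

# Both agents start from the same ignorance prior Beta(1, 1),
# but their background datasets D differ in a biased way.
agent_1 = beta_update(1, 1, heads=40, tails=10)   # D1: mostly heads
agent_2 = beta_update(1, 1, heads=10, tails=40)   # D2: mostly tails

# Then both observe the SAME new data X and update flawlessly.
shared = dict(heads=5, tails=5)
a1, b1 = beta_update(*agent_1, **shared)
a2, b2 = beta_update(*agent_2, **shared)

p1 = a1 / (a1 + b1)   # posterior mean estimate of p(heads)
p2 = a2 / (a2 + b2)
print(f"agent 1 estimates p(heads) = {p1:.3f}")
print(f"agent 2 estimates p(heads) = {p2:.3f}")
```

Neither agent made an error anywhere; the disagreement lives entirely in the conditioning data.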

There is a topic I have in mind that could potentially require writing a rather large amount, and I don't want to do that unless there is some interest, rather than suddenly dumping a massive essay on LW without any prior context. The topic is control theory (the engineering discipline, not anything else those words might suggest). Living organisms are, I say (following Bill Powers, who I've mentioned before) built of control systems, and any study of people that does not take that into account is unlikely to progress very far. Among the things I might write about are these:

  • Purposes and intentions are the set-points of control systems. This is not a metaphor or an analogy.

  • Perceptions do not determine actions; instead, actions determine perceptions. (If that seems either unexceptionable or obscure, try substituting "stimulus" for "perception" and "response" for "action".)

  • Control systems do not, in general, work by predicting what action will produce the intended perception. They need not make any predictions at all, nor contain any model of their environment. They require neither utility measures, nor Bayesian or any other form of inference. There are methods of designing control systems that use these concepts but they are not inherent to the nature of control.

  • Inner conflict is, literally, a conflict between control systems that are trying to hold the same variable in two different states.

  • How control systems behave is not intuitively obvious, until one has studied control systems.

This is the only approach to the study of human nature I have encountered that does not appear to me to mistake what it looks like from the inside for the underlying mechanism.
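For readers who have never met a control system, here is roughly the smallest possible one, sketched in Python (the gain, disturbance, and step count are arbitrary): an integral controller that holds a perception at its set-point against an unknown disturbance, with no model, no prediction, and no inference, as the third bullet above claims.

```python
def control_loop(set_point, steps=200, gain=0.5):
    """Integral control: keep adjusting the action in proportion to the
    error between set-point and current perception. There is no model
    of the environment and no prediction of what the action will do."""
    perception = 0.0
    action = 0.0
    disturbance = 3.0  # unknown external push; the controller never sees it
    for _ in range(steps):
        error = set_point - perception
        action += gain * error              # act on the error, nothing else
        perception = action + disturbance   # environment: action plus disturbance
    return perception

print(control_loop(set_point=10.0))  # settles at the set-point despite the disturbance
```

Change `disturbance` partway through and the loop simply re-converges; run two such loops with different set-points acting on the same variable and you get exactly the "inner conflict" of the fourth bullet.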

What say you all? Vote this up or down if you want, but comments will be more useful to me.

We are Eliza: A whole lot of what we think is reasoned debate is pattern-matching on other people's sentences, without ever parsing them.

I wrote a bit about this in 1998.

But I'm not as enthused about this topic as I was then, because then I believed that parsing a sentence was reasonable. Now I believe that humans don't parse sentences even when reading carefully. The bird the cat the dog chased chased flew. Any linguist today would tell you that's a perfectly fine English sentence. It isn't. And if people don't parse grammatical structures to just two levels of recursion, I doubt recursion, and generative grammars, are involved at all.

I'm kind of thinking of doing a series of posts gently spelling out, step by step, the arguments for Bayesian decision theory. Part of this is for myself: a while back I read Omohundro's vulnerability argument, but felt there were missing bits that I had to fill in personally, assumptions I had to sit and think on before I could really say "yes, obviously that has to be true". There are also some things I think I can generalize a bit or restate a bit, etc.

So, as much for myself, to organize and clear that up, as for others, I want to do a short series of "How not to be stupid (given unbounded computational power)", in which each post focuses on one or a small number of related rules/principles of Bayesian decision theory and epistemic probabilities, and gently derives them from the "don't be stupid" principle. (Again, based on Omohundro's vulnerability arguments and the usual Dutch book arguments for Bayesian stuff, but stretched out and filled in with the details that I personally felt were missing and needed to work out.)

And I want to do it as a series, rather than a single blob post so I can step by step focus on a small chunk of the problem and make it easier to reference related rules and so on.

Would this be of any use to anyone here though? (maybe a good sequence for beginners, to show one reason why Bayes and Decision Theory is the Right Way?) Or would it be more clutter than anything else?

This doesn't even have an ending, but since I'm just emptying out the drafts folder...

Memetic Parasitism

I heard a rather infuriating commercial on the radio today. There's no need for me to recount it directly -- we've all heard the type. The narrator spoke of the joy a woman feels in her husband's proposal, of how long she'll remember its particulars, and then, for no apparent reason, transitioned from this to a discussion of shiny rocks, and where we might think of purchasing them.

I hardly think I need to belabor the point, but there is no natural connection between shiny rocks and promises of monogamy. There was not even any particularly strong empirical connection between the two until about a hundred years ago, when some men who made their fortunes selling shiny rocks decided to program us to believe there was.

What we see here is what I shall call memetic parasitism. We carry certain ideas, certain concepts, certain memes to which we attach high emotional valence. In this case, that meme is romantic love, expressed through monogamy. An external agent contrives to derive some benefit by attaching itself to that meme.

Now, it is important to note when describing a Dark pattern that not everything which resembles this pattern is necessarily dark. Carnation attempts to connect itself in our minds to the Burns and Allen show. Well, on reflection, it seems this is right. Carnation did bring us the Burns and Allen show. It paid the salary of each actor, each writer, each technician, who created the show each week. Carnation deserves our gratitude, and any custom which may result from it. The shiny-rock-sellers are another matter: romantic love existed for many centuries before they came along, and they have done nothing to enhance it.

Of course, I think most of us have seen this pattern before. This comic makes the point rather well, I think.

So, right now, I know that the shiny-rock-sellers want to exploit me, this outrages me, and I choose to have nothing to do with them. How do we excite people's shock and outrage at the way the religions have tried to exploit them?

Aumann agreements are pure fiction; they have no real-world applications. The main problem isn't that no one is a pure Bayesian. There are 3 bigger problems:

  • The Bayesians have to divide the world up into symbols in exactly the same way. Since humans (and any intelligent entity that isn't a lookup table) compress information based on their experience, this can't be contemplated until the day when we derive more of our mind's sensory experience from others than from ourselves.
  • Bayesian inference is slow; pure Bayesians would likely be outcompeted by groups that used faster, less-precise reasoning methods, which are not guaranteed to reach agreement. It is unlikely that this limitation can ever be overcome.
  • In the name of efficiency, different reasoners would be highly orthogonal, having different knowledge, different knowledge compression schemes and concepts, etc.; reducing the chances of reaching agreement. (In other words: If two reasoners always agree, you can eliminate one of them.)

This would probably have to wait until May.

Buddhism.

What it gets wrong. Supernatural stuff - rebirth, karma in the magic sense, prayer. Thinking Buddha's cosmology was ever meant as anything more than an illustrative fable. Renunciation. Equating positive and negative emotions with grasping. Equating the mind with the chatty mind.

What it gets right. Meditation. Karma as consequences. There is no self, consciousness is a brain subsystem, emphasis on the "sub" (Cf. Drescher's "Cartesian Camcorder" and psychology's "system two"). The chatty mind is full of crap and a huge waste of time, unless used correctly. Correct usage includes noticing mostly-subconscious thought loops (Cf. cognitive behavioral therapy). A lot of everyday unreason does stem from grasping, which roughly equates to "magical thinking" or the idea that non-acknowledgment of reality can change it. This includes various vices and dark emotions, including the ones that screw up attempted rationality.

What rationalists should do. Meditate. Notice themselves thinking. Recognize grasping as a mechanism. Look for useful stuff in Buddhism.

Why I can't post. Not enough of an expert. Not able to meditate myself yet.

It actually strikes me that a series of posts on "What can we usefully learn from X tradition" would be interesting. Most persistent cultural institutions have at least some kind of social or psychological benefit, and while we've considered some (cf. the martial arts metaphors, earlier posts on community building, &c.) there are probably others that could be mined for ideas as well.

Willpower building as a fundamental art. And some of the less obvious pitfalls, including the dangers of akrasia circumvention techniques which simply shunt willpower from one place to another and overstraining damaging your willpower reserves.

I need to hunt back down some of the cognitive science research on this before I feel comfortable posting it.

...the dangers of akrasia circumvention techniques which simply shunt willpower from one place to another and overstraining damaging your willpower reserves.

Easy answer: don't use willpower. Ever.

I quit it cold turkey in late 2007, and can count on one hand the number of times I've been tempted to use it since.

(Edit to add: I quit it in order to force myself to learn to understand the things that blocked me, and to learn more effective ways to accomplish things than by pushing through resistance. It worked.)

don't use willpower. Ever.

Could you do a post on that?

I think there's a post somewhere in the following observation, but I'm at a loss as to what lesson to take away from it, or how to present it:

Wherever I work I rapidly gain a reputation for being both a joker and highly intelligent. It seems that I typically act in such a way that when I say something stupid, my co-workers classify it as a joke, and when I say something deep, they classify it as a sign of my intelligence. As best I can figure, it's because at one company I was strongly encouraged to think 'outside the box' and one good technique I found for that was to just blurt out the first technological idea that occurred to me when presented with a technological problem, but to do so in a non-serious tone of voice. Often enough the idea is one that nobody else has thought of, or automatically dismissed for what, in retrospect, were insufficient reasons. Other times it's so obviously stupid an idea that everyone thinks I'm making a joke. It doesn't hurt that often I do deliberately joke.

I don't know if this is a technique others should adopt or not, but I've found it has made me far less afraid of appearing stupid when presenting ideas.

I'm vaguely considering doing a post about skeptics. It seems to me they might embody a species of pseudo-rationality, like Objectivists and Spock. (Though it occurs to me that if we define "S-rationality" as "being free from the belief distortions caused by emotion", then "S-rationality" is both worthwhile and something that Spock genuinely possesses.) If their supposed critical thinking skills allow them to disbelieve in some bad ideas like ghosts, Gods, homeopathy, UFOs, and Bigfoot, but also in some good ideas like cryonics and not in other bad ideas like extraterrestrial contact, ecological footprints, p-values, and quantum collapse, then how does the whole thing differ from loyalty to the scientific community? Loyalty to the scientific community isn't the worst thing, but there's no need to present it as independent critical thinking.

I'm sure there are holes in this line of thought, so all criticism is welcome.

Yet another post from me about theism?

This time, pushing for a more clearly articulated position. Yes, I realize that I am not endearing myself by continuing this line of debate. However, I have good reasons for pursuing it.

  • I really like LW and the idea of a place where objective, unbiased truth is The Way. Since I idealistically believe in Aumann’s Agreement theorem, I think that we are only a small number of debates away from agreement.

  • To the extent to which LW aligns itself with a particular point of view, it must be able to defend that view. I don’t want LW to be wrong, and am willing to be a nuisance to make sure.

  • If defending atheism is not a first priority, can we continue using religion as a convenient example of irrationality, even as the enemy of rationality?

  • There is a definite sense that theism is not worth debating, that the case is "open-and-shut". If so, it should be straightforward to draft a master argument. (Five separate posts of analogies is not strong evidence, in my Bayesian calculation, that the case is open-and-shut.)

  • A clear and definitive argument against theism would make it possible for theists (and yourselves, as devil's advocates) to debate specific points that are not covered adequately in the argument. (If you are about to downvote me on this comment, think about how important it would be to permit debate on an ideology that is important to this group. Right now it is difficult to debate whether religion is rational because there is no central argument to argue with.)

  • Relative to the ‘typical view’, atheism is radical. How does a religious person visiting this site become convinced that you’re not just a rationality site with a high proportion of atheists?

(Um, this started as a reply to your comment but quickly became its own "idea I'm not ready to post" on deconversions and how we could accomplish them quickly.)

Upvoted. It took me months of reading to finally decide I was wrong. If we could put that "aha" moment in one document... well, we could do a lot of good.

Deconversions are tricky though. Did anyone here ever read Kissing Hank's Ass? It's a scathing moral indictment of mainline Christianity. I read it when I was 15 and couldn't sleep for most of a night.

And the next day, I pretty much decided to ignore it. I deconverted seven years later.

I believe the truth matters, and I believe you do a person a favor by deconverting them. But if you've been in for a while, if you've grown dependent on, for example, believing in an eternal life... there's a lot of pain in deconversion, and your mind's going to work hard to avoid it. We need to be prepared for that.

If I were to distill the reason I became an atheist into a few words, it would look something like:

Ontologically fundamental mental things don't make sense, but the human mind is wired to expect them. Fish swim in a sea of water, humans swim in a sea of minds. But mental things are complicated. In order to understand them you have to break them down into parts, something we're still working hard to do. If you say "the universe exists because someone created it," it feels like you've explained something, because agents are part of the most fundamental building blocks from which you build your world. But agency, intelligence, desire, and all the rest, are complicated properties which have a specific history here on earth. Sort of like cheesecake. Or the foxtrot. Or socialism.

If somebody started talking about the earth starting because of cheesecake, you'd wonder where the cheesecake came from. You'd look in a history book or a cook book and discover that the cheesecake has its origins in the Roman Empire, as a result of, well, people being hungry, and as a result of cows existing, and on and on, and you'd wonder how all those complex causes could produce a cheesecake predating the universe, and what sense it would make cut off from the rich causal net in which we find cheesecakes embedded today. Intelligence should not be any different. Agency trips up Occam's razor, because humans are wired to expect there to always be agents about. But an explanation of the universe which contains an agent is an incredibly complicated theory, which only presents itself to us for consideration because of our biases.

A complicated theory that you never would have thought of in the first place had you been less biased is not a theory that might still be right -- it's just plain wrong. In the same sense that, if you're looking for a murderer in New York City, and you bring a suspect in on the advice of one of your lieutenants, and then it turns out the lieutenant picked the suspect by reading a horoscope, you have the wrong guy. You don't keep him there because he might be the murderer after all, and you may as well make sure. With all of New York to canvass, you let him go, and you start over. So too with agency-based explanations of the universe's beginning.

I've rambled terribly, and were that a top-level post, or a "master argument" it would have to be cleaned up considerably, but what I have just said is why I am an atheist, and not a clever argument I invented to support it.

If somebody started talking about the earth starting because of cheesecake, you'd wonder where the cheesecake came from. You'd look in a history book or a cook book and discover that the cheesecake has its origins in the Roman Empire, as a result of, well, people being hungry, and as a result of cows existing, and on and on, and you'd wonder how all those complex causes could produce a cheesecake predating the universe, and what sense it would make cut off from the rich causal net in which we find cheesecakes embedded today. Intelligence should not be any different. Agency trips up Occam's razor, because humans are wired to expect there to always be agents about. But an explanation of the universe which contains an agent is an incredibly complicated theory, which only presents itself to us for consideration because of our biases.

You're right; yet no one ever sees it this way. Before Darwin, no one said, "This idea that an intelligent creator existed first doesn't simplify things."

Here is something I think would be useful: A careful information-theoretic explanation of why God must be complicated. When you explain, to Christians, that it doesn't make sense to say complexity originated because God created it and God must be complicated, Christians reply with one of two things (and I'm generalizing here because I've heard these replies so many times):

  • God is outside of space and time, so causality doesn't apply. (I don't know how to respond to this.)
  • God is not complicated. God is simple. God is the pure essence of being, the First Cause. Think of a perfect circle. That's what God is like.

It shouldn't be hard to explain that, if God knows at least what is in the Encyclopedia Britannica, God has at least enough complexity to store that information.
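One crude way to make the storage point concrete: use a general-purpose compressor's output size as a stand-in for description length. A repeated slogan compresses to almost nothing, while a body of distinct facts keeps its bulk, and anything that can reproduce those facts must contain at least that much complexity. (A toy sketch; the strings are invented, and zlib is only a loose computable proxy for true Kolmogorov complexity.)

```python
import zlib

def compressed_size(text: str) -> int:
    """Length in bytes after zlib compression: a rough, computable
    upper bound on the description length of the text."""
    return len(zlib.compress(text.encode(), 9))

slogan = "a perfectly simple essence. " * 1000               # one idea, repeated
facts = "\n".join(f"entry {i}: {i * i}" for i in range(1000))  # many distinct facts

print(compressed_size(slogan))  # tiny: repetition adds no information
print(compressed_size(facts))   # large: independent facts keep their bulk
```

"God is simple" is a claim of the first kind; knowing the Britannica is a burden of the second kind, and no amount of rhetorical simplicity makes it go away.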

Of course, putting this explanation on LW might do no good to anybody.

I really like LW and the idea of a place where objective, unbiased truth is The Way.

Something about this phrase bothers me. I think you may be confused as to what is meant by The Way. It isn't about any specific truth, much less Truth. It is about rationality, ways to get at the truth and update when it turns out that truth was incomplete, or facts change, and so on.

Promoting an abstract truth is very much -not- the point. I think it will help your confusion if you can wrap your head around this. My apologies if these words don't help.

I would prefer us not to talk about theism all that much. We should be testing ourselves against harder problems.

Theism is the first, and oldest problem. We have freed ourselves from it, yes, but that does not mean we have solved it. There are still churches.

If we really intend to make more rationalists, theism will be the first hurdle, and there will be an art to clearing that hurdle quickly, cleanly, and with a minimum of pain for the deconverted. I see no reason not to spend time honing that art.

First, the subject is discussed to death. Second, our target audience at this stage is almost entirely atheists; you start on the people who are closest. Insofar as there are theists we could draw in, we will probably deconvert them more effectively by raising the sanity waterline and having them drown religion without our explicit guidance on the subject; this will also do more to improve their rationality skills than explicit deconversion.

Some bad ideas on the theme "living to win":

  • Murder is okay. There are consequences, but it's a valid move nonetheless.
  • War is fun. In fact, it's some of the best fun you can have, as long as you don't get disabled or killed permanently.
  • Being a cult leader is a winning move.
  • Learn and practice the so-called Dark Arts!

What would a distinctively rationalist style of government look like? Cf. Dune's Bene Gesserit government by jury, what if a quorum of rationalists reaching Aumann Agreement could make a binding decision?

What mechanisms could be put in place to stop politics being a mind-killer?

Why not posted: undeveloped idea, and I don't know the math.

A criticism of practices on LW that are attractive now but which will hinder "the way" to truth in the future; that lead to a religious idolatry of ideas (a common fate of many "in-groups") rather than objective detachment. For example,

(1) linking to ideas in original posts without summarizing the main ideas in your own words and how they apply to the specific context -- as this creates short-cuts in the brain of the reader, if not in the writer

(2) Use of analogies without formally defining the ideas behind them leads to content not only saying more than it intends to (or more than it strictly should) but also having meta-meanings that are attractive but dangerous because they're not explicit. [edit: "formally" was a poor choice of words, "clearly" is my intended meaning]

And any other examples people think of, now and as LW develops.

Winning Interpersonally

cousin_it would like to know how rationality has actually helped us win. However, in his article, he completely gives up on rationality in one major area, admitting that "interpersonal relationships are out."

Alex strenuously disagrees, asking "why are interpersonal relationships out? I think rationality can help a great deal here."

(And, of course, I suppose everyone knows my little sob-story by now.)

I'd like to get a read from the community on this question.

Is rationality useless -- or worse, a liability when dealing with other human beings? How much does it matter if those human beings are themselves self-professed rationalists? It's been noted that Less Wrong is incredibly male. I have no idea whether this represents an actual gender differential in desire for epistemic rationality, but if it does, it means most male Less Wrongers should not expect to wind up dating rationalists. Does this mean that it is necessary for us to embrace less than accurate beliefs about, e.g., our own desirability, that of our partner, various inherently confused concepts of romantic fate, or whatever supernatural beliefs our partners wish to defend? Does this mean it is necessary to make the world more rational, simply so that we can live in it?

(note: this draft was written a while before Gender and Rationality, so there's probably some stuff I'd rewrite to take that into account)

Is rationality useless -- or worse, a liability when dealing with other human beings?

Only if you translate this into meaning you've got to communicate like Spock, or talk constantly about things that bore, depress, or shock people, or require them to think when they want to relax, etc.

(That article, btw, is by a guy who figured out how to stop being so "rational" in his personal relationships. Also, as it's a pickup artist's blog, there may be some images or language that may be offensive or NSFW. YMMV.)