Open thread, 21-27 April 2014

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

This thread was started before the end of the last one, to encourage Monday as the first day.

Comments


If not rationality, then what?

LW presents epistemic and instrumental rationality as practical advice for humans, based closely on the mathematical model of Bayesian probability. This advice can be summed up in two maxims: Obtain a better model of the world by updating on the evidence of things unpredicted by your current model. Succeed at your given goals by using your (constantly updating) model to predict which actions will maximize success.

Or, alternately: Having correct beliefs is useful for humans achieving goals in the world, because correct beliefs enable correct predictions, which enable goal-accomplishing actions. The way to have correct beliefs is to update your beliefs when their predictions fail.
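The "update on evidence" maxim is just Bayes' rule. A toy sketch (my own illustration, with made-up numbers, not anything from the thread):

```python
# Toy Bayesian update: revise the probability of a hypothesis H
# after seeing evidence E. All numbers are illustrative.

def bayes_update(prior: float, likelihood: float, likelihood_alt: float) -> float:
    """Return P(H | E) given P(H), P(E | H), and P(E | not-H)."""
    evidence = likelihood * prior + likelihood_alt * (1 - prior)
    return likelihood * prior / evidence

# Start out 50/50 on a hypothesis; observe evidence that is four times
# as likely under the hypothesis as under its negation.
posterior = bayes_update(prior=0.5, likelihood=0.8, likelihood_alt=0.2)
print(posterior)  # 0.8
```

Repeating this update as new observations arrive is the whole of the first maxim; the second maxim is then acting on whatever the current posterior predicts.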

Stating it this baldly gets me to wonder about alternatives. What if we deny each of these premises and see what we get? Other than Bayes' world, which other worlds might we be living in?


Suppose that making correct predictions does not enable goal-accomplishing actions. We might call this Cassandra's world, the world of tragedy — in which those people who know best what the future will bring, are most incapable of doing anything about it. In the world of heroic myth, it is not oracles but rather heroes and villains who create change in the world. Heroes and villains are people who possess great virtue or vice — strong-willed tendencies to face difficult challenges, or to do what would repulse others. Heroes and villains defy oracles, and come to their predicted triumphs or fates not through prediction, but in spite of it.

Suppose that the path to success is not to update your model of the world, so much as to update your model of your self and goals. The facts of the world are relatively close to our priors, but our goals are not known to us initially, and are in fact very difficult to discover. We might consider this to be Buddha's world, the world of contemplation — in which understanding the nature of the self is substantially more important to success than understanding the external world. When we choose actions that cause bad effects, we aren't so much acting on faulty beliefs about the world as pursuing goals that are illusory or empty of satisfaction.

There are other models as well, that could be extrapolated from denying other premises (explicit or implicit) of Bayes' world. Each of these models should relate prediction, action, and goals in different ways. We might imagine Lovecraft's world, Qoheleth's world, or Nietzsche's world.


Each of these models of the world — Bayes' world, Cassandra's world, Buddha's world, and the others — does predict different outcomes. If we start out thinking that we are in Bayes' world, what evidence might suggest that we are in Cassandra's or Buddha's world?


Edited lightly — In the first couple of paragraphs, I've clarified that I'm talking about epistemic and instrumental rationality as advice for humans, not about whether we live in a world where Bayesian math works. The latter seems obviously true.

Pure curiosity question: What is the general status of UDT vs. TDT among y'all serious FAI research people? MIRI's publications seem to exclusively refer to TDT; people here on LW seem to refer pretty much exclusively to UDT in serious discussion, at least since late 2010 or so; I've heard it reported variously that UDT is now standard because TDT is underspecified, and that UDT is just an uninteresting variant of TDT so as to hardly merit its own name. What's the deal? Has either one been fully specified/formalized? Why is there such a discrepancy between MIRI's official work and discussion here in terms of choice of theory?

MIRI's publications seem to exclusively refer to TDT

Why do you say that? If I do a search for "UDT" or "TDT" on intelligence.org, I seem to get about an equal number of results.

people here on LW seem to refer pretty much exclusively to UDT in serious discussion

This seems accurate to me. I think what has happened is that UDT has attracted a greater "mindshare" on LW, to the extent that it's much easier to get a discussion about UDT going than about TDT. Within MIRI it's probably more equal between the two.

that UDT is just an uninteresting variant of TDT so as to hardly merit its own name

As I recall, Eliezer was actually the one who named UDT. (Here's the comment where he called it "updateless", which everyone else then picked up. In my original post I never gave it a name but just referred to "this decision theory".)

Has either one been fully specified/formalized?

There have been a number of attempts to formalize UDT, which you can find by searching for variations on "formal UDT" on LW. I'm not aware of a similar attempt to formalize TDT, although this paper gives some hints about how it might be done. It's not really possible to "fully" specify either one at this time because both need to interface with a to-be-discovered solution to the problem of logical uncertainty, and at this point we don't even know the type signature of such a solution. In the attempts to formalize UDT, people either make a guess as to what the type signature is, or side-step the problem by assuming that all relevant logical facts can be deduced by the agent.

I was feeling lethargic and unmotivated today, but as a way of not-doing-anything, I got myself to at least read a paper on the computational architecture of the brain and summarize the beginning of it. Might be of interest to people; it also briefly touches upon meditation.

Whatever next? Predictive brains, situated agents, and the future of cognitive science (Andy Clark 2013, Behavioral and Brain Sciences) is an interesting paper on the computational architecture of the brain. It’s arguing that a large part of the brain is made up of hierarchical systems, where each system uses an internal model of the lower system in an attempt to predict the next outputs of the lower system. Whenever a higher system mispredicts a lower system’s next output, it will adjust itself in an attempt to make better predictions in the future.
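A minimal sketch of that idea (my own drastic simplification, not from the paper): each level holds an estimate, predicts the level below, and nudges itself in proportion to its prediction error.

```python
# Minimal sketch of hierarchical prediction-error minimization.
# Each level predicts the level below it and updates on the error.
# Purely illustrative; real models are far richer (precision weighting, etc.).

def settle(levels, signal, rate=0.5, steps=50):
    """levels: list of scalar estimates; levels[0] tracks the raw signal."""
    for _ in range(steps):
        # The bottom level is driven directly by the incoming signal.
        levels[0] += rate * (signal - levels[0])
        # Each higher level predicts the one below and adjusts on its error.
        for i in range(1, len(levels)):
            error = levels[i - 1] - levels[i]  # misprediction of the lower level
            levels[i] += rate * error
    return levels

print(settle([0.0, 0.0, 0.0], signal=1.0))
# all three levels converge toward the signal value 1.0
```

The point of the sketch is just the direction of information flow: predictions flow down, errors flow up, and only the errors drive learning.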

EDIT: Just realized, this model explains tulpas. Also has connections to perceptual control theory, confirmation bias and people's general tendency to see what they expect to see, embodied cognition, the extent to which the environment affects our thought... whoa.

I get confused when people use language that talks about things like "fairness", or whether people are "deserving" of one thing or another. What does that even mean? And who or what is to say? Is it some kind of carryover from religious memetic influence? An intuition that a cosmic judge decides what people are "supposed" to get? A confused concept people invoke to try to get what they want? My inclination is to just eliminate the whole concept from my vocabulary. Is there a sensible interpretation that makes these words meaningful to atheist/agnostic consequentialists, one that eludes me right now?

Here are some things people might describe as "unfair":

  • Someone shortchanges you. You buy what's advertised as a pound of cheese, only to find out at home that it's only four-fifths of a pound; the storekeeper had their thumb on the scale to deliberately mis-weigh it.
  • Someone passes off a poor-quality item as a good one. You buy a sealed box of cookies, only to find out that half of them are broken and crumbled due to mishandling at the store.
  • Someone entrusted with a decision abuses that trust to their advantage. The facilities manager of a company doesn't hire the landscaping company that makes the best offer to the company, but instead the one that offers the best kickback to the facilities manager.
  • Someone uses a position of power to take something that isn't theirs; especially when the victim can't do anything about it. A boy's visiting grandmother gives him $50 to buy a video game for his birthday; but as soon as the grandmother has left, the boy's mother takes the money away and uses it to buy liquor for herself.
  • Someone abandons a responsibility, leaving it to others to cover. Four people go out to dinner together; and the bill comes to $100. One person excuses himself "to go to the restroom," but doesn't come back, so the others have to pay his share of the bill as well as their own.
  • Someone takes advantage of a person's weak or ignorant position. A taxi driver, knowing that a tourist doesn't know the city, takes a deliberately circuitous route to run up the meter.
  • Someone uses asymmetrical information to deprive others of a stronger negotiating position. An employer tells each of her employees individually that they are poor performers, easily replaceable, and unlikely to get a raise; so that they do not realize that together they are not easily replaceable and that by collective bargaining they could negotiate for higher wages.
  • Someone breaks agreed-upon rules to take something of value. A poker player uses a trick to put a card into play that wasn't dealt to him — the classic "ace up the sleeve" — in order to win money that another player would have won.
  • Someone entrusted to do a good job instead does a bad job in order to gain an advantage some other way. A star sports player deliberately plays poorly so his team will lose a game they are strongly favored to win, allowing people who have bet against his team to win big.
  • Someone gets away with breaking the rules by making outside arrangements with those responsible for enforcing them. By donating to the "police charitable fund," you get a bumper sticker that makes it less likely the police will pull you over if you break the traffic laws.

What sorts of things do you see in common among these situations?

What sorts of things do you see in common among these situations?

Your list seems a bit... biased.

Let's throw in a couple more situations:

  • A homeless guy watches a millionaire drive by in a Lamborghini. "That's not fair!" he says.
  • An unattractive girl watches an extremely cute girl get all the guys she wants and twirl them around her little finger. "That's not fair!" she says.
  • A house owner learns that his house will be taken away from him under an eminent domain claim by the state which wants a developer to build a casino on the land. "That's not fair!" he says.
  • A union contractor is undercut on price by a non-union contractor. "That's not fair!" he says.

While people say "That's not fair" in the above examples and in these, it seems there are two different clusters of what they mean. In the first group, the objection seems to be to self-serving deception of others, particularly violation of agreements (or what social norms dictate are implicit agreements). Your examples don't involve deception or violation of agreements (except perhaps in the case of eminent domain), and the objection is to inequality. I find it strange that the same phrase is used to refer to such different things.

I think you could say that in both groups, people are objecting because society is not distributing resources according to some norm of what qualities the resource distribution is supposed to be based on.

In the first group of examples, people are deceiving others and violating agreements, and society says that people are supposed to be rewarded for honest behavior and keeping agreements.

For the second group of examples:

  • The homeless person example is a bit tricky, since there are multiple different norms that they might be appealing to, but suppose that the homeless person used to be a hard worker before he got laid off and lost his home. The homeless person may then be objecting that society is supposed to reward a willingness to put in hard work, whereas he doesn't perceive the millionaire as having worked equally hard. Or, the homeless person may think that society should provide some minimum level of resources to everyone, and the fact that he has nothing while another person has millions demonstrates a particularly blatant violation of this rule.
  • There's a social ideal saying that people should be rewarded for their "internal" characteristics (like honesty) rather than "external" ones (like appearance), so the unattractive girl is objecting to the attractive girl being rewarded for something she's not supposed to be rewarded for.
  • The house owner is objecting because we usually think that people should be allowed to keep the property they have worked to have, and the eminent domain claim is violating that intuition.
  • The union contractor is complaining because he thinks that being unionized provides benefits for the profession as a whole, and that the non-union contractor is getting a personal benefit while defecting against the rest of the profession.

Regardless of what your ideal society looks like, creating it probably requires consistently maintaining some algorithm that rewards certain behaviors while punishing others. Fairness violations could be thought of as situations where the algorithm doesn't work, and people are being rewarded for things that an optimal society would punish them for, or vice versa.

You could also say that in both groups, there is actually an implicit agreement going on, with people being told (via e.g. social ideals and what gets praised in public) that "if you do this, then you'll be rewarded". If you buy into that claim, then you will feel cheated if you do what you think you should do, but then never get the reward.

Of course, the situation is made more complicated by the fact that there is no consistent, universally agreed-upon norm of what the ideal society should be, nor of what would be the optimal algorithm for creating it. People also have an incentive to push ideals which benefit them personally, whether as a conscious strategy or as an unconscious act of motivated cognition. So it's not surprising that people will have widely differing ideas of what "fair" behavior actually looks like.

I find it strange that the same phrase is used to refer to such different things.

However looking at reality, the phrase is used in all these ways, isn't it?

As Bart Wilson mentions here, a century ago the word "fairness" referred exclusively to the first cluster. However, due to various political developments during the past century it has drifted and now refers to a confused mix of both.

It's not a theistic concept - if anything, it predates theology (some animals have a sense of fairness, for example). We build social structures to enforce it, because those structures make people better off. The details of fairness algorithms vary, but the idea that people shouldn't be cheated is quite common.

What does that even mean?

I am with Stanislaw Lem -- it's hard to communicate in general, not just about fairness. I find so many communication scenarios in life resemble first contact situations.

How strong is the evidence in favor of psychological treatment really?

I am not happy. I suffer from social anxiety. I procrastinate. And I have a host of other issues that are all linked, I am certain. I have actually sought out treatment with absolutely no effect. On the recommendation of my primary care physician I entered psychoanalytic counseling and was appalled by the theoretical basis and practical course of "treatment". After several months without even the hint of a success I aborted the treatment and looked for help somewhere else.

I then read David Burns' "Feeling Good", browsing through, taking notes and doing the exercises for a couple of days. It did not help, of course in hindsight I wasn't doing the treatment long enough to see any benefit. But the theoretical basis intrigued me. It just made so much more sense to be determined by one's beliefs than a fear of having one's balls chopped off, hating their parents and actively seeking out displeasure because that is what fits the narrative.

Based on the key phrase "CBT" I found "The Now Habit", and reading it actually helped me subdue my procrastination long enough to finish my bachelor's degree in a highly technical subject with grades in the highest quintile. Then I slipped back into a phase of relative social isolation, procrastination and so on.

We see these phenomena consistently in people. We also see them consistently in animals held in captivity unsuited to their species' specific needs. I am less and less convinced that this block of anxiety, depression and procrastination is a disease; it looks more like a reaction to an environment, in the broadest sense, that is inherently unsuitable for humans.

The proper and accepted procedure for me would be to try counseling again, this time with a cognitive behavioral approach. But I am unwilling to commit that much time for uncertain results, especially now that I want to travel or do a year abroad or just run away from it all. (Suicide is not an option) What lowers my odds of success even more is that I never feel understood by people put in place to understand in various venues. So how could such a treatment help?

I am open to bibliotherapy. I don't think I am open to traditional or even medical therapy.

but a reaction to an environment in the broadest sense inherently unsuitable to humans.

So, can you say more about what aspect of your environment is bugging you? Captivity?? Do you want to try living somewhere more "outdoors"?

I have suffered from social anxiety continuously and depression off and on since childhood. I've sought treatment that included talk therapy and medication. Currently I am doing EMDR therapy which may or may not end up being helpful, but I don't expect it to work miracles. Everyone in my immediate family has had similar issues throughout their lives. I feel your pain. Despite not being perfect and being in therapy, I feel like my life is going pretty well. Here is what has worked for me:

Acceptance: Not everyone can be or should be the life of the party. Being quiet or reserved or shy is a perfectly acceptable way to live your life. You can still work on becoming comfortable in more social situations but you are fine right now. There are plenty of people who will like you just as you are, even if your social skills are far from perfect. Harsh self-judgement can make anxiety worse and lead to procrastination and depression. What I try to do as best I can is to just do whatever I feel like in the moment, and just let the world correct me. I try not to develop too many theories about how the world will react to me since I know from experience that those theories will be biased and pessimistic.

Decide what you want from the world: I guess this is somewhat generic life advice, but it has really worked for me. I decided fairly early on what I wanted to get from the social world. I wanted 3 things.

  • marriage
  • children
  • a good career

Deciding those things, I plugged away at getting them. I was completely incompetent at talking to women but with some help from e-harmony I found one who I was able to be comfortable with and who liked me. We got married 6.5 years ago and we have a 2 year old daughter and another child on the way. Professionally, I found a career that involves a minimum of politicking and no customer interaction. And yet it is both intellectually satisfying and highly remunerative. Even though neither my home life nor my professional life are perfect, achieving my basic life goals has given me a deep feeling of confidence and satisfaction that I can use to counter feelings of anxiety and depression as they come.

Each step I took along the path towards my goals gave me more confidence to move forward, but that confidence wasn't necessarily automatic. I have to periodically brag to myself about myself because otherwise I will naturally focus on my failures and weaknesses and start to feel like a loser. You should be very proud of your accomplishments in college. Most people could not do what you have done. Remind yourself of that. Feel good about yourself.

Humans are diverse.

I mean this not only in the sense of them coming in all kinds of shapes, colours and sizes, having different world views and upbringings attached to them, but also in the sense of them having different psychological, neurological and cultural makeup. It does not sound like something that needs to be explicitly said, but apparently it does.

Of course, some have begun to realise that the usual population for studies is WEIRD, but the problem goes deeper and further. Even if the conscientious scientist uses larger populations, more representative of the problem at hand, the conclusions drawn tend to ignore human diversity.

One of the culprits is the concept of "average" or at least a misuse of it. The average person has an ovary and a testicle. Completely meaningless to say, yet we are comfortable in hearing statements like "going to college raises your expected income by 70%" (number made up) and off to college we go. Statements like these suppress a great deal of relevant information, namely the underlying, inherent diversity in the population. Going to college may increase lifetime earnings, but the size of this effect might be highly dependent on some other factor like inherent cognitive ability and choice of major.
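The ovary/testicle point can be made numerically (my own invented 50/50 sample, purely to illustrate the point): the mean of a bimodal population can be a value that no individual in it actually has.

```python
# The average of a bimodal population describes no one in it.
# Invented sample: 50 people with 0 testes, 50 with 2.
import statistics

testes = [0] * 50 + [2] * 50
print(statistics.mean(testes))  # 1.0 - a count that no one in the sample has
```

The same failure mode hides inside any headline effect size: "raises expected income by 70%" averages over subgroups whose individual effects may be nothing like 70%.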

Now that is obvious, you might say, yet virtually all research proceeds as if it were not. It was surprising to see that the camel has two humps; that is, one part of the population seems to be incapable of learning programming, while the other part can learn it, and this can be determined by the answer to a single question. Research on exercise and diet is massively convoluted with questions about endurance/strength and carbs/fats. Might this be because of ignoring underlying biological factors?

People are touting the coming age of personalised medicine as they see massively diminishing returns on generic medicine. Ever more diseases are hypothesised to have very specific causes for each person, necessitating ever more specialised treatment. The effects of psychedelic substances are found to be dependent on the exact psychological makeup, e.g. cannabis causing psychosis only in individuals already at risk for such episodes.

There is no exact point to this rant. Just the observation that ever more statements are similar to saying "having unprotected sex with your partner has a high probability of leading to pregnancy" to a homosexual man.

It was surprising to see that the camel has two humps, that is, one part of the population seems to be incapable of learning programming, while the other is.

The study you're probably thinking of failed to replicate with a larger sample size. While success at learning to code can be predicted somewhat, the discrepancies are not that strong.

http://www.eis.mdx.ac.uk/research/PhDArea/saeed/

The researcher didn't distinguish the conjectured cause (bimodal differences in students' ability to form models of computation) from other possible causes. (Just to name one: some students are more confident; confident students respond more consistently rather than hedging their answers; and teachers of computing tend to reward confidence).

And the researcher's advisor later described his enthusiasm for the study as "prescription-drug induced over-hyping" of the results ...

Clearly further research is needed. It should probably not assume that programmers are magic special people, no matter how appealing that notion is to many programmers.

There are three separate issues:

(a) The concept of averaging. There is nothing wrong with averages. People here like maximizing expected utility, which is an average. "Effects" are typically expressed as averages, but we can also look at distribution shapes, for instance. However, it's important not to average garbage.

(b) The fact that population effects and subpopulation effects can differ. This is true, and not surprising. If we are careful about what effects we are talking about, Simpson's paradox stops being a paradox.

(c) The fact that we should worry about confounders. Full agreement here! Confounders are a problem.
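Point (b) can be made concrete with the classic kidney-stone-style numbers (invented here in that standard pattern, not data from the thread): a treatment can be better in every subgroup yet worse in the pooled data, because group sizes differ.

```python
# Simpson's paradox: the treatment wins in each subgroup but loses overall,
# because the treated and control arms have different subgroup mixes.
# Numbers follow the classic textbook pattern; they are illustrative.

groups = {
    # group: (treated_success, treated_total, control_success, control_total)
    "mild":   (81, 87, 234, 270),
    "severe": (192, 263, 55, 80),
}

for name, (ts, tn, cs, cn) in groups.items():
    print(name, round(ts / tn, 3), round(cs / cn, 3))  # treated beats control in both

# Pooling flips the comparison:
ts = sum(g[0] for g in groups.values())  # 273
tn = sum(g[1] for g in groups.values())  # 350
cs = sum(g[2] for g in groups.values())  # 289
cn = sum(g[3] for g in groups.values())  # 350
print("pooled", round(ts / tn, 3), round(cs / cn, 3))  # treated now loses
```

Nothing paradoxical remains once you notice the treated arm took on far more of the "severe" cases; being careful about *which* effect you are quoting dissolves the puzzle, exactly as (b) says.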


I think one big problem is just the lack of basic awareness of causal issues on the part of the general population (bad), scientific journalists (worse!), and sometimes even folks who do data analysis (extremely super double-plus awful!). Thus much garbage advice gets generated, and much of this garbage advice gets followed, or becomes conventional wisdom somehow.

I've been struggling with how to improve in running all last year, and now again this spring. I finally realized (after reading a lot of articles on lesswrong.com, and specifically the martial arts of rationality posts) that I've been rationalizing that Couch to 5k and other recommended methods aren't for me. So I continue to train in the wrong way, with rationalizations like: "It doesn't matter how I train as long as I get out there."

I've continued to run intensely and in short bursts, with little success, because I felt embarrassed to have to walk any, but I keep finding more and more people who report success with programs where you start slowly and gradually add in more running.

Last year, I experimented with everything except that approach, and ended up hurting myself by running too far and too intensely several days in a row.

It's time to stop rationalizing, and instead try the approach that's overwhelmingly recommended. I just thought it would be interesting to share that recognition.

Research on mindfulness meditation

Mindfulness meditation is promoted as though it's good for everyone and everything, and there's evidence that it isn't-- going to sleep is the opposite of being mindful, and a mindfulness practice can make sleep more difficult. Also, mindfulness meditation can make psychological problems more apparent to the conscious mind, and more painful.

The difficulties which meditation can cause are known to Buddhists, but are not yet known to researchers or the general public. The commercialization of meditation is part of the problem.

This isn't a question, just a recommendation: I recommend everyone on this site who wants to talk about AI familiarize themselves with AI and machine learning literature, or at least the very basics. And not just stuff that comes out of MIRI. It makes me sad to say that, despite this site's roots, there are a lot of misconceptions in this regard.

Not like I have anything against AI and machine learning literature, but can you give examples of misconceptions?

Do you have a recommendation for a resource that explains the basics in a decent matter?

How good is the case for taking adderall if you struggle with a lot of procrastination and have access to a doctor to give you a prescription?

Tyler Cowen talks with Nick Beckstead about x-risk here. Basically he thinks that "people doing philosophical work to try to reduce existential risk are largely wasting their time" and that "a serious effort looks more like the parts of the US government that trained people to infiltrate the post-collapse Soviet Union and then locate and neutralize nuclear weapons."

My Straussian reading of Tyler Cowen is that a "serious" MIRI would be assembling and training a team of hacker-assassins to go after potential UFAIs instead of dinking around with decision theory.

My Straussian reading of Tyler Cowen is that a "serious" MIRI would be assembling and training a team of hacker-assassins to go after potential UFAIs instead of dinking around with decision theory.

A "serious" MIRI would operate in absolute secrecy, and the "public" MIRI would never even hint at the existence of such an organisation, which would be thoroughly firewalled from it. Done right, MIRI should look exactly the same whether or not the secret one exists.

How do I decide whether to get married?

  • My girlfriend of four years and I are both graduating college.
  • I haven't found employment yet, and she's returning home for work.
  • As near as I can tell, we're very compatible.

Pros

  • We are very fond of each other, get a lot of value out of each other's time.
  • We've been able to talk about the subject sanely.
  • Status
  • We agree on religion and politics.
  • Married guys make more on average, but the arrow of causality could point in either direction or come from something else.
  • Financial benefits

Cons

  • Negative Status associated with marrying young?
  • No jobs yet, no clear home or area to live in.
  • She sometimes gets mad at me for things I'm "just supposed to know" to do, not do, say, or not say. I'm not sure if she's right and I'm a jerk.

She has said that she doesn't want to marry me if she's just my female best friend that I sleep with. But I don't know how to evaluate what she's asking. There are a number of possibilities. Maybe I don't feel the requisite feelings and thus she wouldn't want to be married. Maybe I do have the feelings and I have no way to evaluate whether I do or not. Maybe I'm not ever going to feel some extra undetected thing X, ever, and so I should just go through the motions saying that I do, and our marriage prospects are entirely unchanged. Maybe this is just some signalling ritual we have to go through.

We both are concerned that I've not really had a relationship with anyone other than her, so there are no points of comparison for me to make.

In your list you didn't mention the topic of having children. If you marry someone with the intention of spending the rest of your life together with them, I think you should be on the same page with regard to having children before you marry.

What exactly do you think/hope will change between the current situation (which I assume involves you two living together) and the situation if you were to marry?

Don't get married unless there is a compelling reason to do so. There's a base rate of 40-50% for divorce, and at least some proportion of existing marriages are unhealthy and unhappy. Divorce is one of the worst things that can happen to you, and many of the benefits of marriage to happiness are because happier people are more likely to get married in the first place.

I'm an Orthodox Jew, and I'd be interested to connect with others on LW who are also Orthodox. More precisely, I'm interested in finding other LWers who (a) are Jewish, (b) are still religious in some form or fashion, and (c) are currently Orthodox or were at some point in the past.

I have trouble with the statement "In the end, we're all insignificant." I mean I get the sentiment, which is of awe and aims to reduce pettiness. I can get behind that. But I have trouble if someone uses it in an argument, such as: "Why bother doing X; we're all insignificant anyway."

Because, if you look closely, "significance" is not simply a property of objects. It is, at the very least, a function of objects, agents and scales. For example you can say that we're all insignificant on the cosmic scale; but we're also all insignificant on the microscopic scale. We're also insignificant for some trees in the middle of the rainforest or an alien in another galaxy. We're almost completely insignificant to some random person in the past, present or future, but much more significant to the people around us.

I've been reading about maximizers and satisficers, and I'm interested to see where LessWrong people fall on the scale. I predict it'll be significantly on the maximizer side of things.

A maximizer is someone who always tries to make the best choice possible, and as a result often takes a long time to make choices and feels regret for the choice they do make ('could I have made a better one?'). However, their choices tend to be judged as better, e.g. maximizers tend to get jobs with higher incomes and better working conditions, but to be less happy with them anyway. A satisficer is someone who tries to make a 'good enough' choice - they tend to make choices faster and be happier with them, despite the choices being judged (generally) as worse than those of maximizers.

If you want, take this quiz

And put your score into the poll below: [pollid:682]

I wonder what the person who submitted the number 1488 was thinking. (Maximizing their answer, perhaps.)