Open thread, 24-30 March 2014

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Duration set to six days to encourage Monday as first day.


Example #149 of why it's difficult to specify bets...

Louie texted me a screenshot showing that Zagat had given an opinion on Subway (the fast-food chain). My girlfriend said "No way," so we both specified a bet that if we went to the Zagat website, we wouldn't be able to find a Zagat rating for Subway. She said 40% and I said 65%. When we checked, it turned out Zagat had conducted a survey of people who visit fast food joints, and Subway had been one of the restaurants they got survey results for. So does that count as Zagat giving Subway a rating? I don’t know. I was just thinking of "official Zagat ratings," rather than survey ratings, but it's technically true that there's a rating for Subway on the Zagat website because of that survey of random people who eat fast food.

What I really need is a panel of five trusted judges to decide, in contested cases, whether my bets have been won or lost.

I tried to code a simple bot for recurring threads on LW, based on bots written for Reddit. It doesn't work: apparently LW either has no API, or one that differs from vanilla Reddit's. If there is an API, is there documentation for it that I can access?

At my workplace, the question came up of how best to publicly recognise people for good work while minimising the politics, friction, and jealousy that come about as a direct result. We have only just grown past the point where we all know each other well, which is why this sort of thing is becoming interesting.

My initial response to the question was "Make being praised unpleasant, using ugly trophies (sports team strategy) or stupid hats (university graduation strategy)" but I would like to say something more upbeat as well.

Is anyone aware of good writing on the subject/google keywords I could use to find the literature?

You don't want to make being praised unpleasant for the recipient -- that leads to perverse incentives. And you don't want to give an award a stupid name or an embarrassing shape -- part of the point of this sort of thing is that it looks good on your resume or perched over your desk. You want to mark their achievement in a way they'll genuinely appreciate, but simultaneously add symbolism to make their coworkers feel that their status hasn't been diminished.

I think what you're looking for is a little temporary public humiliation, not intrinsic to the award but coming along with more standard recognition. You could do this in several different ways. If it's a fairly small group and the awards are a fairly big deal, for example, you could run a roast as part of the party following the award. You could probably contrive ways to add this kind of symbolism to physical awards, too.

I was looking for an old Robin Hanson post to use as an example in an upcoming post of mine, and tried to get there through the Opposite Sex, an old post of Eliezer's. When I click that link, though, I get a "You aren't allowed to do that." error, which appears to be a change in the last two years. Anyone know what happened? (My guess is Eliezer or someone decided to retract the article, but it would be nice to know for sure.)

On Facebook one time, there was some discussion or other about gender, and a link to the post was made. EY said something to the effect of 'I no longer endorse that post sufficiently enough to keep it up', and took it down.

Cryonics ideas in practice:

"The technique involves replacing all of a patient's blood with a cold saline solution, which rapidly cools the body and stops almost all cellular activity. "If a patient comes to us two hours after dying you can't bring them back to life. But if they're dying and you suspend them, you have a chance to bring them back after their structural problems have been fixed," says surgeon Peter Rhee at the University of Arizona in Tucson, who helped develop the technique."

http://www.newscientist.com/article/mg22129623.000-gunshot-victims-to-be-suspended-between-life-and-death.html

Being sick makes me stupid. Yesterday I was teaching economics while I had a mild cold. I made multiple simple math mistakes, far more than normal. I need to be mindful that being sick reduces my cognitive capacities.

It took me a long time to find LessWrong, and I found it through a convoluted and ultimately entirely random series of events. English is neither my first language nor the language of the country I live in, so I'd love to find a similar community in my own language, German, or more generally interesting smaller but active communities in languages other than English. How would I go about that?

There are LessWrong meetups in many countries, in particular there are 4 in Germany.

See http://wiki.lesswrong.com/wiki/Less_Wrong_meetup_groups

Facebook bought Oculus Rift for $2 billion. What makes this, and so many other large deals, such clean numbers? Are the press rounding the details? Are the companies only releasing approximate or estimate numbers? Can the value of a company like Oculus really not be estimated to the nearest 10%? Or do these whole numbers just serve as nice Schelling points on which to hinge a bargain? Or am I forgetting lots of ugly-numbered deals?

(The WhatsApp purchase was quoted to 2 significant figures, and this list on Wikipedia shows mostly 2-3 significant figures, though some are probably converted from other currencies.)

In cases like this, a large portion of the "money" paid is actually in the form of shares, which can vary wildly on a day-to-day basis (especially during a takeover!). It doesn't make sense to specify the value of it too precisely because nobody knows what the shares are going to be worth tomorrow.

What makes you think that these numbers are determined by some kind of rational cost-benefit analysis rather than those with the money rattling off numbers until those with the property give in?

Why not all of the above? We can see some rounding already; http://investor.fb.com/releasedetail.cfm?ReleaseID=835447 says

Facebook today announced that it has reached a definitive agreement to acquire Oculus VR, Inc., the leader in immersive virtual reality technology, for a total of approximately $2 billion. This includes $400 million in cash and 23.1 million shares of Facebook common stock (valued at $1.6 billion based on the average closing price of the 20 trading days preceding March 21, 2014 of $69.35 per share). The agreement also provides for an additional $300 million earn-out in cash and stock based on the achievement of certain milestones.

There's rounding right there (23.1m * 69.35 is not a round number). And there's plenty of uncertainty about how much they will actually pay: how can anyone know how much of that earn-out will ultimately be paid?
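A quick check of that multiplication (my own arithmetic; only the share count, share price, and cash component come from the release):

```python
# Stock component of the Oculus deal: 23.1 million shares at the
# 20-day average closing price of $69.35 quoted in the release.
stock = 23.1e6 * 69.35      # not actually a round number
total = stock + 400e6       # plus the $400M cash component
print(round(stock))         # 1601985000 -- reported as "$1.6 billion"
print(round(total))         # 2001985000 -- "approximately $2 billion"
```

So the headline "$2 billion" hides roughly $2 million of rounding even before the uncertain $300M earn-out is counted.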

LW may be interested to learn about Amazon Smile, which gives 0.5% of your Amazon purchases to charity, and the Smile Always Chrome extension that will route your browser to smile.amazon.com by default. (Yes, you can support MIRI through Amazon Smile.) Total setup time estimated at under 5 minutes.

Oh yeah, it looks like they're having some kind of promotion where if you sign up and make a purchase by March 31, they will give an extra $5 to your chosen charity.

I have been using Amazon Smile and Smile Always for MIRI for about a year.

IIRC, Amazon Smile used to be listed on MIRI's Donate for free page, but has since been replaced by "Good Shop". Good Shop appears to give a higher percentage, but I was unable to get the browser extension working so that it happened automatically, so I still use Smile. If anyone knows of a way to get it working, I'd be happy to hear it. But I tried to do it manually for a while, and I just don't remember often enough.

I welcome criticism of my new personal favorite population axiology:

The value of a world-history that extends the current world-history is the average welfare of every life after the present moment. For people who live before and after the current moment, we need to evaluate the welfare of the portion of their life after the current moment. The welfare of a person's life is allowed to vary nonlinearly with the number of years the person lives a certain kind of life, and it's allowed to depend on whether the person's experiences are veridical.

This axiology implies that it's important to ensure that the future will contain many people who have better lives than us; it's consistent with preferring to extend someone's life by N years rather than creating a new life that lasts N years. It's immune to Parfit's Repugnant Conclusion, but doesn't automatically fall prey to the opposite of the Repugnant Conclusion. It implies that our decisions should not depend on whether the past contained a large, prosperous civilization.
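To make the proposal concrete, here is a toy numerical sketch (the function names, years, and welfare numbers are all my own illustrative choices, not part of the proposal): each life is a list of per-year welfare values, and a world-history is scored by the average post-present welfare per life. It reproduces the stated preference for extending a life over creating an equally long new one.

```python
def future_welfare(birth_year, yearly_welfare, present_year):
    """Welfare of the portion of a life after the present moment.
    A simple sum; the axiology also permits nonlinear aggregation."""
    return sum(w for i, w in enumerate(yearly_welfare)
               if birth_year + i >= present_year)

def world_value(lives, present_year):
    """Average post-present welfare over lives that are not
    entirely in the past (fully-past lives are ignored)."""
    living = [(b, w) for b, w in lives if b + len(w) > present_year]
    if not living:
        return 0.0
    return sum(future_welfare(b, w, present_year)
               for b, w in living) / len(living)

# From 2014, one person born in 1990 who would otherwise live to 2024:
extend = [(1990, [1.0] * 44)]                      # give them 10 extra years
create = [(1990, [1.0] * 34), (2014, [1.0] * 10)]  # or add a new 10-year life

print(world_value(extend, 2014))  # 20.0
print(world_value(create, 2014))  # 10.0 -- extending beats creating
```

Averaging over lives is also what blocks the Repugnant Conclusion in this model: adding barely-worth-living lives drags the average down instead of adding to a total.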

There are straightforward modifications for dealing with general relativity and splitting and merging people.

The one flaw is that it's temporally inconsistent: if future generations average the welfare of lives after their own "present moments", they will make decisions we disapprove of.

I build a robot that hibernates until the last person presently alive dies, then exterminates all people who are poor, unhappy, or don't like my robot. Good thing?

Recently I changed some of my basic opinions about life, in large part because of interaction with LessWrong (mostly along the axes Deism -> Atheism, ethical naturalism -> something else (?)).

It inspired me to try to summarize my most fundamental beliefs. The result is as follows:

  1. Epistemology

1.1. Epistemic truth is to be determined solely by the scientific method / Occam's razor.

1.2. The worldview of mainstream science is mostly correct.

1.3. The many religious / mystical traditions are wrong.

  2. Philosophy of mind

2.1. Consciousness is the result of computing processes in the brain. In particular, a machine implementing the same computations would be conscious. However, in general I don't know what consciousness is.

2.2. Identity is not fundamentally meaningful. However, there might be useful "fuzzy" variants of the concept.

  3. Metaethics

3.1. Humans are agents with (approximately) well-defined utility functions.

3.2. The moral value of an action is the expectation value of the utility function of the respective agent.

3.3. I should take actions with as much value as possible. This is the only meaningful interpretation of "should".

  4. Ethics

4.1. Human utility functions are complex.

4.2. I cannot give anything close to a full description of my utility function, but it seems to involve terminal values such as: beauty, curiosity, humor, kindness, friendship, love, sexuality / romance, pleasure... These values are computed on all sufficiently human agents (but I don't know what "sufficiently human" means). The weights for myself and my friends / loved ones might be higher but I'm not sure.

Less fundamental and less certain are:

  5. Metaphysics

5.1. UDT is the correct decision theory.

5.2. Epistemic questions, as opposed to decision-theoretic questions, don't make fundamental sense (I realize the apparent contradiction with 1.1, but 1.1 is still a useful approximation, and there's also a meta-epistemic level on which UDT itself follows from Occam's razor). Subjective expectations are ill-defined.

5.3. Tegmark's level IV multiverse is real, or at least as "real" as anything is.

I'm curious to know how many LessWrongers have similar vs different worldviews.

The Good, the Bad, and the Just: Justice Sensitivity Predicts Neural Response during Moral Evaluation of Actions Performed by Others.

Morality is a fundamental component of human cultures and has been defined as prescriptive norms regarding how people should treat one another, including concepts such as justice, fairness, and rights. Using fMRI, the current study examined the extent to which dispositions in justice sensitivity (i.e., how individuals react to experiences of injustice and unfairness) predict behavioral ratings of praise and blame and how they modulate the online neural response and functional connectivity when participants evaluate morally laden (good and bad) everyday actions. Justice sensitivity did not impact the neuro-hemodynamic response in the action-observation network but instead influenced higher-order computational nodes in the right temporoparietal junction (rTPJ), right dorsolateral and dorsomedial prefrontal cortex (rdlPFC, dmPFC) that process mental states understanding and maintain goal representations. Activity in these regions predicted praise and blame ratings. Further, the hemodynamic response in rTPJ showed a differentiation between good and bad actions 2 s before the response in rdlPFC. Evaluation of good actions was specifically associated with enhanced activity in dorsal striatum and increased the functional coupling between the rTPJ and the anterior cingulate cortex. Together, this study provides important knowledge in how individual differences in justice sensitivity impact neural computations that support psychological processes involved in moral judgment and mental-state reasoning.

Scott Aaronson on subjectivity of qualia:

no matter how much is discovered about neurobiology and the measurable correlates of consciousness, it seems to me that stoners will always be able to ask each other, “dude, what if like, my red is your blue?”

Lol.

no matter how much is discovered about mathematics and the measurable regularities of reality, it seems to me that stoners will always be able to ask each other, “dude, what if like, two plus two isn't four?”

Seriously though, that's a really bad argument; why have you added it here?

A friend of mine has mild anorexia (she's on psych meds to keep it contained) and recently asked me some advice about working out. She told me that she is mainly interested in not being so skinny. I offered to work out with her one day of the week to make sure she's going about things correctly, with proper form and everything.

The thing is, just going to the gym and working out isn't effective if her diet and sleeping cycle aren't also improved. I would normally be really blunt about these other facts, but her dealing with anorexia probably complicates things a bit... especially the proper diet part. I was thinking that if she has trouble eating enough, maybe she could try drinking some protein shakes. But I'm not sure if that would actually be effective in helping her reach her goal of putting on more weight if she's not eating properly other times of the day. If anyone has any advice on how I could more effectively broach that subject without being insulting or belittling I would appreciate it.

If she's on medication to contain her anorexia, she knows she has an issue. You could start by simply asking her what she eats and listening empathetically.

I would also suggest that you think about your relationship with her. What does she want? Does she want your approval? Does she want you to tell her what to do, so she doesn't have to take responsibility herself? Does she care about looking beautiful to you? Does she want a relationship with you? Do you want a relationship with her?

Knowing answers to questions like that is important when you deal with deep psychological issues. It shapes how the words you say will be understood.

Do you have any thoughts about whether she's at risk for an exercise disorder?

I've seen a lot of discontent on LW about exercise. I know enough about physical training to provide very basic coaching and instruction to get people started, and I can optimize a plan for a variety of parameters (including effectiveness, workout duration and frequency, equipment cost and space requirements, gym availability, etc.). If anyone is interested in some free one-on-one help, post a request describing your situation, budget, and needs, and I'll write up some basic recommendations.

I don't have much in the way of credentials, except that I've coached myself for all of my training and have made decent progress (from sedentary fat weakling to deadlifting 415 lb at 190 lb bodyweight and 13% body fat). I've helped several friends, all of whom have made pretty good progress, and I've been able to tailor workouts to specific situations.

I've been reading a few books lately that I guess could be classified as "pop psychology" and "pop economics". (In this category I include books like Kahneman's Thinking, Fast and Slow; what I mean by "pop" is not that a book is shallow, but that it has a wide lay audience.) Now I'd like to turn to sociology, arguably the most general and all-encompassing of the social sciences. But when I google "pop sociology", all the books seem to have been written by economists or psychologists or non-academics such as Malcolm Gladwell. For instance, see here:

https://www.goodreads.com/shelf/show/pop-sociology

Are there no well-known pop-sociological books written by sociologists, and if so, what does this say about sociology as a discipline? You very seldom hear about sociological research in the media compared with economics and psychology, and surely there has to be an explanation for this?

It seems clear that for people with a bachelor's in CS, from a purely monetary viewpoint, getting a master's in the same area usually is dumb unless you plan on programming a long time.

This article says the average mid-career pay for MSc holders is $114,000. This says the mid-career bachelor's salary is $102,000. A master's means 12 to 24 months of forgone pay, during which you earn at best a ~$20,000/year stipend in some lucky cases or take on $50,000+ of debt. You need at least a decade of future work to justify this. And that likely overstates the benefits, since it does not control for ability.
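The back-of-the-envelope version of that decade figure (the salary premium comes from the two linked figures; the total-cost estimate is my own illustrative assumption):

```python
# Mid-career salary premium for the MSc, from the figures above.
premium = 114_000 - 102_000   # $12,000 per year

# Illustrative cost: ~18 months of forgone ~$80k entry-level pay.
# (My assumption; the comment gives a $20k stipend to $50k+ debt range.)
cost = 1.5 * 80_000           # $120,000

print(cost / premium)         # 10.0 -- roughly a decade to break even
```

With a smaller opportunity cost (a funded program, say) the break-even point moves earlier, but the qualitative conclusion is driven by how small the $12k premium is.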

I don't necessarily trust these statistics but employers can always make people write code on whiteboards to assess actual skill.

An exception might be if you want technically cutting-edge CS: Google for example prefers MSc/PhD guys. But I think most programming jobs are not like that.

IMO the real problem is that academia teaches computer science whereas what programmers need to know to be valuable is software engineering. Those seem to be rather different disciplines.

Disclaimer: I didn't study CS myself and this opinion is based on indirect evidence.

Every "proof" of Gödel's incompleteness theorem I've found online seems to stop after what I would consider to be the introduction. I find myself saying "yes, good, you've shown that it suffices to prove this fixed-point theorem... now where's the proof of the fixed-point theorem? Surely that's the actual meat of the proof." Anyone have a good source that shows the full proof, including why, for a particular encoding of sentences as numbers, the map P -> "P is not provable" must have a fixed point?
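For what it's worth, the missing piece is usually called the diagonal lemma, and its proof is only a few lines once a substitution function has been arithmetized. A standard sketch (textbook material, not something asserted by this thread):

```latex
% Diagonal lemma: for every formula $F(x)$ there is a sentence $G$ such
% that the theory proves $G \leftrightarrow F(\ulcorner G \urcorner)$.
%
% Sketch: let $\mathrm{sub}(m, n)$ be the representable function taking
% the Goedel number $m$ of a formula $A(x)$ to the Goedel number of
% $A(\underline{n})$.  Then define:
\begin{align*}
  D(x) &:= F(\mathrm{sub}(x, x)), \\
  d    &:= \ulcorner D(x) \urcorner, \\
  G    &:= D(\underline{d}) \;=\; F(\mathrm{sub}(\underline{d}, \underline{d})).
\end{align*}
% Since $\mathrm{sub}(d, d) = \ulcorner D(\underline{d}) \urcorner
% = \ulcorner G \urcorner$, the theory proves
% $G \leftrightarrow F(\ulcorner G \urcorner)$.  Taking $F(x)$ to mean
% ``$x$ is not provable'' gives the Goedel sentence.
```

The fixed point is essentially a quine: G talks about the result of substituting D's own number into itself, which is G.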

Rational thinking against fear in a TED talk by (ex) astronaut Chris Hadfield. Has anyone else seen it? I really enjoyed it, in particular the spider example.

No open_thread tag. ('Latest Open Thread' doesn't link to here)

Edit: For some reason the one before doesn't have the tag either.

I am assembling a list of interesting blogs to read, and for that purpose I'd love to see the kind of blog the people in this community recommend as a starting point. Don't see this just as a request for blogs matching my (unknown) taste; post blogs according to your own taste, in the hope that the recommendations scratch an itch in this community.

Here's a sampling of the best in my RSS reader:

gwern posts on Google+ and Kaj Sotala posts interesting stuff on Facebook. I also subscribe to a number of journals' tables of contents via this site to keep up with research, plus some stuff on arXiv.

Am I confused about frequentism?

I'm currently learning about hypothesis testing in my statistics class. The idea is that you perform some test and you use the results of that test to calculate:

P(data at least as extreme as your data | Null hypothesis)

This is the p-value. If the p-value is below a certain threshold then you can reject the null hypothesis (which is the complement of the hypothesis that you are trying to test).

Put another way:

P(data | hypothesis) = 1 - p-value

and if 1 - p-value is high enough then you accept the hypothesis. (My use of "data" is handwaving and not quite correct but it doesn't matter.)

But it seems more useful to me to calculate P(hypothesis | data). And that's not quite the same thing.

So what I'm wondering is whether, under frequentism, P(hypothesis | data) is actually meaningless. The hypothesis is either true or false, and depending on which, the data has a certain propensity of turning out one way or the other. It's meaningless to ask what the probability of the hypothesis is; you can only ask what the probability of obtaining your data is under certain assumptions.

I'm currently learning about hypothesis testing in my statistics class. The idea is that you perform some test and you use the results of that test to calculate:

P(data at least as extreme as your data | Null hypothesis)

This is the p-value. If the p-value is below a certain threshold then you can reject the null hypothesis.

This is correct.

Put another way:

P(data | hypothesis) = 1 - p-value

and if 1 - p-value is high enough then you accept the hypothesis. (My use of "data" is handwaving and not quite correct but it doesn't matter.)

This is not correct. You seem to be under the impression that

P(data | null hypothesis) + P(data | complement(null hypothesis)) = 1,

but this is not true because

  1. complement(null hypothesis) may not have a well-defined distribution (frequentists might especially object to defining a prior here), and
  2. even if complement(null hypothesis) were well defined, the sum could fall anywhere in the closed interval [0, 2].
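A concrete illustration of point 2, with toy numbers of my own (7 heads in 10 flips; null p = 0.5, one alternative p = 0.6): the two likelihoods sum to about 0.33, not 1, and turning them into P(hypothesis | data) requires a prior, which is exactly the step frequentist methods decline to take.

```python
from math import comb

def binom_pmf(k, n, p):
    """P(exactly k heads in n flips of a p-coin)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

k, n = 7, 10
p_h0 = binom_pmf(k, n, 0.5)   # likelihood under the null, p = 0.5
p_h1 = binom_pmf(k, n, 0.6)   # likelihood under one alternative, p = 0.6
print(p_h0 + p_h1)            # ~0.33 -- nowhere near 1

# P(hypothesis | data) exists only once you posit a prior:
prior_h0 = prior_h1 = 0.5     # an assumption, not part of the data
posterior_h0 = p_h0 * prior_h0 / (p_h0 * prior_h0 + p_h1 * prior_h1)
print(posterior_h0)           # ~0.35
```

Change the prior and the posterior moves with it, which is why the likelihoods alone cannot tell you how probable either hypothesis is.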

More generally, most people (both frequentists and bayesians) would object to "accepting the hypothesis" based on rejecting the null, because rejecting the null means exactly what it says, and no more. You cannot conclude that an alternative hypothesis (such as the complement of the null) has higher likelihood or probability.