Open thread, 16-22 June 2014

Previous open thread

 

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one.

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday and end on Sunday.

Comments


I've been ignoring Open Threads throughout my time on LW, but I've found out recently that this was to my detriment. While there is much noise (i.e., stuff I personally don't care about), there are some genuinely interesting things here.

At the same time, I feel like Discussion for me has just died out and no longer has anything interesting, apart from the Open Threads.

The problem (for me) is that Discussion was very easy to follow, while Open Threads are very hard to follow.

Is there an easier way to follow Open Threads? And/or a way we could start moving some of the Open Thread stuff back to Discussion?

As MathiasZaman said, there is a big push right now to move OT content back into Discussion, and from the increased volume I'm seeing in Discussion lately, I think it's succeeding.

Would anyone be interested in forming an R discussion/study/support group?

I have quite modest R skills, but I would like spectacular R skills that are the toast of the town and the envy of all who see them. I suspect I'm not the only person on LW with this desire, so I thought I'd sound out interest in a group to help mutually achieve this.

What I see such a group doing:

  • Sharing interesting or instructional datasets
  • Suggesting interesting projects
  • Showing off awesome stuff you've done
  • Sharing and discussing relevant media, resources and online content
  • General coordination and collaboration

Anyone interested?

I am an active R contributor on GitHub and Stack Overflow, and I would be willing to coordinate. Send me an email: rkrzyz at gmail

Are utilitarians theoretically obligated to prefer that Brazil win the World Cup? Consider: of the 32 participating countries, only the USA has a larger population, but the central place of soccer in Brazilian culture and their status as hosts mean that they have more at stake in this competition. So total utility would probably be maximized by a Brazil win.

These considerations would seem to make rooting for any other team immoral from a strict utilitarian perspective. This exposes some things I find problematic about utilitarianism. For example, I also have the intuition that it is okay for people to support their own team, even if that team's victory would make hundreds of millions of Brazilians unhappy. If you are a utilitarian player playing against Brazil, are you doing something morally wrong by trying to win? This seems absurd, but I can't see how to escape this conclusion.

These considerations would seem to make rooting for any other team immoral from a strict utilitarian perspective.

Only if rooting for a team makes it more likely for it to win. ;-)

I'm trying to track down a fallacy or effect that was once explained to me and which I found plausible: the idea that whoever has the more complex and detailed mental model of the topic under discussion wins the argument, independent of the actual truth of the matter (and assuming no malicious intent).

The example cited, as I remember it, was about visual (microscope) inspection of blood samples for some boolean factor (present or not). Two people got the same samples and were trained to recognize the factor; one was always told the truth, and the other was lied to a certain fraction of the time. After the learning period, both had to decide on the factor of some samples together. The result: even though the person who was lied to had the less accurate model, he almost always dominated the decision.

The offered explanation was that the lied-to candidate had the more complex model (it somehow had to incorporate factors representing the lies), and that this led to the availability of arguments (criteria to look for, supposedly explaining the difference) which could be used to convince the other person, despite the falsity of those arguments.

Problem is: I can't find any studies or the like supporting this. Do you know of such a model-strength effect? I think it is quite relevant, as it seems to be behind the ability of liars and skilled rhetoricians to convince an audience by making up complex and impressive structures independent of their truth (the truth just has to be sufficiently unavailable).

Oh my dear sweet God YES. Goodman and Tenenbaum wrote a book on probabilistic models of cognition, with a programming language and exercises for writing and running the models.

[squee intensifies]
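(The book's examples are written in Church, a probabilistic dialect of Lisp, but the core move it teaches -- define a generative model, then condition it on what you observed -- is easy to sketch in plain Python. The toy below is my own illustration, not an example from the book.)

```python
import random

def model():
    """Toy generative model: two independent fair coin flips."""
    a = random.random() < 0.5
    b = random.random() < 0.5
    return a, b

def query(n_samples=100_000):
    """Estimate P(a | a or b) by rejection sampling: keep only the
    samples consistent with the observation. This is the basic
    mechanism behind conditioning in a probabilistic program."""
    kept = [a for a, b in (model() for _ in range(n_samples)) if a or b]
    return sum(kept) / len(kept)

print(query())  # about 2/3 rather than 1/2: the observation shifts belief
```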

We often see people offering rewards for compelling arguments for changing their mind. Examples would be Sam Harris, for a counterargument to his book, and Jonathon Moseley, for showing that separation of church and state is found in the First Amendment; perhaps James Randi, for showing the existence of supernatural abilities, could be included as well. Of course, this sort of reward scheme creates a large incentive to not change your mind. Some of these are clearly publicity stunts, but if I sincerely wanted good evidence against my position, what would be the best way to go about it? Possibilities include:

  • Giving the reward to the best submission, regardless of whether it changes our mind or not.
  • Giving a larger reward if I don't change my mind, to compensate for bias in favor of my position.
  • Having a third party judge.

Thoughts?

I guess you should give the same reward to the most convincing argument, regardless of whether it really convinced you or not. That motivates the other people to do their best, and does not influence you in making the decision.

I don't like the idea of overcompensating for biases. I understand the reason behind it, but I am afraid that this approach creates its own specific problems. For example, how much should you overcompensate? I mean, if overcompensating is good, then the more you overcompensate, the more virtuous you are...

Does anyone have advice for effective learning in distracting/suboptimal environments? I know LW recommends textbooks and learning by accumulation instead of random walks, but I have at most 1-2 hours of uninterrupted time per day I can spend learning optimally vs. 8+ hours per day I could potentially use to learn sub-optimally (e.g. frequent distractions, sudden interruptions, hours between learning sessions) during downtime at work that is currently going to waste. Are there better formats than textbooks for these environments or would it be more effective to divide textbook material into sequences of micro-learning sessions? If so, how does one organize and divide this material for effective self-study? If not, are there other ways to effectively spend this time towards incremental self-improvement?

Make notes. Otherwise you risk spending a large part of today's hour repeating the stuff you learned during yesterday's hour.

If you have little time for learning, only learn one thing, not two in parallel, because that would make it even worse.

Regarding the LW meetup feedback results, I said:

I'll be writing up an analysis of results, but that takes time.

That was a month ago. Since then, I've spent about two pomodoros on it, and didn't get much actually written during those. I have three or four other things that I want to spend pomodoros on, and this has fallen by the wayside.

I want this analysis to get written, but there's no particular reason that it needs to be me who writes it. So if someone else would volunteer to write it, I'd be very grateful. The sanitized results are here. I'm not going to share the un-sanitized results, but I don't think there's anything in them that would improve the analysis: all that I've removed is the timestamps, meetup locations, and anonymous identifiers.

(If nobody volunteers, I'll try to prioritize this, and I'm increasing the steepness of my beeminder pomodoro goal as of next week (from 7/week to 8, how hardcore am I), so it should still get done.)

This isn't particularly deep analysis, more just aggregation. Here's my take on the results, though, after some totally biased and ad-hoc tallying:

A total of seventy-five users responded. Convenience, scheduling conflicts, and other personal issues were by far the most common reasons not to attend, appearing as a factor in almost half of the responses. Unfortunately there's not much we can do about this, except possibly giving more thought to location when scheduling, and that seems unlikely to happen given the issues I've seen with finding space and time. Two people felt uncomfortable with an otherwise convenient meetup's location, with a third having no personal complaints but describing complaints from others.

After that, a perception of the participants as too nerdy, weird, or socially awkward seems to be the most common complaint, with ten people citing one or more. A couple of these respondents attended no meetups and were presumably working from perceptions of the LW community at large, but most had. This seems to be a pivotal issue with our community's perception, but I'm not sure what to do about it. I imagine many feel it's a feature rather than a bug.

A lack of structure is a close third, at nine responses. Often this involved uncertainty over time and location, or scheduling difficulties thanks to inadequate information. Games and unstructured discussion seem to be seen as adding little value, with the latter in particular cited as allowing unproductive debate between a couple of participants to dominate meetups. Bad discussion norms were specifically mentioned as a factor in four responses. These are probably the most serious issues we can immediately do anything about: I've personally participated in both structured and unstructured meetups, and the difference is night and day. Establishing some best practices here would go a very long way.

Seven people felt that their meetups were unproductive, adding little value in terms of practical rationality or other useful skills; six complained of boredom. This is likely related to the complaints about structure; several people mentioned both.

Another six people felt intimidated by meetup participants or the LW community, most citing perceived intelligence or technical knowledge. Conversely, five felt that meetup participants were generally unimpressive (my notes say "not awesome enough"); most of these describe a hope for more accomplished peers, perhaps along the lines of Eliezer's mandatory secret identities. I suspect this might be an oblique way of saying they're too nerdy, as above.

Two complained of political differences, and two felt their meetup wasn't diverse enough.

Finally, two respondents described harassing behavior from a participant or organizer.

Thanks for this summary! This is a very important thing for the growth of the community.

I was thinking about whether being "too nerdy, weird, or socially awkward" is a bug or a feature, but it seems to me that we need to be more specific and look into details. Some things in our community are inherently weird (unusual in everyday discourse); debating artificial intelligence, for example. But some forms of social awkwardness (harassment, boredom, unproductive debates) can -- and should -- be fixed; I mean, not just for PR purposes, but because that also is a part of "becoming stronger". Let's see how far towards pleasant interaction we can go without sacrificing other values (such as honesty). I guess we can -- and should -- improve here a lot.

Maybe it's an issue of going meta at solving the wrong problem. If I want to have a group of people who talk about artificial intelligence, I must focus not only on the "artificial intelligence" part, but also on the "having a group of people" part. This is probably our blind spot, because the former feels like an academic subject, while the latter feels almost like an opposite of academia (so we are even tempted to countersignal our sophistication by being bad at it). People can get a Nobel Prize for being educated, but nobody gets a prize for making an environment where the former are happy to meet, debate, learn, and discover. All winning comes from people, and yet supporting other people in their winning is somehow low-status (as in: you are unable to win on your own, therefore the best you can do is to support others). Please note that this is specifically an academic bias -- in business, you can make a lot of money and status by creating stuff that other people need.

When we try to build the community, then "building the community" is the topic to focus on. Yeah, it can feel like making a community for the sake of making a community, which would be a lost purpose. But, some things are true for communities in general, because they are true for humans in general, and if we want to have a good community, we have to study that. Also, not everyone has to focus on this, but someone should -- preferably more than one person, so they can talk and share ideas. If you want to have a meetup debating artificial intelligence (or whatever else), create a subgroup that focuses on the topic, and a subgroup which focuses on the community. Both are necessary.

Bringing a box of cookies to the AI debate meetup could be more important than bringing an article about the latest discovery in AI. (And bringing an article about the latest discovery in AI is still preferable to just talking without really learning.) No, we don't want to get to the point where everyone brings cookies and no one debates LW topics -- but I suspect that even this strawman example is closer to a healthy and productive community than where many of us are now.

We need to apply our rationality, and to specifically apply it at creating rationalist communities. Yes, it is difficult. That shouldn't be a reason to avoid it, but a reason to focus on it. It is a problem to be solved. And it will not be solved by anyone other than us.

Let's see how far towards pleasant interaction we can go without sacrificing other values (such as honesty).

I rather suspect -- and this is me talking, not my interpretation of the survey data -- that this already concedes too much. I've talked to LWers who appeared to be hung up on honesty to the point of kneecapping themselves socially: not just preferring a more explicit interaction style, but outright refusing to deal with people who partake in perfectly normal social untruths. These sorts of extremes don't seem to be common, but insofar as they're a problem in some segments of the community, they're not going to be solved without at least a few concessions against existing values.

Properly exploring this would probably take a top-level post, but I think I can summarize by saying I agree with ChrisHallquist here.

When I was in California I noticed that Benja Fallenstein seemed to have a much better thought out way of using TagTime than I did. I asked him for more details by email, and he gave me permission to share the below with all of you:


Most importantly, I make sure that all my tags fit on a single screen on my phone; and that most of my pings need to get only two of these tags: (a) a category, and (b) a Likert scale rating from 1 to 7. (1 = very bad; 2 = bad; 3 = neutral to bad; 4 = neutral; 5 = neutral to good; 6 = good; 7 = very good)

I have TagTime linked to Beeminder, which lets me see the trends at a glance. I don't currently have any of them linked to goals which are difficult to meet.

Some goals linked to TagTime are: working hours; object-level working hours; sleep; missed pings; procrastination; "bad" (1-2), "neutral" (3-5), "good" (6-7).

I have it set to ping every 30 minutes on average and this works for me.

My categories are, tagtime tag first, meaning in parentheses:

  • anki (Miri: Anki)
  • l (Leisure)
  • m (Miri: Other)
  • m:adm (Miri: Admin)
  • m:fai (Miri: FAI research)
  • m:mth (Miri: Other math)
  • m:o (Miri: Outreach)
  • m:str (Miri: Strategic research)
  • mis (Missed ping; can be used together with most likely category, or by itself if I don't have a good guess)
  • mp (MSc / PhD)
  • p (Personal: Other)
  • p:adm (Personal: Admin)
  • pcr (Procrastination)
  • s (Semimiri: Other)
  • s:ag (Semimiri: Agentiness)
  • s:gtd (Semimiri: GTD, i.e. Getting Things Done system)
  • s:res (Semimiri: Other research)
  • s:tm (Semimiri: Time management)
  • slp (Sleep)

Additional tag:

sdy (Study -- used together with a category tag)

I have Anki cards for training the meaning of these tags.

I'm considering subdividing leisure into l:fic (fiction), l:soc (social), l:par (Skyping with parents), and l (other).
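(A scheme like this is easy to tally mechanically. Here is a minimal sketch of the aggregation step; the in-memory ping format and the sample tags are my own assumptions for illustration, not Benja's actual setup.)

```python
from collections import defaultdict

# Hypothetical pings, each as (category tag, Likert rating 1-7).
pings = [("m:fai", 6), ("l", 7), ("pcr", 2), ("m:fai", 5), ("slp", 4)]

def summarize(pings):
    """Ping count and mean rating per category. At one ping per
    30 minutes on average, each ping stands for about half an hour,
    so hours spent on a category is roughly its count / 2."""
    by_cat = defaultdict(list)
    for tag, rating in pings:
        by_cat[tag].append(rating)
    return {tag: (len(rs), sum(rs) / len(rs)) for tag, rs in by_cat.items()}

for tag, (count, mean) in sorted(summarize(pings).items()):
    print(f"{tag}: {count} pings, mean rating {mean:.1f}")
```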

Sam Harris recently responded to the winning essay of the "moral landscape challenge".

I thought it was a bit odd that the essay wasn't focused on the claimed definition of morality being vacuous. "Increasing the well-being of conscious creatures" is the sort of answer you get when you cheat at rationalist taboo. The problem has been moved into the word "well-being", not solved in any useful way. In practical terms it's equivalent to saying non-conscious things don't count and then stopping.

It's a bit hard to explain this to people. Condensing the various inferential leaps into a single post might be useful. On the other hand, it's just repackaging what's already here. Thoughts?

I have invented a wormhole with ends separated by ten seconds in time. Unfortunately the power requirements scale exponentially with size, so it's not practical for anything larger than photons, but it does mean I can send information back in time. How would you exploit this?

Can you chain these wormholes and send information 10 + 10 + 10 + ... seconds back in time?

Have a program use its own output as input, effectively letting you run programs for infinite amounts of time, which, depending on how time travel is resolved, may or may not give you a halting oracle.
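(One way to picture the "output as input" trick: the timeline has to settle on a message that is its own consequence, so the loop effectively computes a fixed point. A toy analogue in Python, with no time travel involved, just ordinary iteration until the value "sent back" matches the value "received":)

```python
import math

def time_loop(f, guess=0.0, tol=1e-12, max_iter=10_000):
    """Iterate f until the 'sent' value equals the 'received' value.
    A real time loop would enforce this self-consistency condition
    in one shot; here we just converge to it step by step."""
    x = guess
    for _ in range(max_iter):
        nxt = f(x)
        if abs(nxt - x) < tol:
            return nxt
        x = nxt
    raise RuntimeError("no fixed point found within max_iter")

# The only self-consistent "history" of x -> cos(x) is the solution
# of x == cos(x).
print(time_loop(math.cos))  # approximately 0.7390851
```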

Also, you can now brute-force most of mathematics:

One way to do this uses first-order logic, which is expressive enough to state most problems. First-order logic is semi-decidable, which means there are algorithms that will eventually return a proof for any correct statement. Since your computer will take at most ten seconds to do this, you will have a proof after ten seconds, or know that the statement was incorrect if your computer remains silent.
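(Concretely, a semi-decision procedure enumerates candidate witnesses or proofs forever and halts only on success; the wormhole chain is what turns "halts eventually" into "answers within ten seconds of external time". A minimal sketch of the enumeration half, with a stand-in verifier:)

```python
from itertools import count

def semi_decide(verifier):
    """Search every finite candidate in order; halt iff one verifies.
    True existential statements get a witness eventually; false ones
    loop forever, which the chained wormholes would let you observe
    as ten seconds of silence."""
    for n in count():
        if verifier(n):
            return n

# Stand-in verifier: is there an n with n * n == 1369?
print(semi_decide(lambda n: n * n == 1369))  # 37
```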

Have a program use its own output as input, effectively letting you run programs for infinite amounts of time, which, depending on how time travel is resolved, may or may not give you a halting oracle.

To expand on this: Moravec's classic "Time Travel and Computing".

What practical benefits or effects on the world do I get out of my new infinite computing power and mathematical proofs? Presumably I can now decrypt all non-quantum encryption, and run various high-cost simulations very fast.