Welcome to Less Wrong! (9th thread, May 2016)

Hi, do you read the Less Wrong website but haven't commented yet (or not very much)? Are you a bit scared of the harsh community, or do you feel that questions which are new and interesting to you could be old and boring to the older members?

This is the place for new members to gather their courage and ask what they've wanted to ask. Or just to say hi.

The older members are strongly encouraged to be gentle and patient (or just skip the entire discussion if they can't).

Newbies, welcome!

 

The long version:

 

If you've recently joined the Less Wrong community, please leave a comment here and introduce yourself. We'd love to know who you are, what you're doing, what you value, how you came to identify as an aspiring rationalist or how you found us. You can skip right to that if you like; the rest of this post consists of a few things you might find helpful. More can be found at the FAQ.

 

A few notes about the site mechanics

To post your first comment, you must have carried out the e-mail confirmation: When you signed up to create your account, an e-mail was sent to the address you provided with a link that you need to follow to confirm your e-mail address. You must do this before you can post!

Less Wrong comments are threaded for easy following of multiple conversations. To respond to any comment, click the "Reply" link at the bottom of that comment's box. Within the comment box, links and formatting are achieved via Markdown syntax (you can click the "Help" link below the text box to bring up a primer).

You may have noticed that all the posts and comments on this site have buttons to vote them up or down, and all the users have "karma" scores which come from the sum of all their comments and posts. This immediate easy feedback mechanism helps keep arguments from turning into flamewars and helps make the best posts more visible; it's part of what makes discussions on Less Wrong look different from those anywhere else on the Internet.

However, it can feel really irritating to get downvoted, especially if one doesn't know why. It happens to all of us sometimes, and it's perfectly acceptable to ask for an explanation. (Sometimes it's the unwritten LW etiquette; we have different norms than other forums.) Take note when you're downvoted a lot on one topic, as it often means that several members of the community think you're missing an important point or making a mistake in reasoning, not just that they disagree with you! If you have any questions about karma or voting, please feel free to ask here.

Replies to your comments across the site, plus private messages from other users, will show up in your inbox. You can reach it via the little mail icon beneath your karma score on the upper right of most pages. When you have a new reply or message, it glows red. You can also click on any user's name to view all of their comments and posts.

All recent posts (from both Main and Discussion) are available here. At the same time, it's definitely worth your time commenting on old posts; veteran users look through the recent comments thread quite often (there's a separate recent comments thread for the Discussion section, for whatever reason), and a conversation begun anywhere will pick up contributors that way.  There's also a succession of open comment threads for discussion of anything remotely related to rationality.

Discussions on Less Wrong tend to end differently than in most other forums; a surprising number end when one participant changes their mind, or when multiple people clarify their views enough and reach agreement. More commonly, though, people will just stop when they've better identified their deeper disagreements, or simply "tap out" of a discussion that's stopped being productive. (Seriously, you can just write "I'm tapping out of this thread.") This is absolutely OK, and it's one good way to avoid the flamewars that plague many sites.

EXTRA FEATURES:
There's actually more than meets the eye here: look near the top of the page for the "WIKI", "DISCUSSION" and "SEQUENCES" links.
LW WIKI: This is our attempt to make searching by topic feasible, as well as to store information like common abbreviations and idioms. It's a good place to look if someone's speaking Greek to you.
LW DISCUSSION: This is a forum just like the top-level one, with two key differences: in the top-level forum, posts require the author to have 20 karma in order to publish, and any upvotes or downvotes on the post are multiplied by 10. Thus there's a lot more informal dialogue in the Discussion section, including some of the more fun conversations here.
SEQUENCES: A huge corpus of material mostly written by Eliezer Yudkowsky in his days of blogging at Overcoming Bias, before Less Wrong was started. Much of the discussion here will casually depend on or refer to ideas brought up in those posts, so reading them can really help with present discussions. Besides which, they're pretty engrossing in my opinion. They are also available in a book form.

A few notes about the community

If you've come to Less Wrong to discuss a particular topic, this thread would be a great place to start the conversation. By commenting here, and checking the responses, you'll probably get a good read on what, if anything, has already been said here on that topic, what's widely understood and what you might still need to take some time explaining.

If your welcome comment starts a huge discussion, then please move to the next step and create a LW Discussion post to continue the conversation; we can fit many more welcomes onto each thread if fewer of them sprout 400+ comments. (To do this: click "Create new article" in the upper right corner next to your username, then write the article, then at the bottom take the menu "Post to" and change it from "Drafts" to "Less Wrong Discussion". Then click "Submit". When you edit a published post, clicking "Save and continue" does correctly update the post.)

If you want to write a post about a LW-relevant topic, awesome! I highly recommend you submit your first post to Less Wrong Discussion; don't worry, you can later promote it from there to the main page if it's well-received. (It's much better to get some feedback before every vote counts for 10 karma—honestly, you don't know what you don't know about the community norms here.)

Alternatively, if you're still unsure where to submit a post, whether to submit it at all, would like some feedback before submitting, or want to gauge interest, you can ask / provide your draft / summarize your submission in the latest open comment thread. In fact, Open Threads are intended for anything 'worth saying, but not worth its own post', so please do dive in! Informally, there is also the unofficial Less Wrong IRC chat room, and you might also like to take a look at some of the other regular special threads; they're a great way to get involved with the community!

If you'd like to connect with other LWers in real life, we have meetups in various parts of the world. Check the wiki page for places with regular meetups, or the upcoming (irregular) meetups page. There's also a Facebook group. If you have your own blog or other online presence, please feel free to link it.

If English is not your first language, don't let that make you afraid to post or comment. You can get English help on Discussion- or Main-level posts by sending a PM to one of the following users (use the "send message" link on the upper right of their user page). Either put the text of the post in the PM, or just say that you'd like English help and you'll get a response with an email address.
* Normal_Anomaly
* Randaly
* shokwave
* Barry Cotter

A note for theists: you will find the Less Wrong community to be predominantly atheist, though not completely so, and most of us are genuinely respectful of religious people who keep the usual community norms. It's worth saying that we might think religion is off-topic in some places where you think it's on-topic, so be thoughtful about where and how you start explicitly talking about it; some of us are happy to talk about religion, some of us aren't interested. Bear in mind that many of us really, truly have given full consideration to theistic claims and found them to be false, so starting with the most common arguments is pretty likely just to annoy people. Anyhow, it's absolutely OK to mention that you're religious in your welcome post and to invite a discussion there.

A list of some posts that are pretty awesome

I recommend the major sequences to everybody, but I realize how daunting they look at first. So for purposes of immediate gratification, the following posts are particularly interesting/illuminating/provocative and don't require any previous reading:

More suggestions are welcome! Or just check out the top-rated posts from the history of Less Wrong. Most posts at +50 or more are well worth your time.

Welcome to Less Wrong, and we look forward to hearing from you throughout the site!

Comments


I've been lurking for a very long time, more than six years I think. Lots of sentences come to mind when I think, "Why haven't I posted anything before?" Here are a few:

  1. "LessWrong forum is just like any other forum" Well, my sample size is low, but... I don't care what you tell yourselves; what I observe is people constantly talking past each other. And if, in reading an article or comment, a possible comment comes to mind, hot on its heels comes the thought that there isn't really any point in posting it, because the replies would all be of a type drawn from some annoying archetypes, like (A) I agree! But I have nothing to add. (B) You're wrong. Let me Proceed to Misrepresent in some way. (The Misrepresentation is "guaranteed" to be unclear because it insists on starting on its own terms and not mine.) And if I nonetheless start a good-natured chain of comments, suddenly I find myself talking about the other person's stuff and not the ideas that motivated my original comment. Which I probably wouldn't have commented on for its own sake. And as soon as a comment has one reply, people stop thinking about it as an entity in its own right. Don't you dare dismiss these thoughts so quickly!

  2. "It's been done. Well, mostly." Eliezer wrote so many general, good, posts - where do I even find a topic sufficiently different that I'm not, for the most part, retreading the ground? Posts by other people are just completely different: instead of the post having a constructive logic of its own, references are given. The type of evidence given by the references is of a totally different sort. Instead of being about rationality, such posts seem to be about "random topic 101"? Ok this isn't very clear.

  3. So few comments are intelligible. What are these people even arguing about? It's not clear! How can one comment on this in good faith? Note that the posts you observe are, therefore, more likely to come from people who are not stopped from posting by having unclear or absurd interpretations of parent comments.

Lesswrong seems like it should be a good place. The sequences are a fantastic foundation, but there's so little else! I'm subjectively pretty sure that E.Y. thinks Lesswrong is failing. Of course one may hope "for the frail shrub to start growing".

In the hope that some people read (and continue to read) this charitably, let me continue. Consider the following in the welcome thread itself.

" We'd love to know who you are, what you're doing, what you value, how you came to identify as an aspiring rationalist or how you found us. "

Err, what? Why on Earth should I immediately give you my life story? Or even, Do these questions make sense? "What I value?" On a rationalist forum you are expecting the English language to contain short sentences that encode a good meaning of "value"? Yeah, yeah, whatever. Taking a breath, --- ---. . . Why do you even want to know?

How about just, "you can post who you are, what you're doing ... found us here, if you want."

I should not have to post such things to get basic community acceptance. And you have no right to be so interested in someone who has, as yet, not contributed to lesswrong at all! Surely THAT is the measure of worth within the Lesswrong site. Questions about things not related to Lesswrong should be "expensive" to ask - at least requiring a personal comment.

Oh, I think I initially found lesswrong via Bostrom via looking at the self-sampling hypothesis? It's kind of hazy.

Hello friends! I have been orbiting around effective altruism and rationality ever since a friend sent me a weird Harry Potter fanfiction back in high school. I started going to Seattle EA meetings on and off a couple years ago, and have since read a bunch of blogs, made friends who were into existential risk, started my own blog, graduated college, and moved to Seattle.

I went to EA Global this summer, attend and occasionally help organize Seattle EA/rationality events, and work in a bacteriophage lab. I plan on studying international security and biodefense. I recently got back from a trip to the Bay Area, that gaping void in our coastline that all local EA group leaders are eventually sucked into, and was lucky to escape with my life.

I'm gray on the LessWrong slack, and I also have a real name. I had a LW account back early in college that I used for a couple months, but then I got significantly more entangled in the community, heard about the LW revitalization, and wanted a clean break - so here we are. In very recent news, I'm pleased to announce in celebration of finding the welcome thread, I'm making a welcome post.

I wasn't sure if it would be tacky to directly link my blog here, so I put it in my profile instead. :)

Areas of expertise, or at least interest: Microbiology, existential risk, animal ethics and welfare, group social norms, EA in general.

Some things I've been thinking about lately include:

  • How to give my System 1 a visceral sense of what "humanity winning" looks like
  • What mental effects hormonal birth control might have
  • Which invertebrates might be able to feel pain
  • What an alternate system of taxonomy based on convergent evolution, rather than phylogeny, would look like
  • How to start a useful career in biorisk/biodefense

Hi there! I didn't sign up before because, most of the time, someone in this community has already commented what I wanted to say anyway, and because signup hurdles are a thing and the lack of OpenID support makes me frustrated.

I've been reading LW intermittently for about one and a half years now; whilst I tend to find it hard to integrate these concepts into my life, I have picked some of them up, specifically noticing anchoring effects and improving my ability to spot "the better action". It's still hard to actually take such actions; I'll find myself coming up with a better plan of action and then executing the inferior plan of action anyway.

I've been horrified at a few of my past mistakes; one of them was accidental p-hacking. (Long story!)

One of the things I had to do for my college degree was performing research. I picked a topic (learning things) and got asked to focus on a key area (I picked best instructional method for learning how to play a game). We had to use two data collection methods; I wanted to do an experiment because that was cool, and I added a survey because if I'm going to have to ask lots of people to do something for me, I might as well ask those same people to do something else. Basically I'm lazy.

My experiment consisted of a few levels (15) in which you have to move a white box to various shapes by dragging it about. I had noticed that teaching research focused on "reading", "doing", "listening", and "seeing" types (I forget the specific words; something like kinesthetic, auditory, visual... learning styles). So I translated those to "written text", "imagery", "sounds and spoken text", and "interactivity" to model the reading, seeing, listening and doing respectively.

Then I made each level test a combination of learning methods. First "learning by doing" only. Here's a box. Here's a green circle. Here's a red star. Go.

Most people passed either in 5 seconds or in 1 minute. This was after I added a dotted background, so that you'd see a clear white box and not a black rectangle, and a text saying "this is level 1, experiment!". Some people would think the level was still loading without this text. I didn't include the playtesters in the research result data.

After that it showed you 4 colored shapes and an arrow underneath, and a button "next" below it. Hitting next moves you to level 2, where a white box is in the center of the screen, and various colored shapes are surrounding the white box. Dragging the white box over the wrong shape sends you back to the screen with the 4 colored shapes and the arrow. This was supposed to be "imagery".

Then the next screen after that was an audio icon and a "next" button. I had recorded myself saying various colored shapes, so at this screen people heard something like "black circle, red triangle, blue star, green square". The idea being you'd have to remember various instructions and act upon them. Hitting the next button brings you to the surrounded white box again. Each level had a different distribution of shapes to prevent memorizing the locations.

Then the 4th text level was just text instructions ("drag the white box over the green circle, then the red star ...")

Then after that came combinations: voiced text, text where I had put the shapes in images on the screen as well, shapes plus a voice saying what they were... For interactivity, I skipped the instruction screen and just went with text appearing in the center of the screen, where the text changes when you perform the correct action (else the level resets). This was to simulate tutorials like "press C to crouch" whenever you hit the first crouch obstacle.

I had recorded the time spent on the instruction screen, the total time for each level, and per attempt, the time between each progress step and failure. So 1.03 seconds to touch the first shape, 0.7 to touch the second, 0.3 to touch a third wrong one, then 0.5 to touch the first, 0.4 to touch the second, 0.8 to touch the third and 1.0 to touch the fourth and level complete.

The idea was that I could use this to see how "efficient" people were at understanding the instructions, both in speed and correctness.
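As a hypothetical sketch of that idea (the variable names and the scoring here are my own, not the original analysis), both a speed and a correctness measure can be derived from the recorded step times:

```python
# One attempt, recorded as (seconds, was_this_touch_correct) per step,
# matching the example above: two correct touches, one wrong touch that
# resets the level, then a clean four-step run.
steps = [(1.03, True), (0.7, True), (0.3, False),
         (0.5, True), (0.4, True), (0.8, True), (1.0, True)]

total_time = sum(t for t, _ in steps)
errors = sum(1 for _, ok in steps if not ok)
correct = len(steps) - errors

speed = correct / total_time     # correct touches per second
accuracy = correct / len(steps)  # fraction of touches that were correct

print(f"time={total_time:.2f}s, errors={errors}, "
      f"speed={speed:.2f} touches/s, accuracy={accuracy:.0%}")
```

Aggregating numbers like these per level and per instruction type is one way to turn the raw logs into an "efficiency" comparison.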

(FYI, N=75 or so, out of a gaming forum with 700 users)

Then I committed my grave sin: I took the data, took Excel's CORREL function, and basically compared various columns until I got something with a nice R. This was after trying the few things I had thought I would find and seeing uninteresting results.

I "found" that apparently showing text and images in interactive form "learn as you go" was best - audio didn't help much, it was too slow. Interactivity works as a force multiplier and does poorly on its own.

But these findings are likely totally bogus because, well, I basically compared statistics until I found something with a low chance to occur randomly.
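The mistake is easy to reproduce: generate purely random columns, correlate every pair, and report only the best R. A minimal sketch with made-up data (nothing here comes from the actual study):

```python
import random
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(0)
n_participants, n_columns = 75, 20  # N=75, as in the study above

# Purely random "measurements": no real relationships exist at all.
columns = [[random.gauss(0, 1) for _ in range(n_participants)]
           for _ in range(n_columns)]

# Compare every pair of columns and keep only the best |R| found.
best = max(abs(pearson_r(a, b))
           for i, a in enumerate(columns)
           for b in columns[i + 1:])
print(f"best |R| among {n_columns * (n_columns - 1) // 2} pairs: {best:.2f}")
```

With 190 pairwise comparisons on pure noise, the single best correlation will typically look "interesting", which is exactly why the complaint about not reporting everything that was checked is the right one.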

... What scares me is not that I did this. What scares me is that I turned this in, got told off for "not including everything I checked", thought this was a stupid complaint because look, I found a correlation, voiced said opinion, and still got a passing grade (7/10) anyway. And then thought, "Look, I am a fancy researcher."

I could dig it up if people were interested - the experiment is in English, the research paper is in Dutch, and the data is in an SQL database somewhere.


This is probably a really long post now, so I'll write more if needed instead of turning this into a task to be pushed down todo lists forever.

I discovered SSC and LW a ~couple months ago, from (I think) a Startpage search which led me to Scott's lengthy article on IQ. Only browsed for a while, but last night rediscovered this after I read Doing Good Better and went to the EA website. I remember CFAR from a Secular Student Alliance conference two years ago.

I like Scott's writing, but I have no hard science training unfortunately.

I have realized that I've become rather used to my comfort zone, and have sort of let my innate intelligence stagnate, when I like to think it still has room to grow. I had psychological testing six years ago that put my IQ at 131, which, if I interpret the survey results correctly, puts me near the bottom of this community? Despite that, I find the philosophical elements of Yudkowsky fascinating [not so much the more mathematical stuff]. At least, this site has made me sit at a computer longer than I'm accustomed to.

It seems from EY's writing that LW wanted to be a homogeneous community of like-minded (in both senses) people, but I am curious to what extent rationalists engage in outreach (other than CFAR I guess) towards more average individuals. Because that changes how one writes. Or is there a tacit resignation that more average people just won't care or grok it; that smarter individuals should focus on their own personal growth and happiness? But then I remember Scott's writing and seeming compassion, and also the percentage of users who are social-democratic, so it seems like there would be higher demand for actually communicating with the outgroup.

I entered the humanities because I wanted to be a professor and I like to write; I like foreign languages, and I didn't think I would be interested in heavier things (I took some psychology and philosophy as a postbac), but now I'm too far into my MA to be sure I could get into an additional Master's program in something meaty and then pursue a better, more intellectually stimulating career.

Ultimately I just want to teach and "help" people. So, that's where I'm at. I read/skimmed DGB yesterday in one sitting while in the middle of yet another existential depression that my shrink thinks was caused by going off an opioid. I can't remember the last time I consumed a book in one sitting.

This was longer than I intended. Thank you.

I think the most important part of rationality is doing the basic stuff consistently. Things like noticing the problem that needs to be solved and actually spending five minutes trying to solve it, instead of just running on the autopilot. At some level of IQ, having the right character traits (or habits, which can be trained) could provide more added value than extra IQ points; and I believe you are already there.

I find the philosophical elements of Yudkowsky fascinating

Does it also make you actually do something in your life differently? Otherwise it's merely "insight porn". (This is not a criticism aimed specifically at you; I suspect this is how most readers of this website use it.)

I am curious to what extent rationalists engage in outreach (other than CFAR I guess) towards more average individuals. Because that changes how one writes.

I think the main problem is that we don't actually know how to make people more rational. Well, CFAR is doing some lessons, trying to measure the impact on their students and adjusting the lessons accordingly; so they probably already do have some partial results at the moment. That is not a simple task; to compare, teaching critical thinking at universities actually does not increase the critical thinking abilities of the students.

So, at this moment we want to attract people who have a chance of contributing meaningfully to the development of the Art of how to make people more rational. And then, when we have the Art, we can approach the average people and apply it on them.

"to compare, teaching critical thinking at universities actually does not increase the critical thinking abilities of the students"

That's sad to hear.

Thank you for the advice. My primary concern is definitely to establish more rational habits. And then also to learn how to better learn.

Just like the Sequences say somewhere, putting a label "cold" on a refrigerator will not actually make it cold. Similarly, calling a lesson "critical thinking" does not do anything per se.

When I studied psychology, we had a lesson called "logic". It was completely unconnected to anything else; all I remember is drawing tables for boolean expressions "A and B", "A or B", "A implies B", "not A", and filling them with ones and zeroes. If you were able to fill the table correctly for a complex expression, you passed. It was a completely mechanical action; no one understood why the hell we were doing it. So, I guess this kind of lesson didn't actually make anyone more "logical".
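That exercise really is mechanical enough to write out in a few lines. A sketch of the kind of table we filled in (Python standing in for the pen-and-paper version):

```python
from itertools import product

# The four expressions from the lesson, as predicates on booleans.
exprs = {
    "A and B":     lambda a, b: a and b,
    "A or B":      lambda a, b: a or b,
    "A implies B": lambda a, b: (not a) or b,  # material implication
    "not A":       lambda a, b: not a,
}

# Print one row of ones and zeroes per assignment of A and B.
print("A B | " + " | ".join(exprs))
for a, b in product([True, False], repeat=2):
    row = " | ".join(str(int(bool(f(a, b)))) for f in exprs.values())
    print(f"{int(a)} {int(b)} | {row}")
```

Filling such a table is pure rule-following ("A implies B" is false only when A is true and B is false); the table alone says nothing about when those expressions are worth believing, which is the point.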

Instead we could have spent the time learning about cognitive biases, even the trivial ones, and how they apply to the specific stuff we study. For example, psychologists are prone to see "A and B" and conclude "A implies B" if it fits their prejudice. Just having one lesson that would give you a dozen examples of "A and B", where you would have to write "maybe A causes B, or maybe B causes A, or maybe some unknown C causes both A and B, or maybe it's just a coincidence", would probably be more useful than the whole semester of "logic"; it could be an antidote against all that "computer games cause violence / sexism" stuff, if anyone remembered the exercise.

But even when teaching cognitive biases, people are likely to apply them selectively to the stuff they want to disbelieve. I am already tired of seeing people abusing Popper this way (for example, any probabilistic hypothesis can be dismissed as "not falsifiable" and therefore "not scientific"), I don't want to give them even more ammunition.

I suspect that on some level this is an emotional decision to make -- you either truly care about what is true and what is bullshit, or you prefer to seem clever and be popular. A university lesson cannot really make you change this.

puts me near the bottom of this community?

No, I don't think so. Self-reported IQs from a self-selected group have a bias. I'll let you guess in which direction :-)

I am curious to what extent rationalists engage in outreach (other than CFAR I guess) towards more average individuals

There's Gleb Tsipursky and his Intentional Insights, but from my point of view this whole endeavour looks quite unfortunate. YMMV, of course.

"No, I don't think so. Self-reported IQs from a self-selected group have a bias. I'll let you guess in which direction :-)"

Of course, but I guess that I would expect a site helping its members to "Overcome Bias" would provide more trustworthy data! :)

Hi! Male, 30-something, bean-counting, sports-watching, alcohol-drinking, right-of-center normie here. Been lurking on LW for several years. I possess an insight-porn level of interest in and real-life application of AR. Slow day at the office so I thought I would say "hi". It's possible that I may never post on this board again, in which case, it's been nice knowing you. xx

Hello everyone. I've slowly become entangled in rationality after stumbling across the site when I was quite young and looking for information about cognitive biases and logical fallacies to use in my speech and debate club. This played a minor role in my deconversion and I've been poking around the website and the #lesswrong IRC ever since. (Some of you know me as Madplatypus.) After moving to Seattle I became much more heavily involved because the community here is the best in all sorts of ways.

I'm still young, and hunting for the best opportunities to meet my goals of: becoming the best person I can be, protecting and growing my expanding circles of loyalty, and ensuring humanity has a glorious future. Yes, I already know about 80,000 Hours.

I'm interested in finding mentors/building networks beyond Seattle/finding new friends so send me a message and let's talk!

I'm interested in talking about: Virtue ethics, Historical Models, Introspection, Better Life Plans, Oratory, Psychology, Geopolitics, Self-Education, Mental Movements, Phenomenology, Metaphor, Martial Arts, Poetry, and ways of thinking about ethics that aren't horrendously simplified. And more!

I'm busy catching up on some more technical fields like mathematics, programming, and information security, but my passions are generally humanistic.

I want you to tell me about: Your passions/drive, the phenomenology of music, your metaphors of mind, unusual things you find valuable, and social constructs you think should be instantiated.

What I love about the Rationality community: Intentional community building, a focus on clear thinking, and the beautiful combination of people who generate lots of crazy hypotheses and people who knock them down.

What I dislike: Getting told to "Go read X" in response to some disagreement I have with rationalist canon. Chances are, I already have read X. People who critique old philosophy which they have not read. Ethical systems which render humans as a fungible moral asset and abstract individual interests away from their reasoning.

Osthanes is a mythical figure in Greek magical pseudepigrapha, who was held to be the first disciple of Zarathustra. It was held that Zarathustra invented magic, and Osthanes brought it to Greece where it was written down for the first time.

I'm here because of the AI and SETI info and debates.

I am throwing links into the open threads that have some info related to these things, and am interested in seeing if there ends up being other discussions on them.

Pretty sure that there aren't any biological aliens out there at this time, and am pretty spooked by the idea of a machine intelligence running around the galaxy. Looking at some of those dense star clusters, it occurs to me that those would be the only places to put a civ that was protected from hyper-velocity attacks. Am kinda concerned that the way we find out we aren't alone is a bunch of rocks coming in at a hefty chunk of "c".

Have read most of the background sequences by EY, and most of the discussion posts each week, but don't like getting caught up in arguments that are all about syntax.

Comments use Markdown, not html. The Show Help button at the lower right of a comment box will give you details. As I recall, two spaces at the end of a line are a hard return; it's been a while since I had to wrestle with how a list or a poem would appear.

Articles use html, sort of. There's an html button at the top. If you copy and paste from a word processor, you might get inconvenient formatting.

I don't like this system, but the only thing worse than this system is having to try to guess how it works.

[comment score below threshold] I think that's anything with -3 karma. You can see the article/thread for free by clicking on the link, but if you reply to anything on that thread, you lose 5 points.

[continue this thread]-- LW has limited nesting of comments, so this link will start a new window. You will lose your pink borders on the old window if you left click, so I recommend using a new tab for continuing threads. You get the option of a new tab by right-clicking on the link.

About voting - how does voting on the open thread post itself work? The post is pretty much always the same, so why does it get voted up anyway? Is it about the quality of the comments?

Some people thank the person who posts the open thread. It's a community responsibility to keep posting it. No one is in charge, but it has kept happening for a long time now.

Something which wasn't clear to me after looking around a bit - it seems the recent comments in the bar at the right is cached, and I saw some comments with a pink border. Does the pink border mean they're new?

Comments with a pink border are new since the last time you (that is, your account) refreshed the page. They might be years old, but they're new to you.

This adds so much more to my LW experience. Reading open threads just became doable, rather than an exercise in trying to remember what parts of the discussion I've already seen and which ones I haven't. ... Although I'm not seeing everything with a pink border whenever I look at an old page, so I think that part of the explanation is false. That, or there is a bug somewhere...

The first time you look at a page (no matter how old it is), you don't get any pink borders.

Hey! My name's Jared and I'm a senior in high school. I guess I started being a "rationalist" a couple months ago (or a bit more) when I started looking at the list of cognitive biases on Wikipedia. I've tried very hard to mitigate almost all of them as much as I can and I plan on furthering myself down this path. I've read a lot of the sequences on here and I like to read a lot of rationalwiki and I also try to get information from many different sources.

As for my views, I am first a rationalist and make sure I am open to changing my mind about ANYTHING, because reality doesn't change based on your ability to stomach it.

As for labels, I'm vegan (or at least strict vegetarian), anarcho-communist (something around the range of left libertarian), agnostic (not in the sense that I'm on the fence but that I'm sure that we don't know - so militant agnostic lol).

My main first question is, since you guys are rationalists, why aren't you vegetarian or vegan? The percentage that is vegetarian on sites like lesswrong and rationalwiki is hardly higher than the public's. I would think, considering you are rationalists, you would understand vegetarianism or veganism and go for it for sure. Am I missing something? Because this actually blows my mind. If you oppose it, I really wanna hear some arguments, because I've never heard a single even somewhat convincing argument and I've argued with oh so many people about it. Obviously the goal of veganism is to lessen suffering, not end it, etc.

But yeah hey!

Hi Jared, Your question about vegetarianism is an interesting one, and I'll give a couple of responses because I'm not sure exactly what direction you're coming from.

I think there's a strong rationalist argument in favor of limiting consumption of meat, especially red meat, on both health and environmental grounds. These issues get more mixed when you look at moderate consumption of chicken or fish. Fish especially is the best available source of healthy fats, so leaving it out entirely is a big trade-off, and the environmental impact of fishing varies a great deal by species, wild vs. farmed, and even the fishing method. Veganism gives relatively small environmental gains over vegetarianism, and is generally considered a loss in terms of health.

When you look at animal suffering, things get a lot more speculative. Clearly you can't treat a chicken's suffering the same as a human's, but how many chickens does it take to be equivalent to a human? At what point is a chicken's life not worth living? This quickly bogs down in questions of the repugnant conclusion, a standard paradox in utilitarianism. Although I have seen no thorough analysis of the topic, my sense is that 1) Scaling of moral value is probably more-than-linear with brain mass (that is, you are worth more than the ~300 chickens it would take to equal your gray matter) but I can't be much more precise than that 2) Most of the world's neurons are in wild invertebrates: http://reflectivedisequilibrium.blogspot.com/2013/09/how-is-brain-mass-distributed-among.html which argues against focusing specially on domesticated vertebrates 3) Effort expended to reduce animal suffering is largely self-contained--that is, if you choose not to eat a chicken, you probably reduce the number of factory-farmed chickens by about one, with no longer-term effects. Effort to help humans, on the other hand, often has a difficult-to-estimate multiplier from follow-on effects. See here for more on this argument: http://globalprioritiesproject.org/2014/06/human-and-animal-interventions/

The upshot is that when you make any significant investment in animal welfare, including vegetarianism and especially veganism, you should consider the opportunity costs. If it makes your life more difficult and reduces the amount of good you can do in other ways, it may not be worth it.

Personally, I used to be a pescetarian and would consider doing so again, depending on the people around me. Trying to do it in my current circumstances would cause more hassle than I think it's worth (having to ask people for separate meals, not participating in group activities, etc). If you know a lot of other vegetarians, there may be no social cost or even some social benefit. But don't assume that's the case for everyone.

Thank you for the polite and formal response! I understand what you're saying about the chicken and fish. Pescetarian is much better than just eating all the red meat you can get your hands on.

When you look at animal suffering, things get a lot more speculative. [...]

Now I understand what you're saying about the animal suffering, but I'd like to add some things. If you don't eat many chickens or many cows, then you can save more than one, because you're consistently abstaining from meat consumption. It's also not about making the long-term effects on your own; it's contributing so that something like factory farming can be changed into something more sustainable, more environmentally friendly, and more responsive to animal concerns once more people boycott meat. Even if you were to choose to compare gray matter, you have to weigh the animal's death against the human's quite minor pleasure, which could have been just as pleasurable eating/doing something else.

If it makes your life more difficult and reduces the amount of good you can do in other ways, it may not be worth it.

For you, does it really make life more difficult? From my personal experience, and from hearing about others, the only hard part is the changing process. It's only difficult in certain situations because of society, and the point of boycotting is to change the society so it's easier, as well as for the other benefits.

Thanks again for responding!

factory farming can be changed into something more sustainable

It's sustainable in the sense that we can keep doing it for a very long time.

more environmentally friendly,

This may be more what you were talking about.

Hi Jared! I don't remember the statistics, but here are a few hypotheses:

  • There is usually a distribution of a few "hardcore" members, and many lukewarm ones. In a statistic that includes all of them, the behavior of the hardcore members can easily disappear.

  • Many people eat some kind of paleo diet, which (if we ignore the animal suffering, and look at the health benefits of eating a lot of vegetables) is almost as good as vegetarianism. Possibly, a paleo person eating meat with mostly unprocessed vegetables has a healthier diet than a vegan who gets most of their food cooked. For some people, vegetarianism or veganism may seem low status, and paleo high status (simply because it is relatively new).

  • Or maybe it's just that food doesn't get as high priority as e.g. education, making money, or exercise, so people focus their attention on the other things.

  • Or, most obviously -- just because people know something is the right thing to do, it doesn't mean they will automatically start doing it! Not even if they identify as "rationalists".

In my bubble of local hardcore aspiring rationalists, vegetarianism or veganism is almost the norm. (Generally, I would suspect that the hardcore ones go either vegetarian or vegan or paleo.)

There is usually a distribution of a few "hardcore" members, and many lukewarm ones. In a statistic that includes all of them, the behavior of the hardcore members can easily disappear.

Could you explain this more in depth; I'm failing to grasp this completely. I apologize.

if we ignore the animal suffering

Why would we do that?

Or maybe it's just that food doesn't get as high priority as e.g. education, making money, or exercise, so people focus their attention on the other things.

I guess, but you can usually focus on multiple things at once, and most people have certain causes they ascribe to.

Or, most obviously -- just because people know something is the right thing to do, it doesn't mean they will automatically start doing it! Not even if they identify as "rationalists".

Really? Why not though? All humans, excluding sociopaths, have empathy. I'll admit I see this a bit though.

In my bubble of local hardcore aspiring rationalists, vegetarianism or veganism is almost the norm.

Oh, hmm I guess I just missed it.

Thank you for your response and your hypotheses! These responses are great compared to the usual yelling match ... anywhere else.

Could you explain this more in depth

In general, imagine that you have a website about "X" (whether X is rationality or StarCraft; the mechanism is the same). Quite likely, a distribution of people who visit the website (let's assume the days of highest glory of Less Wrong) will be something like this:

10 people who are quite obsessed about "X" (people who dramatically changed their lives after doing some strategic thinking; or people who participate successfully in StarCraft competitions).

100 people who are moderately interested in "X" (people who read some parts of the Sequences and perhaps changed a habit or two; or people who once in a while play StarCraft with their friends).

1000 people who are merely interested in "X" as a topic of conversation (people who read Dan Ariely and Malcolm Gladwell, and mostly read Less Wrong to find cool things they could mention in a debate on similar topics; people who sometimes watch a StarCraft video on YouTube, but actually didn't play it for months).

Now you are doing a survey about whether the readers of the website somehow differ from the general population. I would expect that those 10 obsessed ones do, but those 1000 recreational readers don't. If you put them both in the same category, the obsessed ones make up only 1% of the category, so whatever their special traits are, they will disappear in the whole.

For example (completely made up numbers here), let's assume that an average person has a 1% probability of becoming a vegetarian, those 1000 recreational LW readers also have a 1% probability, the 100 moderate LW readers have probability 2%, and the hardcore ones have a probability of 20% (that would be a huge difference compared with the average population). Add them all together, you have 1110 people, of whom 0.01 × 1000 + 0.02 × 100 + 0.2 × 10 = 14 vegetarians; that means 1.26% of the LW readers -- almost the same as the 1% of the general population.
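The weighted-average arithmetic above can be sketched as a quick script. It uses the same completely made-up numbers from the comment, not real survey data:

```python
# Each group: (number of readers, probability any one of them is vegetarian).
# These figures are the made-up numbers from the comment above.
groups = [
    (1000, 0.01),  # recreational readers: same rate as the general public
    (100, 0.02),   # moderate readers
    (10, 0.20),    # hardcore readers
]

total_readers = sum(n for n, _ in groups)          # 1110
vegetarians = sum(n * p for n, p in groups)        # 10 + 2 + 2 = 14.0

percent = 100 * vegetarians / total_readers
print(round(percent, 2))  # 1.26 -- barely above the general population's 1%
```

The point survives the exact numbers: as long as the lukewarm readers vastly outnumber the hardcore ones, the overall rate stays close to the baseline.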

This is further complicated by the fact that you can more easily select professional StarCraft players (e.g. by asking whether they participated in some competition, and what their ranking is), but it's more difficult to tell who is a "hardcore rationalist". Just spending a lot of time debating on LW (which pretty much guarantees high karma), or having read the whole Sequences, doesn't necessarily mean anything. But this now feels like talking about "true Scotsmen". Also, there are various status reasons why people may or may not want to identify as "rationalists".

just because people know something is the right thing to do, it doesn't mean they will automatically start doing it!

Really? Why not though?

That's kinda one of the central points of this website. Humans are not automatically strategic, because evolution merely made us execute adaptations, some of which were designed to impress other people rather than to actually change things.

People are stupid, including the smartest ones. Including you and me. Research this thoroughly and cry in despair... then realize you have something to protect, stand up and become stronger. (If these links are new for you, you may want to read the LW Sequences.)

Just look at yourself -- are you doing the literally best thing you could do (with the resources you have)? If not, how large is the difference between what you are actually doing, and the literally best thing you could do? For myself, the answer to this is quite depressing. Considering this, why should I expect other people to do better?

In my bubble of local

I guess I just missed it.

Statistically, you are quite likely to be at a different part of the planet, so it's quite easy to miss my local group. ;) Maybe finding the LW meetup nearest to your place could help you find someone like that. (But even within the meetup I would expect that only a few people really try to improve their reasoning, and most are there mostly for social reasons. That's okay, as long as you can identify the hardcore ones.)

These responses are great compared to the usual yelling match ... anywhere else.

Oh, I remember this feeling when I found LW!

Thank you for such a clear response and the additional info! :) I have read most of the sequences but some of those links are new to me.

Or, most obviously -- just because people know something is the right thing to do, it doesn't mean they will automatically start doing it! Not even if they identify as "rationalists".

Really? Why not though?

http://lesswrong.com/lw/2p5/humans_are_not_automatically_strategic/

These responses are great compared to the usual yelling match

Welcome!

if we ignore the animal suffering

Why would we do that?

If my worldview was, "animals are inferior and their suffering is irrelevant".

If my worldview was, "animals are inferior and their suffering is irrelevant".

Wouldn't that be an irrational 'axiom' to start from though? Maybe the inferior part works, but you can't just say their suffering is irrelevant. If you go off the basis that humans matter just because, then that's a case of special pleading, saying humans are better because they are human. Their suffering may be less, but it isn't irrelevant, because they can suffer.

If my worldview was, "animals are inferior and their suffering is irrelevant".

Wouldn't that be an irrational 'axiom' to start from though?

you can't just say their suffering is irrelevant.

Why?

If you go off the basis that humans matter

Do humans matter? Why do humans matter? I think you might be leaping to a conclusion or a few here.