Open Thread, May 25 - May 31, 2015

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.


It appears that MetaMed has gone out of business. Wikipedia's page for MetaMed uses the past tense "was", and provides this as a source for it.

Key quote from the article:

Tallinn learned the importance of feedback loops himself the hard way, after seeing the demise of one of his startups, medical consulting firm Metamed.

It would be nice if people were open about it when their startups close, especially startups previously advertised on LW, so we can learn from the mistakes. Or is there some reason not to admit a startup has failed?

It seems like the business model of charging individuals such high prices just doesn't work for a startup without a proven brand.

Datapoint:

The only exposure I have had to MetaMed was Yudkowsky saying he spent X dollars on it to get advice to take melatonin microdoses hours before going to bed. When I saw the dollar amount I burst out laughing, because literally a week earlier I had come to the same conclusion using Google Scholar searches at the library in my quest to normalize my own sleep schedule (though my problem wound up having a very different solution in the end).

To be fair, you do have a strong biology background that makes you more likely to do an efficient literature search than the average person. You are also at a university with journal subscriptions, which isn't true for everyone.

There might be room for people paying other people to do this kind of research. But the price is likely too high.

I wonder if Metamed's problem was that if you were smart and well informed enough to understand the company's value to the average person, you personally didn't need it because you could do the research yourself.

I'm currently twenty-two years old. Over the last two weeks, I've discussed with a couple of friends that among the "millennial" generation, i.e., people currently under the age of thirty-five, people profess having goals for some kind of romantic relationship, but they don't act in a way which will let them achieve those goals. Whether they:

  • are lonely and want companionship,
  • want to stay single, but have more sex,
  • want a monogamous but casual relationship,
  • want a more committed and serious monogamous relationship,
  • want to find someone to one day marry and have children with,
  • want to find someone to love and love them to become happy, or happier,
  • want romance for any other usual reason,

it seems the proportion of young people who are and stay single is greater than I would expect. I don't just mean how the fastest-growing household configuration since the 1980s (in the United States) has been single adults. I mean how most of my friends profess a preference for having some romantic relationship in their life, yet most of my single friends stay single, and don't appear to be dating much or doing something else to correct this. Maybe popular culture exerts a normative social influence which favors people in relationships over single people, and so young single people feel pressured to signal a preference for being in a relationship. However, I can't determine who is just professing fake preferences to signal. It still seems single people aren't seeking or successfully finding relationships at a rate which corresponds well to genuine preferences for a relationship. Why aren't single people trying harder to find relationships?

One answer could be "dating and romance are hard, especially for young people". Even if that's vaguely true, it doesn't satisfy my curiosity. I think it has in large part to do with the extended adolescence of people born after, e.g., 1980. More committed relationships, higher frequency of dating, and/or marriage seem to people around my age like things we're supposed to do more when we're "real adults". That happens some time after you get a "real job". Or after you complete a degree. Or after the age of twenty-five. Something like that.

It also seems dependent upon changes in dating culture in North America. I'm aware there are more hookups and one-night stands among young adults of the current generation than there were for prior generations. In terms of whom one settles down with, or marries, people get married at later ages. I don't know if it's because we young adults are pickier about whom we choose for long-term relationships, or what. This is where I don't know exactly what's going on, so I could use your help. If you (think you) can explain what's going on, please share.

Anyway, what I've concluded so far is that, as someone who doesn't date very much, a sensible strategy would be to date earlier and more often to satisfy relationship goals. That is, while many of my generation have goals and expectations for dating, relationships, and/or marriage similar to previous generations', the styles and culture of dating in North America are very different. If young adults wait until their mid-thirties before they start pursuing long-term relationship goals, it might take longer than they expect, and by that point seeking relationships may cut into time for developing other valuable aspects of one's life, such as a career. Dating earlier and more frequently allows one to discover what one initially wants in a partner, learn to navigate the dating pool and social scenes comfortably, adapt to potential setbacks and heartbreak, and mature.

Now, there are lots of young adults in graduate school, or going through a period of time when prioritizing a romantic relationship wouldn't allow the time and attention to fulfill more immediately important goals. During the period(s) of life when you have downtime, if busy young adults aren't satisfied with being single, I think it makes sense for us to try dating and relationships more, because there may not be as much time and opportunity as we hope later in life. What do you think of this model/strategy?

I have something like a potential explanation for it, but it is difficult to formulate in a way that will not be misunderstood. Please, everybody, try to read this post with maximal charity and the benefit of the doubt.

  1. History tends to swing from one extreme to another, as people tend to OVERreact to the problems they see.

  2. Given that it is an OVERreaction, they are usually wrong, but it also points out a problem. You can diagnose the original problems from the overreactions to them.

  3. These overreactions are sometimes exaggerated only in "quantity", in which case a more moderate version of them would be okay, or they often get the direction completely wrong, still they point out how something is a problem and the issues they raise often have SOME truth to them.

  4. For example, Communism/Bolshevism was a huge OVERreaction to the condition of workers under capitalism, it was not a good solution at all, and even making it more moderate (a moderate, limited dictatorship of people who call themselves proletarians?) would not help much, but it pointed out a problem and now we have better solutions to that problem, such as unions striking when they want a wage raise or something. Or some laws like minimum wages.

  5. In the same vein, The Red Pill / Manosphere is an OVERreaction to a problem. Yes, it is wrong, in both its tone and its content, misogynistic and so on, misrepresenting history etc., wrong in both quantity and direction, yet it DOES point out a problem, and some of its ideas, when saner and kinder people work them over and remove the jaded or hateful elements, are actually useful.

Essentially this is the problem:

  1. Dating is hard for young straight men, not for everybody. Few gays complain, and of women only the seriously overweight ones complain, and even that is changing; there is more fat acceptance now. And being a straight male 35+ is far easier: have some achievement and don't be fat, and you almost see women 32+ throwing themselves at you.

  2. One issue is that a lot of straight young men lack the experiences that would turn them into, well, it depends on your point of view, but you could say: masculine men, or you could say: grown-ups, adults. Being a grown-up and being masculine / manly are NOT the same, but they have a common opposite: a child, a boy, is NEITHER. And that is what we have: many young men stay children, because their formative experiences are school and videogames, which are not formative at all in this sense. They lack a lot of things, like challenges that require grown-up self-responsibility, or dangerous-feeling things that would make them build courage and confidence.

  3. SOME, not all, women are indeed hypergamous. And SOME, not all, men are polygamous. This basically means that instead of having 100% attention and dedication from one man of lower attractiveness, they would rather have 25% of a very attractive man. Although the "spinning plates, soft harems" the RPers speak about are probably exaggerated bullshit, I do see highly attractive men have a really fast series of hookups and breakups, lots of fast and short mini-relationships that in practice end up with multiple women "orbiting" one man. (I am NOT talking about real, serious "poly" people, who are still a minority; I am talking about people who think they are monogamous, just they end up starting and ending three relationships during one month.)

  4. This distorts the dating "market". As a very broad model, you have the top 50% of women with the top 25% of men, you have the next 25% of women and the next 25% of men having the usual kinds of monogamous relationships, and you have the bottom 50% of men trying to chase the bottom 25% of women, and that bottom 25%, not to be too offensive, these days tends to be... "big". Of course the bottom 50% of men are often not a big catch either: videogaming man-boys without any confidence or adult responsibility. At any rate, the bottom 50% of men often give up, as they don't think having to compete 2:1 for the "big" girls is better than porn (and pot: a powerful combination), and the "big" girls often seek refuge in cats and sugar too.

4/B) To give you a good example of how unequal the dating market is: when people describe sexual relationships as "I don't know, we just got drunk and it happened", very roughly this happens with 80% of women and 40% of men, at best. At least half the guys are not attractive enough for it to "just happen"; many of them won't even get to the point where it could, since they don't get drunk with women or don't go out at all. "Just happened" is a narrative of women and of handsome / attractive / grown-up / masculine / confident men; it is not a universal human one, and for the bottom 50% of men it looks like something out of a sci-fi.

5/A) There is another issue. RPers tend to blame feminism; I guess it is better to blame the imbalanced social adaptation to feminism, but basically the bottom 50% of guys think "well, I am not much as a man, but I can get an engineering degree, hold down a job, make money and could support a family, does that count for something?" and the answer today is "nothing AT ALL", because now almost every woman can make enough money. Even when they complain about making 77% of what men make, or some similar figure, in a first-world society that is still enough to live comfortably without children. And now women rarely want children before 32. In the past, a man could be unattractive, but being a breadwinner helped him a bit in finding a mate; now that is not the case. A degree in software engineering may still increase a man's chances in India, but not in the US, Canada or Germany.

5/B) The point here is that since women can make a living on their own, men should probably adapt by being less of a worker bee and focusing more on their attractiveness. Yet for the bottom 50% of men that is far harder than just getting a software engineering degree and working. Besides, their parents may still push them towards the breadwinner role. And frankly, this all sometimes feels "unnatural", probably because we are trying to undo thousands of years of historical adaptation to social roles, so we are learning really uncharted terrain here. We don't have much historical experience in how to make all men attractive to women of a similar income and social status. Formerly it was not really needed. About half the men figure it out sooner or later, but the other half does not.

Maybe things will get easier if feminism ever fully wins. Currently it is totally confusing, because it won on some things but not on others. Women can now do almost everything men can, yet it seems the most attractive guys are still the ones conforming more or less to traditional concepts like being strong, tall, brave, unfazed / no-fucks-given, and so on.

One good example of how the current kind of feminism-won-halfway-not-fully makes things confusing: Sheryl Sandberg, a highly powerful and successful woman, really a feminist role model, says, "When looking for a life partner, my advice to women is date all of them: the bad boys, the cool boys, the commitment-phobic boys, the crazy boys. But do not marry them." So basically she is saying that although not for marriage, for dating the old, pre-feminist male archetypes, the traditional masculine archetypes, are still ideal! Sheryl is a feminist at work/career but not at dating, though at marriage probably yet again! Of course it confuses a young man who has no idea how to be attractive anymore! Be her bad, cool, crazy, commitment-phobic (i.e. treating women as sex objects only) boy, which is old-fashioned, 1950s-on-a-motorbike, pre-feminist and borderline misogynistic, or be the marriageable nice guy, in which case wait until 35 or so?

RP is wrong, but it is pointing us towards real actual problems that are begging for a better explanation and solution.

Tangentially, how much is it a problem of "dating", and how much a problem of "dating with sane people", when the pool of sane people is already small?

When I was younger, I wanted to have a romantic relationship with a person whom I would perceive as intellectually equal (plus or minus the LessWrong level). Since I barely knew such people... not much luck.

If I could send a message in time back to myself, it would be: "It will take decades until you find someone you can have meaningful conversation with. Meanwhile, relax, and try to fuck any nice body, but don't get attached. Otherwise you will later regret the wasted time." The only problem is, my younger self would be horrified to hear such advice.

I think it makes sense for us to try dating and relationships more, because there may not be as much time and opportunity as we hope later in life.

How do you suggest people actually implement this 'just date more'?

Most of my friends and acquaintances are committed to long-term relationships (mid-late 20s age group). I've had trouble in this area due to certain personal reasons, but my personal observations lead me to believe that I'm atypical in this regard.

It still seems single people aren't seeking or successfully finding relationships at a rate which corresponds well to genuine preferences for a relationship. Why aren't single people trying harder to find relationships?

It's possible they just don't know what they're doing or are paralyzed by anxiety when it comes to romance.

How do young people get into sexual relationships, anyway? I had literally no experience with this in my youth, and not because I spent decades in prison starting around the age of 20 or anything like that. The women I knew as a young man walked around me as a physical object because they couldn't walk through me, but in general they treated me as socially invisible.

How do young people get into sexual relationships, anyway?

I think in general 'it just happens', which generally means alcohol.

Found a five-year-old comment about HPMoR:

I think the biggest problem Yudkowsky will have with this will involve Hermione - A rational and knowledgeable Harry makes her basically redundant. Well, that, and the fact that a good 90% of each book consisted of "Harry screws up repeatedly because he forgot from the last book that he should just always go to Dumbledore first with any new problem"... I don't see this Harry having that same problem.

Heh.

Let's make a top level thread collecting websites that are useful for any purpose. From curetogether.com to pomodoro timers. Also includes download sites of useful software. Eventually this should make it into the wiki.

What would be a good way to do it? Perhaps similar to media threads.

I also know the space I propose to search is ginormous, but the goal is not to make the list exhaustive; the goal is to list LW members' favorite web-based tools / learning materials / software / other useful things on the web. With the hidden hope that we will get a better-quality list than asking the same question on Reddit.

The goal would be to later on migrate it into one non-time-based place e.g. wiki page so that it does not get buried.

A personal request: websites that make procrastination more efficient. Essentially websites that teach you something, but in a way that is not necessarily in-depth, but more like a five-minute article about something important, useful or interesting when you want to kill five minutes before starting the next task.

It doesn't appear this is discussed much, so I thought I'd start a conversation:

Who on LessWrong is uncomfortable with, or dislikes, so much discussion of effective altruism here, and why?

Other Questions:

  • Do you feel there's too much of it now, or would even a little bit of it seem aversive?
  • Do you think such discussion is inappropriate given the implicit or explicit goals of LessWrong?
  • Has too much discussion of effective altruism caused you to think less of LessWrong, or use it less?
  • For what reason(s) do you disagree with effective altruism? Is it because of your values and what you care about, or because you don't like normative pressure to take such strong personal actions? Or something else?

I want to discuss it because whatever proportion of the LessWrong community is averse to, indifferent to, or uninterested in effective altruism doesn't express its opinions much. Also, while I identify with effective altruism, I don't only value this site as a means to altruistic ends, and I don't want other parts of the rationalist community to feel neglected.

Personally, I'm indifferent to EA. It seems to me a result of decompartmentalizing and taking utilitarianism overly seriously. I don't really disagree with it, just not interested. As I've mentioned before, I care about myself, my family, my friends, and maybe some prominent people who don't know me, but whose work makes my life better. I feel for the proverbial African children, but not enough for anything more than a token contribution. If LW had a budget, /r/EA would be a good subreddit, though one of those I would rarely, if ever, visit. As it is, I skip the EA discussions, but I don't find them annoyingly pervasive.

That is exactly my own view. I can see the force of the arguments for EA, but remain unmoved by them. I don't mind it being discussed here, but take little interest in the discussions. I have no arguments against it (although the unfortunate end of George Price is a cautionary tale, a warning of a dragon on the way), and I certainly don't want to persuade anyone to do less good in the world.

It's rather like the Christian call to sainthood. Many are called, but few are chosen.

ETA: I am interested, as a spectator, in seeing how the movement develops.

On my part, it strikes me as the greatest and most important contribution this place has had on my life.

(Disclaimer: My lifetime contribution to MIRI is in the low six digits.)

It appears to me that there are two LessWrongs.

The first is the LessWrong of decision theory. Most of the content in the Sequences contributed to making me sane, but the most valuable part was the focus on decision theory and considering how different processes performed in the prisoner's dilemma. Understanding decision theory is a precondition to solving the friendly AI problem.

The first LessWrong results in serious insights that should be integrated into one's life. In Program Equilibrium in the Prisoner's Dilemma via Löb's Theorem, the authors take a moment to discuss the issue of "Defecting Against CooperateBot": if you know that you are playing against CooperateBot, you should defect. I remember when I first read the paper and the concept just clicked. Of course you should defect against CooperateBot. But this was an insight that I had to be told, and LessWrong is valuable to me because it has helped me internalize game theory. The first year that I took the LessWrong survey, I answered that of course you should cooperate in the one-shot non-shared-source-code prisoner's dilemma. On the latest survey, I instead put the correct answer.
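The "defect against CooperateBot" point can be sketched in a few lines. This is a hedged illustration, not from the paper: the payoff numbers below are my own choice, using only the standard textbook ordering (temptation > reward > punishment > sucker's payoff).

```python
# One-shot Prisoner's Dilemma payoffs for the row player, using the
# standard ordering T > R > P > S (the exact numbers are illustrative
# assumptions, not taken from the paper discussed above).
PAYOFF = {
    ("C", "C"): 3,  # R: reward for mutual cooperation
    ("C", "D"): 0,  # S: sucker's payoff
    ("D", "C"): 5,  # T: temptation to defect
    ("D", "D"): 1,  # P: punishment for mutual defection
}

def best_response(opponent_move):
    """Return the move that maximizes our payoff against a known opponent move."""
    return max(("C", "D"), key=lambda mine: PAYOFF[(mine, opponent_move)])

# CooperateBot always plays "C", so the best response is to defect:
print(best_response("C"))  # -> D
```

Since T > R and P > S, defection is the best response whatever the opponent does; the insight is simply that knowing you face CooperateBot removes any reason to deviate from that.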

The second LessWrong is the LessWrong of utilitarianism, especially of a Singerian sort, which I find to clash with the first LessWrong. My understanding is that Peter Singer argues that because you would ruin your shoes to jump into a creek to save a drowning child, you should incur an equivalent cost to save the life of a child in the third world.

Now never mind that saving the child might have positive expected value to the jumper. We can restate Singer's moral obligation as a prisoner's dilemma, and then we can apply something like TDT to it and make the FairBot version of Singer: I want to incur a fiscal cost to save a child on the other side of the world iff parents on the other side of the world would incur a fiscal cost to save my child. I believe Singer would deny this statement (and would be more aghast at the PrudentBot version), and would insist that there's a moral obligation regardless of the other's theoretical reciprocation.
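The conditional-Singer idea ("cooperate iff the other side would cooperate with you") can be sketched too. The real FairBot resolves the mutual-simulation regress with Löbian proof search; this toy version, my own simplification, instead breaks the regress with a recursion depth limit and an optimistic base case.

```python
# Toy one-shot PD agents. Each takes (opponent, depth) and returns "C" or "D".

def cooperate_bot(opponent, depth):
    return "C"  # cooperates unconditionally

def defect_bot(opponent, depth):
    return "D"  # defects unconditionally

def fair_bot(opponent, depth):
    # Cooperate iff the opponent, simulated at reduced depth, cooperates
    # with us. The depth limit plus optimistic base case stands in for the
    # Löbian proof search of the real FairBot (an assumption of this sketch).
    if depth == 0:
        return "C"
    return "C" if opponent(fair_bot, depth - 1) == "C" else "D"

print(fair_bot(cooperate_bot, 2))  # -> C
print(fair_bot(defect_bot, 2))     # -> D
print(fair_bot(fair_bot, 2))       # -> C  (FairBots cooperate with each other)
```

Note that FairBot still cooperates with CooperateBot; it is PrudentBot, mentioned above, that roughly also requires the opponent to defect against DefectBot, and so defects against unconditional cooperators.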

I notice that I am being asked to be CooperateBot. I don't think CFAR has "Don't be CooperateBot," as a rationality technique, but they should.

Practically, I find that 'altruism' and 'CooperateBot' are synonyms. The question of reciprocity hangs in the background. It must, because Azathoth generates both those who are CooperateBot and those who exploit CooperateBots.

I will also point out that this whole discussion is happening on the website that exists to popularize humanity's greatest collective action problem. Every one of us has a selfish interest in solving the friendly AI problem. And while I am not much of a utilitarian, I would assume that the correct utilitarian charity answer in terms of number of people saved/generated would be MIRI, and that the most straightforward explanation is Hansonian cynicism.

'Altruism' for me doesn't mean "I assign infinite value to my own happiness (and freedom, beauty, etc.) and 0 to others', but everyone would be better off (myself included) if I sacrificed my own happiness for others'. So I'll sacrifice my own happiness for others'." Rather, I assign some value to my own happiness, but a lot more value to others' happiness. I care unconditionally about others' happiness.

Since it's only a Prisoner's Dilemma if I value 'I defect, you cooperate' over 'we both cooperate', for me high-stakes 'defecting' would mean directly indulging in my desire to help others, while 'cooperating' via UDT would mean sacrificing humanity's welfare in some small way in order to keep a non-utilitarian agent from doing even more to reduce humanity's welfare. The structure of the PD has nothing to do with whether the agents are selfish vs. altruistic (as long as you take that into account when initially calculating payoffs).

Thought experiments like Singer's are how I found out that I do in fact terminally value people who are distant from me in space (and time). My behavior isn't perfectly utilitarian, but I'd take a pill to become more so, so my revealed preferences aren't what I'd prefer them to be.

Seeing as, in terms of absolute as well as disposable income, I'm probably closer to being a recipient of donations rather than a giver of them, effective altruism is among those topics that make me feel just a little extra alienated from LessWrong. It's something I know I couldn't participate in, for at least 5 to 7 more years, even if I were so inclined (I expect to live in the next few years on a yearly income between $5000 and $7000, if things go well). Every single penny I get my hands on goes, and will continue to go, strictly towards my own benefit, and in all honesty I couldn't afford anything else. Maybe one day when I'll stop always feeling a few thousand $$ short of a lifestyle I find agreeable, I may reconsider. But for now, all this EA talk does for me is reinforce the impression of LW as a club for rich people in which I feel maybe a bit awkward and not belonging. If you ain't got no money, take yo' broke ass home!

Anyway, the manner in which my own existence relates to goals such as EA is only half the story, probably the more morally dubious half. Disconnected from my personal circumstances, the Effective Altruism movement seems one big mix of good and not-so-good motives and consequences. On the one hand, the fact that there are people dedicated to donating large fractions of their income is a laudable thing in itself. On the other hand...

  • I don't believe for one second that effective altruism would have been nearly as big of a phenomenon on LessWrong, if the owners of LessWrong hadn't been living off people's donations. MIRI is a charity that wants money. Giving to charity is probably the biggest moral credential on LW. Coincidence? I think not.

  • Ensuring the flow of money in a particular direction may not be the very best effort one can put into making the world a better place. Sure, it's something, and at least in the short term a very vital something, but more than anything else it seems to be a way to patch up, or prop up, a part of the system that was shaky to begin with. The long-term end goal should be to make people less reliant on charity money. Sometimes there is a shortage of knowledge, or of power, or of good incentives, rather than of money. "Throwing money at a cause" is just one way to help -- although I suppose effective altruist organizations already incorporate the knowledge of this problem in their concept of "room for more funding".

  • We already have governments that take away a large portion of our incomes anyway, that have systems in place for allocating funds and efforts, and that purport to promote the same kinds of causes as charities, yet often function inefficiently and even harmfully. However, they're a lot more reliable in terms of actually ensuring the collection of "enough" funds. To pay taxes and to give to charity (yes, I'm aware that charitable giving unlocks tax deductions) is to contribute to two systems that are doing the same job, the second being there mostly because the first isn't doing its job as it should. In this way, and possibly assuming that EA would be a larger movement in the future than it is now, charity might work to mask government inefficiencies and damage or to clean up after them.

  • In the context of earning to give, participating in a particularly noxious industry as a way of earning your livelihood, and using part of that money to contribute to altruist causes, is something that looks to me like a tax on the well-being you thus bring into the world. I'm not sure that tax is always smaller than 100%. And it's more difficult to quantify the negative externalities from your job than it is to quantify the positive effects of your donations, because the former are more causally distant.

To take the discussion back to the meta level, I'm but one user with not so much karma and probably a non-central example of a LessWronger, so I don't demand that anyone accommodates me and my preferences not to discuss EA. However, knowing that other users basically come from an effective altruism mindset makes discussion with them somewhat difficult, since we don't have the same assumptions about the relationship between money and welfare. The most annoying of all is the very rare and very occasional display of charitable snobbery, or a commitment not to aid first world people who are not effective altruists, or who don't donate enough. (I've seen that, but Google seems to fail me at this moment.) It seems easier and more pleasant to discuss ethical matters with people who don't come from an EA worldview, and personally I'd like to see more of a plurality of approaches on the matter on LW.

tl;dr It's a rich people thing and therefore alien to me; as for objective merits, I've got mixed positive and negative feelings about it. But in the end, to each their own.

I think that the image of EA on LW has been excessively donation-focused, but I'd like to point out that things like earning to give are only one part of EA.

EA is about having the biggest positive impact that you can have on the world, given your circumstances and personality. If your circumstances mean that you can't donate, or disagree with donations being the best way to do good, that still leaves options like e.g. working directly for some organization (be it a non-profit or for-profit) having a positive impact on the world. Some time back I wrote the following:

Effective altruism says that, if you focus on the right career, you can have an even bigger impact! And the careers don't even need to be exotic, demanding ones that only a select few can do (even if some of them are). Some of the top potential careers that 80,000 Hours has identified so far include things as diverse as being an academic, civil servant, journalist, marketer, politician, or software engineer, among others. Not only that, they also emphasize finding your fit. To have a big impact on the world, you don't need to shoehorn yourself into a role that doesn't suit you and that you hate - in fact, you're explicitly encouraged to find a high-impact career that fits you personally.

Analytic? Maybe consider research, in one form or another. Want to mostly support the cause from the side, not thinking about things too much? Let the existing charity evaluation organizations guide who you donate to and don't worry about the rest. Or help out other effective altruists. People person? Plenty of ways you could have an impact. There's always something you can do - and still be effective. It's not about needing to be superhuman, it's about doing the best that you can, given your personality, talents and interests.

I know this may come across as sociopathically cold and calculating, but given that post-singularity civilisation could be at least thirty orders of magnitude larger than current civilisation, I don't really think short term EA makes sense. I'm surprised that the EA and existential risk efforts seem to be correlated, since logically it seems to me that they should be anti-correlated.

And if the response is that future civilisation is 'far' in the overcoming bias sense, well, so are starving children in Africa.

My brain filters it out automatically. Altruism is not even on my mind AT ALL until I've sorted out my own problems and feel that the life of me and my family is reasonably secure, happy, safe, and going up and up. I don't feel I have any surplus for altruism.

I guess in practice I do altruistic things all the time. People ask me for help, I don't say no. I just don't seek out opportunities to.

My biggest problem with EA is the excessive focus on a specific metric with no consideration of higher order plans or effects. The epitome of naive utilitarianism.

I propose that some major academic organization, such as the American Economic Association, randomly and secretly choose a few members and request that they attempt to get fraudulent work accepted into the highest-ranked journals they can. They would reveal the fraud as soon as an article is accepted. This procedure would give us some idea of how easy it is to engage in fraud, and give journals additional incentives to search for it. For some academic disciplines, the incentives to engage in fraud seem similar to those for illegal performance-enhancing drugs in professional sports, and I wonder if the outcomes are similar.

Every so often someone proposes this (and sometimes someone who thinks they are clever actually carries it out), and it's always a terrible idea. The purpose of peer review is not to uncover fraud. It's not even to make sure what's in the paper is correct. The purpose of peer review is just to make sure what's in the paper is plausible and sane, and worth being presented to a wider audience. The purpose is to weed out obvious low-quality material, such as perpetual motion machines or people passing off others' work as their own. Could you get fraudulent papers accepted in a journal? Of course. A scientist sufficiently knowledgeable about their field could fool almost any arbitrarily rigorous peer review procedure. Does fraud exist in the scientific world? Of course it does. Peer review is just one of the many mechanisms that serve to uncover it. Real review of one's work begins after peer review is over and the work is examined by the scientific community at large.

The purpose of peer review is not to uncover fraud.

And this is OK if the fraud rate is low, and unacceptable if it's high.

Real review of one's work begins after peer review is over and the work is examined by the scientific community at large.

I doubt this happens to more than a tiny fraction of papers, although the more important the result, the more likely it is to get reviewed.

The purpose of peer review is not to uncover fraud.

And this is OK if the fraud rate is low, and unacceptable if it's high.

If a paper shows all its working, a competent reviewer can judge whether the work as reported is good. How will they detect that the report is a fabrication? All the reviewer sees is the story the author is telling. The reviewer may notice inconsistencies, such as repeated use of the same figures, or data with an implausible distribution, but they will generally have no way to compare the story with the actual facts of what happened in the lab.

Detecting and preventing fraud is a good thing, but I don't think peer review is a place where much of it can happen.

(Akrasia, because that's all I ever talk about):

I do not know to whose attention I should bring this so as to combat the problem, so I'm asking here:

http://caejones.livejournal.com/18117.html

I have a stupidly difficult time talking to people, too, especially my parents (who pretty much have to manage all the details, because of course they do). This does not help.

Yes, I've read all the akrasia articles on LessWrong that I can find. Mostly, I'm hoping there's someone better equipped to fix this than me or the internet, and that someone can help me find that entity and extract a solution from them.

(But if that someone happens to post the solution here, first, that'd be nice. Although turning it into an arduous quest through the temple of doom seems like it could only help, assuming no crippling injuries along the way.)

A few thoughts on Mark_Friedenbach's recent departure:

I thought it could be unpacked into two main points. (1) is that Mark is leaving the community. To Mark, or anyone who makes this decision, I think the rational response is, "good luck and best wishes." We are here for reasons, and when those reasons wane, I wouldn't begrudge anyone looking elsewhere or doing other things.

(2) is that the community is in need of growth. My interpretation of this is as follows: the Sequences have not been updated, and yet they are still referenced as source material. I wouldn't mind reading if someone took a crack at a Sequences 2.0, or something completely different. Perhaps something with a more empirical/scientific focus (as opposed to foundational/philosophical), as Mark recommended.

I wouldn't mind reading if someone took a crack at a Sequences 2.0, or something completely different.

One way of looking at the failure mode of Scientology is that they lead with genuinely useful material, which hooks people and establishes them as a credible source of wisdom. They then have a progressive structure that convinces you new epiphanies are just around the corner, you just need to put in a little more effort / time / cash--but there is no epiphany waiting that will be as useful as the original epiphanies.

This happens lots of places. I recall reading about some Alexander Technique expert, who continued doing lessons in the hopes of recapturing the first moment when he experienced lightness in his body. He never could, because the thing that was shocking about the first time was the surprise, not the lightness, and no matter how light he got, he could not become as surprised by it.

The healthy approach is to have a purpose, to pursue a well of knowledge for as long as doing so enhances that purpose, and then to abandon that well of knowledge as soon as it no longer enhances that purpose.

But here we run into the issue that, while rationality may be the common interest of many causes, the "something new" is unlikely to be a specifically rationality thing. It's more likely to be something that some people find interesting and some people find boring, and so the people split into different taskforces to solve different problems. (That is, the Craft and the Community sequence really does anticipate lots of these issues.)