Discuss things here if they don't deserve a post in Main or Discussion.

If a topic is worthy and receives much discussion, make a new thread for it.

Comments


In one of the subthreads on existential risk and the Great Filter, I proposed a possible filter: intelligent species that evolved comparatively early in their planet's lifetime, or on a planet that formed soon after its heavy elements did, would have far more fissionable material available (especially uranium-235), which might make it much easier for them to wipe themselves out in nuclear wars. So we may have escaped the Great Filter in part by evolving late. Thinking about this more, I'm uncertain how important this sort of filtration is. I'm curious whether a) people think this could be a substantial filter, and b) anyone is aware of discussion of it in the literature.

If we had had more fissionable material over the last 100 years, how would that have made nuclear war more likely?

If life had evolved, say, 2 billion years earlier, then there would be about 6 times as much U-235 on the planet, and most uranium ores would be around 3% U-235 rather than 0.7%. This would make nuclear weapons easier to build: obtaining enough uranium would be much simpler, and far less enrichment would be needed. For similar reasons it would also be easier to make plutonium in quantity. Since some enrichment would still be required, bomb-making would remain technically difficult, just less so. And fusion bombs, which are much more effective for a civilization destroying itself, would remain comparatively tough to build even with cheap fissiles.
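A quick back-of-the-envelope check of these figures, using the standard half-lives (U-235 about 704 million years, U-238 about 4.47 billion years) and today's natural abundance of about 0.72% U-235. With these inputs the multiplier comes out nearer 7x than 6x, and the ore fraction nearer 3.7% than 3%, but it's the same ballpark:

```python
# Back-of-the-envelope: uranium isotopics 2 billion years ago,
# computed by running radioactive decay backwards in time.

HALF_LIFE_U235 = 0.704  # billion years
HALF_LIFE_U238 = 4.47   # billion years
U235_TODAY = 0.0072     # present-day natural abundance (~0.72%)
U238_TODAY = 1 - U235_TODAY

def u235_fraction_ago(t_gyr):
    """Fraction of natural uranium that was U-235, t_gyr billion years ago."""
    u235 = U235_TODAY * 2 ** (t_gyr / HALF_LIFE_U235)
    u238 = U238_TODAY * 2 ** (t_gyr / HALF_LIFE_U238)
    return u235 / (u235 + u238)

print(f"U-235 amount multiplier: {2 ** (2.0 / HALF_LIFE_U235):.1f}x")  # 7.2x
print(f"U-235 ore fraction 2 Gyr ago: {u235_fraction_ago(2.0):.1%}")   # 3.7%
```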

There's another reason this may not be that big a filtration event: having more U-235 around makes it much easier to construct nuclear reactors. Fermi's original pile used non-enriched uranium, so one can have a (not very efficient) uranium reactor without much work, and even modern reactors can use non-enriched uranium (though that requires careful design). On a large scale, in such a setting, uranium that is somewhat enriched by our standards would be the norm: functional, useful reactors can be made with as little as 2% U-235, and most uranium there would be closer to 3%. Making nuclear reactors much easier means a much easier source of energy (in fact, on Earth, there is at least one documented case of such a reactor occurring naturally, at Oklo in Gabon, about 1.7 billion years ago). Similar remarks apply to nuclear rockets, which are one of the few plausible ways one can reasonably go about colonizing other planets.

So the two concerns are: a) how much more likely would it be for a civilization to actually wipe itself out in this sort of situation and b) how much is this balanced out by the presence of a cheap energy source and an easier way to leave the planet and go around one's star system with high delta-V?

Perhaps it makes it a little more likely for a civilization to end itself, but it doesn't seem to have the potential to be a great filter. It doesn't seem likely that even a large-scale war with fusion weapons would extinguish a species; and as you point out, there is still quite a barrier to developing fusion weapons even with more plentiful U-235. So far in our history the proliferation of nuclear weapons seems to have discouraged large wars between great powers; in fact, no two great powers have fought each other since Japan's surrender. Granted, this is a pretty small sample of time, but a race without the ability to rationally choose peace probably has little chance regardless of U-235 levels. So if there is a great filter here, with species extinguishing themselves in war, more U-235 makes it only a little greater.

What, exactly, would the increased uranium level do?

  • It doesn't seem to me that it would speed up the development of an atomic bomb much, because you have to have the idea in the first place; and in our timeline the atomic bomb followed the idea very quickly (what was it, 6 years?). The lower concentration no doubt slowed things by a few months to perhaps five years, but the histories I've read didn't point to enrichment as the bottleneck so much as conceptual issues (how much material do you need? how do the explosive lenses work? etc.)

    Nor do I see how it might speed up the general development of physics and the study of radioactivity; if Marie Curie was willing to go through tons of pitchblende to get a minute amount of radium, then uranium clearly was nowhere on her radar. Going from 0.7% to 3% won't suddenly make a Curie study uranium ore instead.

    The one such path would be discovering a natural uranium reactor, but how big a window is there where scientists could discover a reactor and speed up development of nuclear physics? I mean, if a scientist in the 1700s had discovered a uranium reactor, would he be able to do anything about it? Or would it just remain a curiosity, something like the Greeks and magnets?

  • Nuclear proliferation is not constrained by the ability to refine ore, but by politics; South Africa, South Korea, Libya, and Iraq didn't abandon their nukes or programs because it was costing them 6x as much to refine uranium.
  • Nukes wouldn't become much more effective; nukes are so colossally expensive that their yields are set according to function and accuracy of targeting. (The poorer your targeting, like Russia, the bigger your yields will be to compensate.)

I'm thinking maybe we should try to pool all LW's practical advice somewhere. Perhaps a new topic in Discussion, where you post a top-level comment like "Will n-backing make me significantly smarter?", and people can reply with 50% confidence intervals. Then we combine all the opinions to get the LW hivemind's opinions on various topics. Thoughts?
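One simple way the pooling step could work, sketched here with invented numbers; taking the median of the lower and upper bounds across respondents is just one possible aggregation rule, not an established LW mechanism:

```python
# Sketch: pool several users' 50% confidence intervals on a question
# by taking the median of the lower bounds and of the upper bounds.
# The question and the intervals below are made up for illustration.

from statistics import median

# (lower, upper) 50% intervals for, say, "IQ points gained from n-backing"
intervals = [(0, 3), (1, 5), (-1, 2), (0, 4)]

pooled = (median(lo for lo, hi in intervals),
          median(hi for lo, hi in intervals))
print(pooled)  # (0.0, 3.5)
```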

PS. Sorry for taking up the 'Recent Comments' sidebar, I don't have internet on my own computer so I have to type my comments up elsewhere and post them all at once.

When writing a comment on LessWrong, I often know exactly which criticisms people will give. I will have thought through those criticisms and checked that they're not valid, but I won't be able to answer them all in my post, because that would make it so long that no-one would read it. It seems like I've got to let people criticise me and then shoot them down. This seems awfully inefficient; it's as if the purpose of having a discussion, rather than simply writing a long post, is just to trick people into reading it.

I suppose if you have an external blog, you can simply summarize the potential criticisms on your LW post and link to a further discussion of them elsewhere. Or you can structure your post such that it discusses them at the very end:

======

Optional reading:

In this way you get your point across first, while those interested can continue on to the detailed analysis.

Briefly summarize expected objections and write whatever you want to write about them in a comment to your comment.

I just finished reading Steven Pinker's new book, The Better Angels of Our Nature: Why Violence Has Declined. It's really good, as in, maybe the best book I've read this year. Time and again I was shocked to find it treating subjects of keen interest to LW, or reading like Pinker had taken some of my essays but done them way better (on terrorism, on the expanding circle, etc.); even so, I was surprised to learn new things (resource problems don't correlate well with violence?).

I initially thought I might excerpt some parts of it for a Discussion or Article, but as the quotes kept piling up, I realized that it was hopeless. Reading reviews or discussions of it is not enough; Pinker just covers too much and rebuts too many possible criticisms. It's very long, as a result, but absorbing.

There was a recent LW discussion post about the phenomenon where people presented with evidence against their position end up believing their original position more strongly. The article had experimentally found at least one way that might solve this problem, so that people presented with evidence against their position actually update correctly. Does somebody know which discussion post I'm talking about? I'm not finding it.

How to cryonics?

And please forgive me if this is a RTFM kind of thing.

I've been reading LW for a time, so I've been frequently exposed to the idea of cryonics. I usually push it to the back of my mind: I'm extremely pessimistic about the odds of being revived, and I'm still young, after all. But I realize this is probably me avoiding a terrible subject rather than an honest attempt to decide. So I've decided to at least figure out what getting frozen would entail.

Is there a practical primer on this? For example: I'm only now entering grad school, and obviously couldn't afford the full cost. But being at very low risk of death, I feel I should be able to leverage a low-cost insurance policy into covering such a scenario.

There's a room open in one of the Berkeley rationalist houses, http://sfbay.craigslist.org/eby/sub/2678656916.html

Reply via the ad if you are interested for more details!

For LifeHacking--instrumental rational skills--does anyone have experience getting lightweight professional advice? E.g., for clothing, hire a personal stylist to pick out some good-looking outfits for you to buy. No GQ fashion-victimhood, just some practical suggestions so that you can spend the time re-reading Pearl's Causality instead of Vogue.

The same approach--simple one-time professional advice, could apply to a variety of skills.

If anyone has tried this sort of thing, I'll be glad to learn your experience.

I don't know much about machine learning, but wouldn't it be possible to use it to have a machine optimize your diet, exercise, sleep patterns, behaviour, etc.? Perhaps it generates a list of proposed daily routines; you follow one and report back some stats about yourself, like weight, blood pressure, mood, digit span, etc.. It then uses these to figure out which parts of which daily routines do what. If it suspects eating cinnamon decreases your blood pressure, it makes you eat cinnamon so you can tell it whether that worked. The algorithm could optimize diet, exercise, mental exercises, even choose what books you read.

Basically, I'm saying why don't we try something like what Piotr Wozniak did with the SRS algorithms, except instead of optimizing memorization we optimize everything. We do what the people at QS do, except we delegate interpretation of the data to a computer.

Like I said, I don't know much about machine learning, but even the techniques I /do/ know, evolutionary algorithms and neural nets, seem like they could be used for this, and are certainly worth our time trying.
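A minimal sketch of one way the idea above could start: treat candidate daily routines as arms of a multi-armed bandit and pick between them epsilon-greedily. The routine names and the reward signal (a daily self-reported score) are invented for illustration; a real system would need far richer modelling:

```python
# Epsilon-greedy bandit over daily routines: mostly follow the routine
# with the best running average score, occasionally try a random one.

import random

routines = ["cinnamon+cycling", "early-sleep", "fasting+reading"]  # placeholders
stats = {r: {"n": 0, "mean": 0.0} for r in routines}

def pick_routine(eps=0.2):
    """Explore with probability eps (or if nothing tried yet), else exploit."""
    if random.random() < eps or all(s["n"] == 0 for s in stats.values()):
        return random.choice(routines)
    return max(routines, key=lambda r: stats[r]["mean"])

def report(routine, reward):
    """Fold today's self-reported score into the routine's running mean."""
    s = stats[routine]
    s["n"] += 1
    s["mean"] += (reward - s["mean"]) / s["n"]

# usage: each day, follow pick_routine(), then call report() with your score
r = pick_routine()
report(r, reward=7.0)
```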

Sounds like it could work, especially if it uses a database of all users, so that users most similar to you also give an indication of what might or might not work for you.

"I am [demographic and psychological parameters] and would like to [specific goal - mood, weight, memoty, knowledge] in the coming [time period]; what would work best?"

Sounds like an interesting project, I'll have to think about it.

Anyone have anything to share in the way of good lifehacks? Even if it only works for you, I would very much like to hear about it. Here are two I've been using with much success lately:

  • Get an indoor cycle or a treadmill and exercise while working on a laptop. At first I just used to cycle while watching movies on TV, but lately I've stopped watching movies and just cycle while doing SRS reps or reading ebooks. Set up your laptop with its power cable and headphones on the cycle, and leave them there always. If you're too tired to cycle, just sit on the cycling machine without cycling. The past few days I've been cycling upwards of 4 hours cumulatively per day, and I feel AWESOME. It also seems to help me get to sleep at the proper time. I would cycle 4 hours a day just for the sleep benefit.

  • The part of my brain that loses to akrasia seems incredibly stupid, whereas my long-term planning modules are relatively smart. I've been trying to take advantage of this by a campaign of active /warfare/ against the akrasia-prone part of me. For instance, I have deleted all the utilities on my laptop needed for networking. I can no longer browse the internet without borrowing someone else's computer, as I am doing now. I also can't get those networking utilities back because for that I need internet. I also destroyed both Ubuntu live-CDs I had, because I can get to the internet through those. Thus far, my willpower has thrice failed me, and each time I have tried to get internet back, and each time I have failed. I count this as a win. The principle is more general, of course: only buy healthy food, literally throw away your television, delete all your computer games, etc.. The first few days without some usual sort of distraction are always painful; I feel depressed and bored of life. But that soon clears up, and my expected-pleasurable-distraction setpoint seems to lower. This is like a way of converting fleeting motivation into long-term motivation.

Sounds awesome, where did you first hear of this?

The current phase of the contest will end December 18th at 11:59pm EST. At that time submissions will be closed. Shortly thereafter the final tournament will be started. The length of the final tournament has not yet been determined, but it is expected to last less than one week. Upon completion the contest winner will be announced and all results will be publicly available.

Anyone interested in starting a team for this?

Gogo LessWrong team! The experience and the potential publicity will be excellent.

I'll chip in with a prize to the amount of ($1000 / team's rank in the final contest), donated to the party of your choice. Team must be identified as "LessWrong" or suchlike to be eligible.

So, anime is recognized as one of the LW cultural characteristics (if only because of Eliezer) and has come up occasionally, e.g. http://lesswrong.com/lw/84b/things_you_are_supposed_to_like/

Is this arbitrary? Or is there really something better for geeks about anime vs other forms of pop culture? I have an essay arguing that due to various factors anime has the dual advantages of being more complex and also more novel (from being foreign). I'd be interested in what other LWers have to say.

Neil deGrasse Tyson is answering questions at reddit:

What are your thoughts on cryogenic preservation and the idea of medically treating aging?

neiltyson 737 points 5 hours ago

A marvelous way to just convince people to give you money. Offer to freeze them for later. I'd have more confidence if we had previously managed to pull this off with other mammals. Until then I see it as a waste of money. I'd rather enjoy the money, and then be buried, offering my body back to the flora and fauna of which I have dined my whole life.

Does anyone else have a weird stroke of cognitive dissonance when a trusted source places a low probability on a subject you have placed a high probability on?

I have never heard of this person before, but if they actually think "offering my body back to the flora and fauna of which I have dined my whole life" is worth mentioning, it sounds like they're a victim of naturalistic bias.

I'm having trouble deciding how to weight the preferences of my experiencing self versus the preferences of my remembering self. What do you do?

What would you suggest someone to read if you were trying to explain to them that souls don't exist and that a person is their brain? I vaguely remember reading something of Eliezer's on this topic and someone said they would read some articles if I sent them to them. Would it just be the Free Will sequence?

I am currently in an undergrad American university. After lurking on LW for many months, I have been persuaded that the best way for me to contribute towards a positive Singularity is to utilize my comparative advantage (critical reading/writing) to pursue a high-paying career; a significant percentage of the money I earn from this undecided lucrative career will hopefully go towards SIAI or some other organization that is helping to advance the same goals.

The problem is finding the right career that is simultaneously well-paying and achievable, with hopefully some time for my own interests/hobbies.

I was first considering becoming a lawyer, but apparently, only the very top law school graduates actually go on to earn jobs with high salaries. In addition, it seems that the first few years being a lawyer are extremely stressful.

Another option is graduate school. The main academic fields I am interested in are government, economics, and philosophy. However, I'm just not sure that graduate school will lead to other careers besides being a professor, and I don't know if academia is frankly well-paying enough to justify the costs.

Any advice is appreciated, particularly if you have a similar dilemma, have encountered something like this in the past, are in a field that I mentioned, or just if you have any specific information that might help me. Thanks!

I am going to say that academia (in the humanities) is not a good choice if you want to make money, or even be guaranteed a job. Professorial jobs are moving away from tenure-track positions, and towards part-time positions. There are very few professorial jobs and very many people (who are all "the best") who want them.

Boring Data

Or put in more understandable terms: 100 Reasons Not to go into Academia

Or for amusement's sake: PhD Comics