Open Thread, May 5 - 11, 2014

Previous Open Thread

You know the drill - If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

 

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one.

3. Open Threads should start on Monday, and end on Sunday.

4. Open Threads should be posted in Discussion, and not Main.

Comments


Is Less Wrong dying?

Some observations...

  • The top level posts are generally well below the quality of early material, including the sequences, in my estimation.
  • 'Main' posts are rarely even vaguely interesting to me anymore.
  • 'Top Contributors' karma values seem very low compared to what I remember them being ~9-12 months ago.
  • 'Discussion' posts are littered with Meetup reminders.

About all I look at on LW anymore is the Open Discussion Thread, Rationality Quotes, and the link to Slate Star Codex. CFAR's and MIRI's websites gave me the impression that they were getting more traction and perhaps making some money.

Has LW run its course?

I think it's a little early to predict the end, but there's less I'm interested in here, and I'm having trouble thinking of things to write about, though I can still find worthwhile links for open threads.

Is LW being hit by some sort of social problem, or have we simply run out of things to say?

I'd add "Metacontrarianism is on the rise" to your list. Many of the top posts now are contrary to at least the spirit of the sequences, if not the letter, or so it feels to me.

Maybe it's because the important things have started and moved to real life, outside of the LW website. There are people writing and publishing papers on Friendly AI, there are people researching and teaching rationality exercises; there are meetups in many countries. -- Although, if this is true, I would expect more reports here about what happens in real life. (Remember the fundamental rule of bureaucracy: if it ain't documented, it didn't happen.)

Anyway, this is only a guess; it would be interesting to really know what's happening...

Has LW run its course?

It seems to be a common sentiment, actually. I mentioned this a few times on #lesswrong and the regulars there appear to agree. Whether this is some sort of confirmation bias, I am not sure. Fortunately, there is a way to measure it:

Count interesting articles from each period and compare the numbers.

I blame Facebook. Many of the discussions that are had there were of the type that used to invigorate these here boards.

I would say LW is evolving.

The Sequences are and always were the finger that points at the objective, not the objective unto itself. The project of LW is "refining the art of human rationality." But we don't have the definition of human rationality written on stone tablets, needing only diligence in application to obtain good results. The project of LW is thus a dynamic process of discovery, experimentation, incorporating new data, and sometimes backtracking when we update on evidence that isn't as solid as we had thought.

You correctly observe that the style of participation has changed over time. This is probably mostly the result of certain specific high volume contributors moving on to other things. It could also be the result of an aggregated shift in understanding as to what kinds of results can actually be produced by discussing rationality in a vacuum, which may perhaps be why these contributors have moved on. Or maybe they just said all they felt they needed to say, I don't know. I have a 101.1 F fever right now.

As per issue #389, I've just pushed a change to meetups. All future meetup posts will be created in /r/meetups to un-clutter /r/discussion a little bit.

Below is an edited version of an email I prepared for someone about what CS researchers can do to improve our AGI outcomes in expectation. It was substantive enough I figured I might as well paste it somewhere online, too.

I'm currently building a list of what will eventually be short proposals for several hundred PhD theses / long papers that I think would help clarify our situation with respect to getting good outcomes from AGI, if I could persuade good researchers to research and write them. A couple dozen of these are in computer science broadly: the others are in economics, history, etc. I'll write out a few of the proposals as 3-5 page project summaries, and the rest I'll just leave as two-sentence descriptions until somebody promising contacts me and tells me they want to do it and want more detail. I think of these as "superintelligence strategy" research projects, similar to the kind of work FHI typically does on AGI. Most of these projects wouldn't only be interesting to people interested in superintelligence, e.g. a study building on these results on technological forecasting would be interesting to lots of people, not just those who want to use the results to gain a bit of insight into superintelligence.

Then there's also the question of "How do we design a high assurance AGI which would pass a rigorous certification process à la the one used for autopilot software and other safety-critical software systems?"

There, too, MIRI has lots of ideas for plausibly useful work that could be done today, but of course it's hard to predict this far in advance which particular lines of research will pay off. But then, this is almost always the case for long-time-horizon theoretical research, and e.g. applying HoTT to program verification sure seems more likely to help our chances of positive AGI outcomes than, say, research on genetic algorithms for machine vision.

I'll be fairly inclusive in listing these open problems. Many of the problems below aren't necessarily typical CS work, but they could plausibly be published in some normal CS venues, e.g. surveys of CS people are sometimes published in CS journals or conferences, even if they aren't really "CS research" in the usual sense.

First up are 'superintelligence strategy' aka 'clarify our situation w.r.t. getting good AGI outcomes eventually' projects:

  • More and larger expert surveys on AGI timelines, takeoff speed, and likely social impacts, besides the one reported in the first chapter of Superintelligence (which isn't yet published).

  • Delphi study of those questions including AI/ML people, AGI people, and AI safety+security people.

  • How big is the field of AI currently? How many quality-adjusted researcher years, how much funding, and how much available computing per year? How many during each previous decade in AI? More here.

  • What is the current state of AI safety engineering? What can and can't we do? Summary and comparison of approaches in formal verification in AI, hybrid systems control, etc. Right now there are a bunch of different communities doing AI safety and they barely talk to each other, so it's hard for any one person to figure out what's going on in general. Also would be nice to know which techniques are being used where, especially in proprietary and military systems for which there aren't any papers.

  • Surveys of AI subfield experts on “What percentage of the way to human-level performance in your subfield have we come in the last n years”? More here.
  • Improved analysis of the concept of general intelligence, beyond “efficient cross-domain optimization.” Maybe just more specific: canonical environments, etc. Also see work on formal measures of general intelligence by Legg, by Hernandez-Orallo, etc.
  • Continue Katja’s project on past algorithmic improvement. Filter not for ease of data collection but for real-world importance of the algorithm. Interesting to computer scientists in general, but also potentially relevant to arguments about AI takeoff dynamics.
  • What software projects does the government tend to monitor? Do they ever “take over” (nationalize) software projects? What kinds of software projects do they invade and destroy?
  • Are there examples of narrow AI “takeoff”? Eurisko is maybe the closest thing I can think of, but the details aren't clear because Lenat's descriptions were ambiguous and we don't have the source code.

  • Cryptographic boxes for untrusted AI programs.

  • AI approaches differ in how transparent they are to human understanding/inspection. How well does each AI approach's transparency to human inspection scale? More here.
  • Can computational complexity theory place any bounds on AI takeoff? Daniel Dewey is looking into this; it currently doesn't look promising but maybe somebody else would find something a bit informative.
  • To get an AGI to respect the values of multiple humans & groups, we may need significant progress in computational social choice, e.g. fair division theory and voting theory. More here.
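
To make that last bullet concrete, here is a minimal sketch of one classic voting rule from computational social choice (Borda count). The rule is standard, but the ballots and candidate names are made up for illustration, and this is nothing like a full solution to aggregating human values:

    from collections import defaultdict

    def borda(ballots):
        # Each ballot ranks all candidates, best first; a candidate earns
        # (n - 1 - position) points per ballot, and the highest total wins.
        scores = defaultdict(int)
        n = len(ballots[0])
        for ballot in ballots:
            for position, candidate in enumerate(ballot):
                scores[candidate] += n - 1 - position
        return max(scores, key=scores.get), dict(scores)

    # Three hypothetical stakeholders ranking three policy options:
    ballots = [["A", "B", "C"], ["B", "A", "C"], ["B", "C", "A"]]
    print(borda(ballots))  # ('B', {'A': 3, 'B': 5, 'C': 1})

The open research questions are about which such rules have good axiomatic and computational properties at scale, not about implementing any single rule.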

Next, high assurance AGI projects that might be publishable in some CS conferences/journals. One way to categorize this stuff is into "bottom-up research" and "top-down research."

Bottom-up research aimed at high assurance AGI simply builds on current AI safety/security approaches, pushing them along to be more powerful, more broadly applicable, more computationally tractable, easier to use, etc. This work isn't necessarily focused on AGI specifically but is plausibly pushing in a more safe-AGI-helpful direction than most AI research is. Examples:

To be continued...

Continued...

Top-down research aimed at high assurance AGI tries to envision what we'll need a high assurance AGI to do, and starts playing with toy models to see if they can help us build up insights into the general problem, even if we don't know what an actual AGI implementation will look like. Past examples of top-down research of this sort in computer science more generally include:

  • Lampson's original paper on the confinement problem (covert channels), which used abstract models to describe a problem that wasn't detected in the wild for ~2 decades after he wrote the paper. Nevertheless, this gave computer security researchers a head start on the problem, and the covert channel communication field is now pretty big and active. Details here.
  • Shor's quantum algorithm for integer factorization (1994) showed, several decades before we're likely to get a large-scale quantum computer, that (e.g.) the NSA could be capturing and storing strongly encrypted communications and could later break them with a QC. So if you want to guarantee your current communications will remain private in the future, you'll want to work on post-quantum cryptography and use it.
  • Hutter's AIXI is the first fully-specified model of "universal" intelligence. It's incomputable, but there are computable variants, and indeed tractable variants that can play arcade games successfully. The nice thing about AIXI is that you can use it to concretely illustrate certain AGI safety problems we don't yet know how to solve even with infinite computing power, which means we must be very confused indeed. Not all AGI safety problems will be solved by first finding an incomputable solution, but that is one common way to make progress. I say more about this in a forthcoming paper with Bill Hibbard to be published in CACM.
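
For reference, AIXI's action-selection rule can be written out explicitly (my transcription of Hutter's standard formulation, where U is a universal Turing machine, q ranges over environment programs, the a/o/r are actions, observations, and rewards, m is the horizon, and \ell(q) is the length of program q):

    a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \left[ r_k + \cdots + r_m \right] \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

The incomputability comes from the innermost sum over all programs q; the tractable variants mentioned above replace that Solomonoff-style mixture with something samplable (e.g. Monte Carlo approximations over a restricted model class).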

But now, here are some top-down research problems MIRI thinks might pay off later for AGI safety outcomes, some of which are within or on the borders of computer science:

  • Naturalized induction: "Build an algorithm for producing accurate generalizations and predictions from data sets, that treats itself, its data inputs, and its hypothesis outputs as reducible to its physical posits. More broadly, design a workable reasoning method that allows the reasoner to treat itself as fully embedded in the world it's reasoning about." (Agents built with the agent-environment framework are effectively Cartesian dualists, which has safety implications.)
  • Better AI cooperation: How can we get powerful agents to cooperate with each other where feasible? One line of research on this is called "program equilibrium": in a setup where agents can read each other's source code, they can recognize each other for cooperation more often than would be the case in a standard Prisoner's Dilemma. However, these approaches were brittle, and agents couldn't recognize each other for cooperation if e.g. a variable name was different between them (see the sketch after this list). We got around that problem via provability logic.
  • Tiling agents: Like Bolander and others, we study self-reflection in computational agents, though for us it's because we're thinking ahead to the point when we've got AGIs who want to improve their own abilities, and we want to make sure they retain their original purposes as they rewrite their own code. We've built some toy models for this, and they run into nicely crisp Gödelian difficulties; then we throw a bunch of math at those difficulties and in some cases they kind of go away, and we hope this'll lead to insight into the general challenge of self-reflective agents that don't change their goals on self-modification round #412. See also the procrastination paradox and Fallenstein's monster.
  • Ontological crises in AI value systems.
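
To illustrate the brittleness mentioned under "better AI cooperation" above, here is a toy sketch (my own illustration, not MIRI's provability-logic construction): an agent that recognizes cooperation partners by exact source-code matching cooperates with a copy of itself but defects against a trivially renamed copy:

    import inspect

    def clique_bot(opponent_source):
        # Cooperate only if the opponent's source is textually identical to mine.
        my_source = inspect.getsource(clique_bot)
        return "C" if opponent_source == my_source else "D"

    def clique_bot_renamed(opp_src):
        # Exactly the same strategy, but the name and parameter differ.
        my_source = inspect.getsource(clique_bot_renamed)
        return "C" if opp_src == my_source else "D"

    twin = inspect.getsource(clique_bot)
    stranger = inspect.getsource(clique_bot_renamed)
    print(clique_bot(twin), clique_bot(stranger))  # C D: a rename breaks cooperation

The provability-logic agents instead cooperate whenever they can prove the opponent cooperates back, which survives renaming and other syntactic differences.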

These are just a few examples: there are lots more. We aren't happy yet with our descriptions of any of these problems, and we're working with various people to explain ourselves better, and make it easier for people to understand what we're talking about and why we're working on these problems and not others. But nevertheless some people seem to grok what we're doing, e.g. I pointed Nik Weaver to the tiling agents paper stuff and despite not having past familiarity with MIRI he just ran with it.

Here's a comment that I posted in a discussion on Eliezer's FB wall a few days back but didn't receive much of a response there; maybe it'll prompt more discussion here:

--

So this reminds me, I've been thinking for a while that VNM utility might be a hopelessly flawed framework for thinking about human value, but I've had difficulties putting this intuition in words. I'm also pretty unfamiliar with the existing literature around VNM utility, so maybe there is already a standard answer to the problem that I've been thinking about. If so, I'd appreciate a pointer to it. But the theory described in the linked paper seems (based on a quick skim) like it's roughly in the same direction as my thoughts, so maybe there's something to them.

Here's my stab at trying to describe what I've been thinking: VNM utility implicitly assumes an agent with "self-contained" preferences who is trying to maximize the satisfaction of those preferences. By self-contained, I mean that they are not a function of the environment, though they can and do take inputs from the environment. So an agent could certainly have a preference that made him e.g. want to acquire more money if he had less than $5000, and which made him indifferent to money if he had more than that. But this preference would be conceptualized as something internal to the agent, and essentially unchanging.

That doesn't seem to be how human preferences actually work. For example, suppose that John Doe is currently indifferent between whether to study in college A or college B, so he flips a coin to choose. Unbeknownst to him, if he goes to college A he'll end up doing things together with guy A until they fall in love and get monogamously married; if he goes to college B he'll end up doing things with gal B until they fall in love and get monogamously married. It doesn't seem sensible to ask which choice better satisfies his romantic preferences as they are at the time of the coin flip. Rather, the preference for either person develops as a result of their shared life-histories, and both are equally good in terms of intrinsic preference towards someone (though of course one of them could be better or worse at helping John achieve some other set of preferences).

More generally, rather than having stable goal-oriented preferences, it feels like we acquire different goals as a result of being in different environments: these goals may persist for an extended time, or be entirely transient and vanish as soon as we've left the environment.

As another example, my preference for "what do I want to do with my life" feels like it has changed at least three times today alone: I started the morning with a fiction-writing inspiration that had carried over from the previous day, so I wished that I could spend my life being a fiction writer; then I read some e-mails on a mailing list devoted to educational games and was reminded of how neat such a career might be; and now this post made me think of how interesting and valuable all the FAI philosophy stuff is, and right now I feel like I'd want to just do that. I don't think that I have any stable preference with regard to this question: rather, I could be happy in any career path as long as there were enough influences in my environment that continued to push me towards that career.

It's as Brian Tomasik wrote at http://reducing-suffering.blogspot.fi/2010/04/salience-and-motivation.html :

There are a few basic life activities (eating, sleeping, etc.) that cannot be ignored and have to be maintained to some degree in order to function. Beyond these, however, it's remarkable how much variation is possible in what people care about and spend their time thinking about. Merely reflecting upon my own life, I can see how vastly the kinds of things I find interesting and important have changed. Some topics that used to matter so much to me are now essentially irrelevant except as whimsical amusements, while others that I had never even considered are now my top priorities.

The scary thing is just how easily and imperceptibly these sorts of shifts can happen. I've been amazed to observe how much small, seemingly trivial cues build up to have an enormous impact on the direction of one's concerns. The types of conversations I overhear, blog entries and papers and emails I read, people I interact with, and visual cues I see in my environment tend basically to determine what I think about during the day and, over the long run, what I spend my time and efforts doing. One can maintain a stated claim that "X is what I find overridingly important," but as a practical matter, it's nearly impossible to avoid the subtle influences of minor day-to-day cues that can distract from such ideals.

If this is the case, then it feels like trying to maximize preference satisfaction is an incoherent idea in the first place. If I'm put in environment A, I will have one set of goals; if I'm put in environment B, I will have another set of goals. There might not be any way of constructing a coherent utility function so that we could compare the utility that we obtain from being put in environment A versus environment B, since our goals and preferences can be completely path- and environment-dependent. Extrapolated meta-preferences don't seem to solve this either, because there seems to be no reason to assume that they'd be any more stable or self-contained.

I don't know what we could use in place of VNM utility, though. At the least, it feels like the alternate formalism should include the agent's environment/life history in determining its preferences.

I also have lots of objections to using VNM utility to model human preferences. (A comment on your example: if you conceive of an agent as accruing value and making decisions over time, to meaningfully apply the VNM framework you need to think of their preferences as being over world-histories, not over world-states, and of their actions as being plans for the rest of time rather than point actions.) I might write a post about this if there's enough interest.

I've always thought of it as preferences over world-histories and I don't see any problem with that. I'd be interested in the post if it covers a problem with that formulation.

Robin Hanson writes about rank linear utility. This formalism asserts that we value options by their rank in a list of options available at any one time, making it impossible to construct a coherent classical utility function.
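
Here is a toy illustration of why that breaks classical utility theory (my own sketch; the linear-in-rank weighting is just the simplest assumption). Any menu-independent utility function would have to assign "B" a single fixed number, but its rank-based value changes when "C" enters the choice set:

    def rank_value(option, menu, quality_order):
        # Value an option by its rank among the currently available options
        # (rank 0 = best gets the highest value).
        ranked = sorted(menu, key=quality_order.index)
        return len(menu) - 1 - ranked.index(option)

    quality_order = ["A", "B", "C"]  # a fixed underlying quality ordering
    print(rank_value("B", ["A", "B"], quality_order))       # 0
    print(rank_value("B", ["A", "B", "C"], quality_order))  # 1: value depends on the menu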

I recently saw an advertisement which was such a concentrated piece of antirationality I had to share it here. Imagine a poster showing a man's head and shoulders gazing inspiredly past the viewer into the distance, rendered in posterised red, white, and black with a sort of socialist realism flavour. The words: "No Odds Too Long. No Dream Too Great. The Believer."

If that was all, it would just be a piece of inspirational nonsense. But what was it advertising?

Ladbrokes. A UK chain of betting shops.

There is a lot of interest in prediction markets in the Less Wrong community. However, the prediction markets that we have are currently only available in meatspace, they have very low volume, and the rules are not ideal (you cannot leave positions by selling your shares, and only the column with the final outcome contributes to your score).

I was wondering if there would be interest in a prediction market linked to Less Wrong accounts? The idea is that we use essentially the same structure as Intrade / Ipredict. We use play money - this can either be Karma or a new "currency" where everyone is assigned the same starting value. If we use a currency other than Karma, your balance would be publicly linked to your account, as an indicator of your predictive skills.

Perhaps participants would have to reach a specified level of Karma before they are allowed to participate, to avoid users setting up puppet accounts to transfer points to their actual accounts.

I think such a prediction market would act as a tax on bullshit, it would help aggregate information, it would help us identify the best predictors in the community, and it would be a lot of fun.
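
For concreteness, here is a minimal sketch of one standard automated market maker (Hanson's logarithmic market scoring rule) that such a play-money market could run on. The two-outcome setup and the liquidity parameter b are illustrative assumptions, not part of the proposal above:

    import math

    def lmsr_cost(quantities, b=100.0):
        # LMSR cost function: C(q) = b * log(sum_i exp(q_i / b)).
        return b * math.log(sum(math.exp(q / b) for q in quantities))

    def price(quantities, i, b=100.0):
        # Instantaneous price of outcome i; doubles as a probability estimate.
        total = sum(math.exp(q / b) for q in quantities)
        return math.exp(quantities[i] / b) / total

    def cost_to_trade(quantities, i, shares, b=100.0):
        # Play-money cost of buying (or, if shares is negative, selling) outcome i.
        new = list(quantities)
        new[i] += shares
        return lmsr_cost(new, b) - lmsr_cost(quantities, b)

    q = [0.0, 0.0]                  # a fresh two-outcome market
    print(price(q, 0))              # 0.5
    print(cost_to_trade(q, 0, 50))  # ~28.1 points for 50 "yes" shares

Because the market maker always quotes a price, traders can exit positions at any time by selling shares back, which would address one of the rule complaints above.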

Why would LWers use such a prediction market more than PredictionBook?

According to the principle of enlightened self-interest, you should help other people because this will help you in the long run. I've seen it argued that this is the reason why people have an instinct to help others. I don't think that this would mean helping people the way an Effective Altruist would. It would mean giving the way people instinctually do. You give gifts to friends, give to your community, give to children's hospitals, that sort of thing.

This makes me wonder about what I'm calling enlightened altruism. If you get power from helping people in that way, then you can use the power to help people effectively.

Five biotypes of depression

The five defined depression biotypes are:

Undermethylated Depression: This type was found in 38 percent of the patients studied. “It’s not serotonin deficiency, but an inability to keep serotonin in the synapse long enough. Most of these patients report excellent response to SSRI antidepressants, although they may experience nasty side effects,” Walsh said.

Pyrrole Depression: This type was found in 17 percent of the patients studied, and most of these patients also said that SSRI antidepressants helped them. These patients exhibited a combination of impaired serotonin production and extreme oxidative stress.

Copper Overload: Accounting for 15 percent of cases in the study, these patients cannot properly metabolize metals. Most of these people say that SSRIs do not have much of an effect—positive or negative—on them, but they report benefits from normalizing their copper levels through nutrient therapy. Most of these patients are women who are also estrogen intolerant.

“For them, it’s not a serotonin issue, but extreme blood and brain levels of copper that result in dopamine deficiency and norepinephrine overload,” Walsh explained. “This may be the primary cause of postpartum depression.”

Low-Folate Depression: These patients account for 20 percent of the cases studied, and many of them say that SSRIs worsened their symptoms, while folic acid and vitamin B12 supplements helped. Benzodiazepine medications may also help people with low-folate depression.

Walsh said that a study of 50 school shootings over the past five decades showed that most shooters probably had this type of depression, as SSRIs can cause suicidal or homicidal ideation in these patients.

Toxic Depression: This type of depression is caused by toxic-metal overload—usually lead poisoning. Over the years, this type accounted for 5 percent of depressed patients, but removing lead from gasoline and paint has lowered the frequency of these cases.

Those people ranting about anti-depressants and school shootings may have been partially on to something.

Recently I've been trying to catch up in math, with a goal of trying to get to calculus as soon as possible. (I want to study Data Science, and calculus / linear algebra seems to be necessary for that kind of study.) I found someone on LW who agreed to provide me with some deadlines, minor incentives, and help if I need it (similar to this proposal), although I'm not sure how well such a setup will end up working.

Originally the plan was that I'd study the Art of Problem Solving Intermediate Algebra book, but I found that many of the concepts were a little advanced for me, so I switched to the middle of the Introduction to Algebra book instead.

The Art of Problem Solving books deliberately make you think a lot, and a lot of the problems are quite difficult. That's great, but I've found that after 2-3 hours of heavy thinking my brain often feels completely shot and that ruins my studying for the rest of the day. It also doesn't help that my available study time usually runs from about 10am-2pm, but I often only start to really wake up around noon. (Yes, I get enough sleep usually. I also use a light box. But I still often only wake up around noon.)

One solution I've been thinking of would be to take the studying slower: I'd study math only from 12-2, and before that I'd study something else, like programming. The only problem with that is that cutting my study time in half means it'll take twice as long to get through the material. At that rate I estimate it'll take approximately a year, perhaps a bit more, before I can even start Calculus. Maybe that's what's needed, but I was hoping to get on with studying data science sooner than that.

Another possible solution would be to try an easier course of study than the AoPS books. I've had some good experiences with MOOCs, so perhaps that might be a good route to take. To that end I've tentatively signed up to this math refresher course, although I don't really know anything about it. Or perhaps I could just CliffNotes my way through Algebra II and Precalculus, and then take a Calculus MOOC. I wouldn't get the material nearly as well, of course, but at least I'd be able to get to Calculus and move on with my data science studies from there. I could even do one of these alternatives while also doing the AoPS books at a slower pace. That way I could get to data science studying as soon as possible, and I'd also eventually get a more thorough familiarity with the material through the AoPS books.

What would you suggest?

Be very very careful of studying beyond the level you think is comfortable. My experience has been that you cannot push yourself to learn difficult things, especially math, faster than a certain pace. Sure, your limit may be 20% higher than what you think it is, but it's not 200% higher. Spending more time on a task when you just don't feel up to it is useless, because instead of thinking you'll just be spending more time staring at the page and having your mind drift off.

I've found that the various methods of 'productivity boosting' (pomodoros, etc) are largely useless and do one of two things: Either decrease your productivity, or momentarily increase it at the expense of a huge decrease later on (anything from 'feeling fuzzy for a couple of days' to 'total burnout for 3 weeks'). Unless you have a mental illness, your brain is already a finely-tuned machine for learning and doing. Don't fool yourself into thinking you can improve it just by some clever schedule rearrangement.

The point to all of this is that you should refrain from 'planning ahead' when it comes to learning. Sure, you should have some general overall sketch of what you want to learn, but at each particular moment in time, the best strategy is to simply pick some topic and try to learn it as best you can, until you get tired. Then rest until you feel you can go at it again. And avoid internet distractions that use up your mental energy but don't cause you to learn anything.

The United States green card lottery is one of the best lotteries in the world. The payoff is huge (green cards would probably sell for six figures if they were on the market), the cost of entry is minimal ($0 and 30 minutes), and the odds of winning are low, but not astronomically low. If you meet the eligibility criteria and are even a little interested in moving to America, you should enter the lottery this October.
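
A back-of-the-envelope expected-value check (every number here is my rough assumption, not an official figure):

    p_win = 50_000 / 12_000_000   # assume ~50k visas for ~12M entrants: about 0.4%
    value = 150_000               # assumed dollar value of a green card ("six figures")
    hours = 0.5                   # the stated cost of entry: 30 minutes
    print(p_win * value / hours)  # ~$1250 per hour of effort, under these assumptions

Even if the true odds or value are a few times lower, the expected return per minute spent is hard to beat.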

The payoff is huge ..., the cost of entry is minimal

This reminds me of another pretty decent lottery that some U.S. residents can take advantage of. Many major cities, including NYC, have affordable housing programs in brand new buildings. The cost to apply is $0, and the payoff is paying 20%-25% of the market rate for housing in that area. No, it's not for poor people; there are other programs for that. The income requirements vary but are in general set to qualify the working residents of the city (maybe $50k-$95k).

Some of the most desirable and stunning locations in the city, where rents are $4k for 600 sq ft, can go for $700. Just Google the city you live in to see the specific requirements.

Elsewhere in comments here it's suggested that one reason why LW (allegedly) has less interesting posts and discussions than it used to is that "Eliezer has taken to disseminating his current work via open Facebook discussions". I am curious about how the rest of the LW community feels about this.

Poll! The fact that Eliezer now tends to talk about his current work on Facebook rather than LW is ...

[pollid:697]

(For the avoidance of doubt, I am not suggesting that Eliezer has any obligation to do what anyone votes for here. Among many reasons there's this: If he's posting things on FB rather than LW because there are lots of people who want to read his stuff but for whatever reason will never read anything on LW then this poll can't possibly detect that other than weakly and indirectly.)

The main problem is that facebook encourages drastically different quality of thought and expressions than lesswrong does. The quality of thought in Eliezer's comments on facebook is sloppy. I chose to unfollow him on facebook because seeing Eliezer at his worst makes it rather a lot more difficult to appreciate Eliezer at his best (contempt is the mind killer). I assumed that any particularly interesting work he did (that is safe to share with the public) would end up finding its way into a less transient medium than facebook eventually...

...Have I been missing anything exciting?

facebook encourages drastically different quality of thought and expressions

Not sure if this applies to Eliezer's debate threads, but not having downvotes is a horrible setup for a debate. Every stupid comment is either ignored, which seems like "silence is consent", or starts a flamewar. There is simply no way to reduce noise.

Links: Young blood reverses age-related impairments in cognitive function and synaptic plasticity in mice (press release) (paper)

I think the radial arm water maze experiment's results are particularly interesting; it measures learning and memory (see fig 2c, which is visible even with the paywall). There's a day one and day two of training; the old mice (18 months) improve somewhat during the first day and then more or less start over on the second day in terms of the errors they are making. This is also true if the old mice are treated with 8 injections of old blood over the course of 3 weeks (the new curves lie pretty much on top of the old curves in supplemental figure 7d). Young mice (3 months) perform better than the old mice (supplemental figure 5d): they learn faster on the first day and retain it when the second day starts (supp 7d).

However, if you give 8 injections of 100 microliters of blood from 3-month-old mice to 18-month-old mice, the treated mice perform dramatically better than the old-blood-treated old mice (2c) and much more like young mice (this comparison is less certain; I'm comparing one line from 2c to one line from supp. 7d, but that's how it looks by eye).

One factor in the new blood that plays a role is GDF11. From another paper: "we show that GDF11 alone can improve the cerebral vasculature and enhance neurogenesis"

The New York Times gives an overview of other known effects of young blood, such as rejuvenating the musculature, heart, and vasculature of old mice: Young Blood May Hold Key to Reversing Aging. See, e.g., Restoring Systemic GDF11 Levels Reverses Age-Related Dysfunction in Mouse Skeletal Muscle.

I wonder what you think of the question of the origin of consciousness, i.e. "Why do we have internal experiences at all?" and "How can any physical process result in an internal/subjective experience?"

I've read some material on the subject before, and reading the quantum physics and identity sequence got me thinking about this again.

Douglas Hofstadter is the go-to, mainstream, "hey I recognize that name" authority, though it obviously should be noted that he is a cognitive scientist, not a biologist, neurologist, or neurobiologist. So you couldn't build a brain from reading Gödel, Escher, Bach. The only other material I intimately know that discusses the origin of consciousness is Carl Sagan's The Dragons of Eden, which, again, is mainstream and pop science. It's fun reading and enjoyable, but you can't build a brain from it. Someone else can probably suggest better sources for more study.

Of course, some components of these questions can be answered by reducing the question to find out more about what you're looking for.

  • What's the makeup of an internal experience? What are its moving parts? How do you build it?
  • How are subjective experiences not physical processes? If they aren't physical, what are they?
  • Taboo "internal/subjective experiences." What are you left with to solve? What mechanics remain to be understood?

Since you've read through the quantum physics sequence, I'm sure you've been exposed to these ideas already. I'm not a neuroscientist or a cognitive scientist. I know very little about the brain that wasn't used for blunt symbolism in Neon Genesis or Xenogears. But I'd guess that, whatever mechanism(s) allows for consciousness, it's built using the matter available. No tricks or sleight of hand.

Is anyone familiar with any effective-altruist work on pushing humanity towards becoming a spacefaring species? Seems relevant given the likely difference between a civilization that develops it vs. one that doesn't.

I think it might even have negative return. If you do PR in that regard you are going to encourage misallocation of NASA funds. NASA should spend more resources on tracking near-earth objects and less on PR moves like trying to put a man on Mars. Understanding the climate of our own planet better is also a useful target for NASA spending.

Building human civilisation in Alaska is much easier than doing it on Mars. We don't even get things right in Africa, where there is fertile ground on which plants grow.

Colonizing Mars will need much better biotech and smarter robots than we have at the moment.

I've been wondering a lot about whether or not I'm acting rationally with regards to the fact that I will never again be as young as I am now.

So I've been trying to make a list of things I can only do while I'm young, so that I do not regret missing the opportunity later (or at least rationally decided to skip it). I'm 27 so I've already missed a lot of the cliche advice aimed at high school students about to enter college, and I'm already happily engaged so that cuts out some other things.

Any thoughts on opportunities only available at a certain age?

One point, just a nitpick: I would suggest not to aim to act "rationally." Aim to win. I may be assuming overmuch about your intended meaning, but remember, if your goal is to do what is rational rather than to do what is best/right/winning, you'll be confused.

That said, I understand what you mean. There are activities I know can be done now, in youth, that, while maybe not impossible in my 40s, 50s, or 60s, would be more difficult.

First, your health. Work out, eat right, stay clean. Do everything that can maximize your health NOW and do it to the utmost that you can. If you start working on your health now, the long term payoffs will be exponential rather than linear. The longer you wait to maximize your health, the greater your disadvantage, the less your payoff. EDIT: (As I have no citation to back this claim up, it'd be best not to take my word on this. I would still suggest not delaying improving your health because doing so will result in benefits now, regardless of whether health improvements are exponential or linear with age.)

Second, try everything. We have a whole article on this that spells it out better than I can. And I'll be the first to admit I haven't dove into its methods full force so I can't vouch for them. But, basically, expose yourself to the world. Not in any mean or gross sense, but as a human being, gathering experience. Go to art classes, go to yoga classes, go to MIRI classes, take karate, learn to dance, learn to sing, play an instrument, learn maths, learn history, go to LW meetups.

Of course, you will be limited, and should be limited, by circumstances. You aren't a brain with infinite capacity yet, so you can't literally do everything. So, focus on a few things at a time. Set a schedule to try out new activities while continuing old, beneficial ones. For example, you might have three days for working out, two days for learning programming (as a hobby), one for online studying, one for social networking. Replace these with whatever activities most interest or most benefit you (and don't be afraid of overlap if you want to double up). I live in a place with very little stimulus, so I double up on audiobooks and exercise, and use recreational times (gaming or working out) to listen to audiobooks or expose myself to new music. The point is to jump in with both feet and do whatever you do well.

Ultimately, your youth gives you two real things: health (presumably) and energy. Now, I have seen 60 year old men in better shape and with more pep than me (marathon runners!), but for the average, your health and energy will come easier to you now than later. Use it.

Have you guys noticed that, while the notion of AI x-risk is gaining credibility thanks to some famous physicists, there is no mention of Eliezer and only a passing mention of MIRI? Yet Irving Good, who pointed out the possibility of recursive self-improvement without linking it to x-risk, is right there. Seems like a PR problem to me. Either raising the profile of the issue is not associated with EY/MIRI, or he is considered too low status to speak of publicly. Both possibilities are clearly detrimental to MIRI's fundraising efforts.

See also this old post where Robin Hanson basically predicted that this would happen.

The contrarian will have established some priority with these once-contrarian ideas, such as being the first to publish on or actively pursue related ideas. And he will be somewhat more familiar with those ideas, having spent years on them.

But the cautious person will be more familiar with standard topics and methods, and so be in a better position to communicate this new area to a standard audience, and to integrate it in with other standard areas. More important to the "powers that be" hoping to establish this new area, this standard person will bring more prestige and resources to this new area.

If the standard guy wins the first few such contests, his advantage can quickly snowball into an overwhelming one. People will prefer to cite his publications as they will be in more prestigious journals, even if they were not quite as creative. Reporters will prefer to quote him, students will prefer to study under him, firms will prefer to hire him as a consultant, and journals will prefer to publish him, as he will be affiliated with more prestigious institutions. And of course the contrarian may have a worse reputation as a "team player."

I think this is fine. Convincing people that this is a Real Thing and then specifically making them aware of Eliezer and MIRI should be done separately anyway. Doing the second thing too soon may make the first thing harder, while doing the second thing late makes the first thing easier (because then AI x-risk can be put in a mental category other than "that weird thing that those weird people care about").

Idea for a question for the next LW survey: Have you ever been diagnosed with a mental disorder? If so, what was it? [either a list of some common ones and an "other" box, or, ideally, a full drop-down of DSM-5 diagnoses. Plus a troll-bait non-disorder and a "prefer not to say", of course]

So I often find that interesting people live near me. Anyone have tips on asking random people to meet up? Ask them for coffee? I suppose a short email is better than a long one, which may come off as creepy? Anyone have friends they met via random emails?