Haven't had one of these for a while. This thread is for questions or comments that you've felt silly about not knowing/understanding. Let's try to exchange info that seems obvious, knowing that due to the illusion of transparency it really isn't so obvious!
If it turns out that the whole MIRI/LessWrong memeplex is massively confused, what would that look like?
Note that in the late 19th century, many leading intellectuals followed a scientific/rationalist/atheist/utopian philosophy, socialism, which later turned out to be a horrible way to arrange society. See my article on this. (And it's not good enough to say that we're really rational, scientific, altruist, utilitarian, etc, in contrast to those people -- they thought the same.)
So, how might we find that all these ideas are massively wrong?
Well, why do you think socialism is so horribly wrong? During the 20th century socialists more or less won and got what they wanted. Things like social security, governmental control over business, and redistribution of wealth in general are all socialist. All of this may be bad from some point of view, but that view is in no way mainstream opinion.
Then, those guys whom you mention in your article called themselves communists and Marxists. At most, they considered socialism an intermediate stage on the way to building communism. And communism went bad because it was founded on wrong assumptions about how both the economy and human psychology work. So, which MIRI/LessWrong assumptions could be wrong and cause a lot of harm? Well, here are some examples.
1) Building FAI is possible, and there is a reliable way to tell if it is truly FAI before launching it. Result if wrong: paperclips.
2) Building FAI is much more difficult than AI. Launching a random AI is civilization-level suicide. Result if this idea becomes widespread: we don't launch any AI before civilization runs out of resources or collapses for some other reason.
3) Consciousness is a sort of optional feature; intelligence can work just fine without it. We can reliably tell whether a given intelligence is a person. In other words, the real world works the same way as in Peter Watts's "Blindsight". Results if wrong: many, among them the classic sci-fi AI rebellion.
4) Signing up for cryonics is generally a good idea. Result if widespread: the costs significantly contribute to worldwide economic collapse.
I think the whole MIRI/LessWrong memeplex is not massively confused.
But conditional on it turning out to be very very wrong, here is my answer:
A. MIRI
The future does indeed take radical new directions, but these directions are nothing remotely like the hard-takeoff de-novo-AI intelligence explosion which MIRI now treats as the max-prob scenario. Any sci-fi fan can imagine lots of weird futures, and maybe some other one will actually emerge.
MIRI's AI work turns out to trigger a massive negative outcome -- either the UFAI explosion they are trying to avoid, or something else almost as bad. This may result from fundamental mistakes in understanding, or because of some minor bug.
It turns out that the UFAI explosion really is the risk, but that MIRI's AI work is just the wrong direction; e.g., it turns out that building a community of AIs in rough power balance, or experimenting by trial-and-error with nascent AGIs, is the right solution.
B. CfAR
It turns out that the whole CfAR methodology is far inferior in instrumental outcomes to, say, Mormonism. Of course, CfAR would say that if another approach is instrumentally better, they would adopt it. But if they only find this out years down the road, this could be a massive failure scenario.
It turns out that epistemologically non-rational techniques are instrumentally valuable. Cf. Mormonism. Again, CfAR knows this, but in this failure scenario, they fail to reconcile the differences between the two types of rationality they are trying for.
Again, I think that the above scenarios are not likely, but they're my best guess at what "massively wrong" would look like.
MIRI failure modes that all seem likely to me:
They talk about AGI a bunch and end up triggering an AGI arms race.
AI doesn't explode the way they talk about, causing them to lose credibility on the importance of AI safety as well. (Relatively slow-moving) disaster ensues.
The future is just way harder to predict than everyone thought it would be... we're cavemen trying to envision the information age, and all of our guesses are way off the mark in ways we couldn't possibly have foreseen.
Uploads come first.
If it turns out that the whole MIRI/LessWrong memeplex is massively confused, what would that look like?
A few that come to mind:
- Some religious framework being basically correct. Humans having souls, an afterlife, etc.
- Antinatalism as the correct moral framework.
- Romantic ideas of the ancestral environment are correct and what feels like progress is actually things getting worse.
- The danger of existential risk peaked with the cold war and further technological advances will only hasten the decline.
It could be that it's just impossible to build a safe FAI under the utilitarian framework and all AGIs are UFAIs.
Otherwise the LessWrong memeplex has the advantage of being very diverse. When it comes to a subject like politics we do have people with mainstream views but we also have people who think that democracy is wrong. Having such a diversity of ideas makes it difficult for all of LessWrong to be wrong.
Some people paint a picture of LessWrong as a crowd of people who believe that everyone should do cryonics. In reality, most of the participants aren't signed up for cryonics.
Take a figure like Nassim Taleb. He's frequently quoted on LessWrong so he's not really outside the LessWrong memeplex. But he's also a Christian.
There are a lot of memes floating around in the LessWrong memeplex that are there at a basic level but that most people don't take to their full conclusion.
So, how might we find that all these ideas are massively wrong?
It's a topic that's very difficult to talk about. Basically, you try out different ideas and look at the effects of those ideas in the real world. Mainly because of QS data, I delved into the system of Somato-Psychoeducation. The data I measured was improvement in a health variable. It was enough to get over the initial barrier to go inside the system. But now I can think inside the system, and there's a lot going on which I can't put into good metrics.
There is, however, no way to explain the framework in an article. Most people who read the introductory book don't get the point before they have spent years experiencing the system from the inside.
It's the very nature of things that are really outside the memeplex that they're not easily expressible in terms of ideas inside the memeplex in a way that won't be misunderstood.
It could be that it's just impossible to build a safe FAI under the utilitarian framework and all AGIs are UFAIs.
That's not the LW memeplex being wrong; that's just an LW meme which is slightly more pessimistic than the more customary "the vast majority of AGIs are unfriendly, but we might be able to make this work" view. I don't think any high-profile LWers who believed this would be absolutely shocked to find out that it was too optimistic.
MIRI-LW being plausibly wrong about AI friendliness is more like: "Actually, all the fears about unfriendly AI were completely overblown. Self-improving AIs don't actually "FOOM" dramatically ... they simply get smarter at the same exponential rate that the rest of the humans+tech system has been getting smarter all this time. There isn't much practical danger of them rapidly outracing the rest of the system, seizing power, and turning us all into paperclips, or anything like that."
If that sort of thing were true, it would imply that a lot of prominent rationalists have been wasting time (or at least, doing things which end up being useful for reasons entirely different from the reasons they were supposed to be useful for).
Do you want to have a career at a conservative institution such as a bank, or a career in politics? If so, it's probably a bad idea to create too much attack surface by using your real name.
Do you want to make as many connections with other people as possible? If so, using your real name helps. It increases the attention that other people pay you. If you are smart and write insightful stuff, that can mean job offers and speaking gigs.
If you meet people in real life, they might already know you from the online commentary of yours that they have read, and you don't have to start by introducing yourself.
It's really a question of whether you think strangers are more likely to hurt or help you.
Do you want to make as many connections with other people as possible? If so, using your real name helps. It increases the attention that other people pay you. If you are smart and write insightful stuff, that can mean job offers and speaking gigs.
I think the best long-term strategy would be to invent a different name and use that other name consistently, even in real life. With everyone except the government. Of course, your family and some close friends would know your real name, but you would tell them that you prefer to be called by the other name, especially in public.
So you have one identity, you make it famous, and everyone knows you by it. Only when you want to be anonymous do you use your real name. And the advantage is that you have papers for it, so your employer will likely not notice. You just have to be careful never to use your real name together with your fake name.
Unless your first name is unusual, you can probably re-use your first name, which is what most people will call you anyway; so if you meet people who know your true name and people who know your fake name at the same time, the fact that you use two names will not be exposed.
Exactly! He is such a good example that it is easy not to even notice him being a good example.
There is no "Gwern has an identity he is trying to hide" thought running in my mind when I think about him (unlike with Yvain). It's just "Gwern is Gwern", nothing more. Instead of a link pointing to the darkness, there is simply no link there. It's not like I am trying to respect his privacy; I feel free to do anything I want and yet his privacy remains safe. (I mean, maybe if someone tried hard... but there is nothing reminding people that they could.) It's like an invisible fortress.
But if instead he called himself Arthur Gwernach (abbreviated to Gwern), that would be even better.
Making a person and unmaking a person seem like utilitarian inverses, yet I don't think contraception is tantamount to murder. Why isn't making a person as good as killing a person is bad?
ETA: Potentially less contentious rephrase: why isn't making a life as important as saving a life?
Whether this is so or not depends on whether you are assuming hedonistic or preference utilitarianism. For a hedonistic utilitarian, contraception is, in a sense, tantamount to murder, except that as a matter of fact murder causes much more suffering than contraception does: to the person who dies, to his or her loved ones, and to society at large (by increasing fear). By contrast, preference utilitarians can also appeal to the preferences of the individual who is killed: whereas murder causes the frustration of an existing preference, contraception doesn't, since nonexisting entities can't have preferences.
The question also turns on issues about population ethics. The previous paragraph assumes the "total view": that people who do not exist but could or will exist matter morally, and just as much. But some people reject this view. For these people, even hedonistic utilitarians can condemn murder more harshly than contraception, wholly apart from the indirect effects of murder on individuals and society. The pleasure not experienced by the person who fails to be conceived doesn't count, or counts less than the pleasure that the victim of murder is deprived of, since the latter exists but the former doesn't.
For further discussion, see Peter Singer's Practical Ethics, chap. 4 ("What's wrong with killing?").
Making a person and unmaking a person seem like utilitarian inverses
Doesn't seem that way at all to me. A person who already exists has friends, family, social commitments, etc. Killing that person would usually affect all of these things negatively, often to a pretty huge extent. Using contraception maybe creates some amount of disutility in certain cases (for staunch Catholics, for instance) but not nearly to the degree that killing someone does. If you're only focusing on the utility for the person made or unmade, then maybe (although see blacktrance's comment on that), but as a utilitarian you have no license for doing that.
A hermit, long forgotten by the rest of the world, lives a middling life all alone on a desert island. Eve kills the hermit secretly and painlessly, sells his organs, and uses the money to change the mind of a couple who had decided against having additional children. The couple's child leads a life far longer and happier than the forgotten hermit's ever would have been.
Eve has increased QALYs, average happiness, and total happiness. Has Eve done a good thing? If not, why not?
Ah, in that specific sort of situation, I imagine hedonic (as opposed to preference) utilitarians would say that yes, Eve has done a good thing.
If you're asking me, I'd say no, but I'm not a utilitarian, partly because utilitarianism answers "yes" to questions similar to this one.
How much does a genius cost? MIRI seems intent on hiring a team of geniuses. I'm curious about what the payroll would look like. One of the conditions of Thiel's donations was that no one employed by MIRI can make more than one hundred thousand dollars a year. Is this high enough? One of the reasons I ask is that I just read a story about how Google pays an extremely talented programmer over 3 million dollars per year - doesn't MIRI also need extremely talented programmers? Do they expect the most talented to be more likely to accept a lower salary for a good cause?
Do they expect the most talented to be more likely to accept a lower salary for a good cause?
Yes. Anyone with the necessary mindset of thinking that AI is the most important issue in the world will accept a lower salary than what's possible elsewhere in the market.
I don't know whether MIRI has an interest in hiring people who don't have that moral framework.
Suppose someone has a preference to have sex each evening, and is in a relationship with someone with a similar level of sexual desire. So each evening they get into bed, undress, make love, get dressed again, and get out of bed. Repeat the next evening.
How is this different from having exploitable circular preferences? After all, the people involved clearly have cycles in their preferences - first they prefer getting undressed to not having sex, after which they prefer getting dressed to having (more) sex. And they're "clearly" being the victims of a Dutch Book, too - they keep repeating this set of trades every evening, and losing lots of time because of that.
To me this seems to suggest that having circular preferences isn't necessarily the bad thing that it's often made out to be - after all, the people in question probably wouldn't say that they're being exploited. But maybe I'm missing something.
The circular preferences that go against the axioms of utility theory, and which are Dutch book exploitable, are not of the kind "I prefer A to B at time t1 and B to A at time t2", like the ones of your example. They are more like "I prefer A to B and B to C and C to A, all at the same time".
The couple, if they had to pay a third party a cent to get undressed and then a cent to get dressed, would probably do it and consider it worth it---they end up two cents short but having had an enjoyable experience. Nothing irrational about that. To someone with the other "bad" kind of circular preferences, we can offer a sequence of trades (first A for B and a cent, then C for A and a cent, then B for C and a cent) after which they end up three cents short but otherwise exactly as they started (they didn't actually obtain enjoyable experiences, they made all the trades before anything happened). It is difficult to consider this rational.
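To make the contrast concrete, here is a minimal money-pump sketch in Python. The goods, the one-cent fee, and the trade sequence are purely illustrative assumptions, not anything from the comments above; the point is just that an agent with A > B, B > C, C > A accepts every trade and ends up strictly poorer without ever gaining anything.

```python
# Minimal money-pump sketch for strictly circular preferences.
# Goods, fee, and trade order are illustrative assumptions.

prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # prefers A to B, B to C, C to A

holding = "B"   # what the agent currently owns
cents = 0       # running total of fees paid

# Repeatedly offer the agent something it prefers to what it holds,
# in exchange for its current item plus a one-cent fee.
for offered in ["A", "C", "B", "A", "C", "B"]:
    if (offered, holding) in prefers:
        holding = offered   # the agent happily accepts...
        cents -= 1          # ...and pays the fee each time

print(holding, cents)  # -> B -6: back where it started, six cents poorer
```

Unlike the couple, who get an enjoyable evening out of each cycle, this agent's cycle buys it nothing at all, which is what makes the preferences exploitable.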
On the Neil Degrasse Tyson Q&A on reddit, someone asked: "Since time slows relative to the speed of light, does this mean that photons are essentially not moving through time at all?"
Tyson responded "yes. Precisely. Which means ----- are you seated?Photons have no ticking time at all, which means, as far as they are concerned, they are absorbed the instant they are emitted, even if the distance traveled is across the universe itself."
Is this true? I find it confusing. Does this mean that a photon emitted at location A at t0 is absorbed at location B at t0, such that it's at two places at once? In what sense does the photon 'travel' then? Or is the thought that the distance traveled, as well as the time, goes to zero?
Other people have explained this pretty well already, but here's a non-rigorous heuristic that might help. What follows is not technically precise, but I think it captures an important and helpful intuition.
In relativity, space and time are replaced by a single four-dimensional space-time. Instead of thinking of things moving through space and moving through time separately, think of them as moving through space-time. And it turns out that every single (non-accelerated) object travels through space-time at the exact same rate, call it c.
Now, when you construct a frame of reference, you're essentially separating out space and time artificially. Consequently, you're also separating an object's motion through space-time into motion through space and motion through time. Since every object moves through space-time at the same rate, when we separate out spatial and temporal motion, the faster the object travels through space the slower it will be traveling through time. The total speed, adding up speed through space and speed through time, has to equal the constant c.
So an object at rest in a particular frame of reference has all its motion along the temporal axis, and no motion at all along the spatial axes. It's traveling through time at speed c and it isn't traveling through space at all. If this object starts moving, then some of the temporal motion is converted to spatial motion. Its speed through space increases, and its speed through time decreases correspondingly, so that the motion through space-time as a whole remains constant at c. This is the source of time dilation in relativity (as seen in the twin paradox) - moving objects move through time more slowly than stationary objects, or to put it another way, time flows slower for moving objects.
Of course, the limit of this is when the object's entire motion through space-time is directed along the spatial axes, and none of it is directed along the temporal axes. In this case, the object will move through space at c, which turns out to be the speed of light, and it won't move through time at all. Time will stand still for the object. This is what's going on with photons.
From this point of view, there's nothing all that weird about a photon's motion. From the space-time perspective, which after all is the fundamental perspective in relativity, it is moving pretty much exactly like any other object. It's only our weird habit of treating space and time as extremely different that makes the entirely spatial motion of a photon seem so bizarre.
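If it helps to see the heuristic in symbols, here is one standard way to make it semi-precise (a sketch using the usual proper-time relation; nothing below is specific to the comment above). Write τ for the time measured by the object's own clock, t for coordinate time in some frame, and v for the object's speed through space in that frame:

```latex
% "Speed through time" (proper time per unit coordinate time) and
% "speed through space" trade off against each other:
\left(\frac{d\tau}{dt}\right)^2 + \frac{v^2}{c^2} = 1
\quad\Longrightarrow\quad
\frac{d\tau}{dt} = \sqrt{1 - \frac{v^2}{c^2}}
```

At v = 0 the object's clock keeps pace with coordinate time; as v approaches c, dτ/dt approaches 0, which is the photon limit where no proper time passes at all.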
There are no photons. There, you see? Problem solved.
(no, the author of the article is not a crank; he's a Nobel physicist, and everything he says about the laws of physics is mainstream)
Does this mean that a photon emitted at location A at t0 is absorbed at location B at t0, such that it's at two places at once?
In the photon's own subjective experience? Yes. (Not that that's possible, so this statement might not make sense). But as another commenter said, certainly the limit of this statement is true: as your speed moving from point A to point B approaches the speed of light, the subjective time you experience between the time when you're at A and the time when you're at B approaches 0. And the distance does indeed shrink, due to the Lorentz length contraction.
In what sense does the photon 'travel' then?
It travels in the sense that an external observer observes it in different places at different times. For a subjective observer on the photon... I don't know. No time passes, and the universe shrinks to a flat plane. Maybe the takeaway here is just that observers can't reach the speed of light.
Not quite either of those.
The first thing to say is that "at t0" means different things to different observers. Observers moving in different ways experience time differently and, e.g., count different sets of spacetime points as simultaneous.
There is a relativistic notion of "interval" which generalizes the conventional notions of distance and time-interval between two points of spacetime. It's actually more convenient to work with the square of the interval. Let's call this I.
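Concretely, for two events separated by Δx, Δy, Δz in space and Δt in time, the standard expression looks like the following (a sketch in flat spacetime, written in units where c = 1 so that distances and times are directly comparable, and using the spacelike-positive sign convention adopted below):

```latex
% Squared interval between two events (flat spacetime, units with c = 1,
% sign convention: I > 0 for spacelike separations):
I = \Delta x^2 + \Delta y^2 + \Delta z^2 - \Delta t^2
```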
If you pick two points that are spatially separated but "simultaneous" according to some observer, then I>0 and sqrt(I) is the shortest possible distance between those points for an observer who sees them as simultaneous. The separation between the points is said to be "spacelike". Nothing that happens at one of these points can influence what happens at the other; they're "too far away in space and too close in time" for anything to get between them.
If you pick two points that are "in the same place but at different times" for some observer, then I<0 and sqrt(-I) is the minimum time that such an observer can experience between visiting them. The separation between the points is said to be "timelike". An influence can propagate, slower than the speed of light, from one to the other. They're "too far away in time and too close in space" for any observer to see them as simultaneous.
And, finally, exactly on the edge between these you have the case where I=0. That means that light can travel from one of the spacetime points to the other. In this case, an observer travelling slower than light can get from one to the other, but can do so arbitrarily quickly (from their point of view) by travelling very fast; and while no observer can see the two points as simultaneous, you can get arbitrarily close to that by (again) travelling very fast.
Light, of course, only ever travels at the speed of light (you might have heard something different about light travelling through a medium such as glass, but ignore that), which means that it travels along paths where I=0 everywhere. To an (impossible) observer sitting on a photon, no time ever passes; every spacetime point the photon passes through is simultaneous.
So: does the distance as well as the time go to 0? Not quite. Neither distance nor time makes sense on its own in a relativistic universe. The thing that does make sense is kinda-sorta a bit like "distance minus time" (and more like sqrt(distance-squared minus time-squared)), and that is 0 for any two points in spacetime that are visited by the same photon.
(Pedantic notes: 1. There are two possible sign conventions for the square of the interval. You can say that I>0 for spacelike separations, or say that I>0 for timelike separations. I arbitrarily chose the first of these. 2. There may be multiple paths that light can take between two spacetime points. They need not actually have the same "length" (i.e., interval). Strictly, "interval" is defined only locally; then, for a particular path, you can integrate it up to get the overall interval. 3. In the case of light propagating through a medium other than vacuum, what actually happens involves electrons as well as photons and it isn't just a matter of a photon going from A to B. Whenever a photon goes from A to B it does it, by whatever path it does, at the speed of light.)
The Lorentz factor diverges as the speed approaches c. Because of length contraction and time dilation, both the distance and the time will appear to be 0 from the "point of view of the photon".
(the photon is "in 2 places at once" only from the point of view of the photon, and it doesn't think these places are different, after all they are in the same place! This among other things is why the notion of an observer traveling at c, rather than close to c, is problematic)
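For reference, the textbook formulas behind that statement (nothing here beyond standard special relativity): for an object moving at speed v, the Lorentz factor, the proper time Δτ elapsed on its own clock per coordinate time Δt, and its measured length L relative to its rest length L_0 are

```latex
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad
\Delta\tau = \frac{\Delta t}{\gamma}, \qquad
L = \frac{L_0}{\gamma}
```

As v approaches c, γ diverges, so both Δτ and L go to 0, which is the sense in which "both the distance and the time appear to be 0" in the photon limit.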
What motivates rationalists to have children? How much rational decision making is involved?
ETA: removed the unnecessary emotional anchor.
ETA2: I'm not asking this out of Spockness, I think I have a pretty good map of normal human drives. I'm asking because I want to know if people have actually looked into the benefits, costs and risks involved, and done explicit reasoning on the subject.
I wouldn't dream of speaking for rationalists generally, but in order to provide a data point I'll answer for myself. I have one child; my wife and I were ~35 years old when we decided to have one. I am by any reasonable definition a rationalist; my wife is intelligent and quite rational but not in any very strong sense a rationalist. Introspection is unreliable but is all I have. I think my motivations were something like the following.
Having children as a terminal value, presumably programmed in by Azathoth and the culture I'm immersed in. This shows up subjectively as a few different things: liking the idea of a dependent small person to love, wanting one's family line to continue, etc.
Having children as a terminal value for other people I care about (notably spouse and parents).
I think it's best for the fertility rate to be close to the replacement rate (i.e., about 2 in a prosperous modern society with low infant mortality), and I think I've got pretty good genes; the overall fertility rate in the country I'm in is a little below replacement, and while it's fairly densely populated I don't think it's pathologically so, so for me to have at least one child and probably two is probably beneficial for society overall.
I expected any child I might have to have a net-positive-utility life (for themselves, not only for society at large) and indeed probably an above-average-utility life.
I expected having a child to be a net positive thing for marital harmony and happiness (I wouldn't expect that for every couple and am not making any grand general claim here).
I don't recall thinking much about the benefits of children in providing care when I'm old and decrepit, though I suppose there probably is some such benefit.
So far (~7.5 years in), we love our daughter to bits and so do others in our family (so #1, #2, #5 seem to be working as planned), she seems mostly very happy (so #4 seems OK so far), it's obviously early days but my prediction is still that she'll likely have a happy life overall (so #4 looks promising for the future), and I don't know what evidence I could reasonably expect for or against #3.
What motivates rationalists to have children?
The same things that motivate other people. Being rational doesn't necessarily change your values.
Clearly, some people think having children is worthwhile and others don't, so that's individual. There is certainly an inner drive, more pronounced in women, because species without such a drive don't make it though natural selection.
The amount of decision-making also obviously varies -- from multi-year deliberations to "Dear, I'm pregnant!" :-)
Disclaimer: I don't have kids, won't have them anytime soon (i.e. not in the next 5 years), and until relatively recently didn't want them at all.
The best comparison I can make is that raising a child is like making a painting. It's work, but it's rewarding if done well. You create a human being, and hopefully impart them with good values and set them on a path to a happy life, and it's a very personal experience.
Personally, I don't have any drive to have kids, not one that's comparable to hunger or sexual attraction.
Doesn't cryonics (and the subsequent rebooting of a person) seem obviously too difficult? People can't keep cars running indefinitely; wouldn't keeping a particular consciousness running be much harder?
I hinted at this in another discussion and got downvoted, but it seems obvious to me that the brain is the most complex machine around, so wouldn't it be tough to fix? Or does it all hinge on the "foom" idea where every problem is essentially trivial?