If it's worth saying, but not worth its own post, even in Discussion, it goes here.
Velocity Raptor: a simple physics Flash game where the physics simulates special relativity. Lorentz contraction, time dilation, red shifts, visual distortions ... people seem to get stuck on level 30, though Gwern made it to level 31. It's one thing to look at equations, it's another to get a feel for it. I strongly recommend this to everyone.
Suspended Animation, the first blog post in a series on Urban Future that I am currently reading. On stagnation in our time:
What seems pretty clear from most of this (and already in Cowen's account) is that nothing much has been moving forward in the world's 'developed' economies for four decades except for the information technology revolution and its Moore's Law dynamics. Abstract out the microprocessor, and even the most determinedly optimistic vision of recent trends is gutted to the point of expiration. Without computers, there's nothing happening, or at least nothing good.
Robot cars may already be better drivers than humans. And if not, they're clearly on their way to becoming so.
Driving is an area of life where millions of "ordinary" humans (non-specialists) make life-critical and therefore morally-significant judgments every day. When we drive, we are taking our lives and those of others in our hands. Many of us would wish to be better drivers than we are: not only more skilled, but better in ways that could be described as "virtue": less prone to road rage, negligence, driving while impaired, and other faults. Robots don't get angry, they don't get distracted, and they don't get drunk or tired. Since bad driving kills people, we can reasonably say that robot driving is (or can become) morally superior to human driving — in a plain consequentialist sense.
This seems like a natural analogy for CEV in superhuman systems. We do not want a robot driver to drive just like a human. We want a robot driver to drive as a human would drive if that human were faster-thinking, calmer, clearer-minded, more focused; had sharper eyes, better knowledge of the roads and hazards, better ability to cooperate with other drivers. We want a robot to optimize a utility function derived closely from ours — crudely, "get me to my destination and don't kill anyone or cause any damage on the way" — and to do so better than we can.
It is only within a limited domain that the robot car is a superhuman decision-maker; but that limited domain is one that pretty much every adult is acquainted with. When robot cars become commonplace, every human driver will — every day — be interacting with limited-domain superhuman, non-conscious, non-recursively-optimizing artificial decision agents implementing a form of extrapolated volition and making morally significant, life-critical choices.
People might notice that the robots are nicer than humans to share the road with. They don't cut you off. They let you merge. They stop for Grandma entering the crosswalk. They don't run bikers off the road by not seeing them. They don't drive really slow in the ultra-fast lane while people behind them are going insane — they're not assholes.
We should expect this to dramatically increase the visibility of AI ethics as a field.
It's a very good example. It also illustrates how hard it is to specify a useful utility function for an AGI: "get me to my destination and don't kill anyone or cause any damage on the way" can lead to a number of non-obvious unintended consequences, compared to the CEV version "drive as a human would drive if that human were faster-thinking, calmer, clearer-minded, more focused; had sharper eyes, better knowledge of the roads and hazards, better ability to cooperate with other drivers".
LSD-Enhanced Creativity (HT: Isegoria)
Over the course of the preceding year, IFAS researchers had dosed a total of 22 other men for the creativity study, including a theoretical mathematician, an electronics engineer, a furniture designer, and a commercial artist. By including only those whose jobs involved the hard sciences (the lack of a single female participant says much about mid-century career options for women), they sought to examine the effects of LSD on both visionary and analytical thinking. Such a group offered an additional bonus: Anything they produced during the study would be subsequently scrutinized by departmental chairs, zoning boards, review panels, corporate clients, and the like, thus providing a real-world, unbiased yardstick for their results.
In surveys administered shortly after their LSD-enhanced creativity sessions, the study volunteers, some of the best and brightest in their fields, sounded like tripped-out neopagans at a backwoods gathering. Their minds, they said, had blossomed and contracted with the universe. They’d beheld irregular but clean geometrical patterns glistening into infinity, felt a rightness before solutions manifested, and even shapeshifted into relevant formulas, concepts, and raw materials.
But here’s the clincher. After their 5HT2A neural receptors simmered down, they remained firm: LSD absolutely had helped them solve their complex, seemingly intractable problems. And the establishment agreed. The 26 men unleashed a slew of widely embraced innovations shortly after their LSD experiences, including a mathematical theorem for NOR gate circuits, a conceptual model of a photon, a linear electron accelerator beam-steering device, a new design for the vibratory microtome, a technical improvement of the magnetic tape recorder, blueprints for a private residency and an arts-and-crafts shopping plaza, and a space probe experiment designed to measure solar properties. Fadiman and his colleagues published these jaw-dropping results and closed shop.
I'd be very interested in what information those of us who are into nootropics might provide on the risks and benefits of LSD. I find this microdosing particularly interesting:
First things first: Fadiman defines a micro-dose as 10 micrograms of LSD (or one-fifth the usual dose of mushrooms). Because he cannot set up perfect lab conditions due to the likelihood of criminal prosecution, he has instead crafted a study in which volunteers self-administer and self-report. Which means that they must acquire their own supply of the Schedule 1 drug and separate a standard hit of 50 to 100 micrograms into micro-doses. (Hint: LSD is entirely water-soluble.)
Beginning in 2010, an unspecified but growing number of volunteers have taken a micro-dose every third day, while conducting their typical daily routines and maintaining logbooks of their observations. Study enrollment may last for several weeks or longer: There doesn’t appear to be a brightly drawn finish line. After several weeks (or, um…), participants send their logbooks to an email address on Fadiman’s personal website, preferably accompanied by a summary of their overall impressions.
I've been rather impressed by how much data gwern can get out of self-study. I can't help but wonder what we as a community might do if we established a culture of running our own experiments and studies. Much of our culture and reasoning is built on studies that are likely to be false (because most studies are likely to be false). Worse, we don't have a good way to test the theories we build empirically.
Now we might re-purpose CFAR to do some such studies, perhaps by getting them to launch kickstarter-style donation drives to run particular experiments relevant to human rationality. But for research that is not legal, community-driven efforts seem to be the way to go.
To add a disclaimer: much of the rest of the original article is filled with obviously silly, if somewhat virulent, memes, of which the noble savage is probably the most obvious. There is also some pretty heavy-handed politicking and tribal attire (for example, the non sequitur Occupy Wall Street references). Please ignore that.
I have finally posted my self-experiment on LSD microdosing: http://www.gwern.net/LSD%20microdosing
Something is hinky with the upvote and downvote buttons (for me at least). When I press one nothing happens. Repeated pressing doesn't seem to do anything, but then sometimes the button colours-in after a delay. Sometimes it doesn't look like I pressed the button and then when I refresh the page I see that the button is coloured and the vote did register. Anyone else have the same problem?
Previously, the interface responded immediately, but the vote wasn't immediately applied (if you reopened the same post/comment, you wouldn't see your vote for a while). Sometimes, a vote would be lost, never applied, even though it was reflected in the interface. It looks like now the interface waits for the vote to actually get received, and only updates once it has been. As before, it takes a while for that to happen, and sometimes it doesn't happen at all, but the difference is that now this effect is apparent.
If this delay can't be easily fixed, an animation indicating that the operation is in progress (like one appearing when sending a comment) might help with the interface responsiveness issue.
PSA: If you want there to be a new Stupid Questions Open Thread, make it yourself! There is not and never has been a rule against this. I consider the "how often to make them" question unanswered, but a good interim answer is, "whenever someone feels like making one".
(Also, my computer broke, and so I posted this from a Wii, which is incapable of using the article editor. If someone could kindly edit "the sentence" into the post.)
Races are clusters in DNA-space by James_G
Thinking about Eliezer's post about Doublethink. Speaking of deliberate, conscious self-deception, he opines: "Leaving the morality aside, I doubt such a lunatic dislocation in the mind could really happen."
This seems odd for a site devoted to the principle that most of the time, most human minds are very biased. Don't we have the brains of one species of apes that has evolved to be particularly sensitive to politics? Why wouldn't doublethink be the evolutionarily adaptive norm?
My intuition, based on my own private experience, is the opposite of Eliezer's -- I'd assume that most industrialized people practice some degree of doublethink routinely. I'd further suspect that this talent can be cultivated, and I'd think that (say) most North Koreans might be extremely skilled at deliberate self-deception, in a manner that would have been very familiar to George Orwell himself.
This seems like an empirical question. What's the evidence out there?
Review of “America’s Retreat from Victory” by Joseph R. McCarthy
This excellent review makes me think this will be an interesting book to add to my reading list. Has anyone else read it? I probably should add this statement as a sort of disclaimer:
A rationalist has a hard time not reviewing history from that period and concluding that for all intents and purposes McCarthy was right about the extent of communist infiltration, and may have indeed grossly underestimated and misunderstood the nature of intellectual sympathies for communism and how deeply rooted those sources of sympathy were in the American elite intellectual tradition.
He basically thought he needed to eliminate some foreign sources of corruption and that he would be helped rather than sabotaged by well-meaning Americans in positions of great power, at least after they were made aware of the extent of the problem. He was wrong. For his quest to have been less quixotic he would have needed to basically remake the entire country (and at that point in time, the peak of American power, that basically meant by extension remaking the entire West).
Actually that whole thread was a very interesting one with many cool posts by various people so go read it!
Related to: List of public drafts on LessWrong
Public Draft on Moral Progress -- Text dump
For now this is just a text dump relating to a conversation I had, which I retracted, not because I found my comments so lacking but because that particular irrationality game thread turned out to have been made by a likely troll. Expect changes in the next few days. Here is a link to the original conversation.
We have not been experiencing moral progress in the past 250 years. Moral change? Sure. I'd also be ok with calling it value drift. I talked about this previously in some detail here and here. I hope some of you have read that material before. It is also neat if you read the meta-ethics sequence and particularly this post.
Against the Better Angels of our Nature counterargument
Named after this excellent long book which you guys really should read. Actually someone should do a review of the book. Note to self: Do review in one year if no one else beats you to it.
The trend of moral progress has been one of less acceptance of violence, less acceptance of nonconsensual interaction, less victim blaming, and less standing by while terrible things happen to others (or at least looking indignant at past instances of this).
This leads to a falsifiable prediction. In the next one to four centuries, vegetarianism will increase to a majority, jails will be seen as unnecessarily, brutally, unjustifiably harsh, "the poor" will be less of an Acceptable Target (c.f. delusions that they are "just lazy" and so on), and the present generation will be condemned for being so terrible at donating in general and at donating to the right causes. If all of those things happen, moral progress will have been flat-out confirmed.
I don't think I should be a vegetarian. Thus at best I feel uneasy about people in four centuries thinking vegetarianism should be compulsory, and at worst I'll be dismayed by them spending time on activities related to that instead of on things I value. If I thought that was great I'd already be vegetarian, duh.
Also, I think I'd like some violence to be OK. Completely non-violent minds would be rather inhuman, and violence has some neat properties if viewed from the perspective of fun theory. In any case, I strongly suspect the general non-violence trend (documented by Pinker) over the past few thousand years was due to biological changes in humans because of our self-domestication. Your point on consent is questionable. Victim blaming as well, since especially in the 20th century I would think all we saw was one set of scapegoats being swapped for another one.
This leads me to suspect Homer's FAI is probably different from my own FAI, which is different from the FAI of 2400 AD values. If FAI2400 gets to play with the universe forever, instead of FAI2012, I'd be rather pissed. Just because you see a trend line in moral change doesn't mean there is any reason to outsource your future value edits to it. Isn't this the classic mistake of confusing is for should?
But if it were as you say, then all our worries about CEV and FAI would be silly, since our society apparently already automagically does something very similar to what we want; we just need to figure out how to design it so that we can include emulated human minds while it continues working its thing.
Yay positive singularity problem solved!
Is moral progress a coherent concept? What is moral progress?
Short answer: Yes, I tentatively think it is. I need to work on making my answer to the second question more explicit, if not into an independent essay. I'll be citing some thought done by Eliezer Yudkowsky on CEV and will also be relying on James_G's concept of the eminent self.
Do you believe that there is no non-arbitrary way to define "moral progress", or do you think that "moral progress" is a coherent concept, just one we haven't experienced?
I think moral progress is a coherent concept, and I'm inclined to argue no human society so far has experienced it, though obviously I can't rule out some outliers that did so in certain time periods, since this is such a huge set. We have so little data, and there seems to be great variance in the kinds of values we see in them.
"Moral progress" simply describes moral change or value drift in the speaker's preferred direction. Very confident (~95%).
I don't use it that way. I like lots of the moral changes of the past 250 years but feel the process behind them isn't something I want to outsource morality to. Just like I like having opposable thumbs but feel uncomfortable letting evolution shape humans any further. We should do that ourselves so it doesn't grind down our complex values.
There are lots of people running around who think society in 1990 is somehow morally superior to society in 1890 on some metric of rightness beyond the similarity of their values to our own. This is the difference between being on the "wrong side of history" being merely a mistake in reasoning one should get over as soon as possible, and it being a tragedy for them. A tragedy that perhaps kept repeating for every human society and individual in existence for nearly all of history.
This also suggests different strategies are appropriate for dealing with future moral change. I think we should be very cautious, since I'm sure we don't understand the process. Modern Western civilization doesn't have a narrative of "over time values became more and more like our own", but rather "over time morality got better and better, and this gives our society meaning!". It's the difference between seeing "God guiding evolution" and confronting the full horror of Azathoth.
Do you think any human society ever experienced moral progress?
Hard to say; history is blurry. We do know the past 300 years well enough that I'm OK with this level of certainty.
I'm far from comfortable saying that there was no moral progress in say some Medieval European societies. Not perhaps from our perspective, but from a sort of CEV-of-700 AD values looking at 1100 AD, who knows? I don't know enough to have a reasonable estimate.
There was also useful progress in philosophy made before the "Enlightenment" that sometimes captured previous values and preferences and fixed them up. But again, in nearly any society for which that is true, there was also lots of harmful philosophy that mutated values in response to various pressures.
If you can't produce evidence that moral progress ever happened and believe that it definitely hasn't happened in the recent past, why do you think that moral progress is a coherent concept?
I didn't say I had great confidence in moral progress being a coherent concept. But it seems plausible to me that acquiring more true beliefs and thinking about them clearly might lead to discovering some values are incoherent or unreachable and thus stop pursuing them.
Feedback at any stage is welcome. Expect frequent edits.
Note: I've had very good experiences with such public drafts so far and I recommend them to others.
Commentary on LessWrong and its norms
That I would like to share. I recently found it on the blog Writings by James_G. I am going to add some emphasis and commentary of my own, but I'm mostly interested in how other LWers see this. The main topic of the post itself is politics and cooperation, but I want to emphasise that that isn't the topic I'd like to open.
...
So, neurological egalitarians like (I should imagine) Zachary and neurological racist-authoritarians like myself need to be able to cooperate. Unfortunately, politics is the mind-killer.
No wait—that can’t be true. I’m writing this highly political essay, and my mind ain’t killed (Aberlour notwithstanding). This is the problem with Yudkowsky: he’s right so often, that the odd misfire goes unnoticed.
People go funny in the head when talking about politics.
Close, but no cigar. People go funny in the head when their emotions are aroused, and “political” arguments tend to be provocative. Thinking about and discussing politics doesn’t always evoke strong emotions; strong emotions can be evoked by things other than politics. Politics and out-of-control emotions are closely related, but here Yudkowsky didn’t cleave reality at its joints.
Yudkowsky’s rationalist forum, lesswrong.com, is based on the idea that politics is the mind-killer. When someone comments on what he considers a political subject, he apologises for dropping a mind-killer. Political arguments are taboo. The forum also has a karma system: every post and comment is subject to anonymous positive and negative ratings from other users. This is especially effective because of the forum members’ high regard for LessWrong’s majority opinion; negative karma is an assault on one’s soul. Given the high quality of the founding population (Overcoming Bias commenters), these features make LessWrong an unusually civil place.
Yeah, it kind of can feel like that. Consider the strong reactions, and even written-out objections, people have when downvoted. Yet I think we should be doing more downvoting.
So, is LessWrong an exemplar for efficient cooperation across the neuropolitical divide? I don’t think so.
There seems to be evidence that we are indeed failing at this.
First, enforcing the no-politics taboo isn’t straightforward. “Politics” is an ill-defined term. It means roughly, “ideas and arguments associated with governance, how people should live, and decisions that significantly affect many people’s lives”. A LessWrong thread about the irrationality of Keynesianism and fraudulence of Keynesian economists would be highly political—seditious. But a (quite interesting) thread about Awful Austrians isn’t political, because Austrian economists are marginal. Austrian theory isn’t influential and might never be, therefore attacking it doesn’t seem political in everyone’s eyes. In this way, no-politics can easily become no-political-opinion-that-isn’t-mainstream—not a recipe for rationality.
Another problem is that the scope of “mind-killing arguments” is embarrassingly wide. For example:
I’ve recently read a lot of strong claims and mind-killing argumentation made against E.Y.’s assertion that MWI is the winning/leading interpretation in QM. The SEP seems to agree with this, which means I’ve got a bottom-line here to erase since both of my favorite authorities agree on that particular conclusion.
If arguments about quantum mechanics are mind-killing, what isn’t? Is arguing in general taboo? That isn’t rational.
Emotion is the mind-killer, so an apolitical argument could kill a nerd’s mind. For example, his opponent might insinuate that only rubes take the Copenhagen interpretation seriously. Being insulted, or simply losing an argument can stimulate emotions. But a rational person learns not to let anyone kill his mind (and to be a skilful mind-assassin when it suits him). To describe every firm clash of opinions as “mind-killing” is a self-fulfilling prophecy.
Emotions may have evolved to permit ignorant humans to practise timeless decision theory in situations requiring reciprocity and deal-making, like the Parfit’s Hitchhiker thought experiment. “Emotion” signifies a shift in the balance of mental sub-agents, which induced TDT behaviour in fecund ancestral humans. If the brain in question is a moral realist, it rationalises these emotions using moral projectivism: “I responded like that because he was morally wrong”. This epistemic error obstructs the displaced sub-agent from regaining control; moral realism legitimises upstart sub-agents.
Some emotions don’t prompt moral rationalisation. The excuse for odd behaviour associated with mating is, “I love X”. Love is nonetheless another instance of TDT pre-commitment; but since mating is a private interaction, unlike morality the (degenerate) rationalisation for the emotion of love need not act as a common currency for collective negotiation and deal-making. Whether or not TDT considerations fully explain the evolution of emotion, we know that emotion “kills minds”—it promotes upstart sub-agents—and we can identify its causes.
Internet fora are provocative. Anyone can comment; even if 9 out of 10 discussants are reasonable, there’s always a jerk. The low bandwidth of internet discussions also causes problems. In meatspace, body language, tone of voice and familiarity allow people to respect one another’s emotional limits; internet interlocutors inadvertently upset one another. LessWrong’s karma system is also subtly infuriating. Outside cyberspace, nobody can snipe someone’s reputation with the impunity of the anonymous, silent downvoter. In real life, not everyone’s opinion is equally status-enhancing or -detracting, and every off-hand comment isn’t susceptible to meticulous scrutiny. Unwarranted downvotes—and jerks’ downvotes are indistinguishable from anyone else’s— are the Jim Jones of mind-killing.
LessWrong does a great job of maintaining civility; a more polite, entirely open internet forum I cannot imagine. But the costs of the no-politics taboo and karma system—entrenching mainstream ideas, stifling discussion of important problems, and creating effete rationalists—are unavoidable, and gradual dissipation of the highly rational, open-minded Overcoming Bias founding group may exacerbate these downsides.
A completely open forum, however effective the karma system and informal rules, doesn’t permit neurological leftists and rightists to cooperate and discourse efficiently. Still, internet fora are a great means of exchanging information. To confine useful discussion to email and glacial blogospheric exchanges isn’t ideal. We need a way to discuss politics honestly, without emotional turmoil. I propose two things: a protocol, and a forum design.
The protocol is a formal way to conduct internet discussions, which minimises mind-killing. First, each discussant must state his utility function. “Humans” don’t have utility functions, but their sub-agents do. For example, I (speaking now) am a hedonic utilitarian, which inhabits a brain populated by competing sub-agents. Refusal to state a utility function implies failure to accurately reduce the “I” in a statement like “I want to do X”.
Discussants whose utility functions differ substantially must accept that this is an impediment to cooperation. But the strongest sub-agent in an educated mind is usually a hedonic utilitarian. Ideally, all parties to a discussion claim to share the same utility function.
...
I'm not sure this protocol is workable. The full article is here.
I wish my mother had aborted me-- extreme utilitarianism.
Simple explanation of meta-analysis; below is a copy of my attempt to explain basic meta-analysis on the DNB ML. I thought I might reuse it elsewhere, and I'd like to know whether it really is a good explanation or needs fixing.
Hm, I don't really know of any such explanation; there's Wikipedia, of course: http://en.wikipedia.org/wiki/Meta-analysis
A useful concept is the hierarchy of evidence: we all know anecdotes are close to worthless, correlations or surveys fairly weak, experiments good, randomized experiments better, controlled randomized experiments much better, and blind controlled randomized experiments best. If a randomized experiment contradicts an anecdote, we know to believe the experiment; and if a blind controlled randomized experiment contradicts an experiment, we know to believe the blind controlled randomized experiment. But what happens when we have a bunch of studies on the same level... which don't agree? What do we do if only 3 out of 5 experiments report the same result? We need to somehow combine the 5 experiments into 1 final result. The process of combining them is a "meta-analysis".
What parts of the experiments get combined may surprise you if you've read a few papers. Meta-analyses usually presume you know what an 'effect size' is. This is different from stuff like p-values, even though p-values are what everyone usually focuses on when judging results! The difference is that p-values say whether there is a difference between the control and experiment, while effect sizes say how big the difference is. It turns out that you can't really combine p-values from different studies, but you can combine effect sizes.
Each study gives you an effect size, based on the averages and standard deviation (how variable or jumpy the data is). What do you do with 10 effect sizes? How do you combine or add or aggregate them? That's where meta-analysis comes in.
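To make "effect size" concrete, here is a minimal sketch of how one common standardized effect size (Cohen's d) could be computed for a single study from its group means and standard deviations. The numbers are invented purely for illustration; real studies report the effect size (or the statistics needed to derive it) themselves.

```python
# Minimal sketch: Cohen's d for one hypothetical study.
# All numbers below are made up for illustration.
import math

def cohens_d(mean_exp, mean_ctrl, sd_exp, sd_ctrl, n_exp, n_ctrl):
    """Standardized mean difference: group difference divided by pooled SD."""
    pooled_sd = math.sqrt(((n_exp - 1) * sd_exp**2 + (n_ctrl - 1) * sd_ctrl**2)
                          / (n_exp + n_ctrl - 2))
    return (mean_exp - mean_ctrl) / pooled_sd

# Hypothetical study: trained group averages 105, controls 100.
print(cohens_d(mean_exp=105, mean_ctrl=100, sd_exp=15, sd_ctrl=14,
               n_exp=30, n_ctrl=30))   # ~0.34, a smallish effect
```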
You could just treat each as a vote: if 6 of the effect sizes are positive, and 4 are negative, then declare victory: "There's an effect of X size." (Some of the first meta-analyses, like the famous one combining studies of psychic effects, did just this.)
But what if some of the effects are huge, like 0.9, and all the others are 0.1? If we just vote, we get 0.1 since that's the majority. But is 0.1 really the right answer here? Doesn't seem like it.
So instead of voting, let's average! We add up the 10 studies and get something like +5; then divide by 10 and get 0.5 as our estimate. Much more reasonable: 0.9 seems too high, like those may be outliers, but 0.1 is kind of weird since we did get some 0.9s; we split the difference.
But studies don't always have the same number of subjects, and as we all know, the more subjects or data you have, the better an estimate you have of the true value. A study with 10 students in it is worth much less than a study which used 10,000 students! A simple average ignores this truth.
So let's weight each effect size by how many subjects/datapoints it had in it: the effect size from the study with 10 students gets a much smaller* weight than the one from 10,000 students. So now if the first 9 studies have ~10 datapoints each, and the 10th study has 1,000 datapoints, those 9 together count as, say, 1/10th* of the last study, since they total ~100 datapoints to its 1,000.
So each effect size gets weighted by how many datapoints went into making it, and then they're averaged together as before to give One Effect Size To Rule Them All.
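Here is a toy sketch of that pooling step. It weights each effect size by its sample size, as in the explanation above; real meta-analyses usually weight by inverse variance instead, which plays the same role (bigger studies count for more). All values are invented.

```python
# Toy fixed-effect pooling: weight each study's effect size by its sample size.
# (Standard practice is inverse-variance weighting; sample size is a simplification.)
effects = [0.9, 0.8, 0.1, 0.2, 0.15, 0.1, 0.3, 0.25, 0.9, 0.4]   # one per study
ns      = [ 10,  12,  15,  10,   11,  14,  10,   12,  13, 1000]  # subjects per study

pooled = sum(e * n for e, n in zip(effects, ns)) / sum(ns)
print(f"One Effect Size To Rule Them All: {pooled:.2f}")   # dominated by the big study
```

Note how the single 1,000-subject study pulls the answer close to its own 0.4, exactly the behaviour a simple unweighted average would miss.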
With this done, we can start looking at other questions (a small numeric sketch follows the list below) like:
- confidence intervals (this One Effect Size is not exactly right, of course, but how far away is it from the true effect size?)
- heterogeneity (are we comparing apples and apples? or did we include some oranges)
- or biases (funnel plots and trim-and-fill: does it look like some studies are missing?)
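As promised, a sketch of the first two checks: a 95% confidence interval around the pooled estimate, and Cochran's Q / I^2 as a rough heterogeneity measure. It assumes each study reports an effect size and its variance; the numbers are invented.

```python
# Sketch: confidence interval and heterogeneity for an inverse-variance pooled effect.
# Effect sizes and variances below are invented for illustration.
import math

effects   = [0.9, 0.2, 0.15, 0.4, 0.1]
variances = [0.20, 0.05, 0.04, 0.01, 0.06]   # smaller variance ~ bigger study

weights = [1 / v for v in variances]
pooled  = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
se      = math.sqrt(1 / sum(weights))
ci      = (pooled - 1.96 * se, pooled + 1.96 * se)

q  = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))   # Cochran's Q
i2 = max(0.0, (q - (len(effects) - 1)) / q) * 100   # % variation beyond chance

print(f"pooled={pooled:.2f}, 95% CI=({ci[0]:.2f}, {ci[1]:.2f}), "
      f"Q={q:.1f}, I^2={i2:.0f}%")
```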
These other factors help us in the unlikely case that we have multiple meta-analyses at odds:
- which meta-analysis is made up of studies higher on the hierarchy? A meta-analysis of experiments beats a meta-analysis of surveys, just like experiments beat surveys.
- which has more studies in it?
- which has smaller confidence intervals?
- which has lower heterogeneity?
- which looks better on the bias checks? etc.
An example of the further questions we can ask:
In the case of the DNB meta-analysis, we can look at the One Effect Size over all studies which was something like 0.5. But some studies are high and some are low; is there any way to predict which are high and low? Is there some characteristic that might cause the effect sizes to be high or low? I suspected that there was: the methodological critique of active vs passive control groups. (I actually suspected this before the Melby meta-analysis came out, which did the same thing over a larger selection of WM-related studies.)
So I subcategorized the effect sizes from studies with active control groups and the ones with passive control groups, and I did 2 smaller separate meta-analyses on each category. Did the 2 smaller meta-analyses spit out roughly the same answer as the full meta-analysis? No, they did not! They spat out quite different answers: studies with passive control groups found that the effect size was large, and studies with active control groups found that the effect size was small. This serves as very good evidence that yes, the critique is right, since it's not that likely that a random split of studies would separate them so nicely.
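For concreteness, a sketch of that subgroup (moderator) analysis: split the studies by control-group type and pool each subset separately. The effect sizes and sample sizes here are invented; they only illustrate the shape of the result described above.

```python
# Sketch of a subgroup analysis: pool passive-control and active-control
# studies separately. All numbers are invented for illustration.
studies = [
    # (effect size, n, control type)
    (0.8, 20, "passive"), (0.9, 25, "passive"), (0.7, 30, "passive"),
    (0.1, 22, "active"),  (0.2, 28, "active"),  (0.0, 35, "active"),
]

def pooled(subset):
    """Sample-size-weighted average effect size for a subset of studies."""
    return sum(e * n for e, n, _ in subset) / sum(n for _, n, _ in subset)

for kind in ("passive", "active"):
    subset = [s for s in studies if s[2] == kind]
    print(f"{kind} controls: pooled effect ~ {pooled(subset):.2f} "
          f"({len(subset)} studies)")
```

If the two subgroup answers differ as sharply as they do here, the moderator (control type) is doing real work, which is the point of the DNB analysis above.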
And that's the meat of my meta-analysis. I hope this was helpful?
* how much smaller? Well, that's where statistics comes in. It's not a simple linear sort of thing: 100 subjects is not 10x better than 10 subjects, but less than 10x better. Diminishing returns. Some formula and power calculations in https://plus.google.com/u/0/103530621949492999968/posts/i4RB2DHnW5y
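A quick numeric illustration of that footnote's diminishing-returns point: the standard error of a mean shrinks with the square root of n, so 100 subjects buys roughly sqrt(10) ~ 3.2 times the precision of 10 subjects, not 10 times. The sigma value below is arbitrary.

```python
# Standard error of the mean shrinks like 1/sqrt(n): diminishing returns.
import math

sigma = 15.0   # arbitrary population standard deviation
for n in (10, 100, 1000):
    print(f"n={n:4d}  standard error ~ {sigma / math.sqrt(n):.2f}")
```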
Questions about Eliezer's Metaethics
According to Eliezer’s metaethics, morality incorporates the concept of reflective equilibrium. Given that presumably every part of my mind gets entangled with my output if I reflect long enough on some topic, isn’t Eliezer’s metaethics equivalent to saying that “right” refers to the output of X, where X is a detailed object-level specification of my entire mind as a computation?
In principle, X could decide to search for some sort of inscribed-in-stone morality out in the physical universe (and adopt whatever it finds or nihilism if it finds none), so Eliezer’s metaethics doesn’t even seem to rule out that kind of "objective" morality. To me, a satisfactory solution to metaethics might be an algorithm for computing morality that can be isolated from the rest of a human mind, along with some explanation of why this algorithm can be said to compute morality, and some conclusions about what properties the algorithm and its output might have. Is Eliezer’s theory essentially a negative one, that such a solution to metaethics isn’t possible?
X is supposed to be a stand-alone description of a computation and not something like “whatever computation my brain does”. But I do not have introspective access to most of my mind nor hold a copy of it as a quine. How can I mean X when I say “morality” if I don’t know what X is and also can’t give a logical/mathematical definition that unpacks into X? Is there a theory of semantics that makes it clear that words can sensibly have meanings like this?