
By the same token, I’m generally opposed to grand unified theories of the body. The shoulder involves a ball-and-socket joint, and the kidney filters blood. OK cool, those are two important facts about the body. I’m happy to know them! I don’t feel the need for a grand unified theory of the body that includes both ball-and-socket joints and blood filtration as two pieces of a single grand narrative.

I think I am generally on board with your critiques of FEP, but I disagree with this framing against grand unified theories. The shoulder and the kidney are both made of cells. They both contain DNA that is translated into proteins. They are both designed by an evolutionary process.

Grand unified theories exist, and they are precious. I want to eke out every sliver of generality wherever I can. But grand unified theories are also extremely rare, and far more common in public discourse are fakes that create an illusion of generality without making any substantial connections. The style of thinking that looks at a ball-and-socket joint and a blood-filtration system and immediately thinks "I need to find out how these are really the same", rather than studying the two things in detail and separately, is apt to produce these false grand unifications. Although I haven't looked into FEP as deeply as you or other commenters have, the writing I have seen on it smells more like this mistake than like true generality.

But a big reason I care about exposing these false theories, and the bad mental habits that are conducive to them, is precisely that I care so much about true grand unified theories. I want grand unified theories to shine like beacons, so we can notice their slightest nudge and feel the faint glimmer of a new one approaching from the distance, rather than have them hidden by a cacophony of overblown rhetoric coming from random directions.

I think MIRI's Logical Inductor idea can be factored into two components: one contains the elegant core that makes the idea work so well, and the other is an arbitrary embellishment that obscures what is actually going on. Naturally, I am calling for this to be recognized, and for people to teach and think about only the elegant core. The elegant core is infinitary markets: markets that exist for an arbitrarily long time, with commodities that can take arbitrarily long to return dividends, and infinitely many market participants who use every computable strategy.

The hack is that the commodities are labeled by sentences in a formal language, with the relationships between them governed by a proof system. This creates a misleading impression that the value of the commodity labeled phi measures the probability that phi is true; in fact what it measures is more like the probability that the proof system will eventually affirm phi, or more precisely the probability that phi is true in a random model of the theory. Of course, what we really care about is the probability that phi is actually true, meaning true in the standard model, where the things labeled "natural numbers" are actual natural numbers and so on. By combining proof systems with infinitary markets, one obscures how much of the "work" of obtaining accurate information is done by each. I think it is better to study the two separately. Since proof systems are already well studied, and infinitary markets are the novel idea in MIRI's work, that means MIRI should primarily study infinitary markets.
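To illustrate what I mean by the market component in isolation, here is a toy sketch. This is emphatically not the actual Logical Inductor construction (which uses infinitely many traders and a deductive process); it is my own simplification with three hypothetical trader strategies betting on a repeated binary outcome, where traders that predict well gain wealth and the market price is the wealth-weighted average prediction. No formal language or proof system appears anywhere, yet the price still converges to an accurate probability:

```python
# Toy market-based forecaster (a simplification, NOT MIRI's construction):
# finitely many trader strategies bet on a repeated binary outcome.
# Traders that predict well gain wealth; the market price is the
# wealth-weighted average of the traders' predictions.
import random

random.seed(0)

def always(p):
    # A trader that ignores history and always predicts probability p.
    return lambda history: p

def frequency(history):
    # A trader that predicts the Laplace-smoothed empirical frequency.
    return (sum(history) + 1) / (len(history) + 2)

traders = [always(0.1), always(0.9), frequency]
wealth = [1.0, 1.0, 1.0]

TRUE_P = 0.7  # the outcome is a biased coin, unknown to the traders
history = []
for _ in range(2000):
    preds = [t(history) for t in traders]
    total = sum(wealth)
    price = sum(w * p for w, p in zip(wealth, preds)) / total
    outcome = 1 if random.random() < TRUE_P else 0
    # Each trader's wealth is scaled by the probability it assigned to
    # the realized outcome, relative to the market's price for it.
    for i, p in enumerate(preds):
        wealth[i] *= (p if outcome else 1 - p) / (price if outcome else 1 - price)
    history.append(outcome)

# The frequency trader accumulates wealth, so the price ends up near
# the true bias of 0.7.
```

The point of the sketch is that the "market learns from strategies" dynamic does all the work here; labeling the commodity with a logical sentence is an add-on.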

I think it is a mistake to focus on these kinds of weird effects as "biological systems using quantum mechanics", because doing so ignores the much more significant ways quantum mechanics is essential to all the ordinary things that are ubiquitous in biological systems. The stability of every single atom depends on quantum mechanics, and every chemical bond requires quantum mechanics to model. For the intended implication about the difficulty of Whole Bird Emulation, these ordinary uses of QM are much more significant: there are a huge number of different kinds of molecular interactions in a bird's body, and each one requires solving a multi-particle Schrödinger equation. The computational work for this one exotic effect is tiny in comparison.

As I understand it, the unique thing about this effect is that it involves much longer coherence times than ordinary molecular interactions do. That is cool, but unless you can argue that birds have error-correcting quantum computers inside them, which is incredibly unlikely, I don't think it is that relevant to AI timelines.

While I like a lot of Hanson's grabby-aliens model, I do not buy the inference that, because humans appeared early in cosmological history, the cosmic commons must be taken quickly, which would give a lower bound on how often grabby aliens appear. I think that neglects the possibility that the early universe is inherently more conducive to creating life, so that most life is created early, but the resulting lifeforms may be very far apart.
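The alternative hypothesis can be made concrete with a toy simulation. This is my own illustration, not Hanson's model, and every parameter in it is made up: if the rate at which life appears declines over cosmic time (say, tracking a declining star-formation rate), then the typical civilization is born early, with no assumption anywhere that expanding civilizations take the cosmic commons quickly:

```python
# Toy sketch (my own illustration, not Hanson's model): sample
# civilization birth times from an exponentially declining birth rate
# and check when the typical civilization appears. All parameters are
# made-up assumptions for illustration.
import math
import random

random.seed(0)

HORIZON_GYR = 1000.0  # assumed window in which life could ever appear (Gyr)
DECAY_GYR = 10.0      # assumed e-folding time of habitability (Gyr)

def sample_birth_time():
    # Inverse-CDF sampling of a birth rate proportional to
    # exp(-t / DECAY_GYR), truncated to [0, HORIZON_GYR].
    u = random.random()
    z = 1 - math.exp(-HORIZON_GYR / DECAY_GYR)
    return -DECAY_GYR * math.log(1 - u * z)

births = sorted(sample_birth_time() for _ in range(100_000))
median = births[len(births) // 2]
# With these assumed parameters the median birth lands around 7 Gyr,
# i.e. "early", even though the model says nothing about how densely
# packed or expansionist civilizations are.
```

So observing that we are early is compatible with a declining habitability rate alone, and by itself does not pin down how quickly the commons get taken.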

Eliezer is very explicit, and repeats many times in that essay, including in the very segment you quote, that his code of meta-honesty does in fact compel you to never lie in a meta-honesty discussion. The first four paragraphs of your comment are not elaborating on what Eliezer really meant; they are disagreeing with him. Reasonable disagreements too, in my opinion, but conflating them with Eliezer's proposal is corrosive to the norms that allow people to propose and test new norms.

I had trouble making the connection between the first two paragraphs and the rest. Are you introducing what you mean by an "alarm" and then giving a specific proposal for an alarm afterwards? Is there significance in how the example alarms are in response to specific words being misleading?

Writing suggestion: Expand the acronym "ELK" early in the piece. I looked at the title and my first question was what ELK is; I skimmed the piece quickly and wasn't able to find out until I clicked through to the ELK document. I now see it's also expanded in the tag list, which I normally don't examine. I haven't read the article more closely than a skim.

On further thought I want to walk back a bit:

  1. I confess my comment was motivated by seeing something where it looked like I could make a quick "gotcha" point, which is a bad way to converse.
  2. Reading the original comment more carefully, I'm seeing how I disagree with it. It says (emphasis mine)

in practice the problems of infinite ethics are more likely to be solved at the level of maths, as opposed on the level of ethics and thinking about what this means for actual decisions.

I highly doubt this problem will be solved purely on the level of math, and I expect it will involve more work on the level of ethics than on the level of foundations of mathematics. However, I think taking an overly realist view of the conventions mathematicians have chosen for dealing with infinities is an impediment to thinking about these issues, and studying alternative foundations helps ward against that. The problems of infinite ethics, especially for uncountable infinities, seem to especially rely on such realism. I do expect that a solution to such issues, to the extent it is mathematical at all, could be formalized in ZFC. The central thing I liked about the comment is the call to rethink the relationship of math, and mathematical infinity, to reality, and that doesn't necessarily require changing our foundations, just changing our attitude towards them.

If the only alternative you can conceive of for ZFC is removing the axiom of choice then you are proving Jan_Kulveit's point.

I was reading the story behind the first quotation, entitled "The discovery of x-risk from AGI", and I noticed something around that quotation that doesn't make sense to me; I'm curious whether anyone can tell what Eliezer Yudkowsky was thinking. As referenced in a previous version of this post, after the quoted scene the highest Keeper commits suicide. Discussing the impact of this, EY writes:

And in dath ilan you would not set up an incentive where a leader needed to commit true suicide and destroy her own brain in order to get her political proposal taken seriously.  That would be trading off a sacred thing against an unsacred thing.  It would mean that only true-suicidal people became leaders.  It would be terrible terrible system design.

So if anybody did deliberately destroy their own brain in an attempt to increase their credibility - then obviously, the only sensible response would be to ignore that, so as not to create hideous system incentives.  Any sensible person would reason out that sensible response, expect it, and not try the true-suicide tactic.

The second paragraph is clearly a reference to acausal decision theory: people making a decision because of how they anticipate others will react to expecting that this is how they make decisions, rather than because of the direct consequences of the decision. I'm not sure it really makes sense; a self-indulgent reminder that nobody knows any systematic method for producing prescriptions from acausal decision theories in the everyday-life cases where they purportedly differ from causal decision theory. Still, it's fiction, so I can suspend my disbelief.

The confusing thing is that in the story the actual result of the suicide is exactly what this passage says the result shouldn't be: it convinces the Representatives to take the proposal more seriously and implement it. The passage is just used to illustrate how shocking the suicide was; no additional considerations are described for why the reasoning is incorrect in those circumstances. So it looks like the Representatives, despite being the second-highest-ranked governing body of dath ilan, are explicitly violating the Algorithm that supposedly underlies the entire dath ilan civilization and is taught to every child, at least in broad strokes.
