I agree that trying to map all human values is extremely complex, as articulated here [http://wiki.lesswrong.com/wiki/Complexity_of_value], but the problem, as I see it, is that we do not really have a choice - there has to be some way of measuring the initial AGI to see how it is handling these concepts.
I don't understand why we don't try to prototype a high-level ontology of core values for an AGI to adhere to - something that humans can discuss and argue about for many years before we actually build an AGI.
Law is a useful example that shows human values cannot be absolutely quantified into a universal system. The law is constantly abused, misused and corrected, so if a similar system were put in place for an AGI, it could quickly lead to UFAI.
One of the interesting things about the law is that for core concepts like murder, the rules are well defined and fairly unambiguous, whereas more trivial matters (in terms of risk to humans), like tax law and parking law, are where most of the complexity lies.
Well, I certainly agree that there are lots of things we don't think about, and that a sufficiently intelligent system can come up with courses of action that humans will endorse, and that humans will like all kinds of things that they would not have endorsed ahead of time... for that matter, humans like all kinds of things that they simultaneously don't endorse.
And no, not really interested in private discussion of alternate FAI approaches, though if you made a post about it I'd probably read it.
Generally we aim to come up with things humans will both like and endorse. Optimizing for "like" but not "endorse" leads to various forms of drugging or wireheading (even if Eliezer does disturb me by being tempted towards such things). Optimizing for "endorse" but not "like" sounds like carrying the dystopia we currently call "real life" to its logical, horrid conclusion.
How well-founded does a set of notes or thoughts have to be in order to be worth posting here?