I agree that trying to map all human values is extremely complex, as articulated here [http://wiki.lesswrong.com/wiki/Complexity_of_value], but the problem, as I see it, is that we do not really have a choice - there has to be some way of measuring the initial AGI to see how it is handling these concepts.
I don't understand why we don't try to prototype a high-level ontology of core values for an AGI to adhere to - something that humans can discuss and argue about for many years before we actually build an AGI. A toy sketch of what such a prototype might look like is below.
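Purely as a strawman to argue over, here is a minimal Python sketch of such a prototype. Everything in it is invented for illustration - the node names, descriptions, and weights are placeholders, not a proposal for the actual contents:

```python
from dataclasses import dataclass, field

# Hypothetical toy ontology: all names and weights below are invented
# placeholders, meant only to show the shape such a prototype could take.
@dataclass
class ValueNode:
    name: str
    description: str
    weight: float  # relative priority; higher = more important
    children: list["ValueNode"] = field(default_factory=list)

CORE_VALUES = ValueNode("human_values", "root of the ontology", 1.0, [
    ValueNode("preserve_life", "avoid causing or permitting death or injury", 0.9, [
        ValueNode("no_murder", "never deliberately kill a human", 1.0),
    ]),
    ValueNode("autonomy", "respect informed human choices", 0.6),
    ValueNode("honesty", "do not deceive humans", 0.5),
])

def flatten(node: ValueNode, path=()):
    """Yield (path, node) pairs so the ontology can be audited or diffed."""
    path = path + (node.name,)
    yield path, node
    for child in node.children:
        yield from flatten(child, path)

for path, node in flatten(CORE_VALUES):
    print("/".join(path), node.weight)
```

The point of something this explicit is exactly that it can be diffed, criticized, and revised in public over many years, the way the comment suggests.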
Law is a useful example showing that human values cannot be absolutely quantified into a universal system. The law is constantly abused, misused and corrected, so if a similar system were put into place for an AGI it could quickly lead to UFAI.
One of the interesting things about the law is that for core concepts like murder, the rules are well-defined and fairly unambiguous, whereas more trivial matters (in terms of risk to humans), like tax and parking laws, are where most of the complexity lies.
Mapping human values is even more difficult than mapping everyday human concepts, as e.g. Cyc tried to do: it means putting vagueness into exact symbolic form. And by vagueness I don't mean 'I don't care' but 'related in a varying way' - varying with respect to other relations (recursively) and varying with individual differences.
If we really tried to map human values symbolically, we'd have to map each individual's values symbolically too, and then symbolically aggregate all of that.
I don't think we can do that. An AGI could, but by then it would be too late.
What we can do is map human values vaguely. We could, for example, train large deep neural nets to learn and approximate these concepts from whatever evidence we feed them, and then inspect the inferred structure to see whether it is sufficiently close to what we want. That way we do not have to do the mapping ourselves; only the checking. A rough sketch of this 'learn, then check' loop follows.
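Here is a minimal sketch of that loop, assuming PyTorch. The scenario features, labels, and the gradient-based probing at the end are all invented for illustration - real evidence would be far richer, and real checking would need far stronger interpretability tools than an input-gradient probe:

```python
import torch
import torch.nn as nn

# Toy stand-in for "evidence": feature vectors describing scenarios, labeled
# acceptable (1) or unacceptable (0) by humans. The 4 features and 6 examples
# are invented placeholders.
X = torch.tensor([
    # [harm, consent, deception, benefit]
    [0.9, 0.0, 0.1, 0.2],   # high harm, no consent -> unacceptable
    [0.8, 0.1, 0.7, 0.1],
    [0.1, 0.9, 0.0, 0.8],   # low harm, consensual, beneficial -> acceptable
    [0.0, 1.0, 0.1, 0.9],
    [0.7, 0.2, 0.9, 0.3],
    [0.2, 0.8, 0.0, 0.6],
])
y = torch.tensor([[0.], [0.], [1.], [1.], [0.], [1.]])

# Learn: fit a small net to approximate the human judgments.
net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1), nn.Sigmoid())
opt = torch.optim.Adam(net.parameters(), lr=0.05)
loss_fn = nn.BCELoss()

for _ in range(500):
    opt.zero_grad()
    loss = loss_fn(net(X), y)
    loss.backward()
    opt.step()

# Check: instead of specifying the mapping ourselves, inspect what the net
# inferred - here, the sensitivity of its judgment to each input feature.
probe = X.clone().requires_grad_(True)
net(probe).sum().backward()
print("mean gradient per feature:", probe.grad.mean(dim=0))
```

If the checking step showed, say, that the net's judgments were insensitive to consent, we would know the learned concept had diverged from the one we wanted, without ever having written the concept down symbolically.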