I agree that trying to map all human values is extremely complex, as articulated here [http://wiki.lesswrong.com/wiki/Complexity_of_value], but the problem, as I see it, is that we do not really have a choice - there has to be some way of measuring the initial AGI to see how it is handling these concepts.
I don't understand why we don't try to prototype a high-level ontology of core values for an AGI to adhere to - something that humans can discuss and argue about for many years before we actually build an AGI.
Law is a useful example showing that human values cannot be absolutely quantified into a universal system. The law is constantly abused, misused, and corrected, so if a similar system were put in place for an AGI, it could quickly lead to UFAI.
One of the interesting things about the law is that for core concepts like murder, the rules are well defined and fairly unambiguous, whereas more trivial matters (in terms of risk to humans), such as tax laws and parking laws, are where much of the complexity lies.
A typical (though not necessarily the most common) related local belief is that it's both possible (practically speaking) and necessary to build an AGI which we know to be more reliable than human brains.
If one posits this, then comparing the initial AGI's results to the output of our own brains, and rejecting the AGI if it fails to match, is obviously silly. It's kind of like the role of Linnaean taxonomy in evaluating DNA-based graphs of species relationships: where they conflict, we simply note that the Linnaean taxonomy was wrong and move on.
That's not to say a taxonomy of human values is useless... it might be good for something, just as Linnaean taxonomy might once have been. It might document value drift over time, for example, or value variation across different communities.
But given local assumptions, it doesn't do much towards AGI, so it's not too surprising that there's not much effort going in those directions here.
The first part of the original plan for CEV is to get an AI to work out human values from all the humans. Without some idea of how it would do this, it appears to be a magical step, so asking the question seems a reasonable thing to do.