Most concern about AI comes down to the scariness of goal-oriented behavior. A common response to such concerns is "why would we give an AI goals anyway?" I think there are good reasons to expect goal-oriented behavior, and I've been on that side of a lot of arguments. But I don't think the issue is settled, and it might be possible to get better outcomes without explicit goals. I flesh out one possible alternative here, based on the dictum "take the action I would like best" rather than "achieve the outcome I would like best" (a toy sketch of the contrast follows below).
(As an experiment I wrote the post on Medium, so that it is easier to give sentence-level feedback, especially feedback on the writing or other low-level points.)
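To make the contrast concrete, here is a toy sketch in Python. The helper names (`approval_directed_choice`, `predict_outcome`, the little rating table standing in for Hugh) are invented for illustration; this only shows the shape of "rate the action itself" versus "optimize the predicted outcome," not the actual proposal.

```python
from typing import Callable, Iterable

Action = str  # toy stand-in; a real agent would use a richer action representation


def goal_directed_choice(actions: Iterable[Action],
                         predict_outcome: Callable[[Action], str],
                         utility: Callable[[str], float]) -> Action:
    """'Achieve the outcome I would like best': score each action by the
    utility of its predicted consequence, then take the best-scoring one."""
    return max(actions, key=lambda a: utility(predict_outcome(a)))


def approval_directed_choice(actions: Iterable[Action],
                             approval: Callable[[Action], float]) -> Action:
    """'Take the action I would like best': score each action by how highly
    the overseer (Hugh) would rate the action itself, and take that one."""
    return max(actions, key=lambda a: approval(a))


if __name__ == "__main__":
    # Toy data: the real versions of `approval` and `predict_outcome` would be
    # learned models of Hugh's judgment and of the world, not lookup tables.
    actions = ["draft a plan and show it to Hugh", "quietly acquire more resources"]
    ratings = {"draft a plan and show it to Hugh": 0.9,
               "quietly acquire more resources": 0.1}
    print(approval_directed_choice(actions, ratings.__getitem__))
```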
I do not understand how anything you said addresses the weakness I pointed out in your argument: namely, that you have simply moved the value-complexity problem somewhere else. Your reply just hand-waves that issue again.
Human beings can't endorse actions per se, without context and implied goals. And the AI can't simply iterate over all possible actions at random to see what works without some sort of model that constrains what it is looking for. Based on what I can understand of your proposal, it seems to me the AI would just wander around doing semi-random things and not actually do anything useful for humans, unless Hugh has some goal(s) in mind to constrain the search.
And the AI has to be able to model those goals in order to escape the problem that it is now no smarter than Hugh. Indeed, if you can simulate Hugh, you might as well just have an em; the "AI" part is irrelevant.