http://www.ted.com/talks/nick_bostrom_what_happens_when_our_computers_get_smarter_than_we_are
Artificial intelligence is getting smarter by leaps and bounds — within this century, research suggests, a computer AI could be as "smart" as a human being. And then, says Nick Bostrom, it will overtake us: "Machine intelligence is the last invention that humanity will ever need to make." A philosopher and technologist, Bostrom asks us to think hard about the world we're building right now, driven by thinking machines. Will our smart machines help to preserve humanity and our values — or will they have values of their own?
I realize this might go into a post in a media thread, rather than its own topic, but it seems big enough, and likely-to-prompt-discussion enough, to have its own thread.
I liked the talk, although it was less polished than TED talks often are. What was missing, I think, was any indication of how to solve the problem. He could come across as just an ivory tower philosopher speculating about something that might become a problem one day: apart from mentioning at the beginning that he works with mathematicians and IT guys, he gives no real impression that this problem is already being actively worked on.
I'm not sure your argument proves your claim. I think what you've shown is that there exist reasons, other than the inability to create perfect boxes, to care about the value alignment problem.
We can flip your argument around and apply it to your claim: imagine a world where only one team had the ability to build superintelligent AI. I would argue that it would still be extremely unsafe to build an AI and try to box it. But I don't think that lets me conclude that a lack of boxing ability is the true reason the value alignment problem is so important.