Some time ago I read an article on the difficulty of agents building successors. The argument was something like this: suppose we have solved the ultimate objective of goodness and built an agent A that does only things it can prove are good (according to that definition). Can agent A build a more powerful successor B? The reasoning goes that it cannot, since A cannot prove that everything B does will be good: B is more powerful than A, and if A could prove that, one might consider the two equally intelligent.

However, I think this argument does not account for the power of interactive proofs. With interactive proofs, a skeptical, computationally bounded verifier can be convinced by an all-powerful prover of the truth of statements far up the complexity hierarchy (L. Babai, L. Fortnow, and C. Lund, "Nondeterministic exponential time has two-prover interactive protocols," Computational Complexity 1:3-40, 1991; Z. Ji, A. Natarajan, T. Vidick, J. Wright, and H. Yuen, "MIP* = RE," arXiv:2001.04383, 2020).

My question is: am I just reinventing the wheel with this idea? (My secondary question is: is that even an expression in English? :) )

The way to proceed would be: A builds B, who proposes actions and then plays an interactive proof game with A to prove that whatever it is proposing is good. That way, agent A can check that B's proposals are sound even though A is much weaker in intellectual power.
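To make the verifier/prover asymmetry concrete, here is a minimal Python sketch of the classic graph non-isomorphism interactive proof (due to Goldreich, Micali, and Wigderson): the verifier only flips coins and relabels vertices, while the prover brute-forces isomorphism, standing in for unbounded computation. The graphs and helper names are mine, purely for illustration, not anything from the article.

```python
import itertools
import random

def relabel(edges, perm):
    """Apply a vertex relabeling (perm: old label -> new label) to an edge set."""
    return frozenset(tuple(sorted((perm[u], perm[v]))) for u, v in edges)

def isomorphic(n, g, h):
    """Brute-force isomorphism test -- the prover's 'all-powerful' subroutine."""
    return any(relabel(g, dict(enumerate(p))) == h
               for p in itertools.permutations(range(n)))

def prover_answer(n, g0, g1, challenge):
    """Unbounded prover (think B): say which graph the challenge came from."""
    return 0 if isomorphic(n, g0, challenge) else 1

def verifier_round(n, g0, g1):
    """Weak verifier (think A): flip a coin, relabel, test the prover's reply."""
    bit = random.randrange(2)
    perm = dict(enumerate(random.sample(range(n), n)))
    challenge = relabel(g0 if bit == 0 else g1, perm)
    return prover_answer(n, g0, g1, challenge) == bit

# Two non-isomorphic graphs on 4 vertices: a path vs. a triangle plus an isolated vertex.
g0 = frozenset({(0, 1), (1, 2), (2, 3)})
g1 = frozenset({(0, 1), (1, 2), (0, 2)})

# An honest prover wins every round exactly when the graphs are non-isomorphic;
# if they were isomorphic, it could only guess the coin, so 20 clean rounds leave
# the verifier ~1 - 2^-20 confident without ever computing an isomorphism itself.
print(all(verifier_round(4, g0, g1) for _ in range(20)))
```

The point of the toy: the verifier's work is polynomial and the prover's is exponential, yet the verifier ends up justifiably confident in a claim it could never check directly, which is exactly the relationship I have in mind between A and B.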



Jsevillamol


Paul Christiano has explored the interactive-proof framing before; see, for example, this or this.

I think this is an exciting framing for AI safety, since, as you point out in your question, it gets at the crux of one of the core issues.

1 comment:

I can confirm that that's an expression in English.