Comments

If Deep Learning people suddenly started working hard on models with dynamic architectures that self-modify (i.e. a network outputs its own weight and architecture updates for the next time-step) and they *don't* see large improvements in task performance, I would take that as evidence against AGI going FOOM.
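A minimal sketch of the idea in that parenthetical, assuming a toy numpy setup (all names and sizes here are hypothetical, not from the comment): a network whose forward pass also emits a delta to its own weights for the next time-step, rather than relying only on an external optimizer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-layer "network": weights W map an input x to an output y.
W = rng.normal(scale=0.1, size=(4, 4))
# A second set of "hyper" weights reads the current output and proposes
# the update that W should apply to itself before the next step.
W_hyper = rng.normal(scale=0.01, size=(4, 4 * 4))

def step(x, W, W_hyper):
    y = np.tanh(W @ x)                      # ordinary forward pass
    delta_W = (W_hyper @ y).reshape(4, 4)   # network outputs its own weight update
    return y, W + delta_W                   # self-modified weights for the next step

x = rng.normal(size=4)
for t in range(5):
    y, W = step(x, W, W_hyper)
    print(t, np.round(y, 3))
```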

As far as I can see, you can use the same techniques to learn to play any perfect-information zero-sum game.
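To illustrate why such games share a common solution method (a hypothetical example, not from the comment): a single negamax search works for any game that exposes its legal moves and terminal values, here shown on one-pile Nim. AlphaZero-style self-play generalizes in the same way, swapping exhaustive search for a learned policy/value network plus MCTS.

```python
from functools import lru_cache

# Tiny example game: one-pile Nim. A move removes 1-3 stones; the player
# who takes the last stone wins.
def moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

@lru_cache(maxsize=None)
def negamax(stones):
    """Value of the position for the player to move: +1 win, -1 loss."""
    if stones == 0:
        return -1  # the previous player took the last stone, so we have lost
    return max(-negamax(stones - m) for m in moves(stones))

for n in range(1, 9):
    print(n, "win" if negamax(n) == 1 else "loss")
```

Only the `moves` function and the terminal-value rule are game-specific; the search itself is identical for any perfect-information zero-sum game.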