I've written up a rationality game which we played several times at our local LW chapter and had a lot of fun with. The idea is to put Aumann's agreement theorem into practice as a multi-player calibration game, in which players react to the probabilities other players give (each player holding some privileged evidence). If you get very involved, this implies reasoning not only about how well your friends are calibrated, but also about how much your friends trust each other's calibration, and how much they trust each other's trust in each other.
You'll need a set of trivia questions to play. We used these.
The write-up includes a helpful scoring table which we have not play-tested yet. When we played, we used a plain Bayes loss rather than an adjusted Bayes loss, and calculated things on our phone calculators. This version should feel a lot better, because the numbers are easier to interpret and you get your score right away rather than calculating at the end.
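For readers unfamiliar with the term: a plain Bayes loss is just the logarithmic scoring rule, the negative log of the probability you assigned to the answer that turned out to be true. A minimal sketch (the base-2 choice and the function name are my own, not from the write-up):

```python
import math

def bayes_loss(p_true: float) -> float:
    """Plain Bayes loss (log scoring rule): negative log2 of the
    probability assigned to the true answer. Lower is better;
    a certain correct answer scores 0, a 50% guess scores 1 bit."""
    return -math.log2(p_true)

# A player who put 80% on the correct answer loses fewer bits
# than one who hedged at 50%:
print(bayes_loss(0.8) < bayes_loss(0.5))  # True
print(bayes_loss(0.5))  # 1.0
```

Overconfidence is punished steeply: putting 99% on a wrong answer means you only gave 1% to the right one, for a loss of about 6.6 bits.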
Then why take Aumann's name in vain?
I think the relationship to Aumann's theorem is direct and strong. It's the same old question of how Aumann-like reasoning plays out in practice, for only partially rational agents, that was much discussed back in the Overcoming Bias days.