tl;dr: This post suggests a direction for modelling Molochs. The main thing this post does is to rename the concept of “potential games” (an existing concept in game theory) to “Moloch games” to suggest an interpretation of this class of games. I also define "the preferences of a Moloch" to generalize that notion (the preferences may be intransitive).

This post assumes that you have familiarity with game theory and the concept of a Moloch.


What do group dynamics want? If a society/group (the Moloch) wants things that are different from what the individuals want, how can we assign preferences or a utility function to that society/group, not one that models the aggregated preferences of the group's individual members, but one that describes what the group dynamics are actually "trying" to achieve (even against the interests of the individual members)? Here is a suggested answer.

Intuition. A Moloch game is a game such that there is a utility function $\Phi$, called "the Moloch's utility function", such that if the agents behave individually rationally, then they collectively behave as a "Moloch" that controls all players simultaneously and optimizes $\Phi$. In particular, the Nash equilibria correspond to local optima of $\Phi$.

Not all games are Moloch games.

Definition. A game with a finite number of players $N$, where each player $i$ has a strategy space $S_i$ and a utility function $u_i$, is a cardinal Moloch game (in the game theory literature, a cardinal potential game) if there is a utility function $\Phi \colon S_1 \times \dots \times S_N \to \mathbb{R}$ such that for all players $i$, all strategies $s_i, s_i' \in S_i$, and all strategy profiles $s_{-i}$ for the other players,

$$\Phi(s_i', s_{-i}) - \Phi(s_i, s_{-i}) = u_i(s_i', s_{-i}) - u_i(s_i, s_{-i}).$$
Intuitively, if you take any strategy profile for all the players and adjust the strategy of one player, then the Moloch's utility will increase/decrease by the same amount as the utility of that particular player. Hence, intuitively, every player always behaves as if they are optimizing the Moloch's utility function.

The definition of an ordinal Moloch game replaces this condition with

$$u_i(s_i', s_{-i}) - u_i(s_i, s_{-i}) > 0 \iff \Phi(s_i', s_{-i}) - \Phi(s_i, s_{-i}) > 0.$$
Intuitively, $\Phi$ represents the Moloch's preferences ordinally but not cardinally. Obviously, cardinal Moloch games are also ordinal Moloch games.
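To make the two conditions concrete, here is a minimal sketch in Python (my own illustration, not from the post or the potential-games literature; the function names and the matrix encoding are assumptions). It checks whether a candidate $\Phi$, given as a matrix, satisfies the cardinal or the ordinal condition for a two-player game in normal form, with rows as player 1's strategies and columns as player 2's.

```python
from itertools import permutations

def is_cardinal_moloch(u1, u2, phi):
    """Cardinal condition: every unilateral deviation changes phi by
    exactly the deviating player's change in utility."""
    n_rows, n_cols = len(u1), len(u1[0])
    for b in range(n_cols):                        # player 1 deviates a -> a2
        for a, a2 in permutations(range(n_rows), 2):
            if phi[a2][b] - phi[a][b] != u1[a2][b] - u1[a][b]:
                return False
    for a in range(n_rows):                        # player 2 deviates b -> b2
        for b, b2 in permutations(range(n_cols), 2):
            if phi[a][b2] - phi[a][b] != u2[a][b2] - u2[a][b]:
                return False
    return True

def is_ordinal_moloch(u1, u2, phi):
    """Ordinal condition: a unilateral deviation improves the deviator's
    utility exactly when it increases phi."""
    n_rows, n_cols = len(u1), len(u1[0])
    for b in range(n_cols):
        for a, a2 in permutations(range(n_rows), 2):
            if (u1[a2][b] > u1[a][b]) != (phi[a2][b] > phi[a][b]):
                return False
    for a in range(n_rows):
        for b, b2 in permutations(range(n_cols), 2):
            if (u2[a][b2] > u2[a][b]) != (phi[a][b2] > phi[a][b]):
                return False
    return True

# A pure common-interest coordination game is a Moloch game whose phi is
# simply the common payoff.
coord = [[1, 0], [0, 1]]
assert is_cardinal_moloch(coord, coord, coord)
assert is_ordinal_moloch(coord, coord, coord)
```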


Example. The prisoner's dilemma:

                     Player 2
                     Cooperate    Defect
Player 1  Cooperate  1, 1         -1, 2
          Defect     2, -1        0, 0

We will show that this is a cardinal Moloch game, by just computing the cardinal utility function and showing that there are no inconsistencies:

How to compute the cardinal utility function of the Moloch: Pick an arbitrary strategy profile to have utility 0 (I take (Defect, Defect); below I abbreviate Cooperate and Defect as C and D). Then iteratively compute the utility of rows and columns by applying the constraint that the definition gives: take the difference in utility for the player whose row/column you're moving along (i.e. player 2 for the rows, player 1 for the columns) between each cell in that row/column and a cell whose value of $\Phi$ you already know. In this case, we know $\Phi(D, D) = 0$. So compute $\Phi(C, D)$ as $\Phi(D, D) + u_1(C, D) - u_1(D, D) = 0 + (-1) - 0$, which equals $-1$. Similarly, $\Phi(D, C) = -1$. For $\Phi(C, C)$, there are two ways to compute it: using player 1's utility function and $\Phi(D, C)$, or player 2's utility function and $\Phi(C, D)$. If these two give different answers, then the game is not a cardinal Moloch game.
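Here is a rough sketch of that procedure for two-player games in normal form (Python; my own code, with the function name and matrix encoding as assumptions rather than anything from the post or the literature). It constructs $\Phi$ from a reference profile and then verifies every constraint, returning None if the game is not a cardinal Moloch game.

```python
def moloch_utility(u1, u2, ref=(0, 0)):
    """Try to construct the Moloch's cardinal utility function for a
    two-player game given by payoff matrices u1, u2 (rows are player 1's
    strategies, columns are player 2's strategies).

    Returns a matrix phi with phi[ref] == 0, or None if the constraints
    are inconsistent, i.e. the game is not a cardinal Moloch game."""
    r0, c0 = ref
    n_rows, n_cols = len(u1), len(u1[0])

    # Walk from the reference cell: first along its column (a player 1
    # deviation, so use u1's differences), then along the row (a player 2
    # deviation, so use u2's differences).
    phi = [[(u1[a][c0] - u1[r0][c0]) + (u2[a][b] - u2[a][c0])
            for b in range(n_cols)]
           for a in range(n_rows)]

    # Consistency check: every unilateral deviation must change phi by
    # exactly the deviating player's change in utility.
    for a in range(n_rows):
        for b in range(n_cols):
            if any(phi[a2][b] - phi[a][b] != u1[a2][b] - u1[a][b]
                   for a2 in range(n_rows)):
                return None
            if any(phi[a][b2] - phi[a][b] != u2[a][b2] - u2[a][b]
                   for b2 in range(n_cols)):
                return None
    return phi

# The prisoner's dilemma above, with (Defect, Defect) as the reference cell:
u1 = [[1, -1], [2, 0]]   # player 1's payoffs (rows: Cooperate, Defect)
u2 = [[1, 2], [-1, 0]]   # player 2's payoffs (columns: Cooperate, Defect)
print(moloch_utility(u1, u2, ref=(1, 1)))   # [[-2, -1], [-1, 0]]
```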

Here is the cardinal utility function of the Moloch for the prisoner's dilemma (the above algorithm gives a utility function that is unique up to translations):

                     Player 2
                     Cooperate    Defect
Player 1  Cooperate  -2           -1
          Defect     -1           0

Intuition. In this case, even though both players prefer (Cooperate, Cooperate) over (Defect, Defect), the Moloch prefers the opposite. This corresponds to the fact that it is individually rational for the players to Defect. This Moloch utility function captures the "preferences of the group dynamics", as opposed to the preferences of the individuals. (It is obviously very different from the notion of "aggregate preferences" or "welfare".)
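As a quick sanity check on the intuition from the top of the post (Nash equilibria of a Moloch game correspond to local optima of $\Phi$), here is a small self-contained snippet, with the matrices hard-coded, verifying that in this prisoner's dilemma a profile is a Nash equilibrium exactly when no unilateral deviation increases the Moloch's utility, and that only (Defect, Defect) qualifies. This is my own illustration, not code from the post.

```python
u1  = [[1, -1], [2, 0]]    # player 1 (rows: Cooperate, Defect)
u2  = [[1, 2], [-1, 0]]    # player 2 (columns: Cooperate, Defect)
phi = [[-2, -1], [-1, 0]]  # the Moloch's utility computed above

def is_nash(a, b):
    # No player can strictly gain by a unilateral deviation.
    return all(u1[a2][b] <= u1[a][b] for a2 in range(2)) and \
           all(u2[a][b2] <= u2[a][b] for b2 in range(2))

def is_local_max_of_phi(a, b):
    # No unilateral deviation increases the Moloch's utility.
    return all(phi[a2][b] <= phi[a][b] for a2 in range(2)) and \
           all(phi[a][b2] <= phi[a][b] for b2 in range(2))

for a in range(2):
    for b in range(2):
        assert is_nash(a, b) == is_local_max_of_phi(a, b)
# Only (Defect, Defect), i.e. (a, b) = (1, 1), satisfies both.
```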


A Moloch game assumes in some sense that "the Moloch has transitive preferences". We can generalize to Molochs with possibly intransitive preferences (I don't know of this being defined this way in the literature on potential games):

Definition. Let $G$ be a game with a finite number of players, each of whom has a preference relation $\succeq_i$ over the set of strategy profiles $S = S_1 \times \dots \times S_N$ (by default derived from a utility function $u_i$). Then the Moloch's preferences $\succeq_{\text{Moloch}}$ are defined as the preference relation satisfying, for all players $i$, all strategies $s_i, s_i' \in S_i$, and all strategy profiles $s_{-i}$ for the other players:

$$(s_i', s_{-i}) \succeq_{\text{Moloch}} (s_i, s_{-i}) \iff (s_i', s_{-i}) \succeq_i (s_i, s_{-i}).$$
Observation. These preferences are always incomplete (intuitively, the Moloch doesn't have an opinion on the comparison between different players changing their strategies, because it doesn't have this information: players individually make choices given their options). They may be either transitive or intransitive. I'll say a Moloch's preferences are rational if they are transitive (neglecting the usual requirement of completeness).
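One way to work with these possibly intransitive preferences computationally is to treat the strict part of the relation as a directed graph over strategy profiles and look for cycles: a cycle of strict unilateral improvements is a witness that the Moloch's preferences cannot be transitive. Below is a rough sketch for two-player games (my own code and naming, not from the post; the matrix encoding matches the earlier sketches).

```python
def strict_improvement_edges(u1, u2):
    """Directed edges s -> s' between profiles that differ in one player's
    strategy, where the deviating player strictly prefers s'.  By the
    definition above, these are exactly the pairs on which the Moloch
    strictly prefers s' to s."""
    n_rows, n_cols = len(u1), len(u1[0])
    edges = {}
    for a in range(n_rows):
        for b in range(n_cols):
            # player 1 deviates
            succ = [(a2, b) for a2 in range(n_rows) if u1[a2][b] > u1[a][b]]
            # player 2 deviates
            succ += [(a, b2) for b2 in range(n_cols) if u2[a][b2] > u2[a][b]]
            edges[(a, b)] = succ
    return edges

def has_improvement_cycle(edges):
    """Depth-first search for a directed cycle; a cycle of strict
    improvements means the Moloch's preferences cannot be transitive."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {v: WHITE for v in edges}

    def dfs(v):
        colour[v] = GREY
        for w in edges[v]:
            if colour[w] == GREY or (colour[w] == WHITE and dfs(w)):
                return True
        colour[v] = BLACK
        return False

    return any(colour[v] == WHITE and dfs(v) for v in edges)

# The prisoner's dilemma from above: no improvement cycle, so nothing here
# witnesses intransitivity of its Moloch's preferences.
pd_u1 = [[1, -1], [2, 0]]
pd_u2 = [[1, 2], [-1, 0]]
assert not has_improvement_cycle(strict_improvement_edges(pd_u1, pd_u2))
```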

Just to show that the concepts are what they should be:

Lemma. Any game whose Moloch has transitive preferences is an ordinal Moloch game. Any ordinal Moloch game has a Moloch with transitive preferences. 

Proof (sketch). For any transitive relation on a finite (or countable) set of strategy profiles, there is a real-valued function on it that is consistent with that relation. The other direction follows directly from the definitions.

Intuition. If the Moloch has transitive preferences, then the Moloch knows what it wants and the game will have a pure Nash equilibrium (for finite games there is a theorem that formalizes this: every finite ordinal potential game has a pure Nash equilibrium). Conversely, if the Moloch has intransitive preferences, then the Moloch doesn't know what it wants and the game's improvement dynamics will tend to have cycles (not every such game ends up cycling, because the players might move out of a cycle into a "transitive region" of the Moloch's preferences).

I won't show this here, but according to the literature on potential games (i.e. what I am calling Moloch games), these are examples:

Games with rational Molochs (i.e. Moloch games / potential games):

  • Prisoner's dilemma
  • Battle of the sexes
  • Coordination game
  • Game of Chicken

Games with irrational Molochs (i.e. not Moloch games / potential games; a small check for matching pennies follows this list):

  • Matching pennies
  • Rock paper scissors
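To illustrate the second list concretely, here is the small check promised above: in matching pennies there is a cycle of strict unilateral improvements, so the Moloch's strict preferences are cyclic and no (ordinal) Moloch utility function can exist. This is my own sketch; the payoff encoding is an assumption.

```python
# Matching pennies: player 1 wins (+1) if the coins match, player 2 wins
# if they differ.  Strategy indices: 0 = Heads, 1 = Tails.
u1 = [[1, -1], [-1, 1]]
u2 = [[-1, 1], [1, -1]]

# A cycle of unilateral deviations, each one strictly improving the
# deviating player's utility and hence, by the definition above, strictly
# preferred by the Moloch.  A strict preference cycle rules out any
# (ordinal) Moloch utility function.
cycle = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]
for (a, b), (a2, b2) in zip(cycle, cycle[1:]):
    if a != a2:   # player 1 deviated
        assert u1[a2][b2] > u1[a][b]
    else:         # player 2 deviated
        assert u2[a2][b2] > u2[a][b]
print("strict improvement cycle:", cycle)
```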

I probably won't spend much more time on this, but here are some suggestions for taking this as a starting point for modelling Molochs:

  • Check if various informal ideas about Molochs can be phrased in this language. Check if the language is satisfying to talk about actual Molochs.
  • Look at the literature on potential games to see if it contains much insight. Make a dictionary translating its concepts into the terminology of the ontology we're interested in (similar to how I renamed "potential game" to "Moloch game"), to make this literature an "efficiently queryable database" for insights into Molochs.
  • I suspect that there might be ideas to be had about Moloch games that aren't treated there, because as far as I know, potential games were developed mostly as a trick to make computations easier, not as a conceptual tool for thinking about Molochs, societal inadequacy and so forth. It's plausible that certain obvious questions haven't been asked about them for this reason. Try to actually model Molochs this way and see if these definitions allow us to answer questions we want to ask about them. Use this as a stepping stone and see where it is unsatisfying. Build on top of that to push the analysis further.

Feel free to contact me if you want to think about this. 


Some reading:

Flows and Decompositions of Games: Harmonic and Potential Games. In the language of this post: decomposing a game into a "rational part" of the Moloch, and an irrational deviation from it. Finding the "closest rational Moloch" of a game.

Some further ideas and questions to ask:

  • Can real world societies be decomposed into multiple Molochs? In the style of the "Flows and Decompositions of Games" paper, it wouldn't have to be a decomposition in terms of subgroups of players, but of "aspects of the game-theoretic interaction". (e.g. an individual might simultaneously be part of a "capitalism Moloch" and a "politics Moloch"). Maybe Molochs can be approximately decomposed.
  • Is there a notion of "Moloch game" for sequential games? Games with limited information? (The potential game literature probably has asked analogous questions).
Comments:

That's a neat interpretation of potential games

If the Moloch has transitive preferences, then the Moloch knows what it wants and the game will have a Nash equilibrium

You mean the game will have a pure Nash equilibrium. Any game has some (mixed) Nash equilibrium.

Yes that's what I meant, thanks.

This is a great idea, well done.

I'm not convinced we gain anything by further anthropomorphising (or agent-izing) Moloch. Moloch is the result of misaligned agents (who want different outcomes than each other, and will sacrifice the shared environment to pursue their goals), not a separate entity.

This captures a part of my intuitions about Moloch. But I think some conditions need to be added to make it fit properly:

IMO, an important part of Moloch is that the Moloch-preferred state is one that none of the players is happy with. But this post's definition doesn't have any condition like that. For example, multiplying all utilities in a Moloch game still seems to give a Moloch game. (Another example: take the Prisoner's Dilemma matrix and change the (D,D) reward to +5, +5. That would still satisfy the definition.)

Yeah I suppose that you're taking an essential property of a Moloch to be that it wants something other than the sum of utilities. That's a reasonable terminological condition I suppose, but I'm addressing the question of "what does it even mean for 'society' to want anything at all?" Then whatever that is, it might be that (e.g. by some coincidence, or by good coordination mechanisms, or because everyone wants the same thing) what society wants is the same as what would be good for the sum of individual utilities. It seems to me that the question of "what does society want?" is more fundamental than "how does that which society want deviate from what would be good for its individuals?"

I definitely agree that (1) "what society wants" is a useful notion and that it is different from (2) "situations in which what society wants deviates from what would be good for its individuals". I would just argue that given both the historical and SSC-inspired connotations of "Moloch", this term should be associated with (2) rather than with (1) :-).

Maybe. I actually don't think the term "Moloch" is very important. What I think is important is getting a good conceptual understanding of the behavioural notion of "what society wants", behavioural in the sense that it is independent of idealized notions of what would be good or what individuals imagine society wants but depends on how the collection of agents behaves/is incentivized to behave. I view the fact that this ends up deviating from what would be good for the sum of utilities, as essentially the motivation for this topic, but not the core conceptual problem. So I'd want to nudge people who want to clarify "Molochs" to focus mostly on conceptually clarifying (1) and only secondarily on clarifying (2). 

Secondarily, just to push back against your point that "Moloch" is historically more associated with (2): this is sort of true, but on the other hand, what does the concept of "Moloch" add to our conceptual toolbox, above and beyond the bag of more standard concepts like "collective action problem", "externalities", and so forth? I'd say that it is already well understood that collections of individuals can end up interacting in ways that are globally Pareto-suboptimal. I think the additions to this analysis made in SSC are something like: conceptualizing various processes as optimizing in a certain direction / looking at the system level for optimization processes. The core point to get clarity on here is, I think, (1), and then (2) should fall out of that.

Intuition. A Moloch game is a game such that there is a utility function $\Phi$, called "the Moloch's utility function", such that if the agents behave individually rationally, then they collectively behave as a "Moloch" that controls all players simultaneously and optimizes $\Phi$. In particular, the Nash equilibria correspond to local optima of $\Phi$.

Minor, but this tripped me up. My read of "controls all players simultaneously" would be that there's no such thing as a local optimum; it can just move directly to the global optimum from any other state. I'm not sure what would be a better wording though, and your non-intuitive definition was clear enough to set me right.