Game theory is the study of mathematical models of strategic interaction among rational decision-makers. It has applications in all fields of social science, as well as in logic, systems science and computer science. Originally developed by John von Neumann and Oskar Morgenstern, it has significantly broadened to include behavioral economics, evolutionary models, and other tools.
Game theory attempts to mathematically capture behavior in strategic situations, in which an individual’s success in making choices depends on the choices of others. While used in economics, game theory can be applied to understand any situation with strategic interactions between parties: auctions, bargaining, duopolies, fair division, oligopolies, social network formation, war, voting systems, and more.
Game theory analyzes mathematical models of conflict and cooperation between intelligent, rational decision-makers. It offers an approach to achieving objectives in situations that combine cooperation and conflict, where interdependent decision makers' choices affect one another.
The individuals or groups making strategic decisions are called players. A player can be an individual, firm, nation, or animal.
A strategy is a complete plan of action a player will take given the set of circumstances that might arise within a game. It describes what action a player will select.
Payoffs represent the motivations of players in a game. A payoff reflects the outcome of the game for a player and is expressed numerically. Players aim to maximize their payoffs.
An information set is a collection of decision points that a player cannot distinguish between, given the information available to them when making a move. It captures exactly what the player has (and has not) observed about earlier moves at the moment of choice.
A Nash equilibrium is a stable state of a game involving two or more players in which each player holds correct expectations about the other players’ behavior and acts rationally, so that no player has an incentive to unilaterally change their own strategy.
Backward induction is a technique for solving finite sequential games. It first considers the last point at which a decision might be made and chooses the best action in every situation at that point. Working backwards, it then considers the second-to-last decision point, and so on, until the start of the game.
Types of Games
There are different categories into which games can be classified based on certain parameters. The most common classifications are:
Cooperative vs Non-cooperative Games
- Cooperative games – Players can communicate, make binding agreements and coordinate their strategies.
- Non-cooperative games – Players make decisions independently without collaboration with any other player.
Zero-sum vs Non-zero-sum Games
- Zero-sum games – The total gains of all players sum to zero. The players’ interests are strictly opposed: one player’s gain exactly equals the other players’ losses.
- Non-zero-sum games – The total gains and losses of the players do not sum to zero. Players’ interests are neither completely opposed nor completely coincident, leaving room for cooperative elements among players.
Symmetric vs Asymmetric Games
- Symmetric games – All players have the same strategies available to them and have identical payoffs for any given combination of strategies.
- Asymmetric games – Players have different strategies available to them or different payoffs.
Simultaneous vs Sequential Games
- Simultaneous games – Players choose strategies without knowing the choices of other players. Their decisions are made at the same time.
- Sequential games – Later players observe earlier actions by other players before choosing their strategies. Decisions are made in a sequential manner.
Perfect vs Imperfect Information Games
- Perfect information games – Every player knows all moves that have previously been made when choosing an action, as in chess.
- Imperfect information games – At least one player does not observe some moves made by others, so some information about the play of the game is hidden, as in card games with private hands.
One-shot vs Repeated Games
- One-shot games – Players make decisions simultaneously or sequentially, and these decisions determine the payoffs. The game is played only once.
- Repeated games – Players interact repeatedly and have some knowledge about earlier actions. Players can adopt strategies based on previous interactions.
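The effect of repetition can be made concrete with a small simulation. The sketch below (all payoff values and strategy names are illustrative choices, not taken from the text) pits the reciprocal "tit-for-tat" strategy against unconditional defection in an iterated prisoner's dilemma:

```python
# Iterated prisoner's dilemma: "tit-for-tat" vs "always defect".
# Payoffs (row, col): both cooperate -> (3, 3), both defect -> (1, 1),
# lone defector -> 5, lone cooperator -> 0. All values are illustrative.
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not history else history[-1]

def always_defect(history):
    return "D"

def play(strategy_a, strategy_b, rounds):
    """Return cumulative payoffs after `rounds` repetitions."""
    hist_a, hist_b = [], []   # each player's record of the opponent's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a)
        move_b = strategy_b(hist_b)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b
```

Against tit-for-tat, always-defect wins only the first round and forfeits the cooperative payoff afterwards: `play(tit_for_tat, always_defect, 10)` returns `(9, 14)`, while two tit-for-tat players earn `(30, 30)`.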
Game Theory Solutions and Analysis
Game theory provides tools to predict when and how players change strategies, the outcome of games if rational decisions are made, as well as equilibrium points where all players are content with their strategy. Some key game theory solutions include:
Dominant Strategy Solution
A strictly dominant strategy provides the best payoff for a player, regardless of the strategies pursued by other players. It makes a certain strategy the clear choice for a rational player.
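As a minimal sketch (the payoff numbers are invented for illustration), a strictly dominant strategy can be detected by checking that one strategy beats every alternative against every opposing choice:

```python
def strictly_dominant_strategy(payoffs, n_row, n_col):
    """Return the row player's strictly dominant strategy index, or None.

    `payoffs[(i, j)]` is the row player's payoff when the row player
    plays i and the column player plays j.
    """
    for i in range(n_row):
        dominant = all(
            all(payoffs[(i, j)] > payoffs[(k, j)] for j in range(n_col))
            for k in range(n_row) if k != i
        )
        if dominant:
            return i
    return None

# Prisoner's dilemma row payoffs (0 = cooperate, 1 = defect), illustrative values:
pd_row = {(0, 0): 3, (0, 1): 0, (1, 0): 5, (1, 1): 1}
print(strictly_dominant_strategy(pd_row, 2, 2))  # 1: defecting dominates
```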
Nash Equilibrium
A Nash equilibrium occurs when each player takes the best decision for themselves based on their predictions of what the other players will do. No player can benefit by changing their strategy as long as the other players’ strategies remain unchanged.
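This fixed-point property makes pure-strategy Nash equilibria easy to enumerate in small matrix games: a cell is an equilibrium exactly when neither player can improve by a unilateral deviation. A sketch (the payoff matrices are illustrative examples):

```python
def pure_nash_equilibria(row_payoff, col_payoff):
    """Enumerate pure-strategy Nash equilibria of a bimatrix game.

    row_payoff[i][j] / col_payoff[i][j] are the payoffs when the row
    player plays i and the column player plays j.
    """
    n_rows, n_cols = len(row_payoff), len(row_payoff[0])
    equilibria = []
    for i in range(n_rows):
        for j in range(n_cols):
            # Row player cannot gain by deviating from i ...
            row_ok = all(row_payoff[k][j] <= row_payoff[i][j] for k in range(n_rows))
            # ... and column player cannot gain by deviating from j.
            col_ok = all(col_payoff[i][m] <= col_payoff[i][j] for m in range(n_cols))
            if row_ok and col_ok:
                equilibria.append((i, j))
    return equilibria

# Prisoner's dilemma (0 = cooperate, 1 = defect), illustrative payoffs:
row = [[3, 0], [5, 1]]
col = [[3, 5], [0, 1]]
print(pure_nash_equilibria(row, col))  # [(1, 1)]: mutual defection
```

Note that this brute-force check only finds pure-strategy equilibria; games like matching pennies have none, and their equilibria are in mixed strategies.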
Backward Induction
Backward induction can be used to solve sequential games. Starting from the end of the game tree, one first determines the optimal strategy at the last time step. Using this information, one then determines the optimal strategy for the second-to-last time step, and so on back to the root.
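A hedged sketch of this procedure (the tree encoding and payoff numbers are invented for illustration): solve each subtree from the leaves up, letting the player who moves at each node pick the branch that maximizes their own payoff.

```python
def backward_induction(node):
    """Solve a finite perfect-information game tree.

    A leaf is a payoff tuple (p0, p1); an internal node is a pair
    (player, moves) where player is 0 or 1 and moves maps move labels
    to subtrees. Returns (payoffs, path) along the backward-induction play.
    """
    if not (len(node) == 2 and isinstance(node[1], dict)):
        return node, []                      # leaf: payoffs, empty path
    player, moves = node
    best = None
    for label, child in moves.items():
        payoffs, path = backward_induction(child)
        # The mover keeps the branch giving them the highest payoff.
        if best is None or payoffs[player] > best[0][player]:
            best = (payoffs, [label] + path)
    return best

# Entry-deterrence game (illustrative payoffs): player 0 (entrant) stays
# Out for (0, 2) or enters; after entry, player 1 (incumbent) chooses
# Fight (-1, -1) or Accommodate (1, 1).
game = (0, {"Out": (0, 2),
            "In": (1, {"Fight": (-1, -1), "Accommodate": (1, 1)})})
print(backward_induction(game))  # ((1, 1), ['In', 'Accommodate'])
```

In this entry-deterrence example the incumbent's threat to Fight is non-credible: backward induction selects Accommodate in the post-entry subgame, so the entrant enters, which is exactly the subgame perfect outcome.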
Subgame Perfect Equilibrium
A subgame perfect Nash equilibrium refines the Nash equilibrium concept to eliminate non-credible threats in sequential games. A strategy profile is subgame perfect if it induces a Nash equilibrium in every subgame of the original game.
Bayesian Games
Bayesian games model situations of incomplete information, in which players are uncertain about characteristics of the other players, such as their available strategies or payoffs. In a Bayesian equilibrium, players update their beliefs according to Bayes’ theorem.
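The belief-updating step is ordinary Bayes' rule. A small sketch (the prior and signal probabilities are invented): a receiver revises the probability that an opponent is a "strong" type after observing an aggressive signal.

```python
def posterior(prior, likelihood, observation):
    """Bayes' theorem: P(type | obs) is proportional to P(obs | type) * P(type).

    prior: dict type -> probability; likelihood: dict (type, obs) -> probability.
    """
    unnorm = {t: prior[t] * likelihood[(t, observation)] for t in prior}
    total = sum(unnorm.values())
    return {t: p / total for t, p in unnorm.items()}

# Illustrative setup: a "strong" or "weak" opponent (prior 50/50) sends an
# aggressive signal with probability 0.8 if strong, 0.2 if weak.
prior = {"strong": 0.5, "weak": 0.5}
likelihood = {("strong", "aggressive"): 0.8, ("weak", "aggressive"): 0.2,
              ("strong", "passive"): 0.2, ("weak", "passive"): 0.8}
print(posterior(prior, likelihood, "aggressive"))
# {'strong': 0.8, 'weak': 0.2}
```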
Correlated Equilibrium
In this generalization of Nash equilibrium, an outside party selects a strategy profile and recommends a move to each player. If no player can gain by deviating from their recommendation, the result is a correlated equilibrium, which can deliver better payoffs than independent choices.
Formal Models in Game Theory
Formal models are mathematical models designed to shed light on complex real-world problems through theoretical analysis. They are simplified abstractions representing key variables and their interactions. Some formal models used in game theory include:
Normal Form Games
Normal form games represent simultaneous-move situations as a matrix showing the players, their strategies, and the payoffs for each combination of strategies. This representation supports direct analysis for dominant strategies and Nash equilibria.
Extensive Form Games
Extensive form games capture sequential moves using a game tree structure. They model the order of players’ decisions, what each player knows when making decisions, chance events, payoffs, and more. This visual representation enables backward induction.
Evolutionary Game Theory
These games model populations in which strategies evolve over time based on their relative payoff success: strategies that perform well are imitated and reproduced in future generations. An evolutionarily stable strategy plays a role analogous to an equilibrium.
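A standard formal tool here is the replicator dynamic, under which a strategy's population share grows in proportion to its payoff advantage over the population average. A sketch for the hawk-dove game (the payoff numbers are illustrative):

```python
def replicator_step(x, payoff, dt=0.01):
    """One Euler step of the replicator dynamic for a 2-strategy game.

    x is the population share of strategy 0; payoff[i][j] is strategy i's
    payoff against strategy j. Growth tracks fitness relative to the mean.
    """
    f0 = x * payoff[0][0] + (1 - x) * payoff[0][1]
    f1 = x * payoff[1][0] + (1 - x) * payoff[1][1]
    mean = x * f0 + (1 - x) * f1
    return x + dt * x * (f0 - mean)

# Hawk-dove with resource value 4 and fight cost 6 (illustrative numbers):
# hawk vs hawk -> (4 - 6) / 2 = -1, hawk vs dove -> 4,
# dove vs hawk -> 0, dove vs dove -> 2.
hawk_dove = [[-1, 4], [0, 2]]
x = 0.1                        # initial hawk share
for _ in range(5000):
    x = replicator_step(x, hawk_dove)
print(round(x, 3))             # ~0.667: the mixed equilibrium 2/3
```

With these numbers, hawk and dove fitness are equal at a hawk share of 2/3, and the dynamic converges there from any interior starting point — the population analogue of a mixed equilibrium.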
Repeated Games
Formal models of repeated games identify the equilibria that arise when players interact recurrently. Factors such as discount rates and the ability to punish or reward other players for past actions alter equilibrium strategies relative to one-shot games.
Coalitional Games
Coalitional models study cooperative games in which players can form collaborative groups, or coalitions. The coalition as a whole receives a payoff, which is divided among its members according to rules the coalition sets.
Signaling Games
Signaling games are two-player games with asymmetric information and two stages. In the first stage, the informed sender chooses whether and how to signal the state. In the second stage, the uninformed receiver observes the signal and takes an action.
Applications of Game Theory
Game theory has been applied to many fields due to its ability to model strategic interactions. Some examples include:
Economics
Game theory is used to model competition and cooperation in markets, auctions, bargaining, negotiations, industrial organization, and incentivizing optimal performance. Concepts like Nash equilibrium inform our understanding of markets.
Political Science
Voting systems, bargaining, auctions, and campaigns can be analyzed using game theory. It provides insight into political decision making, international relations, war strategy, and more.
Biology
Evolutionary game theory is used to understand competition, conflict, and cooperation in ecology and biology. It provides insight into the evolution of biological systems and behaviors among competing life forms.
Computer Science
Applications in computer science include algorithmic game theory, machine learning, cybersecurity, networking (e.g. traffic routing) and logic systems. Analysis of opponent algorithms and development of optimization algorithms rely on game theoretic concepts.
Philosophy
Game theory informs key issues in philosophy such as decision theory, ethics, rational choice theory, and logic. Concepts like bounded rationality, Pareto optimality, risk dominance, and perfect rationality draw directly from game theory.
Operations Research
Scheduling, traffic flow optimization, supply chain management, and resource allocation rely on game theoretic tools to model interactions between independent decision makers and identify efficient cooperative/non-cooperative solutions.
Psychology
Game theory provides insight into human behaviors like altruism, reciprocity, bargaining, and attitudes towards risk and fairness. Psychological and experimental research relies on game theory’s mathematical models of strategic interaction.
Law
Legal rules and regulations can be analyzed by modeling how rational actors respond to incentives, providing insights for policymaking. Game theory informs contracts, property rights, torts, criminal law, and constitutional law.
Key Figures in Game Theory
Some foundational thinkers who helped develop and advance game theory include:
John von Neumann
Widely regarded as the founder of game theory, von Neumann established the field’s mathematical foundations, including the minimax theorem. With Oskar Morgenstern, he authored the groundbreaking 1944 book “Theory of Games and Economic Behavior”.
John Nash
Nash made key contributions to non-cooperative games, Nash equilibrium, bargaining games, and more. For this foundational work, he was awarded the 1994 Nobel Memorial Prize in Economic Sciences.
John Harsanyi
Harsanyi introduced the concept of incomplete information into game theory by developing the analysis of Bayesian games. This paved the way for the analysis of asymmetric information games. In 1994, he received the Nobel Memorial Prize in Economics along with Nash and Reinhard Selten.
Reinhard Selten
Selten significantly advanced the analysis of non-cooperative games. He developed the concept of subgame perfect equilibria and bounded rationality models. He was awarded the 1994 Nobel Memorial Prize jointly with Nash and Harsanyi.
Robert Aumann
Aumann’s work focused on repeated games and the analysis of long-run relationships. He helped develop the theory of correlated equilibrium and coauthored influential books like “Values of Non-Atomic Games”. He was awarded the 2005 Nobel Memorial Prize in Economic Sciences jointly with Thomas Schelling.
John Maynard Smith
Maynard Smith applied game theory to biology, pioneering the field of evolutionary game theory. This adapts classical game theory to evolving populations changing their behavior by natural selection over time.
Thomas Schelling
Schelling applied game theory to international relations, arms control, and tax compliance. He developed focal point theory and contributed to the formal analysis of bargaining. He received the 2005 Nobel Prize in Economics.
Kenneth Arrow
Arrow contributed to general equilibrium theory, social choice theory, and made significant advances in game theory and decision theory. He analyzed issues like information asymmetry, risk aversion, and uncertainty. In 1972, he was awarded the Nobel Prize in Economics.
Oskar Morgenstern
With John von Neumann, Morgenstern co-authored “Theory of Games and Economic Behavior” which established the foundations of game theory. He made key contributions to utility theory as the basis for decision-making under uncertainty.
Key Concepts and Theorems
Some fundamental theorems, solutions, and concepts which form the foundations of game theory analysis include:
Minimax Theorem
Establishes the existence of optimal mixed-strategy solutions (saddle points in the mixed extension) for finite two-player zero-sum games, giving every such game a well-defined value. Proved by John von Neumann in 1928.
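When a pure saddle point exists, the row player's maximin value equals the column player's minimax value and can be found by direct enumeration; otherwise the theorem guarantees the value is attained only in mixed strategies. A sketch (the matrices are invented examples):

```python
def saddle_point(matrix):
    """Find a pure saddle point of a zero-sum game (row player maximizes).

    Returns (i, j, value) if the row player's maximin equals the column
    player's minimax at a pure strategy pair, else None.
    """
    row_mins = [min(row) for row in matrix]
    col_maxs = [max(matrix[i][j] for i in range(len(matrix)))
                for j in range(len(matrix[0]))]
    maximin = max(row_mins)    # best guaranteed payoff for the row player
    minimax = min(col_maxs)    # best guaranteed cap for the column player
    if maximin != minimax:
        return None            # value attained only in mixed strategies
    i = row_mins.index(maximin)
    j = col_maxs.index(minimax)
    return i, j, maximin

# Illustrative zero-sum game with a saddle point at (1, 1), value 3:
print(saddle_point([[4, 2, 5],
                    [6, 3, 8],
                    [1, 0, 7]]))              # (1, 1, 3)

# Matching pennies has no pure saddle point:
print(saddle_point([[1, -1], [-1, 1]]))       # None
```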
Nash Equilibrium
States that each player’s strategy is optimal against those chosen by other players. If players are in a Nash equilibrium, they have no incentive to deviate from their strategy. Proposed by John Nash in 1950.
Arrow’s Impossibility Theorem
Shows that no rank-order voting system can aggregate individual preferences into a community-wide ranking while satisfying a basic set of fairness conditions. Demonstrates the difficulty of designing a fully satisfactory voting system. Proved by Kenneth Arrow in 1951.
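The classic Condorcet paradox illustrates the underlying difficulty: with three voters and three candidates, pairwise majority voting can cycle, so no candidate is a stable collective choice. A small sketch:

```python
from itertools import combinations

def pairwise_majority(rankings):
    """Return the majority winner for each candidate pair, given voter
    rankings (best candidate first)."""
    candidates = rankings[0]
    results = {}
    for a, b in combinations(sorted(candidates), 2):
        # a beats b for a voter iff a appears earlier in their ranking.
        a_wins = sum(r.index(a) < r.index(b) for r in rankings)
        results[(a, b)] = a if a_wins > len(rankings) / 2 else b
    return results

# Condorcet paradox: three voters with cyclically shifted preferences.
voters = [["A", "B", "C"], ["B", "C", "A"], ["C", "A", "B"]]
print(pairwise_majority(voters))
# {('A', 'B'): 'A', ('A', 'C'): 'C', ('B', 'C'): 'B'} -- a cycle:
# A beats B, B beats C, yet C beats A, so majorities give no consistent ranking.
```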
Folk Theorem
Describes the range of outcomes that can emerge as equilibria in repeated games, showing under what circumstances cooperative outcomes can be sustained over time; with sufficiently patient players, many payoff profiles become supportable.
Harsanyi Transformation
Provides a method to convert a game of incomplete information into a game of imperfect information by adding an initial chance move in which “Nature” selects the players’ types. Introduced by John Harsanyi in 1967.
Pareto Optimality
A situation where no individual can be made better off without making another individual worse off. A Pareto optimal outcome provides an efficient allocation of resources.
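Pareto-optimal outcomes can be computed by discarding any payoff profile that is dominated by another. A sketch using prisoner's-dilemma payoff profiles (illustrative numbers):

```python
def pareto_optimal(outcomes):
    """Filter outcomes (payoff tuples) to those not Pareto-dominated.

    An outcome is dominated if another makes some player better off
    and no player worse off.
    """
    def dominates(a, b):
        return (all(x >= y for x, y in zip(a, b))
                and any(x > y for x, y in zip(a, b)))
    return [o for o in outcomes
            if not any(dominates(other, o) for other in outcomes)]

# Prisoner's dilemma payoff profiles (illustrative):
outcomes = [(3, 3), (0, 5), (5, 0), (1, 1)]
print(pareto_optimal(outcomes))   # [(3, 3), (0, 5), (5, 0)]
```

Note that mutual defection (1, 1) is the unique Nash equilibrium of the underlying game yet the only outcome here that is not Pareto optimal, which is exactly what makes the dilemma a dilemma.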
Subgame Perfect Equilibrium
Requires strategies to represent a Nash equilibrium in every subgame. Eliminates non-credible threats in sequential games. Refines Nash equilibria in extensive form games.
Perfect Bayesian Equilibrium
Equilibrium refinement for extensive form games with incomplete information. Combines sequential rationality with beliefs that are updated by Bayes’ rule wherever possible.
Trembling Hand Perfection
Refines Nash and subgame perfect equilibria by requiring robustness to small mistakes: every strategy is assumed to be played with some small probability (a “trembling hand”), which eliminates overly fragile equilibria that rely on non-credible threats.
Correlated Equilibrium
An outside party selects strategy profiles for players and provides recommendations. Following the recommendations can lead to better payoffs than independent choices. Generalizes Nash equilibrium.
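A traffic-light style device for the game of chicken illustrates the idea (the payoff numbers are invented): a mediator randomizes over recommendation pairs and tells each player only their own move; if obeying is optimal given the implied belief about the other player's recommendation, the device is a correlated equilibrium.

```python
# Game of chicken (illustrative payoffs): D = dare, Y = yield.
# row_payoff[(a, b)] is the row player's payoff when row plays a, column plays b.
row_payoff = {("D", "D"): 0, ("D", "Y"): 7, ("Y", "D"): 2, ("Y", "Y"): 6}

# Correlated device: a mediator draws one of three recommendation pairs
# uniformly at random, never recommending the crash outcome (D, D).
device = {("D", "Y"): 1 / 3, ("Y", "D"): 1 / 3, ("Y", "Y"): 1 / 3}

def expected_row_payoff(device, payoff):
    """Expected payoff for the row player under the correlated device."""
    return sum(p * payoff[pair] for pair, p in device.items())

print(round(expected_row_payoff(device, row_payoff), 2))  # 5.0
```

Here the device yields an expected payoff of 5.0 per player, above the 14/3 ≈ 4.67 each earns in the symmetric mixed Nash equilibrium of the same game, and one can check that neither player gains by disobeying a recommendation (a player told to yield believes the other dares with probability 1/2, making yielding the better reply).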
Textbooks
Some widely used and influential textbooks covering game theory’s key models and applications include:
- Game Theory (2013) by Erich Prisner
- Game Theory 101: The Complete Textbook (2014) by William Spaniel
- Game Theory: An Introduction (2013) by Steven Tadelis
- Game Theory (2015) by Michael Maschler, Eilon Solan, Shmuel Zamir
- Games and Information: An Introduction to Game Theory (1989) by Eric Rasmusen
- Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations (2008) by Yoav Shoham and Kevin Leyton-Brown
- Games of Strategy (2015) by Avinash Dixit and Susan Skeath
- Playing for Real: A Text on Game Theory (2007) by Ken Binmore
- An Introduction to Game Theory (2004) by Martin J. Osborne
- Game Theory 101: Bargaining (2014) by William Spaniel
- Game Theory (1991) by Drew Fudenberg and Jean Tirole
These books cover both classical and modern game theory, providing comprehensive coverage of key models, applications, theorems, and research directions through detailed technical exposition and worked examples. They are widely used for teaching at the undergraduate and graduate levels.
Academic Journals
Some of the most prominent peer-reviewed academic journals that publish game theory research include:
- Games and Economic Behavior
- International Journal of Game Theory
- Journal of Economic Theory
- Economics Letters
- Mathematical Social Sciences
- Journal of Mathematical Economics
- Social Choice and Welfare
- Dynamic Games and Applications
- International Game Theory Review
- The B.E. Journal of Theoretical Economics
These journals highlight the interdisciplinary nature of modern game theory spanning economics, mathematics, operations research, and computer science. They publish research on all aspects of game theory including computational techniques, applications, and extensions into new problem domains.
Conferences
Some of the major academic conferences that bring together game theory researchers include:
- World Congress of the Game Theory Society
- North American Annual Meetings of the Game Theory Society
- Conference on Web and Internet Economics
- International Conference on Autonomous Agents and Multiagent Systems
- International Joint Conference on Artificial Intelligence
- Conference on Economics and Computation
- International Conference on Algorithmic Game Theory
- Game Theory Festival
- Analytic Hierarchy Process International Symposium
- International Conference on Machine Learning
These conferences provide a platform for researchers across disciplines like computer science, mathematics, economics, operations research and optimization to share new developments in all aspects of game theory. They also publish proceedings highlighting cutting-edge research.
Recent Research Directions
Some active modern research directions in game theory include:
Algorithmic Game Theory
Developing computational methods and tools to analyze large and complex games. Using algorithms to compute solutions and equilibria. Bridging computer science and game theory.
Behavioral Game Theory
Incorporating insights from psychology into game theoretic models to better capture bounded rationality, altruism, reciprocity and human biases. Explaining deviations from perfect rationality.
Security and Cryptography
Using game theory to design information sharing protocols and analyze security against outside attacks and coalitions of dishonest players. Applications in blockchain, elections, auctions.
Machine Learning
Using game theory to improve the stability and robustness of machine learning algorithms against manipulation and adversarial examples. Designing competitive environments and multi-agent training.
Mechanism Design
Designing rules of interactions to achieve desired outcomes. Applications in auction design, public policy, regulations and incentives. Inverse game theory.
Quantum Game Theory
Extending game theory into the realm of quantum information and physics. Modeling the impact of superposition, entanglement, and interference on strategic interactions.
Evolutionary Game Theory
Modeling evolving populations where agents replicate successful strategies. Applications in biology, cognition, language emergence, social norms, technology growth and economics.
Network Games
Analyzing strategic interactions over networks, where agent payoffs depend on local network connections. Applications in social networks, telecommunication networks, cybersecurity.
In summary, game theory is a powerful and versatile mathematical modeling framework to analyze optimal decision-making in situations of strategic interdependence. The formal models, key concepts, common games, and techniques discussed in this article form the foundation of modern game theory analysis across diverse disciplines. Ongoing research is expanding the boundaries of game theory into new domains and problems. The rich literature and active research community will continue developing novel applications and insights using game theory as a lens.
References
- Von Neumann, John, and Oskar Morgenstern. Theory of Games and Economic Behavior. Princeton University Press, 2007.
- Fudenberg, Drew, and Jean Tirole. Game Theory. MIT Press, 1991.
- Myerson, Roger B. Game Theory: Analysis of Conflict. Harvard University Press, 2013.
- Osborne, Martin J., and Ariel Rubinstein. A Course in Game Theory. MIT Press, 1994.
- Dixit, Avinash K., Susan Skeath, and David H. Reiley. Games of Strategy. W. W. Norton & Company, 2009.
- Leyton-Brown, Kevin, and Yoav Shoham. Essentials of Game Theory: A Concise Multidisciplinary Introduction. Synthesis Lectures on Artificial Intelligence and Machine Learning, 2008.
- Shoham, Yoav, and Kevin Leyton-Brown. Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations. Cambridge University Press, 2008.
- Nisan, Noam, Tim Roughgarden, Éva Tardos, and Vijay V. Vazirani. Algorithmic Game Theory. Cambridge University Press, 2007.
- Maschler, Michael, Eilon Solan, and Shmuel Zamir. Game Theory. Cambridge University Press, 2013.
- Aumann, Robert J. “Subjectivity and correlation in randomized strategies.” Journal of Mathematical Economics 1.1 (1974): 67-96.
- Aumann, Robert J. “Correlated equilibrium as an expression of Bayesian rationality.” Econometrica: Journal of the Econometric Society (1987): 1-18.
- Nash, John. “Equilibrium points in n-person games.” Proceedings of the National Academy of Sciences 36.1 (1950): 48-49.
- Selten, Reinhard. “Reexamination of the perfectness concept for equilibrium points in extensive games.” International Journal of Game Theory 4.1 (1975): 25-55.
- Kreps, David M., and Robert Wilson. “Sequential equilibria.” Econometrica: Journal of the Econometric Society (1982): 863-894.
- Kuhn, Harold W. “Extensive games and the problem of information.” Contributions to the Theory of Games 2.28 (1953): 193-216.
- Harsanyi, John C. “Games with incomplete information played by ‘Bayesian’ players, I–III. Part I: The basic model.” Management Science 14.3 (1967): 159-182.
- Maynard Smith, John. Evolution and the Theory of Games. Cambridge University Press, 1982.
- Rubinstein, Ariel. “Finite automata play the repeated prisoner’s dilemma.” Journal of Economic Theory 39.1 (1986): 83-96.
- Binmore, Ken, et al. “The Nash bargaining solution in economic modelling.” The RAND Journal of Economics 17.2 (1986): 176-188.
- Kohlberg, Elon, and Jean-Francois Mertens. “On the strategic stability of equilibria.” Econometrica: Journal of the Econometric Society (1986): 1003-1037.
- Milgrom, Paul, and John Roberts. “Rationalizability, learning, and equilibrium in games with strategic complementarities.” Econometrica: Journal of the Econometric Society (1990): 1255-1277.
- McKelvey, Richard D., and Thomas R. Palfrey. “Quantal response equilibria for normal form games.” Games and Economic Behavior 10.1 (1995): 6-38.
- Mailath, George J., and Larry Samuelson. “Who wants a good reputation?.” The Review of Economic Studies 73.2 (2006): 415-441.
- Young, H. Peyton. “The evolution of conventions.” Econometrica: Journal of the Econometric Society (1993): 57-84.
- Blume, Lawrence E. “The statistical mechanics of strategic interaction.” Games and Economic Behavior 5.3 (1993): 387-424.
- Sandholm, William H. Population Games and Evolutionary Dynamics. MIT Press, 2010.
- Jackson, Matthew O. “A survey of network formation models: stability and efficiency.” Group Formation in Economics: Networks, Clubs, and Coalitions. Cambridge University Press, 2005. 11-57.
- Galeotti, Andrea, Sanjeev Goyal, Matthew O. Jackson, Fernando Vega-Redondo, and Leeat Yariv. “Network games.” The Review of Economic Studies 77.1 (2010): 218-244.
- Young, H. Peyton. “The evolution of social norms.” Annual Review of Economics 7 (2015): 359-387.
- Arslan, Gokhan, Sertac Karaman, and Ming Cao. “Games of strategic colleagues: Who to play with and how to play.” Annual review of control, robotics, and autonomous systems 1 (2018): 155-177.
- Riedmiller, Martin, et al. “Learning by playing – solving sparse reward tasks from scratch.” International Conference on Machine Learning. PMLR, 2018.
- Perolat, Julian, Bilal Piot, and Olivier Pietquin. “Scaling up mean field games with online mirror descent.” Advances in Neural Information Processing Systems 33 (2020): 11309-11320.
- Zhang, Chongjie, and Victor Lesser. “Coordinating multi-agent reinforcement learning with limited communication.” Autonomous Agents and Multi-Agent Systems 25.2 (2012): 380-403.
- Crandall, Jacob W., Mayada Oudah, Fatimah Ishowo-Oloko, Sherief Abdallah, Jean-François Bonnefon, Manuel Cebrian, Azim Shariff, Michael A. Goodrich, and Iyad Rahwan. “Cooperating with machines.” Nature communications 9.1 (2018): 1-12.
- Balcan, Maria-Florina, Avrim Blum, Nika Haghtalab, and Ariel D. Procaccia. “Commitment without regrets: Online learning in stackelberg security games.” Proceedings of the sixteenth ACM conference on economics and computation. 2015.
- Jain, Manish, Eilon Solan, and Nisarg Shah. “Equilibrium computation and robust optimization in zero sum games with payoff uncertainty.” ACM Transactions on Economics and Computation (TEAC) 8.3 (2020): 1-23.
- Brown, Noam, Tuomas Sandholm, and Brandon Amos. “Depth-limited solving for imperfect-information games.” Advances in neural information processing systems 33 (2020).
- Lanctot, Marc, Edward Lockhart, Jean-Baptiste Lespiau, Vinicius Zambaldi, Satyaki Upadhyay, Julien Perolat, Sriram Srinivasan, Finbarr Timbers, Karl Tuyls, Shayegan Omidshafiei, Daniel Hennes, Dustin Morrill, Paul Muller, Timo Ewalds, Ryan Faulkner, János Kramár, Bart De Vylder, Brennan Saeta, James Bradbury, David Ding, Sebastian Borgeaud, Matthew Lai, Julian Schrittwieser, Thomas Anthony, Edward Hughes, Ivo Danihelka, and Jonah Ryan-Davis. “OpenSpiel: A framework for reinforcement learning in games.” arXiv preprint arXiv:1908.09453 (2019).
- Balduzzi, David, Marta Garnelo, Yoram Bachrach, Wojciech M. Czarnecki, Julien Perolat, Max Jaderberg, and Thore Graepel. “Open-ended learning in symmetric zero-sum games.” International Conference on Machine Learning. PMLR, 2019.
- Peng, Bei, Jiarui Gan, Guanyu Feng, and Yanbing Guo. “Deep distributional reinforcement learning for playing single-player games.” Advances in Neural Information Processing Systems 34 (2021).