Game theory is a branch of mathematics and economics that studies strategic interactions among rational decision-makers. It provides a framework for analyzing situations where the outcome of an individual's choice depends on the choices of others. This chapter serves as an introduction to the fundamental concepts and applications of game theory in welfare economics.
The key components of a game are the players, the strategies available to them, the payoffs associated with each combination of strategies, and the information available to each player. Game theory is a powerful tool for understanding complex systems where individual actions have collective implications.
Several fundamental concepts are essential for understanding game theory, including players, strategies, payoffs, equilibrium, and the information structure of the game; these are developed throughout the chapters that follow.
Game theory has its roots in the study of zero-sum games, where one player's gain is another player's loss. The field was formalized by John von Neumann and Oskar Morgenstern in their groundbreaking work "Theory of Games and Economic Behavior," published in 1944. However, its origins can be traced to earlier work in mathematics and economics, such as that of Ernst Zermelo, Émile Borel, and Antoine Augustin Cournot.
Game theory has wide-ranging applications in economics, including the analysis of market competition, auctions, bargaining, and the provision of public goods.
In welfare economics, game theory is particularly useful for analyzing situations where individual decisions have collective implications, such as the provision of public goods, environmental policy, and labor market institutions.
In the following chapters, we will delve deeper into the specific models and concepts of game theory, exploring their applications in various areas of welfare economics.
Game theory provides a framework for analyzing strategic interactions among rational decision-makers. This chapter introduces some of the fundamental models that illustrate the key concepts and principles of game theory. These models are simple yet powerful, capturing essential aspects of strategic behavior in various economic and social contexts.
The Prisoner's Dilemma is a classic model that highlights the tension between individual rationality and collective rationality. Two suspects, A and B, are arrested and separated. The prosecutors lack sufficient evidence for a conviction, so they offer each suspect a bargain. Each prisoner is given the opportunity either to betray the other by testifying that the other committed the crime, or to cooperate with the other by remaining silent. The possible outcomes, with one standard assignment of utilities, are as follows:

|  | Stay Silent | Betray |
|---|---|---|
| Stay Silent | (3, 3) | (0, 4) |
| Betray | (4, 0) | (1, 1) |
The dilemma arises because the dominant strategy for each prisoner is to betray the other, even though this leads to a worse outcome for both when compared to mutual cooperation. This model illustrates how individual self-interest can lead to a suboptimal outcome for all parties involved.
The Stag Hunt model, also known as the Assurance Game, captures the idea of cooperation and trust. Two players, A and B, are out hunting. There is a choice between hunting a stag (a high-risk, high-reward option) or a hare (a low-risk, low-reward option). One standard assignment of payoffs, in which hunting the stag succeeds only if both players choose it, is:

|  | Stag | Hare |
|---|---|---|
| Stag | (4, 4) | (0, 3) |
| Hare | (3, 0) | (3, 3) |
This model demonstrates that cooperation can be sustained if players trust each other to choose the stag. However, if trust is lacking, the rational choice is to hunt the hare, leading to a suboptimal outcome. The Stag Hunt model highlights the importance of trust and commitment in achieving cooperative outcomes.
The Battle of the Sexes model illustrates the coordination problem in matching. Two players, A and B, want to meet and go to a movie. Player A prefers one movie (e.g., a comedy), while player B prefers another movie (e.g., a drama). One standard assignment of payoffs, in which each player prefers being together to being apart, is:

|  | Comedy | Drama |
|---|---|---|
| Comedy | (2, 1) | (0, 0) |
| Drama | (0, 0) | (1, 2) |
This model shows that players need to coordinate their choices to achieve a high payoff. However, without communication or a predefined strategy, players may end up choosing different movies, leading to a low payoff for both. The Battle of the Sexes model underscores the importance of communication and agreement in resolving coordination problems.
Coordination games involve multiple players whose payoffs depend on the actions of others and who do best by aligning their choices. A closely related class, congestion games, has players choosing from a common set of resources, with payoffs that worsen as more players choose the same option. A well-known example is the traffic congestion game, where players choose routes to work and each route becomes slower as more players select it.
Coordination games typically have multiple Nash equilibria, profiles where no player has an incentive to unilaterally change their strategy. Congestion settings, by contrast, can exhibit the "tragedy of the commons," where individual self-interest leads to a suboptimal outcome for all players. Understanding both classes is crucial for analyzing situations where players' choices affect each other's outcomes.
A Nash equilibrium is a fundamental concept in game theory, named after the mathematician John Nash. It is crucial for understanding strategic interactions in many fields, including economics, political science, and biology.
A Nash equilibrium is a set of strategies, one for each player, such that no player can benefit by changing their strategy while the other players keep theirs unchanged. In other words, each player's strategy is an optimal response to the strategies of the other players.
Consider the classic Prisoner's Dilemma game. Two suspects are arrested and separated. Each suspect is given the option to either cooperate (remain silent) or defect (confess). The payoff matrix is as follows:
|  | Cooperate | Defect |
|---|---|---|
| Cooperate | (3, 3) | (0, 4) |
| Defect | (4, 0) | (1, 1) |
In this game, the Nash equilibrium is for both players to defect. Starting from mutual defection, a player who unilaterally switches to cooperation lowers their payoff from 1 to 0, so neither has an incentive to deviate. Indeed, defection is a dominant strategy: it yields 4 rather than 3 against a cooperator and 1 rather than 0 against a defector.
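The deviation check just described can be automated: enumerate the four strategy profiles of the matrix above and keep those with no profitable unilateral deviation. A minimal sketch in Python (the helper function is illustrative, not from the text):

```python
# Payoff matrix from the table above: payoffs[row][col] = (P1 payoff, P2 payoff),
# with strategy 0 = Cooperate and 1 = Defect.
payoffs = [[(3, 3), (0, 4)],
           [(4, 0), (1, 1)]]

def pure_nash_equilibria(payoffs):
    """Return all pure-strategy profiles with no profitable unilateral deviation."""
    equilibria = []
    for r in range(2):
        for c in range(2):
            p1, p2 = payoffs[r][c]
            # Player 1 deviates by switching rows; Player 2 by switching columns.
            if p1 >= payoffs[1 - r][c][0] and p2 >= payoffs[r][1 - c][1]:
                equilibria.append((r, c))
    return equilibria

print(pure_nash_equilibria(payoffs))  # [(1, 1)]: mutual defection
```

The same enumeration works for any two-player matrix game, at the cost of checking every profile.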
The existence of a Nash equilibrium is a fundamental question in game theory. While not every game has an equilibrium in pure strategies, a broad class does once mixed strategies are allowed. The key result is Nash's theorem, which states that every finite game, one with finitely many players each having finitely many pure strategies, has at least one Nash equilibrium in mixed strategies.
Mixed strategies involve players randomizing over their pure strategies. In the Prisoner's Dilemma the only Nash equilibrium is in pure strategies (mutual defection), but in games such as Matching Pennies no pure-strategy equilibrium exists: the unique Nash equilibrium has each player choosing heads or tails with probability 0.5, so that neither player can gain by deviating.
In finite games, where both the number of players and the number of strategies are finite, the existence of a Nash equilibrium in mixed strategies is guaranteed. The proof considers the space of mixed-strategy profiles, which is compact and convex, together with the continuous expected payoff functions, and applies a fixed-point theorem (Kakutani's, in Nash's original argument) to the best-response correspondence.
Consider a game with two players, each having two strategies. The strategy space is a 2x2 matrix, and each cell represents a payoff pair. By examining all possible strategy profiles, one can find a profile where neither player can improve their payoff by unilaterally changing their strategy.
In infinite games, the situation becomes more complex. The existence of a Nash equilibrium is not guaranteed, and the concept of mixed strategies becomes more intricate. However, for certain classes of infinite games, such as continuous games, the existence of a Nash equilibrium can be established under specific conditions.
For example, in a continuous game where players choose strategies from a compact metric space and payoff functions are continuous, the existence of a mixed-strategy Nash equilibrium follows from Glicksberg's theorem. The argument again shows that the best-response correspondence has a fixed point, which is a Nash equilibrium.
In summary, Nash equilibrium is a powerful concept that helps analyze strategic interactions in various settings. Understanding its definition, existence, and properties in both finite and infinite games is essential for applying game theory to real-world problems.
Strategic form games, also known as normal form games, provide a comprehensive framework for analyzing interactive decision-making processes. In this chapter, we will delve into the intricacies of strategic form games, exploring their representation, key concepts, and solution methods.
In strategic form, a game is represented by a matrix that outlines the payoffs for each player's strategy combination. Each row of the matrix corresponds to a strategy chosen by one player, and each column corresponds to a strategy chosen by the other player. The intersection of a row and a column displays the payoffs for both players.
For example, consider a two-player game where Player 1 has two strategies, A and B, and Player 2 has two strategies, X and Y. The normal form representation might look like this:
|  | X | Y |
|---|---|---|
| A | (3, 2) | (1, 1) |
| B | (0, 4) | (2, 3) |
In this table, the first number in each cell represents Player 1's payoff, and the second number represents Player 2's payoff.
A strategy is dominant if it yields a higher payoff than any other strategy, regardless of the opponent's choice. Conversely, a strategy is dominated if some other strategy yields a higher payoff against every possible strategy of the opponent.
In the example above, Player 1 in fact has no dominant strategy: A is better against X (3 versus 0), but B is better against Y (2 versus 1). For Player 2, strategy X is dominant, yielding 2 rather than 1 against A and 4 rather than 3 against B.
The best response function for a player specifies the strategy that maximizes their payoff given the strategies chosen by the other players. In the example above, if Player 2 chooses X, Player 1's best response is strategy A, with a payoff of 3; if Player 2 chooses Y, Player 1's best response is strategy B, with a payoff of 2. Since Player 2's dominant strategy is X, the profile (A, X) is the game's Nash equilibrium.
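Best responses in a small matrix can be computed mechanically. A brief sketch using the payoff matrix above (the function names are illustrative):

```python
# Strategic-form matrix from the section: rows are Player 1's strategies
# (A, B), columns are Player 2's (X, Y); cells hold (P1 payoff, P2 payoff).
payoffs = {('A', 'X'): (3, 2), ('A', 'Y'): (1, 1),
           ('B', 'X'): (0, 4), ('B', 'Y'): (2, 3)}

def best_response_p1(col):
    """Player 1's payoff-maximizing row against a fixed column."""
    return max(['A', 'B'], key=lambda row: payoffs[(row, col)][0])

def best_response_p2(row):
    """Player 2's payoff-maximizing column against a fixed row."""
    return max(['X', 'Y'], key=lambda col: payoffs[(row, col)][1])

print({col: best_response_p1(col) for col in ['X', 'Y']})  # {'X': 'A', 'Y': 'B'}
print({row: best_response_p2(row) for row in ['A', 'B']})  # {'A': 'X', 'B': 'X'}
```

Player 2's best response is X against either row, confirming X as a dominant strategy, while Player 1's best response varies with the column.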
In some games, players may benefit from randomizing their choices. A mixed strategy involves assigning probabilities to pure strategies, allowing players to choose a strategy randomly according to these probabilities. Mixed strategies are particularly useful in games where no pure strategy is dominant.
For instance, consider a game with the following payoff matrix:

|  | X | Y |
|---|---|---|
| A | (2, 0) | (0, 2) |
| B | (0, 2) | (2, 0) |

In this game, no pure strategy is dominant; indeed, no pure-strategy equilibrium exists, since from every cell one player gains by switching. Suppose Player 1 uses a mixed strategy choosing A with probability p and B with probability 1-p, and Player 2 uses a mixed strategy choosing X with probability q and Y with probability 1-q. The expected payoffs are:

Player 1's expected payoff: 2pq + 0·p(1-q) + 0·(1-p)q + 2(1-p)(1-q)

Player 2's expected payoff: 0·pq + 2p(1-q) + 2(1-p)q + 0·(1-p)(1-q)

Each player randomizes so as to make the opponent indifferent between their two pure strategies. Solving these indifference conditions yields p = q = 1/2, the mixed strategy Nash equilibrium specifying the optimal probabilities for each player.
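The indifference conditions for a general 2x2 game can be solved in closed form: each player's equilibrium probability is chosen to equalize the *opponent's* expected payoffs from their two pure strategies. A sketch, using Matching Pennies-style payoffs as an illustrative input:

```python
from fractions import Fraction

def mixed_equilibrium_2x2(a, b):
    """Interior mixed equilibrium of a 2x2 game via indifference conditions.

    a[i][j], b[i][j]: payoffs to Players 1 and 2 when P1 plays row i, P2 column j.
    Returns (p, q): P1's probability of row 0 and P2's probability of column 0.
    Assumes an interior equilibrium exists (denominators nonzero).
    """
    # p makes Player 2 indifferent between the two columns.
    p = Fraction(b[1][1] - b[1][0], b[0][0] - b[1][0] - b[0][1] + b[1][1])
    # q makes Player 1 indifferent between the two rows.
    q = Fraction(a[1][1] - a[0][1], a[0][0] - a[0][1] - a[1][0] + a[1][1])
    return p, q

# Illustrative Matching Pennies-style payoffs:
a = [[2, 0], [0, 2]]   # Player 1's payoffs
b = [[0, 2], [2, 0]]   # Player 2's payoffs
print(mixed_equilibrium_2x2(a, b))  # (Fraction(1, 2), Fraction(1, 2))
```

Exact rational arithmetic avoids the rounding noise a floating-point solve would introduce.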
Extensive form games are a fundamental concept in game theory, providing a detailed representation of strategic interactions. Unlike strategic form games, which suppress the timing of moves, extensive form games capture the sequential nature of decision-making. This chapter delves into the key aspects of extensive form games, including their tree representation, solution concepts, and applications.
In extensive form games, the structure of the game is represented as a tree. Each node in the tree represents a decision point, and the branches emanating from a node correspond to the possible actions that can be taken at that point. The tree has a single root node, which represents the starting point of the game, and terminal nodes, which represent the end of the game and the payoffs to the players.
Key elements of the tree representation include decision nodes, the branches (actions) available at each node, information sets grouping nodes a player cannot distinguish, and terminal nodes labeled with payoffs.
Backward induction is a solution concept used to determine the optimal strategy for players in extensive form games. It involves working backwards from the terminal nodes to the root node, solving for the optimal action at each decision point given the subsequent decisions.
Backward induction is particularly useful in games of perfect information, where each player knows the complete history of the game up to their decision point. In such games, the optimal strategy can be found by solving a series of smaller games, starting from the end of the game and moving backwards.
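The procedure can be sketched on a small perfect-information tree. The game below is a hypothetical two-stage entry game (not taken from the text): internal nodes name the mover and list actions, and leaves hold payoff tuples.

```python
# A node is either a leaf (a tuple of payoffs) or (player_index, {action: subtree}).
# Hypothetical entry game: Player 0 decides whether to enter; Player 1 responds.
tree = (0, {
    'enter': (1, {'fight': (-1, -1), 'accommodate': (2, 1)}),
    'stay_out': (0, 2),
})

def backward_induction(node):
    """Return (payoffs, action_path) of the subgame-perfect outcome."""
    if isinstance(node, tuple) and not isinstance(node[1], dict):
        return node, []          # leaf: payoffs, no further moves
    player, actions = node
    best_action, best_payoffs, best_path = None, None, None
    for action, subtree in actions.items():
        payoffs, path = backward_induction(subtree)
        # The mover keeps the action maximizing their own payoff component.
        if best_payoffs is None or payoffs[player] > best_payoffs[player]:
            best_action, best_payoffs, best_path = action, payoffs, path
    return best_payoffs, [best_action] + best_path

print(backward_induction(tree))  # ((2, 1), ['enter', 'accommodate'])
```

Solving the follower's subgame first (accommodate beats fight) lets the leader anticipate that entry will be accommodated, which is exactly the backward-induction logic described above.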
Perfect information games are those in which each player, when making a decision, knows the complete history of the game up to that point. They are a special case of sequential games, in which players move in a fixed order and each decision can condition on the previous players' actions.
In perfect information games, backward induction can be used to find the subgame-perfect equilibrium, which is a refinement of the Nash equilibrium that requires the strategy to be optimal in every subgame of the original game. This ensures that the strategy is consistent with the optimal play in any possible future scenario.
Imperfect information games are those in which players do not have complete knowledge of the game's history. Closely related are Bayesian games, games of incomplete information in which players have private information ("types") that affects their payoffs and is drawn from a commonly known probability distribution.
In Bayesian games, players update their beliefs about the private information of other players based on their observations and the known probability distribution. The solution concept for Bayesian games is the Bayesian Nash equilibrium, which specifies the optimal strategy for each player given their beliefs about the private information of other players.
Bayesian games have wide-ranging applications in economics, including auctions, signaling, and contract theory. They allow for the analysis of strategic interactions in situations where players have incomplete information about each other's preferences and types.
Repeated games are a fundamental concept in game theory, where players interact over multiple periods. This chapter explores the dynamics and strategies that emerge in such settings.
Finite repeated games involve a fixed number of stages. In each stage, players engage in a simultaneous move game, and their choices affect their payoffs in that stage. The cumulative payoffs over all stages determine the overall outcome.
Key aspects of finite repeated games include the role of the known final stage: backward induction implies that if the stage game has a unique Nash equilibrium, players defect in the last stage, and this logic unravels cooperation in every earlier stage as well.
Infinite repeated games extend the interaction to an infinite number of stages. The discount factor, which determines the present value of future payoffs, becomes crucial. A discount factor of δ means that a payoff one period in the future is worth only δ times the current payoff.
In infinite repeated games, there is no final stage from which this unraveling can start, so cooperation can be sustained in equilibrium provided players are sufficiently patient, that is, provided the discount factor δ is close enough to 1.
Trigger strategies are a class of strategies where a player's actions depend on the opponent's past behavior. These strategies are particularly effective in repeated games because they provide a clear incentive for cooperation.
Examples of trigger strategies include the grim trigger, which cooperates until the opponent defects once and then defects forever, and tit-for-tat, which cooperates initially and thereafter mirrors the opponent's previous move.
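For the Prisoner's Dilemma payoffs used earlier in this book (mutual cooperation 3, mutual defection 1, temptation 4, sucker 0), the condition for a grim trigger strategy to sustain cooperation can be computed directly: cooperating forever is worth R/(1-δ), while a one-shot defection is worth T now plus P forever after, T + δP/(1-δ). A sketch:

```python
from fractions import Fraction

# Prisoner's Dilemma stage payoffs (matching the matrix earlier in the book):
T, R, P, S = 4, 3, 1, 0   # temptation, reward, punishment, sucker

def cooperation_sustainable(delta):
    """True if grim trigger supports cooperation at discount factor delta."""
    cooperate_forever = Fraction(R, 1) / (1 - delta)
    defect_once = T + delta * Fraction(P, 1) / (1 - delta)
    return cooperate_forever >= defect_once

# Rearranging the inequality gives the threshold delta >= (T - R) / (T - P).
threshold = Fraction(T - R, T - P)
print(threshold)                                # 1/3
print(cooperation_sustainable(Fraction(1, 4)))  # False: too impatient
print(cooperation_sustainable(Fraction(1, 2)))  # True
```

With these payoffs, cooperation survives exactly when players value the future at δ ≥ 1/3.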
Folk theorems provide conditions under which certain outcomes can be sustained in repeated games. These theorems show that, under mild assumptions, any feasible payoff vector giving each player at least their minmax payoff can be supported as an equilibrium of an infinitely repeated game when players are sufficiently patient.
Key folk theorems include the Nash folk theorem, which sustains such payoff vectors as Nash equilibria of the repeated game, and subgame-perfect versions (due to Friedman and to Fudenberg and Maskin), which support them with credible, equilibrium punishments.
Folk theorems highlight the power of repetition in driving cooperation and achieving desired outcomes in strategic interactions.
Evolutionary game theory (EGT) is a branch of game theory that applies concepts from evolutionary biology to understand strategic interactions. It focuses on how strategies evolve over time through processes such as mutation, selection, and replication. This chapter explores the key concepts and applications of evolutionary game theory in both biological and economic contexts.
Replicator dynamics is a fundamental concept in EGT that describes how the frequency of different strategies changes over time. In a population of players, replicator dynamics can be represented by the following differential equation:
dx_i/dt = x_i (π_i − π̄)

where x_i is the frequency of strategy i, π_i is the average payoff to strategy i, and π̄ is the average payoff of the entire population. This equation shows that strategies with above-average payoffs increase in frequency, while those with below-average payoffs decrease.
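The replicator equation can be integrated numerically with simple Euler steps. The sketch below uses the Prisoner's Dilemma stage payoffs seen earlier in the book, where defection is expected to take over the population:

```python
# Stage-game payoff to strategy i against strategy j (0 = cooperate, 1 = defect),
# using the Prisoner's Dilemma payoffs from earlier chapters.
A = [[3, 0],
     [4, 1]]

def replicator_step(x, dt=0.01):
    """One Euler step of dx_i/dt = x_i (pi_i - pi_bar) for two strategies."""
    fitness = [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
    avg = sum(x[i] * fitness[i] for i in range(2))
    return [x[i] + dt * x[i] * (fitness[i] - avg) for i in range(2)]

x = [0.9, 0.1]               # start with 90% cooperators
for _ in range(10000):
    x = replicator_step(x)

print(round(x[1], 3))        # defectors take over: close to 1.0
```

Because defectors earn strictly more than cooperators against any population mix, their share grows monotonically toward 1, matching the equation's interpretation above.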
An evolutionarily stable strategy (ESS) is a strategy that, if adopted by a population, cannot be invaded by any rare alternative strategy. A strategy s* is an ESS if, for every alternative strategy s ≠ s*, either

π(s*, s*) > π(s, s*), or π(s*, s*) = π(s, s*) and π(s*, s) > π(s, s).

That is, the incumbent strategy either does strictly better against itself than any mutant does, or, when a mutant does equally well against the incumbent, the incumbent does strictly better against the mutant than the mutant does against itself.
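These conditions can be checked numerically for a candidate strategy. The sketch below uses the Hawk-Dove game, a standard EGT example (not taken from the text) with prize V = 2 and fight cost C = 4, where the classic ESS plays Hawk with probability V/C:

```python
from fractions import Fraction

# Hawk-Dove stage payoffs with V = 2, C = 4 (illustrative standard example):
# rows/columns are (Hawk, Dove).
A = [[Fraction(-1), Fraction(2)],
     [Fraction(0),  Fraction(1)]]

def payoff(p, q):
    """Expected payoff to a p-Hawk mixed strategy against a q-Hawk opponent."""
    return (p * q * A[0][0] + p * (1 - q) * A[0][1]
            + (1 - p) * q * A[1][0] + (1 - p) * (1 - q) * A[1][1])

def is_ess(s_star, mutants):
    """Check Maynard Smith's two conditions against each mutant strategy."""
    for s in mutants:
        if s == s_star:
            continue
        a, b = payoff(s_star, s_star), payoff(s, s_star)
        if a > b:
            continue                      # strictly better against the incumbent
        if a == b and payoff(s_star, s) > payoff(s, s):
            continue                      # ties broken by play against the mutant
        return False
    return True

mutants = [Fraction(k, 10) for k in range(11)]
print(is_ess(Fraction(1, 2), mutants))   # True: Hawk with prob V/C = 1/2
print(is_ess(Fraction(1), mutants))      # False: pure Hawk can be invaded
```

Exact fractions matter here: the first ESS condition holds with equality in Hawk-Dove, so floating-point comparison could misclassify the tie-breaking case.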
Evolutionary game theory has been extensively applied in biology to study the evolution of behaviors and strategies in various species. For example, it has been used to explain the evolution of cooperation in social insects, the maintenance of sexual dimorphism, and the coexistence of multiple strategies in populations. In economics, EGT has been applied to study the dynamics of industry structures, the evolution of standards and conventions, and the emergence of social norms.
One notable application is the "El Farol Bar" problem, which illustrates how individuals can coordinate their actions to avoid overcrowding a shared resource. This problem has been studied using EGT to understand how strategies evolve over time and how coordination emerges from individual decisions.
While evolutionary game theory provides valuable insights into strategic interactions, it also faces several limitations and criticisms. One major criticism is that it often assumes that players are myopic and only consider their immediate payoffs, ignoring the long-term consequences of their strategies. Additionally, EGT may oversimplify complex adaptive systems by focusing solely on replicator dynamics and ESS.
Another limitation is that EGT often relies on the assumption of infinite populations, which may not always be realistic in practical situations. Furthermore, the concept of an ESS is not always robust to changes in the game's structure or the environment, leading to potential instability in strategic interactions.
Despite these limitations, evolutionary game theory remains a powerful tool for understanding the dynamics of strategic interactions in various fields. By combining insights from game theory and evolutionary biology, EGT offers a unique perspective on how strategies evolve and adapt over time.
Game theory provides a powerful framework for analyzing economic interactions involving public goods, common resources, and public policy. This chapter explores how game theory can be applied to understand and solve problems in public economics.
Public goods are non-excludable and non-rivalrous: no one can be prevented from consuming them, and one person's consumption does not reduce the availability of the good for others. Examples include national defense, lighthouses, and public parks. Common resources, on the other hand, are rivalrous but non-excludable, such as fisheries or the atmosphere.
In game theory, the provision of public goods can be modeled using the Prisoner's Dilemma framework. Each individual has a dominant strategy to free-ride on the contributions of others, leading to under-provision of the public good. The resulting Nash equilibrium is not Pareto efficient, since an outcome exists in which all individuals are better off.
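The free-rider logic can be made concrete in a linear public goods game: each of n players receives an endowment, contributions are multiplied by a factor m and shared equally, and contributing is individually unprofitable whenever the marginal per-capita return m/n is below 1. A sketch with hypothetical parameters:

```python
# Linear public goods game with hypothetical parameters: 4 players, endowment 10,
# contributions multiplied by 1.6 and shared equally (marginal return 0.4 < 1).
N, ENDOWMENT, MULTIPLIER = 4, 10.0, 1.6

def payoff(own, others_total):
    """A player's payoff from their own contribution and the others' total."""
    public_share = MULTIPLIER * (own + others_total) / N
    return ENDOWMENT - own + public_share

# Payoff is decreasing in own contribution when MULTIPLIER / N < 1, so
# contributing nothing dominates regardless of what the others give.
print(payoff(0, 30) > payoff(10, 30))   # True: free-riding beats contributing
print(payoff(0, 0) < payoff(10, 30))    # True: yet full cooperation beats all-free-ride
```

With these parameters mutual contribution pays 16 while universal free-riding pays only 10, reproducing the Prisoner's Dilemma structure described above.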
To overcome this inefficiency, various mechanisms have been proposed, including tax-funded public goods, voluntary contributions, and peer-to-peer funding platforms. These mechanisms aim to align individual incentives with the collective interest in providing public goods.
Taxation is a crucial tool in public economics for funding public goods and redistributing income. Game theory can be used to analyze the optimal tax system, taking into account the strategic behavior of taxpayers and the government.
One approach is to model the taxpayer as a risk-averse individual who maximizes their expected utility. The government, as the principal, aims to maximize social welfare by choosing the optimal tax rate. This can be analyzed using the Principal-Agent problem, where the taxpayer is the agent and the government is the principal.
Another approach is to use the Stackelberg game framework, where the government moves first by announcing the tax rate, and taxpayers respond by choosing their effort level in avoiding taxation. The government's objective is to maximize total tax revenue, while taxpayers aim to maximize their net payoff.
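The Stackelberg structure can be illustrated with a deliberately simple, entirely hypothetical functional form: a taxpayer with income 1 hides a fraction a of it at a quadratic cost, and the government, moving first, chooses the rate t anticipating that response.

```python
# Hypothetical Stackelberg tax game: the taxpayer hides a fraction a of unit
# income at cost a**2 / (2 * K), paying rate t only on reported income (1 - a).
K = 1.0   # ease of avoidance (hypothetical parameter)

def taxpayer_best_response(t):
    """Hidden fraction maximizing (1 - a)*(1 - t) + a - a**2/(2*K): a* = K*t."""
    return K * t

def revenue(t):
    """Government revenue given the taxpayer's anticipated response."""
    return t * (1 - taxpayer_best_response(t))

# The leader maximizes revenue over a grid, anticipating the follower's reply.
rates = [i / 1000 for i in range(1001)]
t_star = max(rates, key=revenue)
print(t_star, revenue(t_star))   # 0.5 0.25, matching the closed form t* = 1/(2K)
```

Higher rates induce more avoidance, so revenue t(1 - Kt) is hump-shaped, a Laffer-curve pattern, and the leader stops well short of full taxation.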
Principal-agent problems arise when one party (the principal) hires another party (the agent) to act on their behalf, but the agent's interests may not align with those of the principal. This is a common issue in public economics, such as in the provision of public services or the implementation of public policies.
Game theory can be used to analyze these problems by modeling the interaction between the principal and the agent. The principal's objective is to design incentives that align the agent's interests with their own. This can be achieved through contract theory, where the principal offers a contract that specifies the agent's payoff as a function of their effort.
However, the agent's effort may be unobservable to the principal, leading to moral hazard, and the agent may also hold private information about their characteristics, leading to adverse selection. To address these problems, the principal can use performance-based contracts or screening mechanisms that induce agents to reveal their private characteristics.
Mechanism design is the study of designing rules of interaction to achieve a desired outcome, even when the participants have private information and may have conflicting interests. In public economics, mechanism design can be used to design efficient and incentive-compatible public policies.
One example is the design of auction mechanisms for allocating public resources, such as spectrum licenses or government contracts. The goal is to maximize social welfare while ensuring that bidders reveal their true valuations of the resources.
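A classic mechanism-design result is that the second-price (Vickrey) auction makes truthful bidding a weakly dominant strategy. The sketch below checks, over a small grid of valuations and deviations, that misreporting never helps:

```python
import itertools

def second_price_payoff(value, own_bid, other_bids):
    """Bidder's payoff: win at the highest competing bid, or get nothing."""
    top_other = max(other_bids)
    if own_bid > top_other:
        return value - top_other    # winner pays the second-highest bid
    return 0                        # loses (ties resolved against the bidder)

# Truthful bidding is weakly dominant: no deviation ever pays strictly more.
values = [0, 3, 5, 8, 10]
for value, deviation, other in itertools.product(values, values, values):
    truthful = second_price_payoff(value, value, [other])
    assert second_price_payoff(value, deviation, [other]) <= truthful

print("no profitable deviation found")
```

Because the winner's payment depends only on the competing bids, shading or inflating one's bid can change whether one wins, but never improve the price, which is what makes truthful revelation incentive-compatible.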
Another application is the design of voting systems for public referendums or legislative bodies. The objective is to ensure that the outcome of the vote reflects the preferences of the majority, even when voters have private information about their preferences.
In both cases, game theory provides the tools to analyze the strategic behavior of the participants and design mechanisms that achieve the desired outcome.
Game theory provides a powerful framework for analyzing various aspects of labor economics. This chapter explores how game theory can be applied to understand job search and matching, wage bargaining, unemployment, labor market institutions, and discrimination.
Job search and matching models use game theory to analyze how workers and firms interact in the labor market. These models often assume that workers and firms have different information and preferences, leading to strategic behavior. Key concepts include search frictions, reservation wages, and the matching function that pairs vacancies with job seekers.
One prominent model is the Spence Model, which explains how education signals ability and affects wages. Another is the Matching Model, which focuses on the matching process between workers and firms.
Wage bargaining in labor economics can be analyzed using game theory to understand how workers and employers negotiate wages. Key concepts include bargaining power, outside options (threat points), and the division of the surplus generated by an employment match.
The Bargaining Model uses game theory to analyze the outcomes of wage negotiations, considering factors like information, power, and preferences.
Game theory can also be applied to understand unemployment and the role of labor market institutions. Key concepts include search frictions, job creation and destruction, and institutions such as minimum wages, unions, and employment protection.
Search-and-matching models, such as the Mortensen-Pissarides model, use game theory to analyze the dynamics of job creation and destruction as well as the matching process between workers and firms.
Game theory can help in understanding discrimination and segregation in the labor market. Key concepts include taste-based discrimination, statistical discrimination, and the self-reinforcing beliefs that can sustain discriminatory outcomes.
Becker's model of taste-based discrimination analyzes employers who incur a psychic cost from hiring members of a disfavored group, while models of statistical discrimination, associated with Phelps and Arrow, focus on employers who use group membership as a proxy for unobserved productivity.
In conclusion, game theory offers valuable insights into various aspects of labor economics, from job search and matching to wage bargaining, unemployment, and discrimination. By modeling strategic interactions, game theory helps economists understand and predict labor market outcomes.
This chapter delves into more specialized and complex topics within game theory, expanding the understanding of strategic interactions beyond the basics. We will explore cooperative game theory, repeated games with incomplete information, signaling and screening, and behavioral game theory.
Cooperative game theory studies situations where players can form binding commitments and enforce agreements, in contrast to non-cooperative games, where such agreements are unavailable and players act independently. Key concepts include the Shapley value, which distributes the total surplus among players according to their average marginal contributions, and the core, which identifies stable outcomes where no coalition has an incentive to deviate.
We will also discuss coalitional games, where the worth of a coalition is determined by the collective payoff of its members. This includes characteristic function games, where a coalition's worth depends only on its own membership, and partition function games, where a coalition's worth also depends on how the remaining players are organized.
Repeated games with incomplete information extend the repeated games framework by introducing uncertainty about players' types or strategies. This is particularly relevant in economics, where agents may have private information that affects their behavior.
We will explore Bayesian repeated games, where players hold beliefs about each other's types, and signaling games, where players can send signals to influence others' beliefs. We will also discuss credible threats: threats a player would actually find optimal to carry out if called upon, and which are therefore believed.
Signaling and screening are mechanisms through which one party (the sender) can influence the beliefs of another party (the receiver) about the sender's type. This is crucial in various economic interactions, such as job interviews, auctions, and insurance markets.
In signaling games, the informed sender moves first and can choose actions that reveal or conceal their private information, thereby influencing the receiver's decision. In screening games, the uninformed receiver moves first, offering a menu of contracts or conditions designed so that different sender types self-select.
Behavioral game theory incorporates insights from psychology to study how people actually behave in strategic situations. This field challenges the rational choice assumption of traditional game theory and explores concepts such as bounded rationality, cognitive biases, and emotional decision-making.
We will discuss experiments and empirical studies that demonstrate deviations from rational behavior. Additionally, we will explore how these behavioral insights can be integrated into game-theoretic models to better predict and understand real-world phenomena.
In conclusion, this chapter provides a glimpse into the advanced and specialized topics within game theory. These extensions offer deeper insights into strategic interactions and have significant implications for various fields, including economics, biology, and political science.