I. Foundational Concepts
Nash Equilibrium
A set of strategies where no player can improve their outcome by unilaterally changing their own strategy. Every finite game has at least one Nash equilibrium (possibly in mixed strategies). It’s the baseline prediction for how rational actors will behave.
Application: Pricing in competitive markets. If two airlines serve the same route, neither will raise prices unilaterally (they’d lose customers) or cut prices indefinitely (they’d lose money). The stable price point where neither wants to move is the Nash equilibrium. Understanding this helps you predict competitor behavior and identify when a market has settled into a stable but suboptimal pattern.
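The definition can be made concrete with a brute-force check: enumerate every strategy profile and keep those where neither player has a profitable unilateral deviation. A minimal sketch; the airline payoff numbers below are purely illustrative, not data:

```python
from itertools import product

def pure_nash_equilibria(payoffs):
    """Find pure-strategy Nash equilibria of a two-player game.

    payoffs[(a, b)] = (payoff to player 1, payoff to player 2)
    when player 1 plays a and player 2 plays b.
    """
    acts1 = sorted({a for a, _ in payoffs})
    acts2 = sorted({b for _, b in payoffs})
    equilibria = []
    for a, b in product(acts1, acts2):
        u1, u2 = payoffs[(a, b)]
        # No profitable unilateral deviation for either player.
        if all(payoffs[(a2, b)][0] <= u1 for a2 in acts1) and \
           all(payoffs[(a, b2)][1] <= u2 for b2 in acts2):
            equilibria.append((a, b))
    return equilibria

# Hypothetical airline pricing game: High-High is tempting but
# unstable, because each airline gains by undercutting.
airline = {
    ("High", "High"): (6, 6),
    ("High", "Low"):  (1, 8),
    ("Low", "High"):  (8, 1),
    ("Low", "Low"):   (3, 3),
}
print(pure_nash_equilibria(airline))  # [('Low', 'Low')]
```

Note that the unique equilibrium here is the stable-but-suboptimal pattern the text describes: both airlines would earn more at High-High, but neither will stay there.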
Dominant Strategy
A strategy that produces a better outcome for a player regardless of what opponents do. When every player has a dominant strategy, the game has a dominant strategy equilibrium—the strongest possible prediction in game theory.
Application: Investing in cybersecurity. Regardless of whether competitors invest, the cost of a breach so exceeds the cost of protection that securing your systems dominates. Similarly, offering free shipping above a threshold dominates for many e-commerce players—it increases conversion regardless of competitor behavior.
Mixed Strategy
When no pure strategy dominates, players randomize across options with specific probabilities. The equilibrium mix makes the opponent indifferent between their own options, which is what stabilizes the randomization.
Application: Audit and compliance strategies. Tax authorities can’t audit everyone, so they randomize. The optimal audit rate is the one that makes taxpayers indifferent between cheating and complying. Retailers use similar logic with loss prevention—random checks at a rate that makes shoplifting unprofitable in expectation.
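The audit logic reduces to a single indifference equation: pick the audit probability at which cheating breaks even. A minimal sketch, with the gain and fine figures invented for illustration:

```python
def indifference_audit_rate(gain, fine):
    """Audit probability p that makes a taxpayer indifferent between
    cheating and complying, assuming cheating yields `gain` when not
    audited, costs `fine` when audited, and complying yields 0:
        p * (-fine) + (1 - p) * gain = 0  =>  p = gain / (gain + fine)
    """
    return gain / (gain + fine)

# Illustrative numbers (assumed, not from any real tax code):
p = indifference_audit_rate(gain=1000, fine=4000)
print(p)  # 0.2 — auditing 20% of returns makes cheating break even
```

Any audit rate above this threshold makes compliance strictly better in expectation; the equilibrium mix sits exactly at the indifference point.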
Minimax
Each player minimizes their maximum possible loss. In zero-sum games, minimax strategies produce the Nash equilibrium. The concept formalizes defensive, worst-case thinking.
Application: Portfolio hedging. A fund manager who can’t predict whether rates will rise or fall might construct a portfolio that minimizes the worst-case loss across both scenarios. Military strategy, competitive bidding where you assume the opponent plays optimally—any context where you want to guarantee a floor on your outcome.
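In pure strategies, the worst-case logic is just a maximum over row minima. A sketch with made-up portfolio payoffs across two rate scenarios:

```python
def maximin_choice(payoffs):
    """Pick the action whose worst-case payoff is largest.

    payoffs: dict mapping action -> list of payoffs across scenarios.
    Returns (best action, its guaranteed floor).
    """
    return max(((a, min(vs)) for a, vs in payoffs.items()),
               key=lambda av: av[1])

# Hypothetical portfolio returns (%) under two scenarios (rates rise, fall):
portfolios = {
    "all_bonds":  [-4.0, 6.0],
    "all_stocks": [5.0, -3.0],
    "hedged":     [1.0, 1.5],
}
print(maximin_choice(portfolios))  # ('hedged', 1.0)
```

The hedged portfolio never wins either scenario outright, but it guarantees the highest floor, which is exactly the minimax criterion.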
II. Classic Strategic Games
Prisoner’s Dilemma
Two players each choose to cooperate or defect. Mutual cooperation beats mutual defection for both, but each player is individually tempted to defect regardless of what the other does. Defect-defect is the Nash equilibrium even though cooperate-cooperate is better for everyone.
Application: Price wars. Two competitors would both profit more by keeping prices high, but each is tempted to undercut. The result: both slash prices and both suffer. OPEC production quotas, industry trade associations, and tacit collusion all represent attempts to sustain the cooperative outcome. The model explains why cartels are unstable and why industries with few players and transparent pricing tend toward price discipline while fragmented ones race to the bottom.
Repeated Prisoner’s Dilemma & Tit-for-Tat
When the same players interact repeatedly with no known end date, cooperation becomes sustainable. The Folk Theorem shows that, provided players value the future enough, virtually any mutually beneficial outcome can be sustained in equilibrium through the threat of future punishment. Robert Axelrod’s tournaments showed that Tit-for-Tat—cooperate first, then mirror the opponent’s last move—is remarkably effective: simple, retaliatory, forgiving, and clear.
Application: Supplier relationships. A manufacturer who squeezes a supplier on one contract may get a short-term win, but the supplier retaliates on quality or priority next time. Long-term vendor relationships function as repeated games where reputation and reciprocity sustain cooperation. The key insight: make the relationship feel infinite (long-term contracts, ongoing business) rather than finite (one-off RFPs), and cooperation emerges naturally.
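Axelrod-style matches are easy to simulate. The sketch below uses the standard payoff values (T=5, R=3, P=1, S=0) and models a strategy as a function of the opponent's history:

```python
def play_repeated_pd(strategy1, strategy2, rounds):
    """Simulate a repeated Prisoner's Dilemma with standard payoffs
    (T=5, R=3, P=1, S=0). A strategy maps the opponent's history
    (list of 'C'/'D') to its next move."""
    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
    h1, h2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = strategy1(h2), strategy2(h1)
        p1, p2 = PAYOFF[(m1, m2)]
        s1, s2 = s1 + p1, s2 + p2
        h1.append(m1)
        h2.append(m2)
    return s1, s2

tit_for_tat = lambda opp: "C" if not opp else opp[-1]
always_defect = lambda opp: "D"

print(play_repeated_pd(tit_for_tat, tit_for_tat, 10))    # (30, 30)
print(play_repeated_pd(tit_for_tat, always_defect, 10))  # (9, 14)
```

Two cooperators earn the mutual-cooperation payoff every round; against a pure defector, Tit-for-Tat loses only the first round and then limits the damage by retaliating.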
Chicken (Hawk-Dove)
Two players drive toward each other. Swerving is safe but looks weak; not swerving wins if the opponent swerves, but mutual stubbornness is catastrophic. There are two Nash equilibria (one swerves, the other doesn’t) and a mixed-strategy equilibrium. The game captures situations where backing down is costly to reputation but collision is worse.
Application: Labor negotiations and political brinkmanship. A union threatens to strike; management threatens to lock out. Both sides posture as committed, but a strike hurts everyone. The strategic key is credible commitment—burning your own boats so the other side believes you really won’t swerve. In business: aggressive market entry where both an incumbent and entrant would lose from a price war, but the entrant commits capital that makes retreat costly.
Stag Hunt
Two hunters can cooperate to catch a stag (high payoff, requires coordination) or independently hunt hares (safe, lower payoff). Unlike the Prisoner’s Dilemma, mutual cooperation is a Nash equilibrium—but so is mutual defection. The challenge isn’t incentive alignment; it’s trust and coordination.
Application: Industry standards and platform adoption. Two companies would both benefit from adopting a shared standard (USB-C, a common API), but each risks investing in coordination that the other abandons. The stag hunt explains why standards bodies, consortia, and public commitments matter—they build the mutual confidence needed to reach the superior equilibrium. Also applies to team performance: everyone working hard is an equilibrium, but so is everyone coasting, and the difference is trust.
Battle of the Sexes
Two players prefer to coordinate on the same option but disagree on which one. Both prefer being together over being apart, but each wants coordination on their preferred choice. There are two pure-strategy equilibria and one mixed-strategy equilibrium (which is inefficient because the players sometimes mismatch).
Application: Technology platform choices within an organization. Engineering wants to standardize on one stack, product wants another. Both agree that using the same stack is better than fragmentation, but each prefers their own. Resolution mechanisms: precedent (“we used X last time”), authority (“CTO decides”), or side payments (“we’ll use your stack now if you support ours next quarter”). Also captures co-marketing partnerships where both brands benefit from alignment but disagree on creative direction.
Coordination Game
Players benefit from choosing the same action, and there’s no conflict over which one. The challenge is purely informational—how do players converge? Schelling’s focal points (salient, culturally obvious choices) often solve this.
Application: Network effects and platform adoption. Everyone benefits from being on the same messaging platform, but which one? The answer often comes from focal points—what’s already popular, what the influential early adopters chose, what has the most obvious brand. Explains why first-mover advantage in network-effects businesses is so powerful: you become the focal point.
War of Attrition
Two players compete by enduring costs over time. The one who quits first loses, but continuing is expensive for both. The longer the contest, the more both players have wasted. In equilibrium, players randomize their exit times, and the expected total cost often equals or exceeds the prize value.
Application: Startup competition in winner-take-all markets. Two ride-sharing companies burning cash to acquire market share, each hoping the other runs out of funding first. Patent litigation where both sides spend millions hoping the other settles. The model’s key lesson: if you’re in a war of attrition, the rational move is often to exit early or never enter—the expected value of fighting is usually negative.
Ultimatum Game
One player proposes a split of a fixed sum; the other accepts or rejects. Rejection means neither gets anything. Rational theory predicts the proposer offers almost nothing and the responder accepts (something beats nothing). In practice, people reject “unfair” offers below about 30%, and proposers anticipate this.
Application: Negotiation framing. When you have the power to make a take-it-or-leave-it offer (acquisition terms, licensing deals), pure rationality says push hard. But humans reject offers that feel unfair even at personal cost. The model teaches that perceived fairness is a real strategic constraint—not just a nicety. Leaving enough on the table for the other side to feel respected often produces better outcomes than maximizing your theoretical share.
Centipede Game
Players alternate turns, each choosing to “take” (end the game and claim a slightly larger share) or “pass” (let the pot grow). Backward induction says the first player should take immediately, but in practice players pass for several rounds, growing the pot before someone defects.
Application: Venture capital and startup equity negotiations. Early employees could demand large equity grants now (take), but if they trust the company will grow, passing and vesting over time produces more value. Founders and investors face the same logic at each funding round. The model highlights the tension between theoretical rational behavior (grab value now) and the practical value of trust and patience in growing relationships.
III. Market Competition Models
Cournot Competition (Quantity)
Firms simultaneously choose how much to produce. More production means lower market prices. Each firm’s optimal quantity depends on what competitors produce. The Cournot equilibrium sits between monopoly (too little output, high prices) and perfect competition (maximum output, lowest prices).
Application: Commodity markets—oil, steel, memory chips. OPEC members deciding production quotas is a Cournot game. The model predicts that as the number of competitors increases, the equilibrium approaches perfect competition (lower margins). It also shows why capacity decisions are strategically important: building a factory commits you to output levels that affect the entire market price.
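With linear demand, the symmetric Cournot equilibrium has a closed form, which makes the approach-to-competition prediction easy to check. A sketch; the demand and cost parameters are illustrative:

```python
def cournot_symmetric(a, b, c, n):
    """Cournot equilibrium with linear inverse demand P = a - b*Q and
    n identical firms with marginal cost c. Each firm's best response
    to rivals' total output Q_other is q = (a - c - b*Q_other) / (2b);
    imposing symmetry gives the closed form below."""
    q = (a - c) / (b * (n + 1))          # per-firm quantity
    price = a - b * n * q                # market price
    profit = (price - c) * q             # per-firm profit
    return q, price, profit

# Illustrative market: inverse demand P = 100 - Q, marginal cost 10.
for n in (1, 2, 10):
    print(n, cournot_symmetric(100, 1, 10, n))
```

With one firm you recover the monopoly outcome (q = 45, price = 55); with ten firms the price falls to about 18, approaching the marginal cost of 10 exactly as the model predicts.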
Bertrand Competition (Price)
Firms simultaneously choose prices for identical products. The lowest-priced firm captures the entire market. In the basic model, even two competitors drive prices down to marginal cost—the Bertrand paradox. Product differentiation, capacity constraints, or switching costs soften this.
Application: Explains why commoditized digital products (cloud storage, basic SaaS tools) face intense price pressure with even one competitor. Also explains why companies invest heavily in differentiation—brand, features, ecosystem lock-in—to escape Bertrand dynamics. If your product is truly interchangeable with a competitor’s, the model predicts you’ll make zero economic profit regardless of how few competitors exist.
Stackelberg Competition (Leader-Follower)
One firm moves first (the leader), choosing quantity or price. Competitors observe and respond optimally. The leader gains an advantage by committing to an aggressive position that constrains the follower’s best response. The leader typically earns more than in a simultaneous game.
Application: First-mover strategy in markets with visible commitments. Amazon’s early investment in fulfillment infrastructure committed it to high volume, forcing competitors to accept a smaller residual market. The model’s insight: moving first only helps if the commitment is visible and irreversible. Announcing plans without committing resources doesn’t earn the Stackelberg advantage—you need sunk costs that change the follower’s calculus.
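Using the same linear-demand setup as the Cournot sketch, the Stackelberg outcome follows by substituting the follower's best response into the leader's profit. A sketch with illustrative parameters:

```python
def stackelberg(a, c):
    """Stackelberg duopoly with inverse demand P = a - (q1 + q2) and
    common marginal cost c. The follower's best response is
    q2 = (a - c - q1) / 2; substituting it into the leader's profit
    and maximizing gives the closed-form equilibrium below."""
    q_leader = (a - c) / 2
    q_follower = (a - c) / 4
    price = a - q_leader - q_follower
    return {
        "q_leader": q_leader,
        "q_follower": q_follower,
        "profit_leader": (price - c) * q_leader,
        "profit_follower": (price - c) * q_follower,
    }

# Same illustrative market as the Cournot sketch (a=100, c=10):
eq = stackelberg(100, 10)
print(eq)  # leader produces twice as much and earns twice the profit
```

Compared with the simultaneous Cournot duopoly (each firm earns 900 in this market), the committed leader earns more (1012.5) and the follower less (506.25)—the value of a visible, irreversible first move.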
Hotelling’s Model (Spatial Competition)
Firms choose locations along a line (literal or metaphorical—product positioning). Consumers prefer the nearest option. The classic result: two firms converge to the center (“minimum differentiation”), even though spreading apart would reduce price competition. With three firms no stable pure-strategy positioning exists; with more, firms tend to settle into clustered pairs.
Application: Product positioning. Fast food chains cluster at highway exits. Political candidates converge to the median voter. Streaming services offer increasingly similar content libraries. The model explains why competitors often look alike and why “be different” is strategically hard when differentiation means ceding the center. It also shows when differentiation does pay: when price competition is intense enough that the benefit of a defensible niche outweighs the cost of a smaller addressable market.
IV. Information & Incentive Games
Signaling Games
A player with private information takes a costly action to credibly reveal their type. The signal must be costly enough that low-quality types can’t profitably mimic it. Michael Spence’s job-market signaling is the canonical example: a degree signals ability not because education teaches skills, but because completing it is easier for high-ability workers.
Application: Product warranties (only a quality manufacturer can afford to offer long warranties). Startup fundraising (a founder accepting a lower salary signals conviction). Premium pricing as quality signal. Money-back guarantees. Any situation where you need to credibly communicate something the other party can’t directly verify. The key question: is the signal differentially costly? If everyone can afford to send it, it communicates nothing.
Screening
The uninformed party designs a menu of options that causes the informed party to self-select and reveal their type. Unlike signaling (where the informed party moves first), screening has the uninformed party move first by offering a structured choice.
Application: Insurance deductibles. Insurers offer plans with different deductible-premium combinations. Low-risk customers choose high-deductible plans (saving on premiums they’re unlikely to need); high-risk customers choose low-deductible plans. The menu separates types without the insurer needing to know each customer’s risk. SaaS pricing tiers work the same way—the feature set and price point of each tier is designed to make customers self-sort by willingness to pay.
Adverse Selection
When one side of a transaction has better information about quality, the market can unravel. Sellers of high-quality goods exit (the price doesn’t reflect their quality), leaving only low-quality goods—Akerlof’s “market for lemons.” The informed party’s private knowledge degrades trust for everyone.
Application: Used car markets (Carfax and certified pre-owned programs exist specifically to solve this). Health insurance markets without mandates (healthy people opt out, raising premiums, causing more healthy people to opt out). Hiring markets where top candidates have outside options and weaker candidates flood the applicant pool. Talent marketplaces and freelance platforms fight adverse selection with reviews, portfolios, and vetting processes.
Moral Hazard
After a contract is signed, one party’s behavior changes because they don’t bear the full consequences of their actions. The problem isn’t hidden information (that’s adverse selection) but hidden action.
Application: Insurance (once insured, people take more risks). Executive compensation (salary without equity reduces effort incentive). Bailouts (banks take excessive risks expecting government rescue). The solutions are monitoring, incentive alignment (equity, profit-sharing, clawbacks), and co-insurance (deductibles force skin in the game). Any principal-agent relationship—employer-employee, investor-founder, franchisor-franchisee—faces moral hazard.
Principal-Agent Problem
The principal (owner, shareholder, client) delegates to an agent (manager, employee, contractor) whose interests may differ. The principal can’t perfectly observe the agent’s effort or intentions. The challenge: design a contract that aligns incentives despite asymmetric information.
Application: Executive compensation design—how much base salary vs. bonus vs. equity? Real estate agents (they want to close quickly; you want the highest price). Fund managers (they earn fees regardless of performance unless compensated with carried interest). The model provides a framework for any delegation decision: what can you observe, what can you incentivize, and what residual misalignment do you accept?
V. Auction Theory
English Auction (Ascending)
Price rises until one bidder remains. The winner pays just above the second-highest bid. Strategically simple: bid up to your true value and stop. Widely used because it’s transparent and tends to discover true valuations.
Dutch Auction (Descending)
Price starts high and drops until someone claims the item. Strategically equivalent to a sealed-bid first-price auction—you must decide your bid without knowing others’ bids, so you shade below your true value to capture surplus.
First-Price Sealed Bid
Everyone submits one bid simultaneously. Highest bid wins and pays their bid. Bidders shade below their true value (the more competitors, the less shading). Used in government procurement, real estate, and M&A.
Vickrey Auction (Second-Price Sealed Bid)
Everyone submits one bid simultaneously. Highest bid wins but pays the second-highest bid. The dominant strategy is to bid your true value—you can’t gain by bidding higher or lower. This is how eBay’s proxy bidding works and the basis for Google’s original ad auction (generalized second-price).
Revenue Equivalence Theorem
Under standard assumptions (risk-neutral bidders, independent private values), all four standard auction formats generate the same expected revenue for the seller. This means the choice of auction format is about risk preferences, information revelation, and collusion resistance—not raw revenue.
Application across formats: Choosing an auction format for spectrum licenses, ad placement, procurement, or M&A depends on context. Second-price auctions encourage truthful bidding but are vulnerable to shill bidding. First-price auctions are harder to collude in but cause bid shading. Ascending auctions reveal information (useful for common-value goods). The winner’s curse—the tendency for the winner to have overestimated value—is critical in common-value settings like oil lease auctions or M&A: if you won, ask why nobody else was willing to pay that much.
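Revenue equivalence can be verified by simulation: with independent uniform private values, truthful second-price bidding and equilibrium-shaded first-price bidding (bid = v·(n−1)/n) both yield expected revenue (n−1)/(n+1). A Monte Carlo sketch under exactly those textbook assumptions:

```python
import random

def expected_revenue(n_bidders, fmt, trials=200_000, seed=0):
    """Monte Carlo check of revenue equivalence with independent
    uniform[0,1] private values. In a second-price auction bidders
    bid truthfully; in a first-price auction the equilibrium bid
    shades to v * (n-1) / n."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        values = sorted(rng.random() for _ in range(n_bidders))
        if fmt == "second_price":
            total += values[-2]  # winner pays the 2nd-highest value
        else:                    # first_price: winner pays own shaded bid
            total += values[-1] * (n_bidders - 1) / n_bidders
    return total / trials

# Both formats should approach (n-1)/(n+1); for n=4 that's 0.6.
print(round(expected_revenue(4, "second_price"), 3))
print(round(expected_revenue(4, "first_price"), 3))
```

Relaxing the assumptions—risk aversion, correlated values, collusion—is precisely what breaks the equivalence and makes format choice matter.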
VI. Cooperative Game Theory
Shapley Value
A formula that assigns each player in a coalition their marginal contribution, averaged across all possible orderings in which the coalition could form. It’s the unique allocation satisfying efficiency, symmetry, additivity, and the null-player property. Mathematically principled but computationally expensive for large groups.
Application: Cost allocation in joint ventures. If three airlines share a terminal, how should they split costs? The Shapley value considers what each airline contributes to every possible sub-coalition. Used in transfer pricing, infrastructure cost-sharing, and increasingly in machine learning (SHAP values for feature attribution are derived from the same concept).
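For small groups the Shapley value can be computed exactly by averaging marginal contributions over all join orders. A sketch of the airline-terminal example; the cost figures are invented for illustration:

```python
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley value: average each player's marginal
    contribution over every order in which the coalition could form.
    Exponential in the number of players, so only practical for
    small coalitions."""
    shares = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            shares[p] += value(with_p) - value(coalition)
            coalition = with_p
    return {p: s / len(orders) for p, s in shares.items()}

# Hypothetical terminal cost-sharing: serving one airline costs 6,
# any two cost 9, all three cost 12 (sharing saves money).
def cost(coalition):
    return {frozenset(): 0, frozenset("A"): 6, frozenset("B"): 6,
            frozenset("C"): 6, frozenset("AB"): 9, frozenset("AC"): 9,
            frozenset("BC"): 9, frozenset("ABC"): 12}[frozenset(coalition)]

print(shapley_values("ABC", cost))  # symmetric game: each pays 4.0
```

Because the three airlines are interchangeable here, symmetry forces an equal split; with asymmetric stand-alone costs the same code would produce unequal, contribution-weighted shares.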
The Core
The set of allocations where no subgroup of players would be better off breaking away and forming their own coalition. If the core is empty, no stable allocation exists—some group always has an incentive to defect.
Application: Partnership stability. If three co-founders split equity, is any pair tempted to cut out the third and keep more? If so, the allocation isn’t in the core, and the partnership is unstable. The concept applies to any alliance: OPEC, trade blocs, research consortia. When the core is empty, expect instability—no deal will stick.
Nash Bargaining Solution
Two players negotiate over how to split a surplus. If they fail to agree, each gets their “disagreement point” (BATNA). The Nash solution maximizes the product of each player’s gains over their disagreement point—it’s the unique solution satisfying Pareto efficiency, symmetry, independence of irrelevant alternatives, and invariance to affine transformations of utility.
Application: Any bilateral negotiation. The model’s core insight: your bargaining power comes from your BATNA (best alternative to negotiated agreement), not from your desire or need. A job candidate with another offer has a better disagreement point and gets a better deal. In M&A, the acquirer with alternative targets and the target with alternative suitors both negotiate from strength. Improving your BATNA is often more valuable than improving your negotiation tactics.
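With linear utilities, the Nash solution has a simple closed form: split the net surplus (what remains after subtracting both BATNAs) equally on top of each side's disagreement point. A sketch with hypothetical numbers:

```python
def nash_bargaining_split(surplus, batna1, batna2):
    """Nash bargaining over a divisible surplus with linear utilities:
    maximize (x - batna1) * (surplus - x - batna2). The maximizer
    gives each side their BATNA plus half the net surplus."""
    net = surplus - batna1 - batna2
    if net < 0:
        return None  # no zone of possible agreement
    return batna1 + net / 2, batna2 + net / 2

# A candidate whose outside offer is worth 120 negotiating over a
# role worth 200, against an employer whose next-best hire is worth 30:
print(nash_bargaining_split(200, 120, 30))  # (145.0, 55.0)
```

The formula makes the BATNA insight explicit: raising your disagreement point by one unit raises your payoff one-for-one, which is why improving your outside option beats improving your tactics.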
Rubinstein Bargaining
Alternating-offer bargaining with time discounting. Player 1 proposes, Player 2 accepts or counter-offers, and so on. Delay is costly for both. The unique equilibrium: the more patient player gets a larger share. The first mover gets a slight advantage (diminishing as the time between offers shrinks).
Application: Any negotiation with time pressure. The side under more deadline pressure (a startup running out of runway, a seller with carrying costs, a company facing a regulatory deadline) concedes more. Strategically: demonstrate patience, create urgency for the other side, and recognize that dragging out negotiations only helps you if your discount rate is lower than theirs.
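The equilibrium split has a closed form in the two discount factors, which makes the patience effect easy to see directly. A sketch; the discount factors are illustrative:

```python
def rubinstein_shares(delta1, delta2):
    """First-mover's equilibrium share in Rubinstein alternating-offer
    bargaining over a pie of size 1, where delta1 and delta2 are the
    players' per-round discount factors (higher = more patient):
        share1 = (1 - delta2) / (1 - delta1 * delta2)"""
    share1 = (1 - delta2) / (1 - delta1 * delta2)
    return share1, 1 - share1

print(rubinstein_shares(0.9, 0.9))   # equal patience: slight first-mover edge
print(rubinstein_shares(0.95, 0.8))  # the more patient first mover gets more
```

With equal patience the first mover gets just over half (about 0.526 here), and as the discount factors approach 1 the split approaches 50/50; a large patience gap shifts the division dramatically.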
VII. Mechanism Design
Mechanism Design (Reverse Game Theory)
Instead of analyzing a game as given, you design the rules of a game to produce a desired outcome. You choose the information each player must reveal, the actions available, and the mapping from actions to outcomes—all to align self-interested behavior with your objective.
Application: Marketplace design (how eBay, Airbnb, and Uber structure their platforms to align buyer-seller incentives). Incentive-compatible compensation structures. The design of spectrum auctions, school-choice systems, kidney exchange programs, and carbon markets. The key question mechanism design answers: “What rules would make people voluntarily behave the way we want?”
Revelation Principle
For any mechanism that achieves an outcome through strategic behavior, there exists an equivalent “direct” mechanism where players simply report their private information truthfully. This simplifies analysis enormously—you only need to consider mechanisms where truth-telling is incentive-compatible.
Application: Simplifies the design of auctions, procurement processes, and internal resource allocation. Instead of designing complex multi-round games, you can focus on creating incentive-compatible direct mechanisms. Google’s ad auction, Vickrey-Clarke-Groves mechanisms for public goods, and some internal capital allocation processes are built on this principle.
Myerson–Satterthwaite Theorem
In bilateral trade with private valuations, no mechanism can simultaneously be incentive-compatible, individually rational, budget-balanced, and efficient. Some trades that should happen won’t. This is a fundamental impossibility result—friction in bilateral negotiation isn’t a market failure to fix; it’s a mathematical certainty.
Application: Explains why M&A negotiations often fail even when a deal would create value. Explains the role of intermediaries (brokers, investment banks) who accept budget imbalance in exchange for facilitating more trades. If you’re designing a marketplace, you need to accept that you can’t have everything—you must choose which property to sacrifice.
VIII. Evolutionary & Behavioral Models
Evolutionary Game Theory
Players don’t choose strategies rationally—strategies that produce higher payoffs spread through the population over time. The key solution concept is the Evolutionarily Stable Strategy (ESS): a strategy that, once dominant, can’t be invaded by a small group of mutants. ESS is a refinement of Nash equilibrium—every ESS is a Nash equilibrium, but not vice versa.
Application: Market dynamics and competitive strategy over time. Business practices that “work” spread through imitation (lean manufacturing, subscription pricing, freemium). The model explains why certain business strategies become industry norms and why disruption is hard: the incumbent strategy is an ESS that resists small-scale invasion. To displace it, you need a critical mass—not just a marginally better approach.
Replicator Dynamics
The mathematical model of how strategy frequencies change over time in a population. Strategies that earn above-average payoffs grow; below-average ones shrink. Every evolutionarily stable strategy is an asymptotically stable state of the replicator dynamics, though not every stable state is an ESS.
Application: Technology adoption curves. New technologies (strategies) grow when they outperform the status quo, but face resistance until they reach critical mass. The model predicts S-curve adoption patterns and explains why some superior technologies fail—if the initial population share is below a threshold, the strategy shrinks even though it would dominate at scale. Relevant for platform competition, standards battles, and go-to-market timing.
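A discrete-time version of the dynamic is a few lines of code and reproduces the threshold behavior described above. The 2×2 payoff matrix is an invented coordination game, not data:

```python
def replicator_share(payoff, x0, steps=500, dt=0.1):
    """Discrete-time replicator dynamic for two strategies.
    payoff[i][j] = payoff to strategy i against strategy j;
    x is the population share of strategy 0."""
    x = x0
    for _ in range(steps):
        f0 = payoff[0][0] * x + payoff[0][1] * (1 - x)   # fitness of 0
        f1 = payoff[1][0] * x + payoff[1][1] * (1 - x)   # fitness of 1
        avg = x * f0 + (1 - x) * f1                      # mean fitness
        x += dt * x * (f0 - avg)                         # replicator step
        x = min(max(x, 0.0), 1.0)
    return x

# A coordination game: the new standard (strategy 0) outperforms the
# old one only once enough of the population already uses it.
game = [[3, 0], [1, 2]]  # indifference threshold is a 0.5 share here
print(round(replicator_share(game, x0=0.45), 2))  # shrinks toward 0
print(round(replicator_share(game, x0=0.55), 2))  # tips toward 1
```

Starting just below the threshold, the "superior" strategy dies out; just above it, it takes over—the multiple-equilibria, critical-mass story in miniature.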
Prospect Theory & Behavioral Game Theory
Real humans deviate systematically from rational game-theoretic predictions. They overweight losses relative to gains (loss aversion), overweight small probabilities, anchor on irrelevant reference points, and care about fairness even at personal cost. Behavioral game theory incorporates these biases into strategic analysis.
Application: Pricing and framing. A price cut framed as a discount (gain) is perceived differently than the same price positioned as avoiding a surcharge (avoiding a loss). Negotiation: anchoring effects mean the first number on the table disproportionately influences the outcome. Subscription pricing exploits loss aversion—canceling feels like losing something. The behavioral models don’t replace classical game theory; they refine its predictions for contexts where human psychology dominates.
IX. Public Goods & Collective Action
Public Goods Game (Free Rider Problem)
Players choose how much to contribute to a shared resource that benefits everyone equally. The Nash equilibrium is to contribute nothing (free ride), even though universal contribution would make everyone better off. It’s a multi-player Prisoner’s Dilemma.
Application: Open-source software contribution, industry lobbying (everyone benefits from favorable regulation, but each firm wants others to fund the lobbying effort), shared marketing in a shopping district, and climate agreements. Solutions: make contributions visible (reputation), exclude non-contributors from benefits (club goods), or use penalties (mandatory dues, taxes).
Tragedy of the Commons
A shared resource is overexploited because each user captures the full benefit of additional use while bearing only a fraction of the cost. The individually rational choice—take more—leads to collective ruin. Elinor Ostrom’s work showed that communities can self-govern commons without privatization or state regulation, under specific conditions (clear boundaries, local monitoring, graduated sanctions).
Application: Shared infrastructure (API rate limits, shared Slack channels, shared cloud budgets), fisheries, groundwater, and urban parking. Within companies: shared engineering resources, platform teams, and common budgets. Ostrom’s conditions translate directly: define who has access, monitor usage transparently, escalate consequences for overuse, and give users a voice in the rules.
X. Dynamic & Sequential Games
Backward Induction
In sequential games with perfect information, you solve from the end. At each decision node, the player chooses optimally given what follows. Rolling back to the start produces the subgame-perfect equilibrium—a strategy profile that’s a Nash equilibrium in every subgame, eliminating non-credible threats.
Application: Any sequential business decision—market entry, product launch timing, negotiation sequences. If a competitor threatens to match your price cut but would lose money doing so, backward induction says the threat isn’t credible and you should ignore it. M&A negotiation: understanding the end-state (regulatory approval, integration challenges) informs opening positions. The method is especially useful for multi-stage investments where each stage reveals information.
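Backward induction is a short recursion over the game tree. The sketch below solves a small centipede-style game (payoffs invented), recovering the take-immediately prediction from Section II:

```python
def backward_induction(node):
    """Solve a finite perfect-information game from the leaves up.
    A node is ('leaf', payoffs) or ('move', player, {action: child});
    payoffs is a tuple indexed by player. Returns (payoffs, plan)."""
    kind = node[0]
    if kind == "leaf":
        return node[1], []
    _, player, children = node
    best = None
    for action, child in children.items():
        payoffs, plan = backward_induction(child)
        # Keep the action that maximizes the mover's own payoff.
        if best is None or payoffs[player] > best[0][player]:
            best = (payoffs, [action] + plan)
    return best

# A four-move centipede game: 'take' ends the game, 'pass' grows the
# pot but hands the move (and the larger claim) to the other player.
centipede = ("move", 0, {
    "take": ("leaf", (2, 1)),
    "pass": ("move", 1, {
        "take": ("leaf", (1, 4)),
        "pass": ("move", 0, {
            "take": ("leaf", (4, 3)),
            "pass": ("leaf", (3, 6)),
        }),
    }),
})
payoffs, plan = backward_induction(centipede)
print(payoffs, plan)  # (2, 1) ['take'] — player 0 takes immediately
```

Rolling back from the final node, each player prefers to take rather than let the opponent take next, so the whole chain of passing unravels to an immediate take—even though both players would do better passing for a while.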
Commitment Devices & Credible Threats
A player can improve their position by visibly restricting their own future options. Burning bridges, signing binding contracts, making public announcements—anything that makes it costly to back down. The commitment must be visible, irreversible, and genuinely constraining to affect the other player’s strategy.
Application: Most-favored-nation clauses (committing to not offering better terms to others). Public product launch dates (creating reputational cost of delay). Guaranteed price-match policies (committing to aggressive pricing that deters competitors). The model explains why sometimes reducing your own flexibility is strategically valuable—it changes what the other side believes you’ll do.
Real Options
Treating strategic investments as options: paying a small cost now to preserve the right (but not obligation) to invest more later, after uncertainty resolves. The value of waiting and the value of flexibility are formally quantified.
Application: Staged venture capital investment (Series A buys the option to do Series B). R&D spending (buys the option to commercialize). Land banking (buys the option to develop). Platform investments (buying optionality across multiple use cases). The model disciplines the “just do it now” instinct by quantifying the value of waiting for information, especially in high-uncertainty environments.
XI. Network & Platform Games
Two-Sided Markets
Platforms that serve two distinct user groups whose value to each other is interdependent. More buyers attract more sellers and vice versa. Pricing must balance both sides—often subsidizing the more price-sensitive side to attract the other.
Application: Credit cards (subsidize cardholders, charge merchants). App stores (subsidize developers with free tools, monetize through consumer purchases). Marketplaces (subsidize supply or demand depending on which side is the bottleneck). The model explains why platforms often appear to “leave money on the table” on one side—they’re investing in the cross-side network effect.
Network Effects & Tipping
Each user’s value increases with the number of other users. Beyond a critical mass, positive feedback makes the leading platform pull away (“tipping”). Below critical mass, the platform can collapse. Multiple equilibria exist: universal adoption and universal non-adoption are both stable.
Application: Social networks, messaging apps, payment systems, developer platforms. The strategic imperative is reaching critical mass fast—hence the prevalence of subsidized growth, freemium, and land-grab strategies in platform businesses. The model also predicts when incumbents are vulnerable: if a new platform can segment a high-value niche and build density there, it can tip that segment and then expand.
Quick Reference
When choosing which model to apply:
- Are incentives misaligned? → Prisoner’s Dilemma, moral hazard, principal-agent
- Is the interaction repeated? → Repeated games, tit-for-tat, Folk Theorem
- Is information asymmetric? → Signaling, screening, adverse selection
- Are you designing the rules? → Mechanism design, auction theory
- Is the problem coordination, not conflict? → Stag hunt, coordination game, focal points
- Is there a first-mover advantage? → Stackelberg, commitment devices, backward induction
- Are you negotiating a split? → Nash bargaining, Rubinstein, ultimatum game
- Is the market winner-take-all? → Network effects, war of attrition, two-sided markets
- Are you allocating shared costs or value? → Shapley value, core, cooperative game theory
- Are you competing on price or quantity? → Bertrand, Cournot, Hotelling