Universität Bonn

Department of Economics

Seminars & Workshops

Micro Theory Seminar & BGSE Workshop

Rooms and dates for the winter term 2023/24

  • BGSE Workshop: room 0.017, Wednesdays at 12 p.m.
  • Micro Theory Seminar: faculty room, Wednesdays at 4:30 p.m.

Micro Theory Seminar

The seminar usually takes place on Wednesday, 16:30-18:00, in the faculty room, Adenauerallee 24-42, 53113 Bonn.

Please find below the guests and dates for upcoming seminars:

Winter Term 2023/24

The Costly Wisdom of Inattentive Crowds

Incentivizing the acquisition and aggregation of information is a key task of the modern economy (e.g., financial markets). We study the design of optimal mechanisms for this task. A population of rationally inattentive (RI) agents can flexibly learn about a common state of nature, subject to uniformly posterior separable (UPS) information costs. A principal, who aims to procure a given information structure from the agents at minimal cost, can design general dynamic mechanisms with report- and state-contingent payments. If the agents are risk-neutral, prediction markets implement the first-best. If the agents are risk-averse, no mechanism can approximate the first-best cost—not even those that harness the “wisdom of the crowd” by employing a large number of “informationally small” agents. This inefficiency derives from the combination of agents’ moral hazard and adverse selection. Our characterization of incentive compatibility, which exploits an equivalence between proper scoring rules and UPS information costs, is tractable and portable to other design settings with RI agents (e.g., principal-expert and screening problems).
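The equivalence invoked in the last sentence can be previewed with two textbook objects (the potentials φ and G below are my notation, not the paper's). A posterior separable cost charges the expected increase of a convex potential over posterior beliefs, and, by the Gneiting-Raftery characterization, every proper scoring rule is likewise generated by a convex function on the belief simplex:

    \[
    C(\pi) = \mathbb{E}_{\mu \sim \pi}[\varphi(\mu)] - \varphi(\mu_0), \qquad \varphi \text{ convex},
    \]
    \[
    S(q,\omega) = G(q) + \nabla G(q) \cdot (\mathbf{1}_\omega - q), \qquad G \text{ convex}.
    \]

Both families are thus indexed by convex potentials on the same simplex, which is the formal hook that lets payments built from scoring rules be matched against UPS learning costs.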

Optimal Security Design for Risk-Averse Investors

We use the tools of mechanism design, combined with the theory of risk measures, to analyze a model where a cash-constrained owner of an asset with stochastic returns raises capital from a population of investors who differ in their risk aversion and budget constraints. The distribution of the asset's cash flow is assumed to be common knowledge: no agent has private information about it. The issuer partitions and sells the asset's realized cash flow into several asset-backed securities, one for each type of investor. The optimal partition conforms to the commonly observed practice of tranching (e.g., senior debt, junior debt, and equity), where senior claims are paid before the subordinate ones. The holders of more senior or junior tranches are determined by the relative risk appetites of the different types of investors and of the issuer, with the more risk-averse agents holding the more senior tranches. Tranching arises endogenously here in an optimal mechanism because of simple economic forces: the differences in risk appetites among agents, and in the budget constraints they face.
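As a concrete illustration of tranching (mine, not the paper's; the attachment points a_1 < a_2 are hypothetical), a realized cash flow X can be split into senior, junior, and equity claims:

    \[
    s_{\mathrm{sen}}(X) = \min\{X, a_1\}, \quad
    s_{\mathrm{jun}}(X) = \min\{\max\{X - a_1, 0\},\; a_2 - a_1\}, \quad
    s_{\mathrm{eq}}(X) = \max\{X - a_2, 0\}.
    \]

The three claims sum to X in every state: the senior tranche is paid first and the equity tranche absorbs losses first, which is why the most risk-averse agents end up holding the most senior claims.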

Dynamic Contracting with Flexible Monitoring

We study a principal's joint design of optimal monitoring and compensation schemes to incentivize an agent by incorporating information design into a dynamic contracting framework. The principal can flexibly allocate her limited monitoring capacity between seeking evidence that confirms or contradicts the agent's effort, as the basis for reward or punishment. When the agent's continuation value is low, the principal seeks only confirmatory evidence. When it exceeds a threshold, the principal seeks mainly contradictory evidence. Importantly, the agent's effort is perpetuated if and only if he is sufficiently productive.

A Measure of Behavioral Heterogeneity

In this paper we propose a novel way to measure behavioral heterogeneity in a population of stochastic individuals. Our measure is choice-based; it evaluates the probability that, over a randomly selected menu, the sampled choices of two sampled individuals differ. We provide axiomatic foundations for this measure and a decomposition result that separates heterogeneity into its intra- and inter-personal components.
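A minimal formalization of the measure as described, under assumed notation (a distribution λ over menus and a population distribution μ over stochastic choice rules, neither taken from the paper):

    \[
    H = \mathbb{E}_{A \sim \lambda}\, \mathbb{E}_{i,j \sim \mu} \big[ \Pr\big( c_i(A) \neq c_j(A) \big) \big],
    \]

where c_i(A) is an independent draw from individual i's stochastic choice on menu A. The event i = j captures the intra-personal component (one individual's draws disagreeing with each other), and the complementary event captures the inter-personal component.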

A mechanism-design approach to property rights

We propose a framework for studying the optimal design of rights relating to the control of an economic resource, which we broadly refer to as property rights. An agent makes an investment decision, affecting her valuation for the resource, and then participates in a trading mechanism chosen by a principal in a sequentially rational fashion, leading to a hold-up problem. A designer, who would like to incentivize efficient investment and whose preferences may differ from those of the principal, can endow the agent with a menu of rights that determine the agent's set of outside options in the interaction with the principal. We characterize the optimal rights as a function of the designer's and the principal's objectives, and the investment technology. We find that optimal rights typically differ from a classical property right giving the agent full control over the resource. In particular, we show that the optimal menu requires at most two types of rights, including an option-to-own, which grants the agent control over the resource upon paying a pre-specified price.
 

Time Trumps Quantity in the Market for Lemons

We consider a dynamic adverse selection model where privately informed sellers of divisible assets can choose how much of their asset to sell at each point in time to competitive buyers. With commitment, delay and lower quantities are equivalent ways to signal higher quality. Only the discounted quantity traded is pinned down in equilibrium. With spot contracts and observable past trades, there is a unique and fully separating path of trades in equilibrium. Irrespective of the horizon and the frequency of trades, the same welfare is attained by each seller type as in the commitment case. When trades can take place continuously over time, each type trades all of its assets at a unique point in time. Thus, only delay is used to signal higher quality. When past trades are not observable, the equilibrium only coincides with the one with public histories when trading can take place continuously over time.

How Competition Shapes Information in Auctions

We consider auctions where buyers can acquire costly information about their valuations and those of others, and investigate how competition between buyers shapes their learning incentives. In equilibrium, buyers find it cost-efficient to acquire some information about their competitors so as to only learn their valuations when they have a fair chance of winning. We show that such learning incentives make competition between buyers less effective: losing buyers often fail to learn their valuations precisely and, as a result, compete less aggressively for the good. This depresses revenue, which remains bounded away from what the standard model with exogenous information predicts, even when information costs are negligible. Finally, we examine the implications for auction design. First, setting an optimal reserve price is more valuable than attracting an extra buyer, which contrasts with the seminal result of Bulow and Klemperer (1996). Second, the seller can incentivize buyers to learn their valuations, hence restoring effective competition, by maintaining uncertainty over the set of auction participants.

Optimal testing in disclosure games

We extend the standard disclosure model between a sender and a receiver by allowing the receiver to gather partial information. The receiver can choose any signal with at most k realizations, which we call a test. Since the test choice is observed by the sender, it influences the sender’s disclosure incentives. We characterize the optimal test for the receiver and show how it resolves the trade-off between the informativeness of the test and disclosure incentives. If the receiver aimed only at maximizing informativeness, she would choose a deterministic test. In contrast, the optimal test involves randomization over signal realizations and maintains a simple structure, which allows us to interpret the randomization as the strategic use of uncertain evaluation standards to preserve disclosure incentives.

(Un-)Common Preferences, Ambiguity, and Coordination

We study the ‘common prior’ assumption and its implications when agents have differential information and preferences beyond subjective expected utility (SEU). We consider consequentialist interim preferences that are consistent with respect to the same ex-ante evaluation and characterize the latter in terms of extreme limits of higher-order expectations. Notably, agents are mutually dynamically consistent with respect to the same ex-ante evaluation if and only if all the limits of higher-order expectations are the same, extending beyond SEU the classical characterization of the common prior assumption due to Samet. Within this framework, we characterize the properties of equilibrium prices in financial beauty contests (and other coordination games) in terms of the agents’ private information, coordination motives, and attitudes toward uncertainty. Unlike in the SEU case, the limit price does not in general coincide with the common ex-ante expectation. Moreover, when the agents share the same benchmark probabilistic model, high coordination motives make their concern for misspecification disappear in equilibrium, exposing them to a divergence between the market price and the fundamental value of the security.

Stationary social learning in a changing environment

We consider social learning in a changing world. With changing states, societies can be responsive only if agents regularly act upon fresh information, which significantly limits the value of observational learning. When the state is close to persistent, a consensus whereby most agents choose the same action typically emerges. However, the consensus action is not perfectly correlated with the state, because societies exhibit inertia following state changes. Phases of inertia may be longer when signals are more precise, even if agents draw large samples of past actions, as actions then become too correlated within samples, thereby reducing informativeness and welfare.

Auctions vs. Negotiations: The Role of the Payment Structure

We investigate a seller’s strategic choice between negotiating with fewer bidders and running an auction with additional bidders, allowing for general security payments. The key factor favoring negotiations is the seller’s rent-extraction benefit of setting her preferred payment structure; reserve prices are of secondary importance. Negotiations are more valuable if the seller’s asset creates more value at more productive bidders – in which case sellers prefer contingent payments while bidders prefer cash – and if the dispersion and magnitude of bidders’ private valuations are higher. Our results have implications for mergers and acquisitions, patent licensing, and compensation negotiations in tight labor markets.

Informing agents amidst biased narratives

I study the strategic interaction between a benevolent sender (who provides data) and a biased narrator (who interprets data), who compete to persuade a boundedly rational receiver (who takes an action). The receiver does not know the data-generating model. She chooses between models provided by the sender and the narrator using the maximum likelihood principle, selecting the one that best fits the data given her prior belief. The sender faces a trade-off between providing precise information and minimizing misinterpretation. Surprisingly, full disclosure can be suboptimal and even backfire. I identify a finite set of models that contains the optimal data-generating model, which maximizes the receiver’s expected utility. The sender can guarantee a non-negative value of information, preventing harm from misinterpretation. I apply this framework to information campaigns and employee feedback.

Putting Context into Preference Aggregation

The axioms underlying Arrow's impossibility theorem are very restrictive in terms of what can be used when aggregating preferences. Social preferences may depend neither on the menu nor on preferences over alternatives outside the menu. But context matters. So we weaken these restrictions to allow context to be included. The context, as we define it, describes which alternatives in the menu and which preferences over alternatives outside the menu matter. We obtain unique representations. These are discussed in examples involving markets, bargaining, and the intertemporal well-being of an individual.

Coordination in Complex Environments

I introduce a framework to study coordination in highly uncertain environments. Coordination is an important aspect of innovative contexts, where the more innovative a course of action, the more uncertain its outcome. To explore the interplay of coordination and informational complexity, this paper embeds a beauty-contest game into a complex environment. I uncover a new conformity phenomenon. The new effect may push towards exploration of unknown alternatives, or constitute a status quo bias, depending on the network structure of the connections among players. In an application to oligopoly pricing, an increase in complexity results in a higher level of conformity in pricing policies. I study the new coordination problems introduced by complexity and propose an equilibrium selection rule. In an application to multi-division organizations, sufficiently high complexity "implements" the same profits as centralized decision-making. I also study heterogeneity across players in the mapping from decisions to outcomes, and private information about a status quo.

Motivated Misspecification

I propose a model of expectation management to study how an interactive environment breeds and perpetuates a certain type of misperception. This paper provides a novel approach to incentivizing effort (perception manipulation), complementary to the usual monetary or informational incentives studied in principal-agent theory. It endogenizes model misspecification in the literature on misspecified learning within a principal-agent framework and can be applied to a wide range of interactions such as mentor-mentee, parent-child, self-manipulation, and emotional abuse in professional or intimate relationships.

tba

Summer Term 2023

Mechanism Design with Restricted Communication

We consider a Sender–Receiver environment where the sender is informed of states and the receiver chooses actions. There is a communication channel between them consisting of sets of input/output messages and a fixed transition probability. The sender reaches out to the receiver through the channel, which limits communication in two ways: the number of available messages might be small, and messages might be noisy. We consider a mechanism design setup whereby the receiver commits to a mechanism which selects a distribution of actions and possibly monetary transfers, contingent on output messages. We aim to characterize the joint distributions which can be implemented by communication over the channel, given the incentives of the sender. We consider both one-shot problems and series of i.i.d. problems. In particular, we show that when the sender and the receiver are engaged in a series of problems, linking decisions together is a more efficient instrument than monetary transfers.

Information Acquisition in Matching Markets: The Role of Price Discovery

We explore the acquisition and flow of information in matching markets through a model of college admissions with endogenous costly information acquisition. We extend the notion of stability to this partial information setting, and introduce regret-free stability as a refinement that additionally requires optimal student information acquisition. We show regret-free stable outcomes exist, and finding them is equivalent to finding appropriately-defined market-clearing cutoffs. To understand information flows, we recast matching mechanisms as price-discovery processes. No mechanism guarantees a regret-free stable outcome, because information deadlocks imply some students must acquire information suboptimally. Our analysis suggests approaches for facilitating efficient price discovery, by leveraging historical information or market sub-samples to estimate cutoffs. We show that mechanisms that use such methods to advise applicants on their admission chances yield approximately regret-free stable outcomes. A survey of university admission systems highlights the practical importance of providing applicants with information about their admission chances.

Unpaired Kidney Exchange: Overcoming Double Coincidence of Wants without Money

For an incompatible patient-donor pair, kidney exchanges often forbid receipt-before-donation (the patient receives a kidney before the donor donates) and donation-before-receipt, causing a double-coincidence-of-wants problem. We study an algorithm, the Unpaired kidney exchange algorithm, which eliminates this problem. In a dynamic matching model, we show that the waiting time of patients under the Unpaired algorithm is close to optimal and substantially shorter than under widely used algorithms. Using a rich administrative dataset from France, we show that the Unpaired algorithm achieves a match rate of 63 percent and an average waiting time of 176 days for transplanted patients. The (infeasible) optimal algorithm is only slightly better (64 percent and 144 days); widely used algorithms deliver less than 40 percent and at least 232 days. We discuss a range of solutions that can address the potential practical incentive challenges of the Unpaired algorithm. In particular, we extend our analysis to an environment where a deceased-donor waitlist can be integrated to improve the performance of algorithms. We show that our theoretical and empirical comparisons continue to hold. Finally, based on these analyses, we propose a practical version of the Unpaired algorithm.

Informationally Robust Cheap-Talk

We study the robustness of cheap-talk equilibria to infinitesimal private information of the receiver in a model with a binary state space and state-independent sender preferences. We show that the sender-optimal equilibrium is robust if and only if this equilibrium either reveals no information to the receiver or fully reveals one of the states with positive probability. We then characterize the actions that can be played with positive probability in any robust equilibrium. Finally, we fully characterize the optimal sender utility when the receiver’s private information is binary, and provide bounds for the optimal sender utility under general private information.

Should the Timing of Inspections be Predictable?

A principal hires an agent to work on a long-term project that culminates in a breakthrough or a breakdown. At each time, the agent privately chooses to work or shirk. Working increases the arrival rate of breakthroughs and decreases the arrival rate of breakdowns. To motivate the agent to work, the principal conducts costly inspections. She fires the agent if shirking is detected. We characterize the principal’s optimal inspection policy. Periodic inspections are optimal if work primarily speeds up breakthroughs. Random inspections are optimal if work primarily delays breakdowns. Crucially, the agent’s actions determine his risk attitude over the timing of punishments.

Early-Career Discrimination: Spiraling or Self-Correcting?

Do workers from social groups with comparable productivity distributions obtain comparable lifetime earnings? We study how a small amount of early-career discrimination propagates over time when workers’ productivity is revealed through employment. In breakdown learning environments that track primarily on-the-job failures, such discrimination spirals into a substantial lifetime earnings gap for groups of comparable productivity, whereas in breakthrough learning environments that track successes, early discrimination self-corrects so as to guarantee comparable lifetime earnings. This contrast is robust to large labor markets, flexible wages, inconclusive learning, investment in productivity, and misspecified employers’ beliefs.

Selecting the Best when Selection is Hard: The Persistent Effects of Luck

Many economic institutions and organizational practices make early success have a persistent effect on final outcomes. By granting additional resources, favorable treatment, or other forms of bias to early strong performers, they raise the likelihood with which these early strong performers become final winners. When performance is informative about ability differentials, such bias can serve as a tool to increase “selective efficiency”, i.e. the allocation of resources or decision-making authority to the most talented. However, in situations where noise swamps ability differences in determining relative performance, the use of bias would have the sole effect of making luck persistent. Such an outcome would seem at odds with the meritocratic principle of requiring differences in economic outcomes to be attributable to ability or effort differentials. In this paper, we challenge this view by showing that even as noise swamps ability differences in driving performance, maximization of selective efficiency continues to require bias favoring early leaders. Moreover, inducing greater persistence of outcomes in noisier environments can be consistent with the objective of assigning resources to the most able.

Equilibrium Selection in Repeated Games with Patient Players

What determines the path of play in an infinitely repeated game? Typically the players’ interests are not perfectly aligned but there is scope for cooperation. Potential surplus could be shared in different ways. The folk theorems of repeated games provide no guidance about the outcome. In the more tractable setting where players can sign binding contracts after any history of play, Abreu and Pearce (2007) show that slight reputational perturbations of the game lead to predictions consistent with Nash bargaining with threats (Nash, 1953). In many settings of interest, such contracts are not available. Nonetheless, combining reputational perturbation with modest continuity and renegotiation conditions in two-person repeated games with patient players again isolates play that is consistent with Nash bargaining with threats.

Organizational Change and Reference-Dependent Preferences

Reference-dependent preferences can explain several puzzling observations on organizational change. Loss aversion clarifies why change is often slow or stagnant for long periods, followed by a sudden boost in productivity during a crisis. Moreover, it accounts for the fact that different firms in the same industry can have significant productivity differences. The model also demonstrates the importance of expectation management even if all parties have rational expectations. Social preferences explain why it may be optimal to split up a firm into two separate entities.

Reputation for a Degree of Honesty

Can reputation replace legal commitment for an institution making periodic public announcements? Near the limiting case of ideal patience, results of Fudenberg and Levine (1992) imply a positive answer in value terms, in the presence of a rich set of behavioral types. Little is known about equilibrium behavior in such reputational equilibria. Computational and analytic approaches are combined here to provide a detailed look at how reputations are managed. Behavior depends upon which of three reputational regions pertains after a history of play. These characterizations hold even far from the patient limit. Near the limit, a novel method of calculating present discounted values, stationary promise-keeping, helps establish a close connection between the reliability of the institution’s reports and the Kamenica and Gentzkow (2011) commitment benchmark. It is striking that this connection still holds when the benchmark type is not available (in the set of behavioral types) to be imitated.

Two Approaches to Iterated Reasoning in Games

Level-k analysis and epistemic game theory are two different ways of investigating iterative reasoning in games. This paper explores the relationship between these two approaches. An important difference between them is that level-k analysis begins with an exogenous anchor on the players’ beliefs, while epistemic analysis begins with arbitrary epistemic types (hierarchies of beliefs). To close the gap, we develop the concept of a level-k epistemic type structure, which incorporates the exogenous anchor. We also define a complete level-k type structure where the exogenous anchor is the only restriction on hierarchies of beliefs. One might conjecture that, in a complete structure, the strategies that can be played under rationality and (m − 1)th-order belief of rationality are precisely those strategies played by a level-k player, for any k ≥ m. In fact, we prove that the strategies that can be played are the m-rationalizable strategies (i.e., the strategies that survive m rounds of elimination of strongly dominated strategies). This surprising result says that level-k analysis and epistemic game theory are two genuinely different approaches, with different implications for inferring the players’ reasoning about rationality from their observed behavior.

Posterior-Mean Separable Costs of Information Acquisition

We analyze a problem of revealed preference given state-dependent stochastic choice data in which the payoff to a decision maker (DM) depends only on their beliefs about posterior means. Often, the DM must also learn about or pay attention to the state; in applied work on this subject, it is often assumed that the costs of such learning are linear in the distribution over posterior means. We provide testable conditions to identify whether this assumption holds. This allows for the use of information design techniques to solve the DM's problem.
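In notation of my own choosing (not necessarily the paper's), linearity in the distribution over posterior means says that the cost of inducing a distribution π of posterior means takes the form

    \[
    C(\pi) = \mathbb{E}_{m \sim \pi}[\psi(m)] - \psi(m_0), \qquad \psi \text{ convex},
    \]

with m_0 the prior mean. The testable conditions mentioned in the abstract are what would let an analyst check whether observed state-dependent stochastic choice is consistent with some such ψ.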


BGSE Workshop


Winter Term 2023/24

Equivalence of Strategy-proofness and Directed Local Strategy-proofness under Preference Extensions

Coarse Information Design

We study an information design problem with a continuous state and a discrete signal space. Under convex value functions, the optimal information structure is interval-partitional and exhibits a dual expectations property: each induced signal is the conditional mean (taken under the prior density) of its interval, and each interval cutoff is the conditional mean (taken under the value function curvature) of the interval formed by neighbouring signals. This property makes it possible to examine which parts of the state space are more finely partitioned and facilitates comparative statics analysis. The analysis can be extended to general value functions and adapted to study coarse mechanism design.
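The dual expectations property can be transcribed directly, under hypothetical notation (prior density f on an interval state space, convex value function V, cutoffs c_0 < c_1 < ... < c_n, induced signals s_1, ..., s_n):

    \[
    s_i = \frac{\int_{c_{i-1}}^{c_i} \theta f(\theta)\, d\theta}{\int_{c_{i-1}}^{c_i} f(\theta)\, d\theta},
    \qquad
    c_i = \frac{\int_{s_i}^{s_{i+1}} \theta V''(\theta)\, d\theta}{\int_{s_i}^{s_{i+1}} V''(\theta)\, d\theta}.
    \]

Each signal is the prior-weighted mean of its interval; each cutoff is the curvature-weighted mean of the interval between its neighbouring signals. Regions where V'' is large therefore attract cutoffs, which is what pins down where the partition is finer.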

Search Disclosure

Recent advances in online tracking technologies have enabled online firms to inform their rivals that a consumer has obtained an offer from them. We call the provision of this information search disclosure and integrate it into the Wolinsky (1986) model of sequential search. We show that firms voluntarily conduct search disclosure only if search costs are low or price revisions are infeasible. The information exchange that can emerge in equilibrium enables price discrimination that reduces consumer surplus and total welfare. By contrast, mandating firms to use search disclosure at all times can raise consumer surplus and total welfare.

Data Linkage between Markets: Does the Emergence of an Informed Insurer Cause Consumer Harm?

A merger of two companies active in seemingly unrelated markets creates data linkage: by operating in a product market, the merged company acquires an informational advantage in an insurance market where companies compete in menus of contracts. In the insurance market, the informed insurer earns rent through cream-skimming. Some of this rent is passed on to consumers in the product market. Overall, the data linkage makes consumers better off when the insurance market is competitive and, under some conditions, even when the insurance market is monopolistic. The data-sharing requirement and concerns of long-term monopolization are discussed.

Moral hazard and adverse selection under the generalized distribution approach (brown bag)

We study the design of optimal contracts through which a risk-neutral principal motivates a risk-averse agent to produce output. This principal-agent problem is formulated under the generalized distribution approach, where the agent can choose an arbitrary distribution of the output, at a Kullback-Leibler divergence cost. We focus on the case where the agent has private information about the production environment and show that the optimal menu of contracts exhibits the standard ‘no distortion at the top’ property. Under further assumptions, the optimal menu features full screening.
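A sketch of why the generalized distribution approach is tractable, assuming a KL cost scaled by λ against a base distribution Q and a contract paying the agent utility w(x) (the notation is mine): the agent's subproblem has a standard closed-form solution,

    \[
    \max_{P} \Big\{ \mathbb{E}_P[w(x)] - \lambda\, \mathrm{KL}(P \,\|\, Q) \Big\}
    = \lambda \log \mathbb{E}_Q\big[ e^{w(x)/\lambda} \big],
    \]

attained by the exponential tilt dP*/dQ ∝ e^{w(x)/λ}. The agent's behavior is then summarized by a smooth certainty equivalent of the contract, which is what makes screening over privately known production environments manageable.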

Advocacy and cheap talk (brown bag)

We study advocacy in a model of information investigation and communication, with the latter taking place via cheap talk. The question of interest is whether to assign the task of investigating (a piece of) unverifiable information, which is then communicated to a decision maker, to one or two investigators. Conceptually, this is related to Dewatripont and Tirole (1999) in the sense that investigators are a priori unbiased but can be endogenously turned into advocates. However, a key difference in our model is the role of information, which we treat as unverifiable and manipulable, so that communication takes the form of cheap talk. In contrast to Dewatripont and Tirole (1999), we find that assigning one investigator is weakly preferred to two investigators by the decision maker. Applications include the comparison of legal systems and centralized versus decentralized information investigation in multi-divisional organizations.

Eliciting information from multiple experts via grouping

A decision maker (DM) seeks to determine whether to adopt a new policy or maintain the status quo. To do so, she consults (finitely many) experts whose common interests differ significantly from those of the DM. As suggested by Wolinsky (2002), partial communication ("grouping mechanisms") among experts can, requiring neither transfers nor commitment, result in the revelation of more information than full communication: by allowing for communication only within groups of experts and, hence, changing the events in which votes are pivotal, the DM may be able to manipulate experts' strategies to her advantage. We elaborate on this, characterising optimal grouping mechanisms and conditions under which grouping can improve upon full communication.

Feed for good? On the effects of personalization algorithms in social platforms

In this paper, a social media platform governs the exchange of information among users with preferences for sincerity and conformity by providing personalized feeds. We show that the pursuit of engagement maximization results in the proliferation of echo chambers. A monopolistic platform implements an algorithm that disregards social learning and provides feeds that primarily consist of content from like-minded individuals. We study the consequences on learning and welfare resulting from transitioning to this algorithm from the previously employed chronological feed. While users' experience improves under the platform's optimal algorithm, social learning is worsened. Indeed, learning vanishes in large populations. However, the platform could create value by using its privileged information to design an algorithm that balances learning and engagement, maximizing users' welfare. We discuss interoperability as a possible regulatory solution that would eliminate entry barriers in platform competition caused by network effects, thereby inducing competing platforms to adopt the socially optimal algorithm.

Decentralized Many-to-One Matching with Random Search

I analyze a canonical many-to-one matching market within a decentralized search model with frictions, where a finite number of firms and workers meet randomly until the market clears. I compare the stable matchings of the underlying market and equilibrium outcomes when time is nearly costless. In contrast to the case where each firm has just a single vacancy, I show that stable matchings are not obtained as easily. In particular, there may be no Markovian equilibrium that uniformly implements either the worker- or the firm-optimal stable matching in every subgame. The challenge results from the firms' ability to withhold capacity strategically. Yet, this is not the case for markets with vertical preferences on one side, and I construct the equilibrium strategy profile that leads to the unique stable matching almost surely. Moreover, multiple vacancies enable firms to implicitly collude and achieve unstable but firm-preferred matchings, even under Markovian equilibria. Finally, I identify one sufficient condition on preferences to rule out such opportunities.

Optimal Dynamic Allocation of Attention with Exogenous Terminal Rewards

We study optimal dynamic information acquisition from conclusive Poisson news with prior-independent cost. The decision maker dynamically allocates limited attention across news sources, before stopping endogenously and obtaining a belief-dependent terminal reward. Confirmatory learning is identified as optimal in the absence of incentives for hastening decision-making. Building upon the work of Che and Mierendorff (2019), we analyze various terminal reward structures and embed our findings in a management application focused on a consultant's optimal learning strategy within a prescribed compensation scheme.

Recruitment and Information Provision in Auctions with Learning

In auctions, I explore the interaction between buyers' flexible information acquisition and the seller’s incentives for recruitment and information provision. Contrary to the literature on entry costs, I find that limiting participation is never optimal. The seller’s incentive for information provision is extremal. Different recruitment costs induce distinct auction settings: high recruitment costs deter an active auction; intermediate costs lead to a two-buyer auction without information provision and potential obfuscation; low costs induce an auction with many participants and maximal information provision.

Multidimensional Learning with Misspecified Interactions

We investigate long-term learning outcomes in an exogenous learning environment with multidimensional states and signals under misspecification. We provide a convergence result and general properties of limit beliefs. Focusing on assessing the value of additional information, we find that there is no universally beneficial source: For every possible structure, there exists a scenario where incorporating the information results in long-term beliefs that are worse for the agent. Understanding the true signal structure does not necessarily help in determining which structures are beneficial in a concrete situation, but understanding the agent's (mis-)perception can do so.

Robust Equilibria in Generic Extensive-form Games

We prove the 2-player, generic extensive-form case of the conjecture of Govindan and Wilson (1997a,b) and Hauk and Hurkens (2002) stating that an equilibrium component is essential in every equivalent game if and only if the index of the component is nonzero. This provides an index-theoretic characterization of the concept of hyperstable components of equilibria in generic extensive-form games, first formulated by Kohlberg and Mertens (1986).

Compound Lotteries without Compound Independence

Compound lotteries are useful modelling tools for information preferences, ambiguity, and dynamic choice. A key assumption in past treatments of preferences over such objects has been ‘Compound Independence’, a weakened version of Independence which surprisingly led back to expected utility, even under weaker assumptions. I present two representation theorems that do away with Compound Independence and offer new recursive utility functions that represent wider preferences over multi-stage lotteries. I characterize risk and information attitudes for such preferences and offer an application to investor behavior which rationalizes changing information preferences and myopic loss aversion.

A Model of Decision Confidence Formation

We study informational dissociations between decisions and decision confidence. We explore the consequences of a dual-system model: the decision system and confidence system have distinct goals, but share access to a source of noisy and costly information about a decision-relevant variable. The decision system aims to maximize utility while the confidence system monitors the decision system and aims to provide good feedback about the correctness of the decision. In line with existing experimental evidence showing the importance of post-decisional information in confidence formation, we allow the confidence system to accumulate information after the decision. We aim to provide a statistical foundation for the post-decisional stage (used in descriptive models of confidence). However, we find that it is not always optimal to engage in the second stage, even for a given individual in a given decision environment. In particular, there is scope for post-decisional information acquisition only for relatively fast decisions. Hence, a strict distinction between one-stage and two-stage theories of decision confidence may be misleading because both may manifest themselves under one underlying mechanism in a non-trivial manner.

Product Differentiation with Partially Informed Consumers

We investigate a Hotelling model of spatial competition, featuring two firms and a continuum of consumers with finite reservation prices. Consumers face uncertainty about their locations but obtain a costless signal provided by an information designer. Firms first select locations and subsequently set prices. Our focus is on identifying optimal signal structures that maximize either total surplus or consumer surplus. We find that the signal structures necessary to achieve these objectives depend heavily on the reservation prices.

Screening Knowledge

A principal (she) tests an agent’s (he) knowledge of a subject matter. She has preferences over his unobserved quality, which is correlated with his knowledge. Modeling the subject matter as an unknown state and knowledge as beliefs over it, I show that optimal tests are simple: They take the form of True-False, weighted True-False or True-False-Unsure, regardless of the principal’s preferences, the distribution of the agent’s beliefs, its correlation with his quality or his knowledge thereof. The need to elicit knowledge forces the principal to trade-off the efficacy of the test in terms of whom it rewards, against how much it rewards them. If there is an ex-ante “obvious” answer, the optimal resolution of this trade-off leads to a partial penalty for that answer, even if it is correct, or a partial reward for a “counterintuitive” answer, even if it is incorrect. When the principal can pick the subject matter, she picks one that admits no such ex-ante obvious answer. In this case, the highly prevalent True-False test is always optimal, regardless of the principal's preferences, agent’s learning, or the specific optimal choice of the subject matter.

Summer Term 2023

Efficient Mechanisms under Unawareness

We study the design of mechanisms under asymmetric awareness and asymmetric information. Unawareness refers to the lack of conception rather than the lack of information. With limited awareness, an agent's message space is type-dependent because an agent cannot misrepresent herself as a type that she is unaware of. Nevertheless, we show that the revelation principle holds. The revelation principle is of limited use, though, because a mechanism designer is hardly able to commit to outcomes for type profiles of which he is unaware. Yet the mechanism designer can at least commit to properties of social choice functions, like efficiency given ex post awareness. Assuming quasi-linear utilities, private values, and welfare isotonicity in awareness, we show that if a social choice function is utilitarian ex post efficient, then it is implementable under pooled agents' awareness in conditional dominant strategies. That is, it is possible to reveal all asymmetric awareness among agents and implement the welfare-maximizing social choice function in conditional dominant strategies without the social planner needing to be fully aware ex ante. To this end, we develop dynamic versions of the Groves and Clarke mechanisms along which true types are revealed and subsequently elaborated at endogenous higher awareness levels. We explore how asymmetric awareness affects budget balance and participation constraints.

Simultaneous bidding in second-price auctions

In this paper, we analyze a model of competing sealed-bid second-price auctions where bidders have unit demand and can bid on multiple auctions simultaneously. We show that there is no symmetric pure equilibrium with strictly increasing strategies, unlike in standard auction games. However, a symmetric mixed-strategy equilibrium exists, where all bidders will bid on all available auctions with probability one. This holds true for any mixed equilibrium. We then focus on two specific scenarios: one with two auctions and three bidders, and the other with two auctions and two bidders. For the case of three bidders, we identify a pure equilibrium. In contrast, for the case of two bidders, we find a continuum of mixed equilibria.
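A toy calculation, not the paper's model, illustrating why truthful pure strategies break down: with unit demand, a bidder who submits her value in two simultaneous second-price auctions risks winning, and paying in, both. The uniform rival bids and the value v = 0.8 below are hypothetical choices for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000
    v = 0.8                    # bidder's value for a single unit
    b = v                      # truthful bid, for illustration
    r1 = rng.uniform(size=n)   # highest rival bid in auction 1
    r2 = rng.uniform(size=n)   # highest rival bid in auction 2
    win1, win2 = b > r1, b > r2

    # Bid in auction 1 only: win when b beats the rivals, pay the rivals' bid.
    single = np.where(win1, v - r1, 0.0).mean()

    # Bid b in both auctions: the unit is valued once, but the bidder pays
    # in every auction she wins -- the exposure cost of simultaneous bidding.
    both = (v * (win1 | win2) - win1 * r1 - win2 * r2).mean()

    print(f"one auction:   {single:.3f}")   # about 0.32
    print(f"both auctions: {both:.3f}")     # about 0.13

Truthful bidding in both auctions here yields roughly 0.13 against 0.32 from a single auction, so the exposure cost overturns the usual second-price logic; this is consistent with the finding that equilibria involve mixing rather than symmetric strictly increasing pure strategies.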

Informing to Divert Attention

We study a multidimensional Sender-Receiver game in which Receiver can acquire limited information after observing the Sender's signal. Depending on the parameters describing the conflict of interest between Sender and Receiver, we characterise optimal information disclosure and the information acquired by Receiver as a response. We show that in the case of partial conflict of interest (aligned on some dimensions and misaligned on others), Sender uses the multidimensionality of the environment to divert Receiver's attention away from the dimensions of misalignment. Moreover, information can have negative value, in the sense that Receiver would be better off if she could commit not to extract private information or to have access to information of lower quality. We present applications to informational lobbying and optimal bonus policies.

Incentives and Efficiency in Constrained Allocation Mechanisms

We study private-good allocation mechanisms where an arbitrary constraint delimits the set of feasible joint allocations. This generality provides a unified perspective over several prominent examples that can be parameterized as constraints in this model, including house allocation, roommate assignment, and social choice. We characterize the set of two-agent strategy-proof and Pareto efficient mechanisms, showing that every such mechanism is a form of “local dictatorship.” For more agents, we show that an N-agent mechanism is group strategy-proof if and only if all its two-agent marginal mechanisms (defined by holding fixed all but two agents’ preferences) are individually strategy-proof and Pareto efficient, allowing us to leverage the two-agent characterization for more general problems. To illustrate their usefulness, we apply these results to the roommates problem to provide the first characterization of all group strategy-proof and Pareto efficient mechanisms, which turn out to be sequential dictatorships. Our results also yield a novel proof of the Gibbard–Satterthwaite Theorem. Finally, we introduce a new class of mechanisms, which we call “local priority” mechanisms, which exist for all constraints and subsume many important classes of existing mechanisms.

Screening: A Unified Geometric Perspective

We investigate single-agent mechanism design with arbitrary restrictions on the agent’s vNM preferences over a finite set of outcomes. This covers many standard problems with or without transfers, including the (multi-good) monopolistic seller problem. We characterize incentive-compatible mechanisms through their associated delegation sets, convex bodies within the unit simplex. Every extreme point of the set of incentive-compatible mechanisms grants the agent a veto, allowing them to choose, for any outcome, a lottery that excludes it. Determining whether a veto mechanism is an extreme point corresponds to solving the indecomposability problem for convex bodies as introduced by Gale (1954). In one-dimensional type spaces, we find that the principal’s ex-ante expected utility is maximized by offering a menu with at most three options. However, for multi-dimensional type spaces, no such simplification exists: the set of (exposed) extreme points is dense in the set of veto-granting mechanisms. We apply these insights to derive known and novel results about the monopolistic seller problem.

A Robust Characterization of Nash Equilibrium

We give a robust characterization of Nash equilibrium by postulating coherent behavior across varying games. Nash equilibrium is the only solution concept that satisfies consequentialism, consistency, and rationality. It follows that every equilibrium refinement violates at least one of these properties. We moreover show that every solution concept that approximately satisfies consequentialism, consistency, and rationality returns approximate Nash equilibria. The latter approximation can be made arbitrarily good by increasing the approximation of the axioms. This result extends to various natural subclasses of games such as two-player zero-sum games, potential games, and graphical games.

How to get advice from reputation concerned experts: A mechanism design approach

We examine how a decision maker (DM) should organize communication with experts who are only concerned about improving their own reputation rather than helping her per se. Employing a mechanism design approach, we consider all possible ways in which this communication could be organized. We characterize when the experts’ reputation concerns prevent the DM from learning the information necessary to make a first-best choice. We show that when the first best is not achievable, it is never optimal for the DM to meet with the experts privately. She obtains better results when she uses a communication protocol where the experts engage in a debate but the DM is left in the dark about the contribution of each expert towards the final recommendation.

Optimal testing in disclosure games

We study a disclosure game between an informed sender and a receiver where the receiver has the option to gather partial information through a test. We characterize the optimal binary test and show that the receiver sacrifices informativeness of the test to incentivize disclosure. Specifically, by pooling medium states with low states, the receiver induces disclosure of medium states and thus, in equilibrium, observes more information.

Adversarial Forecasters, Suspense, and Randomization

An adversarial forecaster representation sums an expected utility function and a measure of surprise that depends on an adversary’s forecast. These representations are concave and satisfy a smoothness condition, and any concave preference relation that satisfies the smoothness condition has an adversarial forecaster representation. Because of concavity, the agent typically prefers to randomize. We characterize the support size of optimally chosen lotteries, and how it depends on preferences for surprise.

A Theory of Auditability for Allocation and Social Choice Problems

In centralized market mechanisms, individuals may not fully observe other participants' type reports. Hence, the mechanism designer may deviate from the promised mechanism without the individuals being able to detect these deviations. In this paper, we develop a theory of auditability for allocation and social choice problems. Namely, we measure a mechanism's auditability by the smallest number of individuals that can jointly detect any deviation. Our theory reveals stark contrasts between prominent mechanisms' auditability properties in various applications. For priority-based allocation problems, we find that the Immediate Acceptance mechanism is maximally auditable, in the sense that any deviation can always be detected by just two individuals, whereas, at the other extreme, the Deferred Acceptance mechanism is minimally auditable, in the sense that some deviations may go undetected unless there is full information about everyone's reports. For a class of mechanisms that can be implemented as Deferred Acceptance in systematically modified problems, we establish a relation between a mechanism's auditability and the uniqueness of stable outcomes in the modified problems. For the auction setup, we show that the first-price and all-pay auction mechanisms have an auditability index of two, whereas the second-price auction mechanism is minimally auditable. For voting problems with a binary outcome, we characterize the dictatorial rule as the unique voting mechanism with an auditability index of one, and we characterize the majority voting rule as the unique most auditable anonymous voting mechanism. Finally, for the choice-with-affirmative-action setting, we compare the auditability indices of prominent reserves mechanisms. We establish that a particular reserves rule implementation has superior auditability properties.

Decentralized Many-to-One Matching with Bilateral Search

I analyze a finite decentralized many-to-one search model, where firms and workers meet randomly and time is nearly costless. In line with the existing literature, stable matchings of the many-to-one market can be enforced as search equilibria. However, in many-to-one search, firms collect workers in a cumulative manner. For this reason, unlike centralized matching markets, the collective structure of the firms affects the search process fundamentally. For instance, dynamically stable matchings may not be sustained as search equilibria because of the strategic usage of seats over time. Furthermore, although stability in many-to-one markets can be analyzed through their related one-to-one markets, the many-to-one search model is essentially different from its related one-to-one counterpart. One sufficient condition for the equilibria in many-to-one markets to coincide with the equilibria of the related one-to-one market is that firms have additively separable utility over workers.

Feed for good? On regulating social media platforms

Social media platforms govern the exchange of information between users by providing personalized feeds. This paper shows that the pursuit of engagement maximization, driven by monetary incentives, results in low-quality communication and the proliferation of echo chambers. A monopolistic platform disregards social learning and curates feeds that primarily consist of content from like-minded individuals. We study the consequences on learning and welfare resulting from transitioning to this algorithm from the previously employed chronological feed. We show that the platform could create value by using its privileged information to design algorithms that balance learning and engagement, maximizing users' welfare. However, incentivizing a monopolist to embrace such an approach presents challenges. To address this, we propose interoperability as a measure to overcome network effects in platform competition, level the playing field, and prompt platforms to adopt the socially optimal algorithm.
