Universität Bonn

Department of Economics

Seminars & Workshops

Micro Theory Seminar & BGSE Workshop

Rooms and dates for the winter term 2023/24

  • BGSE Workshop: room 0.017, Wednesdays at 12 p.m.
  • Micro Theory Seminar: faculty room, Wednesdays at 4:30 p.m.

Micro Theory Seminar

The seminar usually takes place on Wednesday, 16:30-18:00, in the faculty room, Adenauerallee 24-42, 53113 Bonn.

Please find below the guests and dates for upcoming seminars:

Winter Term 2023/24

The Costly Wisdom of Inattentive Crowds

Incentivizing the acquisition and aggregation of information is a key task of the modern economy (e.g., financial markets). We study the design of optimal mechanisms for this task. A population of rationally inattentive (RI) agents can flexibly learn about a common state of nature, subject to uniformly posterior separable (UPS) information costs. A principal, who aims to procure a given information structure from the agents at minimal cost, can design general dynamic mechanisms with report- and state-contingent payments. If the agents are risk-neutral, prediction markets implement the first-best. If the agents are risk-averse, no mechanism can approximate the first-best cost—not even those that harness the “wisdom of the crowd” by employing a large number of “informationally small” agents. This inefficiency derives from the combination of agents’ moral hazard and adverse selection. Our characterization of incentive compatibility, which exploits an equivalence between proper scoring rules and UPS information costs, is tractable and portable to other design settings with RI agents (e.g., principal-expert and screening problems).

Optimal Security Design for Risk-Averse Investors

We use the tools of mechanism design, combined with the theory of risk measures, to analyze a model where a cash-constrained owner of an asset with stochastic returns raises capital from a population of investors who differ in their risk aversion and budget constraints. The distribution of the asset's cash flow is assumed here to be common knowledge: no agent has private information about it. The issuer partitions and sells the asset's realized cash flow into several asset-backed securities, one for each type of investor. The optimal partition conforms to the commonly observed practice of tranching (e.g., senior debt, junior debt, and equity), where senior claims are paid before the subordinate ones. The holders of more senior/junior tranches are determined by the relative risk appetites of the different types of investors and of the issuer, with the more risk-averse agents holding the more senior tranches. Tranching endogenously arises here in an optimal mechanism because of simple economic forces: the differences in risk appetites among agents, and in the budget constraints they face.
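As a reading aid only (not taken from the paper), the waterfall logic behind tranching can be sketched as follows; the function and its arguments are hypothetical, and the sketch simply pays promised claims out of the realized cash flow in order of seniority.

```python
def tranche_payoffs(cash_flow, face_values):
    """Split a realized cash flow across tranches in order of seniority.

    `face_values` lists each tranche's promised payment, most senior first;
    whatever remains after all promised claims are paid goes to the residual
    (equity) claimant.
    """
    remaining = cash_flow
    payoffs = []
    for face in face_values:
        paid = min(remaining, face)   # senior claims are paid before subordinate ones
        payoffs.append(paid)
        remaining -= paid
    payoffs.append(remaining)         # residual equity claim
    return payoffs


# e.g. a realized cash flow of 70 against a senior claim of 50 and a junior
# claim of 30 pays [50, 20, 0]: the junior tranche absorbs the shortfall first.
print(tranche_payoffs(70, [50, 30]))
```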

Dynamic Contracting with Flexible Monitoring

We study a principal's joint design of optimal monitoring and compensation schemes to incentivize an agent by incorporating information design into a dynamic contracting framework. The principal can flexibly allocate her limited monitoring capacity between seeking evidence that confirms or contradicts the agent's effort, as the basis for reward or punishment. When the agent's continuation value is low, the principal seeks only confirmatory evidence. When it exceeds a threshold, the principal seeks mainly contradictory evidence. Importantly, the agent's effort is perpetuated if and only if he is sufficiently productive.

A Measure of Behavioral Heterogeneity

In this paper we propose a novel way to measure behavioral heterogeneity in a population of stochastic individuals. Our measure is choice-based; it evaluates the probability that, over a randomly selected menu, the sampled choices of two sampled individuals differ. We provide axiomatic foundations for this measure and a decomposition result that separates heterogeneity into its intra- and inter-personal components.
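Purely as an illustration (not from the paper), a Monte Carlo version of such a choice-based measure might look as follows; the representation of individuals as stochastic choice rules and all names are hypothetical.

```python
import random

def heterogeneity(menus, individuals, draws=100_000, seed=0):
    """Estimate the probability that two randomly sampled individuals make
    different choices from a randomly sampled menu.

    Each individual is a stochastic choice rule: a function mapping
    (menu, alternative) to a nonnegative choice weight.
    """
    rng = random.Random(seed)
    differ = 0
    for _ in range(draws):
        menu = rng.choice(menus)               # sample a menu
        i, j = rng.choices(individuals, k=2)   # sample two individuals
        pick_i = rng.choices(menu, weights=[i(menu, a) for a in menu])[0]
        pick_j = rng.choices(menu, weights=[j(menu, a) for a in menu])[0]
        differ += pick_i != pick_j
    return differ / draws


# e.g. two hypothetical individuals with fixed choice probabilities over {a, b}:
alice = lambda menu, x: {"a": 0.9, "b": 0.1}[x]
bob   = lambda menu, x: {"a": 0.2, "b": 0.8}[x]
print(heterogeneity([("a", "b")], [alice, bob], draws=50_000))
```

Because sampling is with replacement, the same stochastic individual can be drawn twice and disagree with herself, which is one way to see where an intra-personal component of heterogeneity can come from.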

A mechanism-design approach to property rights

We propose a framework for studying the optimal design of rights relating to the control of an economic resource - which we broadly refer to as property rights. An agent makes an investment decision, affecting her valuation for the resource, and then participates in a trading mechanism chosen by a principal in a sequentially rational fashion, leading to a hold-up problem. A designer - who would like to incentivize efficient investment and whose preferences may differ from those of the principal - can endow the agent with a menu of rights that determine the agent's set of outside options in the interaction with the principal. We characterize the optimal rights as a function of the designer's and the principal's objectives, and the investment technology. We find that optimal rights typically differ from a classical property right giving the agent full control over the resource. In particular, we show that the optimal menu requires at most two types of rights, including an option-to-own, which grants the agent control over the resource upon paying a pre-specified price.
 






Summer Term 2023

Mechanism Design with Restricted Communication

We consider a Sender–Receiver environment where the sender is informed of states and the receiver chooses actions. There is a communication channel between them consisting of sets of input/output messages and a fixed transition probability. The sender reaches out to the receiver through the channel, which limits communication in two ways: the number of available messages might be small, and messages might be noisy. We consider a mechanism design setup whereby the receiver commits to a mechanism which selects a distribution of actions and possibly monetary transfers, contingent on output messages. We aim to characterize the joint distributions which can be implemented by communication over the channel, given the incentives of the sender. We consider both one-shot problems and series of i.i.d. problems. In particular, we show that when the sender and the receiver are engaged in a series of problems, linking decisions together is a more efficient instrument than monetary transfers.

Information Acquisition in Matching Markets: The Role of Price Discovery

We explore the acquisition and flow of information in matching markets through a model of college admissions with endogenous costly information acquisition. We extend the notion of stability to this partial information setting, and introduce regret-free stability as a refinement that additionally requires optimal student information acquisition. We show regret-free stable outcomes exist, and finding them is equivalent to finding appropriately defined market-clearing cutoffs. To understand information flows, we recast matching mechanisms as price-discovery processes. No mechanism guarantees a regret-free stable outcome, because information deadlocks imply some students must acquire information suboptimally. Our analysis suggests approaches for facilitating efficient price discovery, by leveraging historical information or market sub-samples to estimate cutoffs. We show that mechanisms that use such methods to advise applicants on their admission chances yield approximately regret-free stable outcomes. A survey of university admission systems highlights the practical importance of providing applicants with information about their admission chances.

Unpaired Kidney Exchange: Overcoming Double Coincidence of Wants without Money

For an incompatible patient-donor pair, kidney exchanges often forbid receipt-before-donation (the patient receives a kidney before the donor donates) and donation-before-receipt, causing a double-coincidence-of-wants problem. We study an algorithm, the Unpaired kidney exchange algorithm, which eliminates this problem. In a dynamic matching model, we show that waiting time of patients under the Unpaired is close to optimal and substantially shorter than under widely used algorithms. Using a rich administrative dataset from France, we show that the Unpaired achieves a match rate of 63 percent and an average waiting time of 176 days for transplanted patients. The (infeasible) optimal algorithm is only slightly better (64 percent and 144 days); widely used algorithms deliver less than 40 percent and at least 232 days. We discuss a range of solutions that can address the potential practical incentive challenges of the Unpaired. In particular, we extend our analysis to an environment where a deceased donor waitlist can be integrated to improve the performance of algorithms. We show that our theoretical and empirical comparisons continue to hold. Finally, based on these analyses, we propose a practical version of the Unpaired algorithm.

Informationally Robust Cheap-Talk

We study the robustness of cheap-talk equilibria to infinitesimal private information of the receiver in a model with a binary state-space and state-independent sender-preferences. We show that the sender-optimal equilibrium is robust if and only if this equilibrium either reveals no information to the receiver or fully reveals one of the states with positive probability. We then characterize the actions that can be played with positive probability in any robust equilibrium. Finally, we fully characterize the optimal sender-utility when the receiver’s private information is binary, and provide bounds for the optimal sender-utility under general private information.

Should the Timing of Inspections be Predictable?

A principal hires an agent to work on a long-term project that culminates in a breakthrough or a breakdown. At each time, the agent privately chooses to work or shirk. Working increases the arrival rate of breakthroughs and decreases the arrival rate of breakdowns. To motivate the agent to work, the principal conducts costly inspections. She fires the agent if shirking is detected. We characterize the principal’s optimal inspection policy. Periodic inspections are optimal if work primarily speeds up breakthroughs. Random inspections are optimal if work primarily delays breakdowns. Crucially, the agent’s actions determine his risk attitude over the timing of punishments.

Early-Career Discrimination: Spiraling or Self-Correcting?

Do workers from social groups with comparable productivity distributions obtain comparable lifetime earnings? We study how a small amount of early-career discrimination propagates over time when workers’ productivity is revealed through employment. In breakdown learning environments that track primarily on-the-job failures, such discrimination spirals into a substantial lifetime earnings gap for groups of comparable productivity, whereas in breakthrough learning environments that track successes, early discrimination self-corrects so as to guarantee comparable lifetime earnings. This contrast is robust to large labor markets, flexible wages, inconclusive learning, investment in productivity, and misspecified employers’ beliefs.

Selecting the Best when Selection is Hard: The Persistent Effects of Luck

Many economic institutions and organizational practices make early success have a persistent effect on final outcomes. By granting additional resources, favorable treatment, or other forms of bias to early strong performers, they raise the likelihood with which these early strong performers become final winners. When performance is informative about ability differentials, such bias can serve as a tool to increase “selective efficiency”, i.e. the allocation of resources or decision-making authority to the most talented. However, in situations where noise swamps ability differences in determining relative performance, the use of bias would have the sole effect of making luck persistent. Such an outcome would seem at odds with the meritocratic principle of requiring differences in economic outcomes to be attributable to ability or effort differentials. In this paper, we challenge this view by showing that even as noise swamps ability differences in driving performance, maximization of selective efficiency continues to require bias favoring early leaders. Moreover, inducing greater persistence of outcomes in noisier environments can be consistent with the objective of assigning resources to the most able.

Equilibrium Selection in Repeated Games with Patient Players

What determines the path of play in an infinitely repeated game? Typically the players’ interests are not perfectly aligned but there is scope for cooperation. Potential surplus could be shared in different ways. The folk theorems of repeated games provide no guidance about the outcome. In the more tractable setting where players can sign binding contracts after any history of play, Abreu and Pearce (2007) show that slight reputational perturbations of the game lead to predictions consistent with Nash bargaining with threats (Nash, 1953). In many settings of interest, such contracts are not available. Nonetheless, combining reputational perturbation with modest continuity and renegotiation conditions in two-person repeated games with patient players again isolates play that is consistent with Nash bargaining with threats.

Organizational Change and Reference-Dependent Preferences

Reference-dependent preferences can explain several puzzling observations on organizational change. Loss aversion clarifies why change is often slow or stagnant for long periods followed by a sudden boost in productivity during a crisis. Moreover, it accounts for the fact that different firms in the same industry can have significant productivity differences. The model also demonstrates the importance of expectation management even if all parties have rational expectations. Social preferences explain why it may be optimal to split up a firm into two different entities.

Reputation for a Degree of Honesty

Can reputation replace legal commitment for an institution making periodic public announcements? Near the limiting case of ideal patience, results of Fudenberg and Levine (1992) imply a positive answer in value terms, in the presence of a rich set of behavioral types. Little is known about equilibrium behavior in such reputational equilibria. Computational and analytic approaches are combined here to provide a detailed look at how reputations are managed. Behavior depends upon which of three reputational regions pertains after a history of play. These characterizations hold even far from the patient limit. Near the limit, a novel method of calculating present discounted values, stationary promise-keeping, helps establish a close connection between the reliability of the institution’s reports and the Kamenica and Gentzkow (2011) commitment benchmark. It is striking that this connection still holds when the benchmark type is not available (in the set of behavioral types) to be imitated.

Two Approaches to Iterated Reasoning in Games

Level-k analysis and epistemic game theory are two different ways of investigating iterative reasoning in games. This paper explores the relationship between these two approaches. An important difference between them is that level-k analysis begins with an exogenous anchor on the players’ beliefs, while epistemic analysis begins with arbitrary epistemic types (hierarchies of beliefs). To close the gap, we develop the concept of a level-k epistemic type structure, which incorporates the exogenous anchor. We also define a complete level-k type structure, in which the exogenous anchor is the only restriction on hierarchies of beliefs. One might conjecture that, in a complete structure, the strategies that can be played under rationality and (m − 1)th-order belief of rationality are precisely those strategies played by a level-k player, for any k ≥ m. In fact, we prove that the strategies that can be played are the m-rationalizable strategies (i.e., the strategies that survive m rounds of elimination of strongly dominated strategies). This surprising result says that level-k analysis and epistemic game theory are two genuinely different approaches, with different implications for inferring the players’ reasoning about rationality from their observed behavior.
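For readers unfamiliar with the elimination procedure mentioned above, here is a minimal sketch of m rounds of eliminating dominated strategies. It is not from the paper: for brevity it only checks domination by other pure strategies (strong domination in general also allows mixed strategies), and the function and data layout are hypothetical.

```python
import itertools

def insert_at(p, s, others):
    """Rebuild a full strategy profile with player p playing s."""
    prof = list(others)
    prof.insert(p, s)
    return tuple(prof)

def eliminate(payoffs, rounds):
    """Run `rounds` rounds of removing strategies strictly dominated by
    another pure strategy.

    `payoffs[p]` maps full pure-strategy profiles (tuples, one entry per
    player, with comparable labels such as strings) to player p's payoff.
    """
    n = len(payoffs)
    surviving = [sorted({prof[p] for prof in payoffs[p]}) for p in range(n)]
    for _ in range(rounds):
        updated = []
        for p in range(n):
            others = [surviving[q] for q in range(n) if q != p]
            profiles = list(itertools.product(*others))
            def dominated(t):
                # t is dominated if some s does strictly better against every
                # surviving profile of the opponents
                return any(
                    all(payoffs[p][insert_at(p, s, o)] > payoffs[p][insert_at(p, t, o)]
                        for o in profiles)
                    for s in surviving[p] if s != t
                )
            updated.append([t for t in surviving[p] if not dominated(t)])
        surviving = updated
    return surviving


# Example: Prisoner's Dilemma, where one round already eliminates "C" for both.
pd = [
    {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 4, ("D", "D"): 1},  # row player
    {("C", "C"): 3, ("C", "D"): 4, ("D", "C"): 0, ("D", "D"): 1},  # column player
]
print(eliminate(pd, rounds=1))  # [['D'], ['D']]
```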

Posterior-Mean Separable Costs of Information Acquisition

We analyze a problem of revealed preference given state-dependent stochastic choice data in which the payoff to a decision maker (DM) depends only on their beliefs about posterior means. Often, the DM must also learn about or pay attention to the state; in applied work on this subject, it is often assumed that the cost of such learning is linear in the distribution over posterior means. We provide testable conditions to identify whether this assumption holds. This allows for the use of information design techniques to solve the DM's problem.


BGSE Workshop


Winter Term 2023/24

Equivalence of Strategy-proofness and Directed Local Strategy-proofness under Preference Extensions

Coarse Information Design

We study an information design problem with continuous state and discrete signal space. Under convex value functions, the optimal information structure is interval-partitional and exhibits a dual expectations property: each induced signal is the conditional mean (taken under the prior density) of each interval; each interval cutoff is the conditional mean (taken under the value function curvature) of the interval formed by neighbouring signals. This property enables examination of which part of the state space is more finely partitioned and facilitates comparative statics analysis. The analysis can be extended to general value functions and adapted to study coarse mechanism design.
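Read literally, and in hypothetical notation (state θ with prior density f, convex value function v, cutoffs c_0 < c_1 < … < c_n, induced signals s_1, …, s_n), the dual expectations property described above can be written as

\[
s_k \;=\; \frac{\int_{c_{k-1}}^{c_k} \theta\, f(\theta)\,\mathrm{d}\theta}{\int_{c_{k-1}}^{c_k} f(\theta)\,\mathrm{d}\theta},
\qquad
c_k \;=\; \frac{\int_{s_k}^{s_{k+1}} \theta\, v''(\theta)\,\mathrm{d}\theta}{\int_{s_k}^{s_{k+1}} v''(\theta)\,\mathrm{d}\theta}
\quad\text{(interior cutoffs)},
\]

that is, each signal is the prior-weighted mean of its interval, while each interior cutoff is the curvature-weighted mean of the interval formed by its two neighbouring signals.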

Search Disclosure

Recent advances in online tracking technologies have enabled online firms to inform their rivals that a consumer has obtained an offer from them. We call the provision of this information search disclosure and integrate this into the Wolinsky (1986) model of sequential search. We show that firms only voluntarily conduct search disclosure if search costs are low or price revisions are infeasible. The information exchange that can emerge in equilibrium enables price discrimination that reduces consumer surplus and total welfare. By contrast, mandating firms to use search disclosure at all times can raise consumer surplus and total welfare.

Data Linkage between Markets: Does the Emergence of an Informed Insurer Cause Consumer Harm?

A merger of two companies active in seemingly unrelated markets creates data linkage: by operating in a product market, the merged company acquires an informational advantage in an insurance market where companies compete in menus of contracts. In the insurance market, the informed insurer earns rent through cream-skimming. Some of this rent is passed on to consumers in the product market. Overall, the data linkage makes consumers better off when the insurance market is competitive and, under some conditions, even when the insurance market is monopolistic. The data-sharing requirement and concerns of long-term monopolization are discussed.

Moral hazard and adverse selection under the generalized distribution approach

We study the design of optimal contracts through which a risk-neutral principal motivates a risk-averse agent to produce outputs. This principal-agent problem is formulated under the generalized distribution approach, where the agent can choose an arbitrary distribution of the output, at a Kullback-Leibler divergence cost. We focus on the case where the agent has private information about the production environment and show that the optimal menu of contracts exhibits a standard ‘no distortion at the top’ property. Under further assumptions, the optimal menu features full screening.

Advocacy and cheap talk

We study advocacy in a model of information investigation and communication with the latter taking place via cheap talk. The question of interest is whether to assign the task of investigating (a piece of) unverifiable information, which is then communicated to a decision maker, to one or two investigators. Conceptually, this is related to Dewatripont and Tirole (1999) in the sense that investigators are a priori unbiased but can be endogenously turned into advocates. However, a key difference in our model is the role of information, which we treat as unverifiable and manipulable, so that communication takes the form of cheap talk. In contrast to Dewatripont and Tirole (1999), we find that assigning one investigator is weakly preferred to two investigators by the decision maker. Applications include the comparison of legal systems and centralized versus decentralized information investigation in multi-divisional organizations.

Eliciting information from multiple experts via grouping

A decision maker (DM) seeks to determine whether to adopt a new policy or maintain the status quo. To do so, she consults (finitely many) experts whose common interests differ significantly from those of the DM. As suggested by Wolinsky (2002), partial communication ("grouping mechanisms") among experts can result in the revelation of more information than full communication, requiring neither transfers nor commitment: by allowing for communication only within groups of experts and, hence, changing the events in which votes are pivotal, the DM may be able to manipulate the experts' strategies to her advantage. We elaborate on this, characterising optimal grouping mechanisms and conditions under which grouping can improve upon full communication.

tba

tba

tba

tba

tba

tba

Summer Term 2023

Efficient Mechanisms under Unawareness

We study the design of mechanisms under asymmetric awareness and asymmetric information. Unawareness refers to the lack of conception rather than the lack of information. With limited awareness, an agent's message space is type-dependent because an agent cannot misrepresent herself as a type that she is unaware of. Nevertheless, we show that the revelation principle holds. The revelation principle is of limited use, though, because a mechanism designer is hardly able to commit to outcomes for type profiles of which he is unaware. Yet the mechanism designer can at least commit to properties of social choice functions, such as efficiency given ex post awareness. Assuming quasi-linear utilities, private values, and welfare isotonicity in awareness, we show that if a social choice function is utilitarian ex post efficient, then it is implementable under pooled agents’ awareness in conditional dominant strategies. That is, it is possible to reveal all asymmetric awareness among agents and implement the welfare-maximizing social choice function in conditional dominant strategies without the social planner needing to be fully aware ex ante. To this end, we develop dynamic versions of the Groves and Clarke mechanisms along which true types are revealed and subsequently elaborated at endogenous higher awareness levels. We explore how asymmetric awareness affects budget balance and participation constraints.

Simultaneous bidding in second-price auctions

In this paper, we analyze a model of competing sealed-bid second-price auctions where bidders have unit demand and can bid on multiple auctions simultaneously. We show that there is no symmetric pure equilibrium with strictly increasing strategies, unlike in standard auction games. However, a symmetric mixed-strategy equilibrium exists, where all bidders will bid on all available auctions with probability one. This holds true for any mixed equilibrium. We then focus on two specific scenarios: one with two auctions and three bidders, and the other with two auctions and two bidders. For the case of three bidders, we identify a pure equilibrium. In contrast, for the case of two bidders, we find a continuum of mixed equilibria.

Informing to Divert Attention

We study a multidimensional Sender-Receiver game in which the Receiver can acquire limited information after observing the Sender's signal. Depending on the parameters describing the conflict of interest between Sender and Receiver, we characterise the optimal information disclosure and the information acquired by the Receiver in response. We show that in the case of a partial conflict of interests (aligned on some dimensions and misaligned on others), the Sender uses the multidimensionality of the environment to divert the Receiver's attention away from the dimensions on which interests are misaligned. Moreover, there is a negative value of information in the sense that the Receiver would be better off if she could commit not to extract private information or to have access to information of lower quality. We present applications to informational lobbying and optimal bonus policies.

Incentives and Efficiency in Constrained Allocation Mechanisms

We study private-good allocation mechanisms where an arbitrary constraint delimits the set of feasible joint allocations. This generality provides a unified perspective over several prominent examples that can be parameterized as constraints in this model, including house allocation, roommate assignment, and social choice. We characterize the set of two-agent strategy-proof and Pareto efficient mechanisms, showing that every mechanism is a form of “local dictatorship.” For more agents, we show that an N-agent mechanism is group strategy-proof if and only if all its two-agent marginal mechanisms (defined by holding fixed all but two agents’ preferences) are individually strategy-proof and Pareto efficient, allowing us to leverage the two-agent characterization for more general problems. To illustrate their usefulness, we apply these results to the roommates problem to provide the first characterization of all group strategy-proof and Pareto efficient mechanisms, which turn out to be sequential dictatorships. Our results also yield a novel proof of the Gibbard–Satterthwaite Theorem. We finally introduce a new class of mechanisms, which we call “local priority” mechanisms, that exists for all constraints and subsumes many important classes of existing mechanisms.

Screening: A Unified Geometric Perspective

We investigate single-agent mechanism design with arbitrary restrictions on the agent’s vNM preferences over a finite set of outcomes. This covers many standard problems with or without transfers, including the (multi-good) monopolistic seller problem. We characterize incentive-compatible mechanisms through their associated delegation sets, convex bodies within the unit simplex. Every extreme point of the set of incentive-compatible mechanisms grants the agent a veto, allowing them to choose, for any outcome, a lottery that excludes it. Determining whether a veto mechanism is an extreme point corresponds to solving the indecomposability problem for convex bodies as introduced by Gale (1954). In one-dimensional type spaces, we find that the principal’s ex-ante expected utility is maximized by offering a menu with at most three options. However, for multi-dimensional type spaces, no such simplification exists: the set of (exposed) extreme points is dense in the set of veto-granting mechanisms. We apply these insights to derive known and novel results about the monopolistic seller problem.

A Robust Characterization of Nash Equilibrium

We give a robust characterization of Nash equilibrium by postulating coherent behavior across varying games. Nash equilibrium is the only solution concept that satisfies consequentialism, consistency, and rationality. It follows that every equilibrium refinement violates at least one of these properties. We moreover show that every solution concept that approximately satisfies consequentialism, consistency, and rationality returns approximate Nash equilibria. The latter approximation can be made arbitrarily good by increasing the approximation of the axioms. This result extends to various natural subclasses of games such as two-player zero-sum games, potential games, and graphical games.

How to get advice from reputation-concerned experts: A mechanism design approach

We examine how a decision maker (DM) should organize the communication with experts who are only concerned about improving their own reputation rather than helping her per se. Employing a mechanism design approach, we consider all possible ways in which this communication could be organized. We characterize when the experts’ reputation concerns prevent the DM from learning the information necessary to make a first-best choice. We show that when the first best is not achievable, it is never optimal for the DM to meet with the experts privately. She obtains better results when she uses a communication protocol where the experts engage in a debate but the DM is left in the dark about the contribution of each expert towards the final recommendation.

Optimal testing in disclosure games

We study a disclosure game between an informed sender and a receiver where the receiver has the option to gather partial information through a test. We characterize the optimal binary test and show that the receiver sacrifices informativeness of the test to incentivize disclosure. Specifically, by pooling medium states with low states, the receiver induces disclosure of medium states and thus, in equilibrium, observes more information.

Adversarial Forecasters, Suspense, and Randomization

An adversarial forecaster representation sums an expected utility function and a measure of surprise that depends on an adversary’s forecast. These representations are concave and satisfy a smoothness condition, and any concave preference relation that satisfies the smoothness condition has an adversarial forecaster representation. Because of concavity, the agent typically prefers to randomize. We characterize the support size of optimally chosen lotteries, and how it depends on preferences for surprise.

A Theory of Auditability for Allocation and Social Choice Problems

In centralized market mechanisms, individuals may not fully observe other participants' type reports. Hence, the mechanism designer may deviate from the promised mechanism without the individuals being able to detect these deviations. In this paper, we develop a theory of auditability for allocation and social choice problems. Namely, we measure a mechanism's auditability by the smallest number of individuals who can jointly detect any deviation. Our theory reveals stark contrasts between prominent mechanisms' auditability properties in various applications. For priority-based allocation problems, we find that the Immediate Acceptance mechanism is maximally auditable, in the sense that any deviation can always be detected by just two individuals, whereas, at the other extreme, the Deferred Acceptance mechanism is minimally auditable, in the sense that some deviations may go undetected unless there is full information about everyone's reports. For a class of mechanisms that can be implemented as Deferred Acceptance in systematically modified problems, we establish a relation between a mechanism's auditability and the uniqueness of stable outcomes in the modified problems. For the auction setup, we show that the first-price and the all-pay auction mechanisms have an auditability index of two, whereas the second-price auction mechanism is minimally auditable. For voting problems with a binary outcome, we characterize the dictatorial rule as the unique voting mechanism with an auditability index of one, and we characterize the majority voting rule as the unique most auditable anonymous voting mechanism. Finally, for the choice with affirmative action setting, we compare the auditability indices of prominent reserves mechanisms. We establish that a particular reserves rule implementation has superior auditability properties.

Decentralized Many-to-One Matching with Bilateral Search

I analyze a finite decentralized many-to-one search model, where firms and workers meet randomly and time is nearly costless. In line with the existing literature, stable matchings of the many-to-one market can be enforced as search equilibria. However, in many-to-one search, firms collect workers in a cumulative manner. For this reason, unlike centralized matching markets, the collective structure of the firms affects the search process fundamentally. For instance, dynamically stable matchings may not be sustained as search equilibria because of the strategic usage of seats over time. Furthermore, although stability in many-to-one markets can be analyzed through their related one-to-one markets, the many-to-one search model is essentially different from its related one-to-one counterpart. One sufficient condition for the equilibria in many-to-one markets to coincide with the equilibria of the related one-to-one market is that firms have additively separable utility over workers.

Feed for good? On regulating social media platforms

Social media platforms govern the exchange of information between users by providing personalized feeds. This paper shows that the pursuit of engagement maximization, driven by monetary incentives, results in low-quality communication and the proliferation of echo chambers. A monopolistic platform disregards social learning and curates feeds that primarily consist of content from like-minded individuals. We study the consequences on learning and welfare resulting from transitioning to this algorithm from the previously employed chronological feed. We show that the platform could create value by using its privileged information to design algorithms that balance learning and engagement, maximizing users' welfare. However, incentivizing a monopolist to embrace such an approach presents challenges. To address this, we propose interoperability as a measure to overcome network effects in platform competition, level the playing field, and prompt platforms to adopt the socially optimal algorithm.
