Project Details
Description
For more than half a century, the dominant approach for the mathematical analysis of algorithms in computer science has been "worst-case analysis", which evaluates their performance using the worst possible instances. On the positive side, a worst-case guarantee provides a useful signal regarding the robustness of an algorithm. However, it is well-known that this analysis can be unnecessarily pessimistic, often leading to uninformative performance bounds or impossibility results that may not reflect the real obstacles that arise in practice. These crucial shortcomings are making worst-case analysis less relevant, especially in light of the impressive advances in machine learning that give rise to very effective algorithms, most of which do not admit any non-trivial worst-case guarantees. Motivated by this tension between worst-case analysis and machine-learning algorithms, a surge of recent work on "algorithms with predictions" aims for the best of both worlds by designing robust algorithms that are guided by machine-learned predictions. In this project, the goal is to extend this learning-augmented framework beyond the analysis of algorithms, toward the more demanding task of designing mechanisms in the presence of self-interested agents (e.g., auctions for selling goods, elections for selecting candidates, or policies for scheduling computational jobs). Much like algorithms, mechanisms follow a sequence of steps that transform an input to an output, but designing mechanisms is more complicated because some of their input and output may be controlled by strategic agents whose goal is to maximize their own utility. Therefore, for a mechanism to be effective, it needs to account for the incentives of the participating agents. The learning-augmented mechanisms resulting from this research will be enhanced with machine-learned predictions regarding the preferences of the agents, allowing them to overcome overly pessimistic impossibility results and to have a transformative impact on the field of mechanism design.

An ideal learning-augmented mechanism is one that achieves strong performance guarantees when the predictions it is provided with are accurate, while simultaneously maintaining optimal worst-case guarantees, irrespective of how inaccurate the predictions may be. This project considers the design and analysis of learning-augmented mechanisms in central areas of the algorithmic mechanism design literature, such as auction design, mechanism design without money (where the mechanism is not permitted to use monetary rewards), and online mechanism design (where the mechanism needs to make irrevocable decisions in a dynamic fashion). Furthermore, the project also considers decentralized settings, where the mechanism induces a game among the participating agents and its quality is evaluated over the Nash equilibria of the game. For different canonical problems in each of these areas, the project explores the extent to which predictions can enable improved performance guarantees with respect to social welfare and revenue, which are the two standard objectives in algorithmic game theory.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
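As a purely illustrative sketch of the consistency/robustness tradeoff described above, the snippet below implements the classic ski-rental rule with a prediction from the broader algorithms-with-predictions literature (in the style of Purohit, Svitkina, and Kumar); it is not a mechanism from this project, and the names `buy_cost`, `predicted_days`, and the trust parameter `lam` are assumptions chosen for illustration. When the prediction is accurate, the cost stays close to the offline optimum; even when the prediction is arbitrarily wrong, the cost remains within a bounded factor of the optimum.

```python
import math

def ski_rental_with_prediction(buy_cost: float, predicted_days: float,
                               actual_days: int, lam: float = 0.5) -> float:
    """Total cost paid by a simple learning-augmented ski-rental rule.

    Renting costs 1 per day; buying costs `buy_cost` once. The rule uses a
    prediction of how many days we will ski, hedged by `lam` in (0, 1]:
    smaller lam means more trust in the prediction. (Illustrative sketch of a
    standard rule from the algorithms-with-predictions literature.)
    """
    if predicted_days >= buy_cost:
        # Prediction says skiing lasts long: buy early, on day ceil(lam * buy_cost).
        threshold = math.ceil(lam * buy_cost)
    else:
        # Prediction says skiing ends soon: delay buying until day ceil(buy_cost / lam).
        threshold = math.ceil(buy_cost / lam)

    if actual_days < threshold:
        return actual_days              # we only ever rented
    return (threshold - 1) + buy_cost   # rented up to the threshold, then bought


# Accurate prediction: cost 14 vs. offline optimum min(100, 10) = 10, i.e. within 1 + lam.
print(ski_rental_with_prediction(buy_cost=10, predicted_days=100, actual_days=100))
# Wildly wrong prediction: cost 29, still within 1 + 1/lam of the optimum of 10.
print(ski_rental_with_prediction(buy_cost=10, predicted_days=1, actual_days=100))
```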
Status | Active |
---|---|
Effective start/end date | 10/1/22 → 9/30/25 |
Funding
- National Science Foundation
ASJC Scopus Subject Areas
- Computer Science (all)
- Computer Networks and Communications
- Engineering (all)
- Electrical and Electronic Engineering
- Communication