Project details
Description
In the past decade, data science and artificial intelligence (AI) have successfully addressed some of the most important scientific and engineering challenges in domains such as healthcare, education, and autonomous systems. A recent example is AlphaFold, the deep learning program developed by Google's DeepMind, which can predict a protein's 3D structure from its amino acid sequence with accuracy competitive with experiment. Despite this remarkable progress, the design of efficient statistical learning procedures for big data, a core component of modern data science and AI, has remained ad hoc, and the precise theoretical understanding of such learning schemes is in its infancy. In particular, the following fundamental questions remain open: (i) how can one gain actionable insights into the performance of a learning algorithm without computationally demanding experimental methods? (ii) what is the optimal learning procedure in a given data-intensive environment? Answering such questions will pave the way for the next generation of data science and AI, ultimately contributing to a better quality of life. This project aims to develop a novel analysis approach to address these challenges. The new framework is expected to establish quantitatively precise characterizations of the performance of diverse learning algorithms and to provide a general recipe for designing optimal learning procedures.

Most state-of-the-art learning systems use sophisticated models in which the number of parameters, p, is substantially large; in most cases p is either much larger than or comparable to the number of observations, n, in the data in use. This regime has challenged our theoretical understanding of statistical models and procedures that are ubiquitous in science and technology. On one hand, classical analysis techniques, which assume that n is large and p is much smaller than n, do not yield valid predictions for statistical learning in these contemporary scenarios. On the other hand, modern non-asymptotic analysis frameworks, while very successful at order-wise risk characterizations, often fall short of delivering sharp results. Hence, it remains largely unclear how to solve various learning problems optimally across a variety of statistical models. This project aims to fill this gap by providing a precise theoretical understanding of a large family of statistical models that includes generalized linear models as a subset. It focuses on the medium-dimensional regime, where p scales linearly with n, and creates new tools for studying the accuracy of different learning schemes and characterizing their optimal performance. The expected outcomes of this project are: (i) discovering the precise performance limits of a broad class of learning methods, and (ii) evaluating the gaps between information-theoretic lower bounds and the performance of existing algorithms. Such results will ultimately shed light on the design of optimal learning procedures. The project will also provide numerous opportunities for interdisciplinary research training and the professional career development of the next generation of statisticians.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
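The medium-dimensional (proportional) regime described above can be illustrated with a short simulation. The sketch below is not part of the award; it is a minimal Python example, under the illustrative assumption of a Gaussian-design linear model, showing that the estimation risk of ordinary least squares concentrates around the exact asymptotic value sigma^2 * delta / (1 - delta) when p/n -> delta < 1, rather than vanishing as classical fixed-p, large-n analysis would suggest.

```python
# Illustrative sketch (assumed setup, not from the award): least-squares risk
# in the proportional regime p/n -> delta under a Gaussian design.
import numpy as np

rng = np.random.default_rng(0)

def ols_risk(n, p, sigma=1.0, reps=50):
    """Monte-Carlo estimate of E||beta_hat - beta*||^2 for OLS on
    y = X beta* + noise, with X_ij ~ N(0, 1) and noise ~ N(0, sigma^2)."""
    risks = []
    for _ in range(reps):
        X = rng.standard_normal((n, p))
        beta = rng.standard_normal(p) / np.sqrt(p)  # signal with ||beta*|| ~ 1
        y = X @ beta + sigma * rng.standard_normal(n)
        beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
        risks.append(np.sum((beta_hat - beta) ** 2))
    return np.mean(risks)

sigma = 1.0
for delta in (0.2, 0.5, 0.8):  # aspect ratio p/n
    n = 400
    p = int(delta * n)
    emp = ols_risk(n, p, sigma)
    # Sharp high-dimensional prediction: sigma^2 * delta / (1 - delta).
    # Classical "p fixed, n -> infinity" reasoning would predict risk -> 0.
    pred = sigma**2 * delta / (1 - delta)
    print(f"delta={delta:.1f}: empirical risk {emp:.3f}  vs  asymptotic {pred:.3f}")
```

As delta grows toward 1, the empirical risk tracks the sharp asymptotic formula and blows up, even though each fixed-p analysis would call the problem "easy"; precise characterizations of this kind, for far richer models and algorithms, are what the project targets.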
Status | Active |
---|---|
Start date/End date | 8/1/22 → 7/31/25 |
Funding
- National Science Foundation
Keywords
- Artificial intelligence
- Mathematics (all)
- Physics and astronomy (all)