RI: Small: New Directions in Probabilistic Deep Learning: Exponential Families, Bayesian Nonparametrics and Empirical Bayes

  • Blei, David (PI)


Project details

Description

Deep learning (DL) provides a powerful paradigm for modern machine learning (ML), with applications in a range of areas such as natural language processing, computer vision, and robotics. DL has proven powerful because it can capture complex relationships between inputs and outputs, and it enjoys efficient algorithms for analyzing massive datasets. But DL also has limitations. First, DL often provides 'black box' predictions: results that rest on no clearly articulated assumptions and carry no statement of certainty. To use ML in important applications, it is crucial to know the assumptions on which the methods are based. Second, basic DL methods provide point predictions but no measure of uncertainty about them. For ML to be safely deployed in critical decision-making systems, its methods must provide calibrated measures of the reliability of their predictions. Finally, these issues mean that DL does not provide easily interpretable predictions. Interpretability is important for understanding how ML makes mistakes, for deploying ML in high-stakes settings that require accountability, and for using ML predictions in the service of scientific understanding. This interdisciplinary project addresses these issues by using the rigorous methodology of probabilistic ML and applied Bayesian statistics to build interpretable DL models, models that rest on clearly stated assumptions and provide calibrated uncertainty about their predictions. The research aims to solve open problems in DL, provide mathematical clarity to some of its empirically proven ideas, and expand its reach to probabilistic modeling for broad applications in astronomy, language modeling for the computational social sciences, and electronic healthcare records.

The project will adapt modern ideas in DL to probabilistic models of complex datasets through research on two topics. The first topic develops the foundations of probabilistic deep learning, clarifying how deep neural network models draw on classical ideas such as exponential families and generalized linear models, and extending DL to Bayesian nonparametric models of infinite depth. The second topic develops empirical Bayes representation learning. Representation learning, a cornerstone of DL, seeks low-dimensional descriptions of high-dimensional data. From a statistical perspective, however, the problem is underdetermined: many different representations can accurately capture the distribution of the data. This project will explore how empirical Bayes, a classical statistical idea that blends frequentist and Bayesian thinking, provides a natural framework for defining good representations. Through new theory, algorithms, and software, the project will significantly expand the capabilities of DL.
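
For intuition about the empirical Bayes recipe named in the second topic, here is a minimal sketch in a toy Gaussian model. The model, variable names, and closed-form estimate are illustrative assumptions, not the project's formulation: fit the prior by maximizing the marginal likelihood of the data, then define each datum's representation as its posterior under that fitted prior.

import numpy as np

# Toy model (assumed for this sketch only):
#   latent z_i ~ Normal(0, tau^2),  observed x_i | z_i ~ Normal(z_i, sigma^2).
rng = np.random.default_rng(0)
sigma2 = 1.0                      # noise variance, treated as known here
tau2_true = 4.0
z = rng.normal(0.0, np.sqrt(tau2_true), size=1000)
x = z + rng.normal(0.0, np.sqrt(sigma2), size=z.shape)

# Empirical Bayes step 1: fit the prior variance by maximizing the marginal
# likelihood. Marginally x_i ~ Normal(0, tau^2 + sigma^2), so the maximizer
# has the closed form tau^2_hat = max(mean(x^2) - sigma^2, 0).
tau2_hat = max(np.mean(x**2) - sigma2, 0.0)

# Empirical Bayes step 2: represent each x_i by its posterior mean under the
# fitted prior, i.e., a shrunken version of the observation.
shrinkage = tau2_hat / (tau2_hat + sigma2)
z_hat = shrinkage * x

print(f"estimated prior variance: {tau2_hat:.2f} (true {tau2_true})")
print(f"shrinkage factor: {shrinkage:.2f}")

In deep representation learning the prior and likelihood would instead be parameterized by neural networks and the marginal likelihood would be intractable, but the same two-step logic applies: fit the prior to the data, then take posteriors under the fitted prior as the representations.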

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Status: Active
Start date/End date: 10/1/21 – 9/30/24

Funding

  • National Science Foundation: $499,756.00

Keywords

  • Artificial intelligence
  • Statistics and probability
  • Computer science (all)
