Revamped Bayesian Inference

  • Goodrich, Benjamin (PI)

Project

Project details

Description

This research project will make Bayesian statistical computation much faster. Bayesian methods have not gained much traction in the social sciences, in part because the approach is so computationally intensive. Many researchers who could usefully apply these techniques choose not to do so because the analysis is too costly. This project will improve the computational efficiency of Bayesian methods by harnessing a critical theorem that has long been overlooked by statisticians but proven by one of the twentieth century's greatest mathematicians. Some pieces will need to be put into place for this approach to work on today's computers. However, once implemented and with a little bit of additional training, scientists will be able to apply state-of-the-art statistical methods regardless of the amount of data. The beauty of the theorem underlying the modified calculation is that it is almost universally applicable and can be leveraged by all scientists. By lowering and flattening the cost function, this project will have a broad and deep impact in the social sciences and elsewhere. The results of this research will facilitate the analysis of large data sets that recently have become prevalent across scientific fields. Graduate students will be involved in the conduct of the project and trained in the use of this approach. The investigators will implement their findings in an existing free and open-source software program.

This research project will leverage the Kolmogorov Superposition Theorem (KST) to increase the speed of computations for projects using Bayesian methods. Most statistical models of scientific phenomena ask: What was the probability, under the model, of observing this collection of data, and how would that probability change depending on the values of unknown quantities that are to be estimated? To answer those questions, computers calculate that probability for many possible values of the unknowns and determine which ranges of the estimates are more probable than others. Each observation in a data set affects this probability, so when data sets are large, the calculation is slow and often infeasible. However, the KST demonstrates that the same calculation can be performed exactly through a chain of additions of mathematical functions that each take in just one number and output just one number, with each one-dimensional function serving as one link in the chain. The number of links in this alternative chain depends only on the number of unknowns, rather than on the number of observations in the data set, so the calculation can be dramatically accelerated for large data sets. To use this technique, scientists will need to think a little differently about how they build models and estimate their unknown quantities, but the investigators will provide a coherent theoretical framework and open-source software tools that will make this process not only faster but also simpler.
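
As a minimal sketch in standard notation (the symbols N, d, \theta, \Phi_q, and \phi_{q,p} are conventional choices, not taken from the project description), the usual Bayesian log-likelihood is a sum with one term per observation, so its cost grows with the number of observations N:

    \ell(\theta) = \sum_{n=1}^{N} \log p(y_n \mid \theta), \qquad \theta = (\theta_1, \dots, \theta_d)

By contrast, the KST states that any continuous function of the d unknowns on the d-dimensional unit cube can be written as a superposition of one-dimensional functions:

    f(\theta_1, \dots, \theta_d) = \sum_{q=0}^{2d} \Phi_q\!\left( \sum_{p=1}^{d} \phi_{q,p}(\theta_p) \right)

where each \Phi_q and \phi_{q,p} takes in one number and outputs one number. The representation has 2d + 1 outer links, each built from d inner links, so the number of links depends only on d and not on N; the specific construction of these one-dimensional functions that the investigators will use is not described in this abstract.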

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Status: Completed
Actual start/end date: 9/1/21 to 8/31/24

Funding

  • National Science Foundation: US$299,999.00

Keywords

  • Statistics, probability and uncertainty
  • Statistics and probability
  • Social sciences (all)
  • Economics, econometrics and finance (all)
