Project Details
Description
The review process for releasing government records can be time-consuming and error-prone. Large Language Models could help reviewers determine whether information is already in the public domain. By developing a prototype system and measuring performance at different stages, this project aims to estimate the additional data and training required to achieve acceptable levels of accuracy. The iterative nature of the system and the involvement of domain experts allow for measuring and minimizing “hallucination.”

The project decouples the reasoning ability of Large Language Models from knowledge databases. It develops a semantic query engine optimized for accurate extraction of relevant information. The project also takes an active approach to fine-tuning, whereby domain experts train a model that generates queries to retrieve records from the knowledge base, and allows them to fine-tune the retrieval engines by assessing the passages that are extracted from these records before they are fed into the Large Language Model for analysis. The output includes text descriptions of what is found through record assembly, accompanied by the records themselves for further evaluation and fine-tuning. Recently released records will serve as test data, with experts categorizing the information as new or already known. Performance metrics are analyzed, considering the impact of data size and composition on accuracy.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
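The pipeline described above — expert-trained query generation, retrieval from a knowledge base, expert assessment of extracted passages, and assembly of both a text description and the underlying records — can be sketched as follows. This is an illustrative outline only: all function names, the keyword-overlap scoring, and the `approved` flag are assumptions for the sketch, not the project's actual models or data.

```python
# Hypothetical sketch of the retrieve-assess-analyze loop described above.
# The scoring scheme and all names are illustrative assumptions.

def generate_queries(request):
    """Stand-in for the expert-trained query-generation model:
    here we simply split the reviewer's request into keyword queries."""
    return [w.lower() for w in request.split() if len(w) > 3]

def retrieve(queries, knowledge_base, top_k=2):
    """Score each record by keyword overlap and return the best passages."""
    scored = []
    for record in knowledge_base:
        words = set(record["text"].lower().split())
        score = sum(q in words for q in queries)
        if score > 0:
            scored.append((score, record))
    scored.sort(key=lambda pair: -pair[0])
    return [record for _, record in scored[:top_k]]

def expert_review(passages):
    """Placeholder for the human-in-the-loop step: domain experts keep or
    discard passages before they reach the Large Language Model."""
    return [p for p in passages if p.get("approved", True)]

def assemble_output(passages):
    """Return a text description plus the records themselves, mirroring
    the project's dual output for further evaluation and fine-tuning."""
    description = "; ".join(p["text"] for p in passages)
    return {"description": description, "records": passages}

# Toy knowledge base of released-record passages (invented examples).
knowledge_base = [
    {"id": 1, "text": "Treaty negotiations released to the public in 1998"},
    {"id": 2, "text": "Internal memo on staffing levels", "approved": False},
    {"id": 3, "text": "Public hearing transcript on treaty negotiations"},
]

queries = generate_queries("treaty negotiations public release")
passages = expert_review(retrieve(queries, knowledge_base))
result = assemble_output(passages)
```

In this sketch the expert-review step runs after retrieval and before analysis, matching the project's goal of letting domain experts vet passages so that hallucination can be measured against records that are known to be relevant.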
| Status | Finished |
| --- | --- |
| Effective start/end date | 10/1/23 → 9/30/24 |
ASJC Scopus Subject Areas
- Computer Networks and Communications
- Engineering (all)