Project Details
Description
Cities are at a technological crossroads. While the rise of generative Artificial Intelligence (AI) promises to reshape how urban residents inhabit, study, work, and conduct their daily lives, adopting cutting-edge technology in socially complex, high-stakes settings carries enormous risks and provides fertile ground for a crisis of public trust in institutions, experts, and technology. Because AI is a particularly abstract and inscrutable “black box,” this project offers an approach that fundamentally reimagines what a responsible co-design process for urban AI could be. At the center of this work is the creation of a new “Citizen AI,” built from the bottom up as the culmination of a plurality of voices, experiences, and forms of expertise. The project team, the Trust Collaboratory (TC) and the Gen-4 NSF Center for Smart Streetscapes (CS3) at Columbia University, together with over ten community-based organizations in Harlem, will create a process toward local use cases of urban AI based on community-driven privacy, safety, reliability, and transparency parameters. At the center of this process will be the co-creation of a community-based conversational engagement tool (teLLMe) that redefines how, when, by whom, and under what conditions AI should be integrated into New York City and its social fabric.

AI can play an integral role in how urban residents will inhabit and navigate future cities, but only if AI designers prioritize their intended users and those users' needs. To achieve this vision of an urban AI serving the common good, the project presents a complete and self-sustained implementation lifecycle for creating a “Citizen AI.” At the center of this process will be the co-creation of a community-based conversational engagement tool (teLLMe) that redefines how, when, by whom, and under what conditions AI should be integrated into the city and its social fabric. This LLM-based system will elevate the principle that responsibly designed urban AI requires modes of technology co-production that bring civic organizations, advocacy groups, small businesses, domain experts, and residents under one umbrella. The team's approach draws on a recent “participatory turn” that goes beyond mere assurances of data security and efforts toward explainability. The co-design sequence will proceed side by side with research on the social dynamics of trusting behavior, as well as contributions from engineers and data scientists with expertise in accessibility, data privacy, machine learning, and computer vision, to make AI accountable, fair, safe, transparent, and trusted.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
| Status | Active |
| --- | --- |
| Effective start/end date | 10/1/24 → 9/30/27 |
ASJC Scopus Subject Areas
- Social Sciences (all)
- Computer Science (all)
- Engineering (all)
- Mathematics (all)