FMRG: Adaptable and Scalable Robot Teleoperation for Human-in-the-Loop Assembly

  • Song, Shuran (PI)
  • Bel, Tristan (CoPI)
  • Feiner, Steven (CoPI)
  • Mahadeswaraswamy, Chandana (CoPI)
  • Ciocarlie, Matei (CoPI)

Project: Research project

Project Details

Description

The COVID-19 pandemic has accelerated the adoption of remote work in many industries. The ability for employees to work remotely, often from home, has become crucial to an organization's long-term resilience and growth potential. However, while advances in software and networking have made it possible for information workers to work remotely, most manufacturing workers cannot, because the necessary infrastructure does not exist. This Future Manufacturing (FM) project will research an adaptable and scalable robot teleoperation system that allows factory workers to work remotely. The research will benefit both the manufacturing industry and the workforce by increasing access to manufacturing employment and improving working conditions and safety. By combining human-in-the-loop design with machine learning, this research can broaden the adoption of automation in manufacturing to new tasks. Beyond manufacturing, the research will also lower the barrier to entry for using robotic systems in a wide range of real-world applications, such as assistive and service robots. The research team is collaborating with NYDesigns and LaGuardia Community College to translate research results to industrial partners and to develop training programs that educate and prepare the future manufacturing workforce.

This research proposes three key ideas to enable human-in-the-loop assembly. First, the system uses a physical scene understanding algorithm that converts the real-world robot workspace into a virtual, manipulable three-dimensional scene representation. Second, a three-dimensional Virtual Reality user interface allows users to specify high-level task goals using this scene representation. Finally, the system uses a goal-driven reinforcement learning algorithm to infer an effective planning policy, given the task goals and the robot configuration. This system can overcome several limitations of existing teleoperation systems. By separating high-level task planning from low-level robot control using a physical scene representation, the system allows the operator to specify task goals without expert knowledge of the robot hardware and configuration. By using reinforcement learning for low-level control, the system generalizes more readily to new tasks and hardware.

This award is co-funded by the Divisions of Civil, Mechanical and Manufacturing Innovation; Electrical, Communications and Cyber Systems; Computer and Network Systems; Undergraduate Education; and Behavioral and Cognitive Sciences, and by the Cyber Physical Systems, NSF Scholarships in Science, Technology, Engineering, and Mathematics, and Advanced Technological Education programs.
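The separation of high-level task planning from low-level control can be illustrated with a minimal sketch. All names and types below (`Scene`, `TaskGoal`, `plan_low_level_actions`) are illustrative assumptions, not the project's actual API: the operator edits object poses in the virtual scene, and a goal-conditioned policy (stubbed here as a simple planner) works out the robot actions.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

# Hypothetical types sketching the three-stage pipeline described above:
# scene understanding -> VR goal specification -> goal-conditioned control.

@dataclass
class SceneObject:
    name: str
    pose: Tuple[float, float, float]  # x, y, z in workspace coordinates

@dataclass
class Scene:
    """Virtual 3-D scene representation produced by perception."""
    objects: Dict[str, SceneObject] = field(default_factory=dict)

@dataclass
class TaskGoal:
    """High-level goal the operator specifies in VR: desired object poses."""
    target_poses: Dict[str, Tuple[float, float, float]]

def plan_low_level_actions(scene: Scene, goal: TaskGoal) -> List[tuple]:
    """Stand-in for the learned goal-conditioned policy: emit one
    pick-and-place action per object whose current pose differs
    from its goal pose."""
    actions = []
    for name, target in goal.target_poses.items():
        obj = scene.objects.get(name)
        if obj is not None and obj.pose != target:
            actions.append(("pick_place", name, obj.pose, target))
    return actions

# Example: the operator drags the bolt to a new pose in VR; the policy
# plans the motion without the operator knowing the robot hardware.
scene = Scene({"bolt": SceneObject("bolt", (0.1, 0.2, 0.0)),
               "nut": SceneObject("nut", (0.4, 0.2, 0.0))})
goal = TaskGoal({"bolt": (0.4, 0.2, 0.05), "nut": (0.4, 0.2, 0.0)})
actions = plan_low_level_actions(scene, goal)  # only the bolt needs moving
```

In a real system the planner stub would be replaced by the learned policy, but the interface boundary stays the same: the operator only ever touches the scene representation and the goal.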

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Status: Active
Effective start/end date: 1/1/21 – 12/31/25

Funding

  • National Science Foundation: US$3,749,150.00

ASJC Scopus Subject Areas

  • Artificial Intelligence
  • Education
  • Civil and Structural Engineering
  • Mechanical Engineering
  • Industrial and Manufacturing Engineering
