Computations in sensorimotor control.

Project: Research project

Project Details

Description

The key aim of our research is to understand sensorimotor processes for complex, real-world behaviours. The field of sensorimotor control currently has a detailed understanding of a narrow range of constrained tasks, such as learning to make a single planar arm movement in the face of simple visual or mechanical perturbations. Understanding naturalistic tasks, which involve an evolving sequence of actions, is more complex: it requires us to elucidate the computations of a number of interacting elements, including strategy and learning processes and the representations that constrain them. To understand naturalistic tasks we therefore need to develop new theories and experimental devices that address the following questions:

a) Strategy: How do high-level processes such as decision-making interact with sensorimotor processes? Real-world tasks involve a sequence of decision-making processes that determine, based on information extracted during the unfolding task, which movement to make next and when to make it. Both decision-making and motor control require acting in real time on streams of noisy sensory evidence, and thus both rely on inference, time constraints, and value/effort costs. Theoretical concepts underlying these two fields have evolved in parallel, and a key question is how they can be unified.

b) Learning: How do error and reinforcement signals drive motor learning? From simple tasks we understand how an error on one trial updates the movement on the next trial. However, in real-world tasks, from buttoning a shirt to learning to ride a bicycle, there is no single error attributable to a single movement; rather, a sequence of actions leads to success or failure. Reinforcement learning is therefore highly important in real-world tasks, as it allows reinforcement signals to appropriately assign credit back in time to the sequence of actions.
An important goal is to understand the algorithms used by the sensorimotor system to learn from reinforcement signals and how they interact with error-based learning.

c) Representations: How do structural, parametric and state representations interact during learning? When experiencing a set of real-world tasks, there is a hierarchy of representations that needs to be learned (Figure 1). At the highest level, the structure represents the form of the transformation from inputs to outputs, such that a given structure determines the family of tasks. For example, in object manipulation the structure depends on the dynamics of the object and is the same for objects with similar dynamics (e.g. all drink cans). Next, for a given task the structure will have particular parameter settings (e.g. the mass of the can). Finally, the structure and parameters determine which movement variables, termed the state, are relevant for control (e.g. can position and velocity). An important goal is to understand the interaction of these representations during motor learning and how the choice of a movement can be used to actively learn and extract information relevant to each.

Finally, because naturalistic tasks are more complex than the sum of their parts, our goal is to integrate the models developed for each component into a unifying framework for sensorimotor control.
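The trial-by-trial error-based learning mentioned under b) is often formalised as a state-space model of adaptation, in which an internal estimate of a perturbation is partially retained across trials and updated by a fraction of each trial's error. The sketch below is a generic illustration of that idea, not this project's own model; the retention and learning-rate values are arbitrary.

```python
# Minimal single-state, trial-by-trial model of error-based motor
# adaptation (a standard state-space sketch; parameter values are
# illustrative, not taken from this project).

def simulate_adaptation(perturbation, n_trials, retention=0.9, learning_rate=0.2):
    """Update an internal estimate x by a fraction of each trial's error."""
    x = 0.0                    # internal estimate of the perturbation
    estimates = []
    for _ in range(n_trials):
        error = perturbation - x                    # movement error on this trial
        x = retention * x + learning_rate * error   # partial forgetting + error-driven update
        estimates.append(x)
    return estimates

est = simulate_adaptation(perturbation=1.0, n_trials=50)
# The estimate converges to the fixed point
# x* = learning_rate / (1 - retention + learning_rate) = 0.2 / 0.3 ≈ 0.667
```

With retention below 1, adaptation is incomplete at steady state, which is one reason such models distinguish forgetting from learning.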
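The credit-assignment problem in b) — a single terminal outcome must be propagated back to every action in a sequence — can be illustrated with a toy policy-gradient (REINFORCE-style) learner. The task, target sequence, and parameters below are invented for illustration and are not claimed to be the sensorimotor system's algorithm.

```python
import math
import random

random.seed(0)

# Toy credit assignment over an action sequence: reward arrives only after
# the final action, yet the update credits every action in the sequence.
TARGET = [1, 0, 1]             # hypothetical "successful" action sequence
logits = [0.0] * len(TARGET)   # one Bernoulli logit per step

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def run_episode(alpha=0.5):
    actions, grads = [], []
    for step in range(len(TARGET)):
        p = sigmoid(logits[step])
        a = 1 if random.random() < p else 0
        actions.append(a)
        grads.append(a - p)    # d log pi / d logit for a Bernoulli policy
    reward = 1.0 if actions == TARGET else 0.0     # single terminal outcome
    for step in range(len(TARGET)):
        logits[step] += alpha * reward * grads[step]  # credit each past action
    return reward

for _ in range(2000):
    run_episode()

probs = [sigmoid(z) for z in logits]
# After training, the policy favours the rewarded sequence:
# probs ≈ [high, low, high]
```

The point of the sketch is that no per-movement error exists; the gradient term `a - p` at every step is scaled by the same delayed reward, assigning credit back in time.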
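The structure/parameter/state hierarchy in c) can be made concrete with a hypothetical object-manipulation example: the structure is a family of dynamics (here a damped point mass, chosen purely for illustration), the parameters pick out a particular object within that family, and the state holds the control-relevant variables.

```python
from dataclasses import dataclass

@dataclass
class DampedPointMass:           # structure: one family of object dynamics
    mass: float                  # parameter (e.g. mass of a drink can)
    damping: float               # parameter

    def step(self, state, force, dt=0.01):
        """Advance the state (position, velocity) under an applied force."""
        pos, vel = state         # state: variables relevant for control
        acc = (force - self.damping * vel) / self.mass
        return (pos + vel * dt, vel + acc * dt)

# Two objects share the structure but differ in parameters:
full_can = DampedPointMass(mass=0.4, damping=0.05)
empty_can = DampedPointMass(mass=0.1, damping=0.05)

state = (0.0, 0.0)
for _ in range(100):
    state = full_can.step(state, force=1.0)
# The same push accelerates the lighter (empty) can more than the full one.
```

Learning the structure (drink cans behave like damped point masses) generalises across objects, while learning the parameters (this can's mass) is object-specific — the distinction the description draws between structural and parametric representations.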
Status: Finished
Effective start/end date: 1/6/12 – 2/28/19

Funding

  • Wellcome Trust: US$2,858,500.00

ASJC Scopus Subject Areas

  • Decision Sciences (all)
  • Social Sciences (all)
