Motor learning refers broadly to changes in an organism's movements that reflect changes in the structure and function of the nervous system.
While motor neuroscience has recently focused on optimization of single, simple movements, AI has progressed to the generation of rich, diverse motor behaviors across multiple tasks, at humanoid scale. It is becoming clear that specific, well-motivated hierarchical design elements repeatedly arise when engineering these flexible control systems. We review these core principles of hierarchical control, relate them to hierarchy in the nervous system, and highlight research themes that we anticipate will be critical in solving challenges at this disciplinary intersection.
How neural circuits govern motor behavior has long been a central question for neuroscience research. In particular, it is a classical theme that the brain controls motor behavior through hierarchical anatomical structures. Since those early formulations, hierarchy, both of anatomy and of the generation of behavior, has been revisited in the study of instinct [2], motivation [3,4], and motor pattern generation [5,6].
Across these contexts, the focus has often been neuroethological, detailing the kinds of behaviors produced by species-specific nervous systems in their ecological niches.
These ideas developed through study of the nervous system have inspired other disciplines, including robotics, with clear influence, for example, on the subsumption architecture [7,8]. In recent decades, the theme of hierarchy has partially receded in motor neuroscience research, and the field has instead emphasized a largely complementary perspective: task-specific optimality of movement [9], with the contemporary version known as optimal feedback control (OFC) [10]. OFC is typically applied by postulating a cost function, or formal definition of a task, and asking what behavior is optimal with respect to that cost function.
This perspective has been productive for motor neuroscience and facilitated the analysis of specific, well-defined motor behaviors. However, despite its great utility and its alignment with the experimental preference to study isolated behaviors in single tasks, the focus on specific movements runs contrary to the deeper interest in understanding the generation of diverse, ethological behaviors produced by nervous systems. OFC is a framework closely related to reinforcement learning (RL), which contemporary motor control for AI and robotics has widely adopted.
We proceed by briefly reviewing computational approaches to motor control, focusing on the OFC framework, as well as reflecting upon recent developments in research involving control of complex, simulated physical bodies, including attempts to scale up OFC directly. However, as research into artificial control has developed, it has become clear that in addition to task objectives, system architecture design is also critical.
OFC does not provide direct guidance on the design or interpretation of systems that must perform many behaviors or which reuse and compose overlapping skills to solve multiple tasks. We therefore formulate a set of core design principles of hierarchical systems in the context of motor control, which are synthesized from the AI research literature.
In essence, recent work in AI has circled back to themes that were more central in earlier eras of neuroscience. This prompts us to take a fresh look at the neuroscience literature through a focused survey, which highlights how the core design principles help us make sense of hierarchical structure and function in the vertebrate nervous system.
Both AI researchers engaging in the design of motor control systems and motor neuroscientists attempting to understand how specific nervous systems produce movement share many interests; we believe these fields will continue to benefit from interdisciplinary collaboration, so we close by highlighting some of these areas of overlap.
The challenge of motor control, both for animals and artificial systems, is to coordinate a body to produce patterns of adaptive movement behavior that satisfy objectives of the agent. When studying motor control with quantitative models, we consider a body in an environment, governed by a controller. The controller or policy receives observations from sensors, which measure features of the state of the system, and produces control signals that command the effectors.
The controller runs in closed loop with the body and environment, actuating the effectors based on online feedback from sensory observations to produce temporally extended behavior (Fig.).
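This closed-loop arrangement can be sketched minimally as follows; the function names and the toy point-mass dynamics are our own illustration, not drawn from any specific model in the literature.

```python
import numpy as np

def run_closed_loop(policy, step_env, x0, n_steps=100):
    """Roll out a controller in closed loop with a body/environment.
    `policy` maps sensory observations to control signals; `step_env`
    applies those controls to the body dynamics."""
    x, trajectory = x0, [x0]
    for _ in range(n_steps):
        obs = x                      # sensors: here they observe state directly
        u = policy(obs)              # controller: observation -> control signal
        x = step_env(x, u)           # effectors act on the body/environment
        trajectory.append(x)
    return np.array(trajectory)

# Toy example: proportional feedback drives a 1-D point (x' = x + 0.1*u)
# toward the origin using online sensory feedback.
traj = run_closed_loop(policy=lambda obs: -1.0 * obs,
                       step_env=lambda x, u: x + 0.1 * u,
                       x0=np.array([1.0]))
```

The point of the sketch is only the loop structure: observation in, control out, state updated, repeated over time.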
For comparison, we depict a flat controller (Fig.). Beyond the basic control system elements, specific control schemes may involve forward or inverse models [13]. Here we focus on dynamics models.
Internal forward models are used to predict the future consequences of actions. Comparing these predictions with sensory inputs enables filtering-based estimation of body and environment state.
Inverse dynamics models form a special class of controller. They infer the action that takes the animal from the current state to a future outcome state.
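The distinction between forward and inverse models can be made concrete with a toy linear system; the gain `b` and both model functions below are hypothetical illustrations.

```python
# Toy 1-D linear plant: x_next = x + b*u, with an assumed gain b.
b = 0.5

def forward_model(x, u):
    """Forward model: predict the consequence of taking action u in state x."""
    return x + b * u

def inverse_model(x, x_goal):
    """Inverse model: infer the action that moves the system from x to x_goal."""
    return (x_goal - x) / b

x, x_goal = 0.0, 1.0
u = inverse_model(x, x_goal)     # action proposed by the inverse model
x_pred = forward_model(x, u)     # prediction made by the forward model
# When the two models are consistent, the forward prediction matches the
# desired outcome; the error between prediction and actual observation can
# then drive filtering-based state estimation.
```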
OFC frames motor control as an optimization problem and was proposed as a normative theory of biological motor control [10]; this consolidated principles relatively well understood in movement neuroscience. At present, OFC is the dominant framework used by motor neuroscientists to explain volitional control [17]. Earlier frameworks had recognized the value of optimizing movement trajectories [9], but OFC emphasizes the importance of leveraging sensory feedback to produce task-optimal corrective responses to unexpected perturbations.
As such, the key prediction that differentiated OFC from related proposals was that movements produced by animals correct for perturbations only to the extent needed to optimize the task. The OFC framework was later generalized to encompass essentially all approaches that use closed-loop, feedback-based control, where the generated behavior is supposed to optimize a cost function or goal. The broadened OFC framework consists of three principles, the first of which is that motor control is generated to optimize an objective function.
From a contemporary perspective, the principles of OFC, including the utility of feedback and the handling of sensory delays, are widely accepted. The commitment in OFC that is perhaps most open to fundamental dispute is whether the controller really optimizes an objective, and if so, which one.
However, at its broadest, the OFC framework is fairly inclusive about what constitutes an objective. Efficient movement need not be a direct objective, but will indirectly emerge out of coordinating movement to rapidly solve tasks.
So, if an animal is optimizing movement to solve a sequence of tasks, the efficiency of the movement is indirectly incentivized in order to facilitate the concrete task goals. Despite this theoretical generality, until recently it has not been widely feasible to consider task objectives more complex than those related to production of specific movements on short horizons. The primary challenge of implementing optimal control approaches is generating the optimal control law, i.e. the controller.
For specific control problems described by known equations involving simple dynamics and cost functions, or problems formulated in low-dimensional state and action spaces, optimal controllers can be computed exactly.
Specifically, one of the most fundamental and computationally straightforward ways to derive an optimal controller is through dynamic programming [19]. But for the control of more realistic, high-dimensional bodies, the design of the approximation scheme, learning algorithm, or numerical approach used to produce the controller is important.
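As an instance of such an exactly solvable problem, the classic finite-horizon linear-quadratic regulator (LQR) can be solved by backward dynamic programming; the scalar sketch below (our own illustration) computes the optimal time-varying feedback gains via the Riccati recursion.

```python
def lqr_gains(a, b, q, r, horizon):
    """Backward dynamic programming (Riccati recursion) for the scalar
    finite-horizon LQR problem:
        x_{t+1} = a*x_t + b*u_t,   cost = sum_t q*x_t**2 + r*u_t**2.
    Returns the optimal time-varying feedback gains K_t, where u_t = -K_t*x_t."""
    P = q                            # cost-to-go weight at the final step
    gains = []
    for _ in range(horizon):
        K = (a * b * P) / (r + b * b * P)
        P = q + a * a * P - (a * b * P) ** 2 / (r + b * b * P)
        gains.append(K)
    return list(reversed(gains))     # ordered for t = 0 .. horizon-1

# Applying the optimal feedback law in closed loop drives the state to zero.
a, b, q, r = 1.0, 1.0, 1.0, 1.0
x = 1.0
for K in lqr_gains(a, b, q, r, horizon=20):
    x = a * x + b * (-K * x)
```

Because the value function stays quadratic at every backward step, the recursion is exact here; it is precisely this property that fails to scale to high-dimensional, nonlinear bodies.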
Specific, contemporary approaches often reformulate or restrict the generic problem in order to make it computationally tractable. A widespread algorithmic technique is to look for locally optimal control laws instead of globally optimal control laws. Examples of locally optimal algorithms include model predictive control [21] and specialized planning methods [22,23], which enable control of humanoid systems.
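A minimal, hypothetical illustration of the receding-horizon idea behind model predictive control is random-shooting MPC: sample candidate action sequences, evaluate each by rolling it out through the model, and execute only the first action of the best sequence before replanning.

```python
import numpy as np

def mpc_action(model, cost, x, rng, horizon=10, n_candidates=200):
    """Random-shooting MPC: sample candidate action sequences, roll each out
    through the (assumed known) forward model, and return the first action
    of the lowest-cost sequence."""
    best_u0, best_cost = 0.0, np.inf
    for _ in range(n_candidates):
        seq = rng.uniform(-1.0, 1.0, size=horizon)
        x_sim, total = x, 0.0
        for u in seq:
            x_sim = model(x_sim, u)
            total += cost(x_sim, u)
        if total < best_cost:
            best_u0, best_cost = seq[0], total
    return best_u0

# Toy problem: drive a 1-D state toward the origin under x' = x + 0.2*u.
model = lambda x, u: x + 0.2 * u
cost = lambda x, u: x ** 2 + 0.1 * u ** 2
rng = np.random.default_rng(0)
x = 1.0
for _ in range(30):                  # receding horizon: replan at every step
    x = model(x, mpc_action(model, cost, x, rng))
```

Note the controller is locally optimal at best, and that it is model-based: the planner needs `model` inside the loop, which is exactly the requirement discussed next.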
However, planning approaches such as these are model-based, meaning they require access to the simulator within the planning computation; this is only available to an agent or animal if it possesses a high-quality forward model, possibly learned from previous experience. If there is no pre-existing or learned model of the environment, the alternative is to learn the policy directly, or to learn a representation of the values of actions, via model-free RL. Over the last few years, there has been an explosion of interest in producing Deep RL agents that are trained in simulated environments.
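A minimal instance of model-free RL is tabular Q-learning, sketched here on a hypothetical five-state chain task (the task and all constants are our own toy construction, not taken from the works discussed above).

```python
import numpy as np

# Tabular Q-learning on a toy 5-state chain: moving right eventually earns
# reward at the far end. The agent never queries the dynamics directly; it
# learns action values purely from sampled (s, a, r, s') transitions.
n_states, n_actions = 5, 2           # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

def step(s, a):
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, reward

for episode in range(500):
    s = 0
    for _ in range(20):
        a = int(rng.integers(n_actions))          # random exploratory behavior
        s_next, r = step(s, a)
        # Off-policy temporal-difference update toward the greedy target.
        Q[s, a] += 0.5 * (r + 0.9 * np.max(Q[s_next]) - Q[s, a])
        s = s_next

# The greedy policy recovered from Q now moves right in every state.
```

Because Q-learning is off-policy, even a purely random behavior policy suffices on this toy task; no forward model of `step` is ever consulted by the learner.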
Progress made towards playing Atari games from images [25] and navigating virtual environments [26] has inspired considerable follow-up research. In parallel, there has also been significant effort applied towards control of articulated bodies in simulated physical environments [27], with broad interest facilitated by the release of research environments [28,29] that build accessible interfaces for underlying physics simulators such as MuJoCo. These physics-based (or continuous) control problems involve training a controller to produce an action vector of continuous values, which actuates a physically simulated body, in order to optimize objectives in a task.
Although primarily studied by Deep RL researchers for algorithm development, these challenges essentially amount to motor control. The approaches used in simulated environments also overlap with learning-based approaches in robotics research [31,32,33]. Of course, although significant development has occurred in recent years, many core ideas in Deep RL research were anticipated by earlier work [35], including neural network control for graphically rich environments in the NeuroAnimator [36], as well as the design of impressive controllers for physically simulated humanoids [37,38,39] and animals. Robust control of physically simulated humanoids, especially without access to the simulator for planning, is a challenge on which progress has been made in recent years.
End-to-end learning approaches with relatively simple policy architectures have made headway here. In particular, Heess et al. trained policies for locomotion over varied terrain. The resulting policy was robust and responded well to random, procedural terrain variations as well as interactive perturbations by a human. In this work, the sensory observations consisted of feature-based height-maps of the terrain, similar to approaches in animation. Subsequent work has since demonstrated the ability to solve similar problems from egocentric proprioceptive information together with sensory information from touch sensors and egocentric cameras, a more ethologically plausible sensory embodiment. Although the sensors and effectors of simulated agents are not accurate models of those found in animals, it is nevertheless clear that simulated embodied agents face perceptual and motor challenges similar to those faced by real-world animals or robots.
However, although end-to-end Deep RL approaches to motor control have expanded the scope of OFC, there are a number of difficulties. For settings with narrow objectives, such as running forwards, environment variations during training can induce robust behaviors. But for this to work, careful task design using a balanced curriculum is often needed. And whereas the intrinsic ethological drives of biological organisms are quite varied (including feeding, fighting or fleeing, and fornicating), typical Deep RL agents exist in a universe that consists of only a single, comparatively narrow objective.
Broader challenges include dealing with changing objectives, learning behaviors that are reusable, and rapidly adapting to solve novel tasks. So, although there is clear value in scaling up OFC, it is far from the whole story of how animals generate motor behavior, and these broader challenges bring us back to aspects of motor control that were central in earlier work in both AI and neuroscience. To more efficiently solve complex control problems, many recent innovations relating to hierarchical system architecture are being developed.
In the subsequent section, we will present core principles of hierarchical motor control. For a concrete illustration of a simple, contemporary architecture reflecting versions of many of these principles, see Box 1. Researchers engaged in the study of hierarchical control believe that hierarchy can add value for issues ranging from effective exploration and planning to transfer and composition of skills. Synthesizing the literature, we have attempted to clarify and summarize core principles of hierarchical control that we believe facilitate design and interpretation of hierarchical systems.
In particular, the principles we identified are well motivated when considering systems capable of generating a wide range of motor behaviors across multiple settings. Information factorization refers to the property of hierarchical systems that involves providing partial or pre-processed information to certain parts of the system.
In our simple example (Fig.), each module receives only a subset of the available inputs. Although a flat policy could, in principle, integrate all available information and produce controls directly, a system with fewer inputs per module is likely to learn more efficiently. Furthermore, by segregating information immediately relevant to the low-level controller from information that only needs to modulate the low-level controller in a low-bandwidth fashion, the low-level controller can remain simple and reusable.
By construction, the information routed to the low-level controller is invariant to many possible contexts: it directly processes only the subset of sensory information upon which the behavior it is responsible for generating depends. This is illustrated concretely in the example in Fig. This idea is connected to a view of reinforcement learning in which subsystems that have access to different information are able to share appropriately abstract behavior across contexts [47]. For example, while visually guided locomotion in the context of a particular task may involve focusing on specific elements of the visual scene that do not transfer entirely to a new task, the locomotor movement patterns may generalize.
In this example, low-level behavior is more invariant owing to information factorization. However, it can also be the case that high-level behavior is invariant. Sufficiently abstract goals or intentions permit many distinct low-level movements to achieve them, so a high-level controller with limited access to body state may communicate an abstract goal that does not fully specify the required movement, leaving it to the lower levels to sort out the details.
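A hypothetical two-level policy makes information factorization concrete: the high level sees only task context and emits a low-bandwidth goal, while the low level sees only body state plus that goal. All names and gains here are our own illustration.

```python
import numpy as np

class HighLevel:
    """High-level controller: sees only task context (a hypothetical target
    direction) and emits a low-bandwidth, abstract goal signal. It never
    touches raw body state."""
    def __call__(self, task_obs):
        return float(np.sign(task_obs))          # abstract goal: +1 / -1 / 0

class LowLevel:
    """Low-level controller: sees only body state (proprioception) plus the
    goal channel, so the movement pattern it implements is invariant to
    task specifics."""
    def __call__(self, body_state, goal):
        position, velocity = body_state
        return goal - 0.5 * velocity             # move toward goal, damp motion

high, low = HighLevel(), LowLevel()

def hierarchical_policy(task_obs, body_state):
    goal = high(task_obs)                        # low-bandwidth interface
    return low(body_state, goal)                 # full-bandwidth motor command
```

Swapping in a new task changes only what `HighLevel` consumes; the low-level movement pattern, and anything it has learned, carries over unchanged.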
Partial autonomy refers to the property of certain hierarchical systems that the lower levels of the hierarchy can semi-autonomously produce behavior even without input from higher levels. This principle is related to the intuition underlying the subsumption architecture [7]: build low-level controllers that function autonomously; then add modulatory control layers such that the overall system can produce more behaviors.
The insight reflected in this approach is that robustness can be achieved if lower-layer controllers are sufficiently autonomous (albeit for a more limited range of behavior), such that removal of the higher layers leaves the lower-layer behavior intact. This style of architecture is evocative of the brain [8], insofar as considerable functionality remains in many animals even after substantial portions of the central nervous system are removed, as we discuss later.
This partial autonomy is related to information factorization insofar as a lower-level system should have adequate information to be partially autonomous.
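A toy sketch of partial autonomy (all gains and names are our own illustration): the low-level controller produces stable default behavior on its own, and a higher layer, when present, merely modulates it.

```python
def low_level(body_state, modulation=None):
    """Semi-autonomous low-level controller: with no descending input it
    still produces a stable default behavior (damping toward rest); a
    higher layer, when present, only shifts that behavior."""
    position, velocity = body_state
    u = -1.0 * position - 0.5 * velocity         # default stabilizing reflex
    if modulation is not None:                   # descending drive is optional
        u += modulation
    return u

def high_level(target, body_state):
    """Higher layer: a low-bandwidth drive that redirects the lower layer."""
    return target - body_state[0]

# With the higher layer removed, the lower layer still behaves sensibly:
u_autonomous = low_level((0.5, 0.0))
# With the higher layer attached, the same reflexes are redirected to a goal:
u_modulated = low_level((0.5, 0.0), modulation=high_level(2.0, (0.5, 0.0)))
```

Deleting `high_level` entirely leaves `low_level` functional, mirroring the observation that behavior survives removal of higher layers, and the optional `modulation` argument is the information-factorization point: the low level needs only its own state to act.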
Tutorials in Motor Neuroscience. The role of augmented information feedback for motor learning has recently been evaluated by examining its effect on performance in transfer or retention tests. Several lines of evidence from various research paradigms show that, as compared to feedback provided frequently (after every trial), less frequent feedback provides benefits in learning as measured on tests of long-term retention. Such effects are, of course, contrary to most accounts of the learning process in human skills. In this paper, these lines of evidence are first briefly reviewed, and then several interpretations are provided in terms of the underlying processes that are degraded by frequent feedback.
Motor Control and Learning: A Behavioral Emphasis
Posted by lawson eileen, Monday, June 8. The text examines the motivational, cognitive, biomechanical, and neurological processes of complex motor behaviors that allow human movement to progress from unrefined and clumsy to masterfully smooth and agile. This updated sixth edition builds upon the foundational work of Richard Schmidt and Timothy Lee in previous editions.
Schmidt, Richard A. Champaign, IL: Human Kinetics.