Although bimanual skills are common in daily performance (e.g., opening a cupboard door with one hand while grasping a mug with the other), these skills impose complex constraints on our motor control system. In particular, at the planning level, the brain must organize two separate but coordinated movements. Further, at the sensory level, attention must be divided between two spatially separated targets. Given the functional significance of bimanual movements (we use our two hands to accomplish a myriad of tasks), understanding how the brain solves this type of movement challenge is important for our overall understanding of human motor behavior. Further, recent interest in bimanual training for the therapy and rehabilitation of movement disorders such as stroke and cerebral palsy necessitates a fundamental understanding of this type of movement control in the healthy population.
Current projects in the motor behavior lab center on understanding the coordination of the two hands in discrete tasks, such as reach-to-grasp movements, where a purposeful goal is being achieved. To understand how the brain plans and controls upper limb movements, we manipulate characteristics of the targets, such as target size or location; characteristics of the task, such as movement goal and complexity (e.g., aiming versus grasping); and environmental characteristics, such as the availability of sensory information. By manipulating these input characteristics and measuring the motor output, we can make inferences about how the brain uses target, task, and environmental information for the planning and on-line control of the separate movements being made by the two limbs.
Recent Published Works:
- Mason, A.H., & Bruyn, J.L. (2009). Manual asymmetries in bimanual prehension tasks: Manipulation of object distance and object size. Human Movement Science
- Bruyn, J.L. & Mason, A.H. (2009). The allocation of visual attention during bimanual movements. Quarterly Journal of Experimental Psychology
- Mason, A.H. (2008). Coordination and control of bimanual prehension: Effects of perturbing object location. Experimental Brain Research
- Mason, A.H., & Bryden, P.J. (2007). Coordination and concurrency in bimanual rotation tasks when moving away from and toward the body. Experimental Brain Research
- Mason, A.H. (2007). Performance of unimanual and bimanual multi-phased prehensile movements. Journal of Motor Behavior
Future Directions for Bimanual Performance Research:
Research in the human motor behavior lab over the next five years will focus on further investigating the influence of target, task, and environmental characteristics on bimanual performance. In particular, we are using eye movement recording techniques in combination with progressive perturbations of target direction to further investigate how visual guidance influences movement coupling. Further, in collaboration with Dr. Pamela Bryden (Wilfrid Laurier University, Canada), we are investigating how the task characteristics of endpoint congruency (i.e., both hands ending in the same versus different orientations) and final goal influence coordination in a sequential bimanual rotation task. We have also begun to examine how bimanual coordination changes as a function of development in children (Mason, Bruyn & Lazarus, to be submitted, Exp. Brain Res). Finally, we have recently begun collaborating with Dr. Leigh Ann Mrotek of UW-Oshkosh on projects related to understanding coordination as objects are passed between people.
Performance in Virtual Environments
The second line of research being pursued in the Motor Behavior lab, funded in 2003 by a five-year Career Award from the National Science Foundation, explores how the availability of visual information affects the performance of movements in virtual environments (VEs). Like a standard desktop computer system, a VE consists of a human operator, a human-machine interface, and a computer. In contrast to a standard desktop computer, the displays and controls in a VE are configured to immerse the operator in a predominantly graphical environment containing three-dimensional objects with locations and orientations in three-dimensional space. The operator can interact with and manipulate virtual objects in real time using their hands as input devices. The ultimate purpose of VEs is to provide the user with a computer-based tool in which a variety of common and novel activities can be performed, with promising applications in areas such as scientific visualization, medical diagnosis, surgical training, and education. Although many technological advances have been made in each of these applications, VEs remain cumbersome and difficult to use. We operate under the hypothesis that virtual environments will only reach their full potential when we have a deeper understanding of how the human user functions in these artificial environments. Therefore, our research has focused on the systematic study of how humans use visual information to perform simple and complex skills in VEs.
Recent Published Works:
- Mason, A.H., & Bernardin, B.J. (2008). The role of visual feedback when grasping and transferring objects in a virtual environment. Proceedings of the 5th International Conference on Enactive Interfaces
- Mason, A.H., & Bernardin, B.J. (2007). The role of early or late graphical feedback about oneself for interaction in virtual environments. Proceedings of the 9th Virtual Reality International Conference
- Mason, A.H. (2007). An experimental study on the role of graphical information about hand movements when interacting with objects in virtual reality environments. Interacting With Computers
Future Directions for Research on Performance in VEs:
Our previous research investigated how young, healthy adults use sensory information when they perform tasks in virtual environments. Although this research provides important information about the normative performance of younger individuals in VEs, this subject pool does not match the demographics of our aging population. We see the potential for applications of VE technology across a broad range of users of all ages and abilities. In particular, VEs can be used by the young for skill development, by the healthy elderly for skill maintenance, and by the injured for rehabilitation. However, very little information is available on how performance in VEs changes throughout the lifespan as a function of the natural aging process or processes of brain injury, such as stroke. A 3-year National Science Foundation grant proposal (2009-2012) from our lab, designed to study movement control in VEs in a population ranging in age from 6 to 80 years, was recently recommended for funding with an anticipated start date of July 31st, 2009. With a broad spectrum of participants, studied using a cross-sectional design, we will gain important knowledge about how performance in VEs is influenced by age. We will use basic motor control measurement and predictive modeling techniques to understand how sensory information is used and to suggest methods for improving the presentation of sensory information to users under a variety of target, task, and environmental conditions.
We have also recently been awarded seed funding from the Graduate School to collect pilot data on the use of a virtual reality system for hand function rehabilitation after stroke. With this research, we will begin the implementation and pilot testing of a virtual reality motor learning paradigm aimed at improving the ability to control whole-hand and individual finger forces during object manipulation tasks in persons in the chronic stage of stroke. These pilot investigations have the potential to generate important baseline and proof-of-concept data for an NIH proposal with the long-term goal of developing a virtual reality system for hand function rehabilitation after stroke.