Wednesday, May 15, 1996, 2:00 p.m.
Michael A. Arbib
Center for Neural Engineering, University of Southern California
Linking PET Imaging in Humans to a Model of Neural Mechanisms of Grasping
Our concern is with the brain mechanisms of visually directed reaching and grasping. We approach this key example of visuomotor coordination via a novel synthesis of computer modeling of biologically plausible neural networks, monkey neurophysiology, and studies of human brain activity using Positron Emission Tomography (PET). The key to this synthesis is the technique of synthetic PET imaging developed by Arbib, Amanda Bischoff, Andy Fagg, and Scott Grafton. We first present a computational model of the neural mechanisms of grasp generation derived in large part from monkey neurophysiological data. It is called the FARS (Fagg-Arbib-Rizzolatti-Sakata) model because it was developed by Andrew Fagg and Arbib to address data from the laboratories of Giacomo Rizzolatti (data on premotor cortex) and Hideo Sakata (data on parietal cortex). Synthetic PET imaging is applied to the model to yield a set of predictions for what we expect to observe in a human experiment. We then describe a human PET experiment conducted by Fagg and Grafton that examines the processing of instruction stimuli in a conditional task, as well as the relative representation of different grasp programs. By comparing the synthetic PET predictions with the new data, we reflect on how the model may be refined in future work.