Neural interface primer

Primer on neural interfaces for robotic prosthetics:

Clay Lacefield

The recent push to develop neural interfaces for robotic prosthetics has been driven by three major factors: new techniques for recording the activity of neural populations, new findings about how information is encoded in those populations that have arisen from the use of these techniques, and, last but certainly not least, the input of what may be hundreds of millions of dollars in research funding from DARPA (the US Defense Advanced Research Projects Agency).  The political story behind this is, of course, the war in Iraq and the large number of amputees returning home among Iraq war veterans, wounded by insurgent IEDs in this protracted conflict.  In order to assuage the severe human toll the conflict has taken on returning wounded veterans, and possibly to quell rising discontent over the operation, DARPA has heavily funded both robotic engineering projects that seek to develop robotic limb prostheses and neuroscientific studies aimed at designing neural interfaces to control the new limbs.

These projects are divided roughly in the DARPA funding structure according to which limb the prosthetic is intended to replace, either an arm or a leg.  The projects are distinct by nature, in that the biomechanics of the motions provided by the two limbs are different and the neural control of those movements is thought to differ as well.  Leg movements largely function in the context of whole-body motion such as walking, which involves complex neural circuits dealing with balance during movement and oppositional coordination of the right and left legs, and invokes activity both in the brain and in what are referred to as “central pattern generator” circuits for walking in the spinal cord.  After amputation, however, these circuits are largely intact, and recent advances in lower limb prostheses have hinged on the biomechanics of the limb in the context of walking or running.  In contrast, arm movements are more independent from each other and from other circuitry such as the spinal cord, may deal with more discrete cues such as targets in space for reaching and grasping movements, and require finer coordination than leg movements.  Thus the design of upper limb prostheses becomes a problem of using the amputee’s intact nervous system to exert fine control over the complex ranges of motion proposed for the new limbs.

The main term that comes up in the field of neural interfaces is “control signals”: how to interpret signals coming from the brain or peripheral nervous system in order to control various aspects of the motion of the synthetic limb.  Just as the robotics engineers are devising mechanical systems to give the prosthetics flexible, realistic, complex ranges of motion, neural engineers must find ways to control that motion so that the limbs are genuinely useful and natural for the patient.  The basic approach taken by groups working on these projects is to find out how the nervous system normally controls movements with an intact limb and then to tap into these circuits to control movement in the new limb.  These groups differ, however, in the particular circuits from which they read out control signals for movement.  While some groups are attempting to interpret information from the brain in order to control the movements, other groups are tying into the peripheral circuits after they leave the spinal cord on the way to the now-missing limb.  In order to understand this difference, it is helpful to explain how the nervous system generates the plan for a particular movement and then sends this command to the muscles that execute it.

The decision to initiate a voluntary movement is thought to come from areas of the brain known as the prefrontal cortex, on the surface of the brain near the front.  This area considers the different possible actions one might take at any point in time to further one’s larger goals (food, sex, etc.) and then selects a basic action to perform.  These areas feed into an area known as the premotor cortex, which lies just adjacent to the prefrontal cortex and contains a code for a single complex motion.  For example, if the prefrontal cortex says “I am going to eat that cookie over there on the table,” the premotor area says “I am going to reach over with my right arm and grab the cookie from its location, the table.”  After this general plan is formulated, the premotor cortex activates neurons in the primary motor cortex whose outputs, called “axons,” reach all the way down to the spinal cord at the level of the arms.  Different areas of the primary motor cortex are mapped to specific areas of the body such as the arms or legs.  Thus, activity in the premotor cortex activates neurons in the primary motor cortex corresponding to the muscles in the arm needed to perform the movement, which then send activity to motor neurons at around the level of the arms in the spinal cord.  These motor neurons then send signals out through a peripheral nerve to the muscles in a pattern that correctly executes the motor plan (e.g., reaching for and grabbing the cookie).
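
For readers who think in code, this chain of command can be caricatured as a simple pipeline.  The sketch below is only a schematic in Python; the function names and return values are invented placeholders, not models of what these brain areas actually compute.

```python
# A very loose schematic (nothing more) of the chain of command described above,
# written as a pipeline of functions. Each stage is a placeholder for a brain
# area, not a model of what those areas actually compute.
def prefrontal_cortex(goal):
    """Select a basic action that serves the larger goal."""
    return {"action": "reach_and_grab", "target": goal}

def premotor_cortex(plan):
    """Elaborate the action into a specific movement (which limb, which target)."""
    return {"limb": "right_arm", **plan}

def primary_motor_cortex(movement):
    """Activate the body-map region for the chosen limb; axons carry this to the cord."""
    return {"spinal_level": "cervical", "command": movement}

def spinal_motor_neurons(command):
    """Pattern the outgoing nerve signals that drive the individual muscles."""
    return [f"contract {m}" for m in ("deltoid", "biceps", "finger_flexors")]

# "I am going to eat that cookie over there on the table."
muscle_commands = spinal_motor_neurons(
    primary_motor_cortex(premotor_cortex(prefrontal_cortex("cookie_on_table"))))
print(muscle_commands)
```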

While some groups read out activity in peripheral nerves, the most downstream elements of the circuit for producing movement, in order to control the limb (most notably one group that wires into muscles of the chest to manipulate the prosthesis, an approach that has already been used in humans and received some press), the researchers profiled in the article “Monkeys Think, Moving Artificial Arm as Own” (NYTimes, May 29, 2008) implanted electrodes farther upstream, in the primary motor cortex, in order to control the robotic “arm.”  Dr. Andrew Schwartz, a professor of neurobiology at the University of Pittsburgh, used what is called a microelectrode array, in essence a micro-sized hairbrush where each fiber is a recording electrode, to record neural activity in the primary motor cortex, where activity most closely corresponds to single muscle movements.  Along with researchers from Carnegie Mellon, Schwartz and his colleagues Meel Velliste, Sagi Perel, M. Chance Spalding and Andrew Whitford recorded the firing patterns of single neurons in the motor cortex of the monkey and mapped those firing patterns onto control signals for the different ranges of motion of the robotic arm.  Other groups trying this approach include Miguel Nicolelis of Duke University (profiled in the NYTimes story of January 15, 2008, “Monkey’s Thoughts Propel Robot, a Step That May Help Humans”), Krishna Shenoy of Stanford, and John Donoghue of Brown, and each of these groups has performed similar experiments showing that neural activity can be read out to control robotic devices.  I don’t know exactly how the current study improves upon their work (but it presumably must if it was published in such a high-profile journal as Nature).
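
To make the idea of mapping firing patterns onto control signals concrete, here is a minimal sketch in Python.  It is not the Pittsburgh group’s published algorithm; the simulated neurons, the calibration data, and the linear least-squares decoder are all assumptions made for illustration.

```python
# A minimal sketch, not the published algorithm: simulate a few dozen motor
# cortical units whose firing rates depend on hand velocity, fit a linear
# decoder during a "calibration" phase, then use it to turn new firing rates
# into velocity commands for a (hypothetical) robotic arm.
import numpy as np

rng = np.random.default_rng(0)
n_units, n_bins = 40, 2000

# Each simulated unit has a preferred movement direction and a gain.
pref_dirs = rng.normal(size=(n_units, 3))
pref_dirs /= np.linalg.norm(pref_dirs, axis=1, keepdims=True)
gains = rng.uniform(2.0, 6.0, n_units)
baseline = 10.0

# Calibration data: known hand velocities (e.g., recorded during joystick use)
# and the noisy firing rates they evoke.
vel = rng.normal(size=(n_bins, 3))
rates = baseline + (vel @ pref_dirs.T) * gains + rng.normal(0, 1.0, (n_bins, n_units))

# Fit a linear map from firing rates to velocity by least squares (with a bias column).
X = np.hstack([rates, np.ones((n_bins, 1))])
W, *_ = np.linalg.lstsq(X, vel, rcond=None)

def decode_velocity(binned_rates):
    """Map one time bin of firing rates to an (x, y, z) velocity command."""
    return np.append(binned_rates, 1.0) @ W

# Online use: a new bin of firing rates becomes a command for the arm.
test_vel = np.array([0.5, -0.2, 0.1])
test_rates = baseline + (test_vel @ pref_dirs.T) * gains
print("decoded command:", decode_velocity(test_rates))   # roughly [0.5, -0.2, 0.1]
```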

One of the basic problems in designing neural interfaces is that we don’t know how movement information is encoded in our nervous system.  Is the control signal for a single muscle encoded by a single neuron or by a population of neurons?  There is evidence for the latter, where movement direction seems to be encoded as a vector sum of activity across different neurons, but there is also some evidence for the former, where activity in a single neuron seems to lead to movement.  One of the cool things about the cortex, however, is that information is encoded flexibly by its networks of neurons.  This means that however information is normally encoded to produce movement, given feedback on whatever task the animal is performing, the network will rearrange itself to produce the desired result.
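
The “vector sum” idea can be illustrated with a toy simulation: give each model neuron a preferred direction and cosine-like tuning, then recover the movement direction by summing the preferred directions weighted by firing rate.  The tuning model and all the numbers below are illustrative assumptions, not measured data.

```python
# Toy illustration of the population ("vector sum") code: each neuron fires
# most for its own preferred direction, and summing preferred directions
# weighted by firing rate approximately recovers the movement direction.
import numpy as np

rng = np.random.default_rng(1)
n_neurons = 100

# Random preferred directions on the unit circle (2-D for simplicity).
pref_angles = rng.uniform(0, 2 * np.pi, n_neurons)
pref_dirs = np.stack([np.cos(pref_angles), np.sin(pref_angles)], axis=1)

def firing_rates(move_dir):
    """Cosine tuning: rate is highest when movement matches the preferred direction."""
    return 10.0 + 8.0 * (pref_dirs @ move_dir)      # baseline + modulation

def population_vector(rates):
    """Weight each preferred direction by how far its rate is above the mean."""
    weights = rates - rates.mean()
    pop_vec = (weights[:, None] * pref_dirs).sum(axis=0)
    return pop_vec / np.linalg.norm(pop_vec)

true_dir = np.array([np.cos(0.7), np.sin(0.7)])     # movement at about 40 degrees
decoded = population_vector(firing_rates(true_dir))
print("true:", true_dir, "decoded:", decoded)       # the two should roughly agree
```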

“Biofeedback,” as it is called, has already been used in a number of different ways to manipulate brain activity.  In one example, children with epilepsy were fitted with caps that recorded large-scale patterns of neural activity, which were tied to a video game.  When the brain wave patterns were in a non-epileptic-like state, the video game would respond positively, such as by moving a race car forward on a track.  When the brain wave patterns were aberrant, such as what happens before an epileptic seizure, the car would stop.  In order to play the video game, the children would learn to shift their brain wave patterns into the more normal state.  This shows how, even without knowing exactly how the brain works, you can use some translation of neural activity into a more conscious, manageable form (such as a video game, or in the case of the present experiment, a robotic arm) as a guide that lets a person or other animal manipulate the firing of neurons in their brain.  This is indeed what happens when we learn any new task in the real world: we throw a basketball at the goal, we see in which direction we miss, and then we compensate in order to do better the next time.  While we normally think of this as something like “muscle memory,” it is in fact our brain rearranging itself in order to perform the desired task.  Modifications that bring us closer to the goal are rewarded, while those that take us away from it are not.  When the goal is reached, the network stops rearranging.  Similarly, the monkey’s goal was to grab food, and with its arms restrained the monkey’s only option was to learn how to manipulate the robotic arm in order to feed itself.  Having already learned something about the robotic arm from using it earlier with a joystick, it was able to adapt to the new interface, a brain interface, to move the robotic arm.  This is in fact exactly what the brain would do were it simply learning to use a new tool.
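
The reward-the-improvement logic described above can be caricatured in a few lines of code: keep a random tweak only if it brings the output closer to the goal, and stop once the goal is reached.  This is a deliberate oversimplification of cortical plasticity, with an invented “network” of three weights standing in for the brain.

```python
# A deliberately simple caricature of the feedback learning described above:
# random tweaks to a small set of weights are kept only when they bring the
# output closer to the goal, and learning stops once the goal is reached.
import numpy as np

rng = np.random.default_rng(2)
goal = np.array([1.0, -0.5, 0.25])      # desired output (e.g., arm at the food)
weights = np.zeros(3)                   # the adjustable "network"

def error(w):
    return np.linalg.norm(w - goal)

for trial in range(10_000):
    if error(weights) < 0.01:           # goal reached: stop rearranging
        break
    tweak = rng.normal(0, 0.05, 3)      # try a small random modification
    if error(weights + tweak) < error(weights):
        weights += tweak                # keep changes that move closer to the goal

print(f"stopped after {trial} trials, weights = {weights.round(2)}")
```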

Research has shown that when primates use tools, the tools themselves develop representations in the brain similar to those of the limbs themselves.  For example, our ability to skillfully use a tool such as a tennis racket comes from the incorporation of the tool into a representation of our own body, as if the racket were just a longer arm with a flat, bouncy surface.  The ability to sense the location and movement of our body is called proprioception, and it is thought that the racket’s representation would be integrated into a proprioceptive map of our body.  People who are skilled at using any particular tool effectively use it like a new limb.  When the monkey is seated with its arms loosely restrained and presented with food accessible only with the robotic arm, it likely makes use of a similar capacity in order to learn to move the arm with only its brain.  Since the neurons recorded by the electrode array are likely different from the ones that were used to move the arm with the joystick, the monkey must learn to do this in a different way, but nonetheless it is able to adapt based upon the feedback of watching the robotic arm as it tries to move it.  This is not to say that such a re-wiring to produce complex movements is necessarily easy, as we can see from even such a simple exercise as trying to write cursive with one’s non-dominant hand.  Still, compared with the complete loss of movement after amputation, this technique seems promising.

One caveat to this technique, however, is that even if an animal can learn to manipulate a prosthetic based upon the mapping of activity in a particular set of recorded neurons in its brain, it has proven very difficult to maintain stable recordings from any set of neurons for an extended period of time.  This is due to problems with electrode movement, damage to brain tissue from implantation of the electrode array, and the buildup of scar tissue around the electrode.  This means that even if one could teach a patient to use a neural interface with a prosthetic limb, they would not necessarily maintain this ability indefinitely, and the ability to control the limb would likely degrade over time.  Once a particular set of recorded neurons was lost, the electrode would likely have to be moved to a new location in order to obtain a clean set of recordings from a new population of neurons.  This is because the activity that the Pittsburgh group records with the electrode array is the tiny burst of electrical current that occurs when a neuron is active, called an action potential.  Even though many neurons are active at roughly the same time, particular neurons can be isolated through something like triangulation of the spatial location of each neuron’s electrical signal across adjacent electrodes, along with an analysis of how similar the shape of that signal is to previously recorded waveforms.  In order to control the different aspects of the robotic limb’s movement, you need many different control signals coming from many neurons, each of which develops specific firing patterns as the animal learns the task.  If a neuron’s spatial relationship to the recording electrode changes, as with movement of the electrode array relative to the neuron’s position, or if the shape of the neuron’s signal changes, for example if the neuron is damaged by the electrode or scarring occludes the electrode contact, then it becomes very difficult to determine whether the neuron recorded today is the same one that was recorded yesterday to control the arm.  Thus the control signals that guide the robotic arm’s movements degrade over time.  Even if a patient were able to learn to control a robotic prosthetic arm using a neural interface, at best they would have to re-learn how to do so day by day and week by week.  Although advances in recording and decoding techniques have made experiments like this possible, it is the reality of problems like these that will keep the development of stable and reliable neural interfaces a challenge for the near future.
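
One way to picture why this matters: recorded units are typically identified by comparing each spike’s waveform against stored templates, so a drifted or damaged unit simply stops matching yesterday’s template.  The sketch below uses synthetic waveforms and a hypothetical correlation threshold; it is an illustration of the problem, not the Pittsburgh group’s actual spike-sorting pipeline.

```python
# A rough sketch of why recording instability degrades control signals: units
# are identified by matching spike waveforms against stored templates, so if
# the waveform changes (electrode drift, scarring), yesterday's template no
# longer matches today's spikes. Waveforms here are synthetic, not real data.
import numpy as np

def match_unit(waveform, templates, threshold=0.95):
    """Return the index of the best-matching template, or None if nothing matches."""
    best_idx, best_corr = None, threshold
    for idx, template in enumerate(templates):
        corr = np.corrcoef(waveform, template)[0, 1]
        if corr > best_corr:
            best_idx, best_corr = idx, corr
    return best_idx

t = np.linspace(0, 1, 32)
unit_a = np.exp(-((t - 0.3) ** 2) / 0.005) - 0.4 * np.exp(-((t - 0.5) ** 2) / 0.01)
templates = [unit_a]                          # templates saved on day 1

same_day = unit_a + np.random.default_rng(3).normal(0, 0.02, t.size)
drifted = 0.5 * np.roll(unit_a, 4)            # smaller, shifted waveform after drift

print("day 1 spike matches unit:", match_unit(same_day, templates))   # 0
print("after drift, match is:", match_unit(drifted, templates))       # likely None
```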
