
Thursday, February 27, 2014

BCI - Brain Computer Interface Part 2

BCI Input and Output

One of the biggest challenges facing brain-computer interface researchers today is the basic mechanics of the interface itself. The easiest and least invasive method is a set of electrodes -- a device known as an electroencephalograph (EEG) -- attached to the scalp. The electrodes can read brain signals. However, the skull blocks a lot of the electrical signal, and it distorts what does get through.
To get a higher-resolution signal, scientists can implant electrodes directly into the gray matter of the brain itself, or on the surface of the brain, beneath the skull. This allows for much more direct reception of electric signals and allows electrode placement in the specific area of the brain where the appropriate signals are generated. This approach has many problems, however. It requires invasive surgery to implant the electrodes, and devices left in the brain long-term tend to cause the formation of scar tissue in the gray matter. This scar tissue ultimately blocks signals.
Regardless of the location of the electrodes, the basic mechanism is the same: The electrodes measure minute differences in voltage between neurons. The signal is then amplified and filtered. In current BCI systems, it is then interpreted by a computer program, although you might be familiar with older analog electroencephalographs, which displayed the signals via pens that automatically wrote out the patterns on a continuous sheet of paper.
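As a rough illustration of that "amplify, filter, interpret" chain, here is a minimal Python sketch that band-pass filters a raw EEG trace before any interpretation step. The sampling rate, frequency band and the synthetic signal are assumptions chosen for illustration, not values from any particular BCI system.

```python
# Minimal sketch: filtering a raw EEG trace before interpretation.
# The 256 Hz sampling rate and 8-30 Hz band are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 256.0                       # assumed sampling rate in Hz
t = np.arange(0, 2.0, 1.0 / fs)  # two seconds of samples

# Synthetic "EEG": a 10 Hz rhythm buried in noise, standing in for real electrode data.
raw = 10e-6 * np.sin(2 * np.pi * 10 * t) + 5e-6 * np.random.randn(t.size)

# Band-pass filter (8-30 Hz), a common first step before a program interprets the signal.
b, a = butter(4, [8 / (fs / 2), 30 / (fs / 2)], btype="band")
filtered = filtfilt(b, a, raw)

print(filtered[:5])  # filtered voltages, ready for feature extraction / interpretation
```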
In the case of a sensory input BCI, the function happens in reverse. A computer converts a signal, such as one from a video camera, into the voltages necessary to trigger neurons. The signals are sent to an implant in the proper area of the brain, and if everything works correctly, the neurons fire and the subject receives a visual image corresponding to what the camera sees.
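To make the "reverse" direction a little more concrete, here is a purely hypothetical sketch of turning a camera frame into per-electrode stimulation levels. The grid size, voltage range and simple linear mapping are assumptions for illustration only; real visual prostheses are far more sophisticated than this.

```python
# Hypothetical sketch of the sensory-input direction: turning a camera frame
# into per-electrode stimulation levels. The grid size, voltage range and
# linear brightness mapping are illustrative assumptions, not a real protocol.
import numpy as np

def frame_to_stimulation(frame: np.ndarray, grid=(10, 10), max_voltage=1.0) -> np.ndarray:
    """Downsample a grayscale camera frame to a small electrode grid and
    scale brightness (0-255) to a stimulation level (0-max_voltage)."""
    h, w = frame.shape
    gh, gw = grid
    # Average brightness over the patch of pixels that each electrode covers.
    patches = frame[: h - h % gh, : w - w % gw].reshape(gh, h // gh, gw, w // gw)
    brightness = patches.mean(axis=(1, 3))
    return brightness / 255.0 * max_voltage

# Example: a fake 480x640 grayscale frame standing in for the camera feed.
frame = np.random.randint(0, 256, size=(480, 640)).astype(float)
levels = frame_to_stimulation(frame)
print(levels.shape)  # (10, 10) stimulation levels, one per hypothetical electrode
```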
Another way to measure brain activity is with magnetic resonance imaging (MRI). An MRI machine is a massive, complicated device. It produces very high-resolution images of brain activity, but it can't be used as part of a permanent or semipermanent BCI. Researchers use it to get benchmarks for certain brain functions or to map where in the brain electrodes should be placed to measure a specific function. For example, if researchers are attempting to implant electrodes that will allow someone to control a robotic arm with their thoughts, they might first put the subject into an MRI machine and ask him or her to think about moving an actual arm. The MRI will show which area of the brain is active during arm movement, giving researchers a clearer target for electrode placement.
So, what are the real-life uses of a BCI? Read on to find out the possibilities.
Part 1 | Part 3

BCI - Brain Computer Interface Part 3


BCI Applications

One of the most exciting areas of BCI research is the development of devices that can be controlled by thoughts. Some of the applications of this technology may seem frivolous, such as the ability to control a video game by thought. If you think a remote control is convenient, imagine changing channels with your mind.
However, there's a bigger picture -- devices that would allow severely disabled people to function independently. For a quadriplegic, something as basic as controlling a computer cursor via mental commands would represent a revolutionary improvement in quality of life. But how do we turn those tiny voltage measurements into the movement of a robotic arm?
Early research used monkeys with implanted electrodes. The monkeys used a joystick to control a robotic arm while scientists measured the signals coming from the electrodes. Eventually, they changed the controls so that the robotic arm was being controlled only by the signals coming from the electrodes, not the joystick.
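The core idea behind those experiments can be sketched in a few lines: while the joystick still drives the arm, fit a mapping from recorded neural activity to joystick movement, then let that mapping take over. The sketch below uses random placeholder data and a simple least-squares fit purely as an illustration of the principle, not as a description of the actual experiments.

```python
# Sketch of the decoder-training idea: learn a linear map from neural activity
# to joystick velocity, then use the map alone. All data are random placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Pretend recordings: 1000 time steps of firing rates from 50 electrodes,
# paired with the 2-D joystick velocity the monkey produced at each step.
firing_rates = rng.normal(size=(1000, 50))
true_weights = rng.normal(size=(50, 2))
joystick_velocity = firing_rates @ true_weights + 0.1 * rng.normal(size=(1000, 2))

# "Training" phase: learn weights that map neural activity to movement.
weights, *_ = np.linalg.lstsq(firing_rates, joystick_velocity, rcond=None)

# "Brain control" phase: the robotic arm now follows the decoded velocity,
# ignoring the joystick entirely.
new_activity = rng.normal(size=(1, 50))
decoded_velocity = new_activity @ weights
print(decoded_velocity)
```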
A more difficult task is interpreting the brain signals for movement in someone who can't physically move their own arm. With a task like that, the subject must "train" to use the device. With an EEG or implant in place, the subject would visualize closing his or her right hand. After many trials, the software can learn the signals associated with the thought of hand-closing. Software connected to a robotic hand is programmed to receive the "close hand" signal and interpret it to mean that the robotic hand should close. At that point, when the subject thinks about closing the hand, the signals are sent and the robotic hand closes.
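In software terms, that training loop amounts to labeling recorded feature vectors ("imagining closing the hand" versus "resting"), fitting a classifier, and wiring its prediction to a command. The sketch below uses synthetic features, an off-the-shelf classifier and a made-up command name, all of which are assumptions for illustration rather than any specific BCI system's design.

```python
# Sketch of the "training" loop described above: classify EEG feature vectors
# and map the "close hand" prediction onto a robotic-hand command.
# The data, classifier choice and command name are illustrative assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)

# Placeholder features (e.g. band power per channel) for 200 training trials.
rest = rng.normal(loc=0.0, size=(100, 16))
imagine_close = rng.normal(loc=0.8, size=(100, 16))
X = np.vstack([rest, imagine_close])
y = np.array([0] * 100 + [1] * 100)   # 0 = rest, 1 = "close hand" thought

clf = LinearDiscriminantAnalysis().fit(X, y)

def on_new_trial(features):
    # If the classifier sees the "close hand" pattern, issue the command.
    if clf.predict(features.reshape(1, -1))[0] == 1:
        return "CLOSE_ROBOTIC_HAND"   # hypothetical command name
    return "DO_NOTHING"

print(on_new_trial(rng.normal(loc=0.8, size=16)))
```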
A similar method is used to manipulate a computer cursor, with the subject thinking about forward, left, right and back movements of the cursor. With enough practice, users can gain enough control over a cursor to draw a circle, access computer programs and control a TV [source: Ars Technica]. It could theoretically be expanded to allow users to "type" with their thoughts.
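Once the four "direction thoughts" can be decoded, moving the cursor is just a matter of translating each decoded class into a small change in screen position. The labels and step size in this tiny sketch are assumptions for illustration.

```python
# Tiny sketch of turning four decoded "direction thoughts" into cursor motion.
# The class labels and step size are assumptions for illustration.
CURSOR_STEPS = {
    "forward": (0, -10),   # screen coordinates: up
    "back":    (0, 10),
    "left":    (-10, 0),
    "right":   (10, 0),
}

def move_cursor(position, decoded_class):
    dx, dy = CURSOR_STEPS.get(decoded_class, (0, 0))
    return position[0] + dx, position[1] + dy

pos = (400, 300)
for thought in ["left", "left", "forward"]:   # pretend decoder output
    pos = move_cursor(pos, thought)
print(pos)  # (380, 290)
```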
Once the basic mechanism of converting thoughts to computerized or robotic action is perfected, the potential uses for the technology are almost limitless. Instead of a robotic hand, disabled users could have robotic braces attached to their own limbs, allowing them to move and directly interact with the environment. This could even be accomplished without the "robotic" part of the device. Signals could be sent to the appropriate motor control nerves in the hands, bypassing a damaged section of the spinal cord and allowing actual movement of the subject's own hands.
In the next part, we'll learn about cochlear implants and artificial eye development.
Part 1 | Part 2

Chip planted in player's shirt - Future Football

Does Adidas hold the key to football's future? (© Getty Images)
‘Soccer’ has come a long way in America since the birth of Major League Soccer in 1993. Formerly a dumping ground for old pros trying to make a quick buck before retiring, the American competition can at least lay some claim to being a competitive professional league.

Now, it seems, it is the Americans leading the way in at least one aspect of the game: technology.

While the English game wrestles with its own technological conundrum – Clint Hill’s disallowed header last month in Bolton’s 2-1 victory over QPR shunted the issue of goal-line technology to the forefront of minds once more – the American game is taking another two-footed leap into the future.

Never ones to be shy about embracing sport as an entertainment format, the MLS will soon play host to the first ever ‘smart soccer’ game – and it is something that could conceivably be used in the Premier League before long.

Adidas, pioneers of the technology, claim managers in the MLS All-Star game will be able to access real-time information through a tablet computer in the dugout, via chips embedded in the players’ shirts.

They bill it as the “next step in player performance analysis technology” – coaches will be able to view data such as heart rate, player position, power output, speed, distance covered, intensity of play, acceleration and GPS heat mapping on tablet devices on the bench.
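To give a sense of what "real-time information" from a shirt chip might look like on the receiving end, here is a hypothetical sketch of a single per-player telemetry record. The field names, units and JSON format are assumptions for illustration; they are not Adidas's actual data format or protocol.

```python
# Hypothetical sketch of a per-player telemetry record streamed from a shirt
# chip to a tablet on the bench. Field names, units and the JSON format are
# assumptions, not Adidas's actual protocol.
import json
from dataclasses import dataclass, asdict

@dataclass
class PlayerSample:
    player_id: str
    timestamp_s: float        # seconds since kick-off
    heart_rate_bpm: int
    speed_kmh: float
    distance_covered_m: float
    acceleration_ms2: float
    latitude: float           # GPS position, used for heat mapping
    longitude: float

sample = PlayerSample(
    player_id="player_10",
    timestamp_s=1834.2,
    heart_rate_bpm=172,
    speed_kmh=24.5,
    distance_covered_m=7342.0,
    acceleration_ms2=2.1,
    latitude=34.0132,
    longitude=-118.2879,
)

# One record, as it might be sent to the coaching tablet.
print(json.dumps(asdict(sample)))
```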

So what are the implications of this new technology?

BCI - Brain Computer Interface Part 1


As the power of modern computers grows alongside our understanding of the human brain, we move ever closer to turning some pretty spectacular science fiction into reality. Imagine transmitting signals directly to someone's brain that would allow them to see, hear or feel specific sensory inputs. Consider the potential to manipulate computers or machinery with nothing more than a thought. It isn't about convenience -- for severely disabled people, the development of a brain-computer interface (BCI) could be the most important technological breakthrough in decades. In this article, we'll learn all about how BCIs work, their limitations and where they could be headed in the future.

 

The Electric Brain

The reason a BCI works at all is because of the way our brains function. Our brains are filled with neurons, individual nerve cells connected to one another by dendrites and axons. Every time we think, move, feel or remember something, our neurons are at work. That work is carried out by small electric signals that zip from neuron to neuron as fast as 250 mph [source: Walker]. The signals are generated by differences in electric potential carried by ions on the membrane of each neuron.
Although the paths the signals take are insulated by something called myelin, some of the electric signal escapes. Scientists can detect those signals, interpret what they mean and use them to direct a device of some kind. It can also work the other way around. For example, researchers could figure out what signals are sent to the brain by the optic nerve when someone sees the color red. They could rig a camera that would send those exact signals into someone's brain whenever the camera saw red, allowing a blind person to "see" without eyes.
In Part 2, we'll learn about the basics of the interface itself.
Part 2 | Part 3