Wednesday, May 7, 2014

Facial Recognition (Part II): How Does it Work?

Detection

An image can be acquired by digitally scanning an existing photograph (2D) or by using a video camera to capture a live picture of a subject (3D).
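To make the detection step concrete, here is a minimal sketch using OpenCV's stock Haar-cascade face detector in Python. The post doesn't name any particular detector, so OpenCV, the bundled cascade file and the "subject.jpg" filename are all stand-ins for illustration.

```python
# Illustrative only: the post names no detector, so this sketch uses
# OpenCV's bundled Haar-cascade face detector as a stand-in.
import cv2

def detect_faces(image_path):
    """Return bounding boxes (x, y, w, h) for faces found in an image."""
    image = cv2.imread(image_path)          # e.g. a scanned photograph (2D)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    # scaleFactor and minNeighbors trade detection speed against accuracy
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

print(detect_faces("subject.jpg"))          # hypothetical input file
```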

Alignment

Once it detects a face, the system determines the head's position, size and pose. With 3D, a subject can be recognized with the head turned up to 90 degrees away from the camera, while with 2D the head must be turned at least 35 degrees toward the camera.
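One small, concrete piece of alignment can be sketched with basic trigonometry: given the centers of the two eyes, the in-plane rotation (roll) of the head is simply the angle of the line between them. The eye coordinates below are hypothetical example values.

```python
# A minimal alignment sketch: estimate in-plane head rotation (roll)
# from two eye centers. Coordinates are hypothetical example values.
import math

def roll_angle(left_eye, right_eye):
    """Angle (degrees) to rotate the image so the eyes sit level."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

print(roll_angle((120, 160), (200, 172)))  # ~8.5 degrees of head tilt
```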

Measurement

The system then measures the curves of the face on a sub-millimeter (micrometer) scale and creates a template.
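The post doesn't say which curves are measured, but a template can be pictured as a set of distances between facial landmarks. The sketch below builds such a toy template from a few hypothetical 3D points; real systems measure far more points at far finer resolution.

```python
# A toy "template": pairwise distances between hypothetical 3D landmarks.
import itertools
import math

landmarks = {                      # hypothetical (x, y, z) positions in mm
    "left_eye_outer": (30.0, 40.0, 10.0),
    "left_eye_inner": (45.0, 40.0, 12.0),
    "nose_tip":       (55.0, 60.0, 25.0),
}

def build_template(points):
    """Distances between every pair of landmarks form the template."""
    return {
        (a, b): math.dist(points[a], points[b])
        for a, b in itertools.combinations(points, 2)
    }

print(build_template(landmarks))
```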

Representation

The system translates the template into a unique code: a set of numbers that represents the features of the subject's face.
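Continuing the toy template above, "translating the template into a code" can be pictured as flattening the measurements into a fixed-order list of numbers, so that any two faces can be compared arithmetically. This encoding is purely illustrative, not any vendor's actual scheme.

```python
# Illustrative encoding: flatten a measurement template into a fixed-order
# vector of numbers, the "code" that stands in for the subject's face.
def encode(template):
    """Sort by key so every face is coded in the same feature order."""
    return [round(template[k], 2) for k in sorted(template)]

template = {("eye_inner", "eye_outer"): 15.13,
            ("eye_inner", "nose_tip"): 25.32,
            ("eye_outer", "nose_tip"): 32.65}
print(encode(template))  # [15.13, 25.32, 32.65]
```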

Matching

If the image is 3D and the database contains 3D images, matching takes place without any changes to the image. Databases that still hold only 2D images, however, pose a challenge: a live, moving 3D subject must be compared against a flat, static picture. New technology addresses this. When a 3D image is taken, a few key points (usually three) are identified; for example, the outside of the eye, the inside of the eye and the tip of the nose are pulled out and measured. Once those measurements are in place, an algorithm (a step-by-step procedure) is applied to convert the image to 2D. After conversion, the software compares the image with the 2D images in the database to find a potential match.
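The 3D-to-2D conversion described above can be sketched as a simple pinhole projection: each measured 3D point is mapped onto a flat image plane, and the projected points are then compared against a stored 2D record. The focal length, landmark values and scoring rule below are all illustrative assumptions, not the actual proprietary algorithm.

```python
# Hedged sketch of 3D-to-2D matching: project hypothetical 3D landmarks
# onto the image plane, then compare against a stored 2D template.
import math

def project(points_3d, focal=500.0):
    """Simple pinhole projection: (x, y, z) -> (f*x/z, f*y/z)."""
    return [(focal * x / z, focal * y / z) for x, y, z in points_3d]

def match_score(points_2d, stored_2d):
    """Mean landmark distance; smaller means a closer match."""
    return sum(math.dist(p, q) for p, q in zip(points_2d, stored_2d)) / len(points_2d)

live_3d = [(30.0, 40.0, 500.0), (45.0, 40.0, 500.0), (55.0, 60.0, 500.0)]
stored = [(30.5, 40.2), (44.8, 39.9), (55.1, 60.3)]
print(match_score(project(live_3d), stored))
```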

Verification or Identification

In verification, an image is matched to only one image in the database (1:1). For example, an image of a subject may be matched to his record in the Department of Motor Vehicles database to verify that the subject is who he says he is. In identification, the image is compared to all images in the database, producing a score for each potential match (1:N). In this case, an image might be compared to a database of mug shots to determine who the subject is.
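The 1:1 versus 1:N distinction maps naturally onto two small functions, sketched below with a toy similarity metric and an arbitrary acceptance threshold; real systems use far more sophisticated scoring.

```python
# Verification (1:1) vs. identification (1:N), using a placeholder
# similarity measure and an arbitrary acceptance threshold.
import math

def similarity(code_a, code_b):
    """Inverse of Euclidean distance between two face codes (toy metric)."""
    return 1.0 / (1.0 + math.dist(code_a, code_b))

def verify(probe, claimed, threshold=0.8):
    """1:1 -- does the probe match the single claimed record?"""
    return similarity(probe, claimed) >= threshold

def identify(probe, database):
    """1:N -- score the probe against every record, best first."""
    scores = {name: similarity(probe, code) for name, code in database.items()}
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

db = {"mugshot_017": [15.1, 25.3, 32.7], "mugshot_042": [14.2, 26.9, 30.1]}
probe = [15.13, 25.32, 32.65]
print(verify(probe, db["mugshot_017"]))   # DMV-style check: True/False
print(identify(probe, db))                # ranked candidate list
```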

NEW: Did you know that Facebook now uses facial recognition technology?

Facial Recognition (Part I) - Now You See Me!

Facial recognition data points: 'While facial recognition algorithms may be neutral themselves, the databases they are tied to are anything but.'
This summer, Facebook will present a paper at a computer vision conference revealing how it has created a tool almost as accurate as the human brain when it comes to saying whether two photographs show the same person – regardless of changes in lighting and camera angles. A human being will get the answer correct 97.53% of the time; Facebook's new technology scores an impressive 97.25%. "We closely approach human performance," says Yaniv Taigman, a member of its AI team.

Thursday, February 27, 2014

BCI - Brain Computer Interface Part 2

BCI Input and Output

One of the biggest challenges facing brain-computer interface researchers today is the basic mechanics of the interface itself. The easiest and least invasive method is a set of electrodes -- a device known as an electroencephalograph (EEG) -- attached to the scalp. The electrodes can read brain signals. However, the skull blocks a lot of the electrical signal, and it distorts what does get through.
To get a higher-resolution signal, scientists can implant electrodes directly into the gray matter of the brain itself, or on the surface of the brain, beneath the skull. This allows for much more direct reception of electric signals and allows electrode placement in the specific area of the brain where the appropriate signals are generated. This approach has many problems, however. It requires invasive surgery to implant the electrodes, and devices left in the brain long-term tend to cause the formation of scar tissue in the gray matter. This scar tissue ultimately blocks signals.
Regardless of the location of the electrodes, the basic mechanism is the same: The electrodes measure minute differences in the voltage between neurons. The signal is then amplified and filtered. In current BCI systems, it is then interpreted by a computer program, although you might be familiar with older analogue encephalographs, which displayed the signals via pens that automatically wrote out the patterns on a continuous sheet of paper.
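The "amplified and filtered" stage can be illustrated with a standard digital band-pass filter. The passband chosen below (8-12 Hz, the alpha band), the sample rate and the synthetic signal are all assumptions made for the sake of a runnable example.

```python
# Sketch of the filtering stage: band-pass a noisy synthetic "EEG" trace
# to isolate the 8-12 Hz alpha band. All parameters are illustrative.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250.0                                   # sample rate (Hz), typical for EEG
t = np.arange(0, 2.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # 10 Hz + noise

# 4th-order Butterworth band-pass over the alpha band (8-12 Hz)
b, a = butter(4, [8.0, 12.0], btype="bandpass", fs=fs)
alpha = filtfilt(b, a, signal)               # zero-phase filtering

print(alpha[:5])
```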
In the case of a sensory input BCI, the function happens in reverse. A computer converts a signal, such as one from a video camera, into the voltages necessary to trigger neurons. The signals are sent to an implant in the proper area of the brain, and if everything works correctly, the neurons fire and the subject receives a visual image corresponding to what the camera sees.
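As a loose illustration of that reverse direction, the sketch below downsamples a camera frame onto a small hypothetical electrode grid and maps pixel brightness to stimulation amplitudes. The grid size and current range are invented for the example and bear no relation to any real implant.

```python
# Loose sketch of a sensory-input BCI: map a camera frame onto a small
# hypothetical electrode grid. Grid size and current range are invented.
import numpy as np

def frame_to_stimulation(frame, grid=(10, 10), max_uamps=20.0):
    """Downsample a grayscale frame and scale brightness to currents."""
    h, w = frame.shape
    gh, gw = grid
    # Average-pool the image down to one value per electrode
    pooled = frame[:h - h % gh, :w - w % gw].reshape(
        gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    return pooled / 255.0 * max_uamps     # brightness -> microamps

frame = np.random.randint(0, 256, (120, 160)).astype(float)
print(frame_to_stimulation(frame).shape)  # (10, 10) electrode currents
```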
Another way to measure brain activity is with magnetic resonance imaging (MRI). An MRI machine is a massive, complicated device. It produces very high-resolution images of brain activity, but it can't be used as part of a permanent or semipermanent BCI. Researchers use it to get benchmarks for certain brain functions or to map where in the brain electrodes should be placed to measure a specific function. For example, if researchers are attempting to implant electrodes that will allow someone to control a robotic arm with their thoughts, they might first put the subject into an MRI machine and ask the subject to think about moving their actual arm. The MRI will show which area of the brain is active during arm movement, giving the researchers a clearer target for electrode placement.
So, what are the real-life uses of a BCI? Read on to find out the possibilities.
Part 1 | Part 3