Most people today have reasonably good
vision. They use their eyes
to see the world around them and so interact with their surroundings.
However, partially sighted people who are considered legally
blind have extremely blurry vision that, at best, lets them distinguish
areas of light and darkness.
The visual enhancement system presented here uses
wearable computing and image processing to enable partially sighted people
to use their vision effectively. Live
video is captured using a digital camera and processed with a computer
that is worn on the body. The
resulting video is displayed directly to the user's eye (with the EyeTap
invention), creating a 'mediated reality' that can enhance vision.
The raw video is analyzed mathematically to perform
an image-processing technique called edge detection.
This process locates high-contrast areas in a digital picture and
outputs a map of these areas - the outlines of objects in the scene - in
black and white.
The edge detection algorithm operates by
differentiating the input image in the x and y directions using a
2-dimensional convolution kernel, then combining the resulting images
pixel by pixel with a Pythagorean operator.
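The steps above can be sketched as follows. This is a minimal illustration, not the project's actual implementation: it assumes a grayscale image as a NumPy array and uses the Prewitt kernels (one common choice of derivative kernel) with the Pythagorean combination sqrt(gx^2 + gy^2).

```python
import numpy as np

# Prewitt-form 3x3 derivative kernels (an assumption for illustration);
# KX differentiates in x, and its transpose differentiates in y.
KX = np.array([[-1, 0, 1],
               [-1, 0, 1],
               [-1, 0, 1]], dtype=float)
KY = KX.T

def convolve2d(image, kernel):
    """Valid-mode 2-D convolution of a grayscale image with a 3x3 kernel."""
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    k = np.flipud(np.fliplr(kernel))  # true convolution flips the kernel
    for y in range(h - 2):
        for x in range(w - 2):
            out[y, x] = np.sum(image[y:y + 3, x:x + 3] * k)
    return out

def edge_magnitude(image):
    """Differentiate in x and y, then combine the two gradient images
    pixel by pixel with the Pythagorean operator sqrt(gx^2 + gy^2)."""
    gx = convolve2d(image, KX)
    gy = convolve2d(image, KY)
    return np.sqrt(gx ** 2 + gy ** 2)
```

Applied to an image containing a vertical step edge, the magnitude is large where the 3x3 window straddles the step and zero in flat regions, which is exactly the black-and-white outline map described above.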
Several variants of edge detection
exist. The one chosen for this
project is the Prewitt operator, which offers speed and texture
advantages over the Sobel and Canny operators.
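For comparison, the Prewitt and Sobel horizontal-derivative kernels differ only in the centre-row weighting (shown here for illustration; the transpose of each gives the vertical derivative). Prewitt's uniform weights need only additions and subtractions, which is one plausible basis for the speed advantage mentioned above.

```python
import numpy as np

# Prewitt: uniform column weights, so the convolution reduces to adds/subtracts.
prewitt_x = np.array([[-1, 0, 1],
                      [-1, 0, 1],
                      [-1, 0, 1]])

# Sobel: the centre row is weighted by 2, giving extra smoothing at the cost
# of an extra multiply (or shift) per pixel.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])
```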
The algorithm was further optimized through the use of lookup
tables, integerization, and adaptive dynamic range correction.
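A hedged sketch of how two of these optimizations might look (the project's actual implementation details are not given in the text): the per-pixel Pythagorean square root is replaced by an integer lookup table indexed by the absolute gradient pair, and the output range is rescaled per frame as a simple stand-in for adaptive dynamic range correction.

```python
import math

# For 8-bit input, a 3x3 Prewitt-style gradient is bounded by 3 * 255 = 765
# in absolute value (an assumption for this sketch).
MAX_G = 765

# Lookup table of integer magnitudes, indexed by (|gx|, |gy|): the sqrt is
# computed once per table entry instead of once per pixel per frame.
MAG_LUT = [[int(math.sqrt(gx * gx + gy * gy)) for gy in range(MAX_G + 1)]
           for gx in range(MAX_G + 1)]

def magnitude(gx, gy):
    """Integer gradient magnitude via table lookup instead of sqrt."""
    return MAG_LUT[abs(gx)][abs(gy)]

def correct_range(mag_rows):
    """Rescale magnitudes to 0-255 based on the frame's peak value -- a
    simple stand-in for the adaptive dynamic range correction mentioned
    in the text (whose exact method is not specified)."""
    peak = max(max(row) for row in mag_rows) or 1
    return [[m * 255 // peak for m in row] for row in mag_rows]
```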
The success of this project will have important
implications for partially sighted people.
Much work remains to make the system practical for the
visually impaired community, but once that work is done, partially
sighted people will be able to use their vision more effectively and so
lead happier lives.