The implementation of the project was a success.
The edge-detection algorithm produced at most two
frames per second without adaptive correction, and one frame per second
with adaptive correction enabled.
This performance was achieved on the Xybernaut
wearable computer, running at 233 MHz with a fast parallel port (for more
information, see the Appendix).
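The exact operator is not named here, but for illustration only, the following is a minimal sketch of a classic Sobel-style edge detector over an 8-bit grayscale frame; the function name, buffer layout, and frame format are assumptions, not the project's actual code.

    #include <math.h>
    #include <string.h>

    /* Illustrative Sobel-style edge detector (a sketch, not the
     * project's actual code). `in` and `out` are width*height bytes
     * of 8-bit grayscale; the one-pixel border is left at zero. */
    void sobel_edges(const unsigned char *in, unsigned char *out,
                     int width, int height)
    {
        /* 3x3 Sobel kernels for horizontal and vertical gradients */
        static const int gx[3][3] = { {-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1} };
        static const int gy[3][3] = { {-1,-2,-1}, { 0, 0, 0}, { 1, 2, 1} };

        memset(out, 0, (size_t)width * height);
        for (int y = 1; y < height - 1; y++) {
            for (int x = 1; x < width - 1; x++) {
                int sx = 0, sy = 0;
                for (int ky = -1; ky <= 1; ky++)
                    for (int kx = -1; kx <= 1; kx++) {
                        int p = in[(y + ky) * width + (x + kx)];
                        sx += gx[ky + 1][kx + 1] * p;
                        sy += gy[ky + 1][kx + 1] * p;
                    }
                /* Gradient magnitude, clamped to the 8-bit range */
                int mag = (int)sqrt((double)(sx * sx + sy * sy));
                out[y * width + x] = (unsigned char)(mag > 255 ? 255 : mag);
            }
        }
    }

The nine multiply-accumulates per pixel in the inner loops dominate the cost, which makes frame rates of this order plausible on a 233 MHz processor.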
As can be seen from the pictures in the Appendix, the
edge detection does enhance the blurred image so that objects can be
discerned. The
partially sighted person will still have blurred vision, but will see
blurred edges rather than an amorphous blob.
The obvious application for this project is to
enhance vision for the partially sighted.
When wearable computing becomes cost-effective enough, and projects such
as this one are refined and improved, it will be possible for
legally blind people to use their remaining vision in a functional
context.
If the system were adapted to other image-processing
algorithms, it could be used for entertainment applications in which modified
video is an asset. It
could also serve academic purposes as a real-time teaching tool for
video and image processing.
Experiments in mediated reality are also possible
with the edge-detecting system. It
would be interesting to see whether object outlines create a focus of
attention that draws the viewer's gaze, much like
the successful experiments in ASCII art.
There are several improvements that could be made to
the design:
· Implement the image capture and edge detection on a
hardware chip, especially an FPGA for quick reprogramming.
· Combine the project with the Linux Media Labs
video capture card, which operates at 30 frames per second (a
capture-loop sketch follows this list).
· Integrate the project with the Linaccess project
to make a complete open-source system for the blind.
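As a rough sketch of the second improvement: with a 30 frames-per-second capture card, the per-frame pipeline reduces to a read-and-process loop. The device node, resolution, and read()-based interface below are assumptions (many Video4Linux drivers of that era allowed frames to be read directly from the device), and the code assumes the driver is configured for 8-bit grayscale frames.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    #define WIDTH  320   /* hypothetical capture resolution */
    #define HEIGHT 240

    /* Edge detector from the earlier sketch */
    void sobel_edges(const unsigned char *in, unsigned char *out,
                     int width, int height);

    int main(void)
    {
        /* Hypothetical device node; the real name depends on the
         * driver for the capture card. */
        int fd = open("/dev/video0", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        static unsigned char frame[WIDTH * HEIGHT];
        static unsigned char edges[WIDTH * HEIGHT];

        for (;;) {
            /* Block until the driver delivers the next frame; at
             * 30 fps this loop wakes roughly every 33 ms. */
            ssize_t n = read(fd, frame, sizeof frame);
            if (n != (ssize_t)sizeof frame) break;

            sobel_edges(frame, edges, WIDTH, HEIGHT);
            /* ... hand `edges` to the display here ... */
        }

        close(fd);
        return 0;
    }

At 30 frames per second the edge detection itself becomes the bottleneck, which is one reason the FPGA implementation listed above would pair well with the faster capture card.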
The other applications discussed above are also
candidates for further research and system improvement.