We perform research on "Neuromorphic Vision", a biologically inspired approach to visual sensing. In Neuromorphic Vision we capture data using silicon retina cameras, which mimic the biological eye, and we compute on this data using spiking neural processors, which mimic how computation is performed in the brain. I work in close collaboration with overseas colleagues who are further developing silicon retinae and spiking neural processors. My own work focuses on integrating these silicon retinae and neural processors into small embedded systems, and on designing the algorithms these neural processors need to run in order to extract information from the output of the silicon retinae.

What is the Silicon Retina?

The first cameras invented were designed to capture photographs, but not video. As technology progressed, people realized you could create the illusion of motion (video) by capturing many photographs (frames) in rapid succession. Even the best modern video cameras still rely on this brute-force approach of taking photographs in rapid sequence. However, when taking photographs at this rate (about 30 per second for a regular camera), almost all of them look exactly the same. Time and power are wasted repeatedly recapturing redundant information. Once we know the sky is blue, we do not need to be reminded 30 times per second, and we know this is not how the biological eye works. It would be far more efficient and useful if the camera could instead inform us when the sky is no longer blue; in other words, when a change has occurred.

This is the approach taken in my research. I rely on "cameras" which mimic biology and do away with frames completely (we call these "silicon retinae"). The underlying principle on which they operate is that they accurately detect when and where changes in the scene occur. If there is no new information in the scene (if nothing is changing), the sensor generates no data; if there is a lot of activity in the scene, the sensor generates more data. This is very different from frame-based cameras, which capture frames at the same constant rate regardless of what (if anything) is happening in the scene.
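To make this concrete, below is a minimal sketch in Python of the kind of data a silicon retina produces. The field names and values are illustrative only (the exact output format differs between sensors): a sparse stream of events, each recording where a change happened, when it happened, and whether the pixel got brighter or darker.

```python
from dataclasses import dataclass

@dataclass
class Event:
    """One output of a change-detecting pixel (field names are illustrative)."""
    x: int          # pixel column where the change was detected
    y: int          # pixel row
    t: float        # timestamp, e.g. in microseconds
    polarity: bool  # True: pixel got brighter; False: pixel got darker

# A static scene produces no events at all. A moving edge produces a
# sparse stream like this, instead of full frames at a fixed rate:
events = [
    Event(x=120, y=64, t=1038.0, polarity=True),
    Event(x=121, y=64, t=1042.5, polarity=False),
]
```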

The image below shows a comparison between Computer Vision (frame-based), Biological Vision, and Neuromorphic Vision (our approach). In Computer Vision (left column), a regular camera is used to capture photographs, which are then processed by a CPU or GPU consisting of a small number of high-power computing cores. In Biological Vision (middle column), data is captured by the retina and transmitted as spikes (digital pulses) to the visual cortex for processing by billions of low-power biological neurons. Although sensor, data, and processing appear as separate stages, the lines between them are actually blurred in biology. The retina is considered part of the brain, and there are multiple layers of neurons performing analog computation within the retina itself. The retina communicates with the visual cortex using spikes, which is the same language neurons in the visual cortex use to communicate with each other. Neuromorphic Vision (right column) is more similar to Biological Vision than to Computer Vision. Each pixel in the silicon retina captures and performs analog computation on the incoming visual signal. The results of this computation are communicated by the sensor using events, similar to biological spikes. Further computation is performed on these events using silicon neurons, which are designed to mimic biological neurons found in the brain. Computer Vision sensors aim to create an accurate image or copy of what is being seen, whereas Biological and Neuromorphic sensors aim to capture information which can be used to efficiently perceive and interpret what is being seen.


The concept behind the sensors is illustrated in the videos below. The first video shows a black bar rotating as time (horizontal axis) progresses. The entire sequence is 400 milliseconds long, but has been slowed down 100 times. The top-left plot shows the position of the bar over time. The top right shows the photographs which would be captured by a regular 24 frame-per-second camera. The bottom left shows actual data recorded using a silicon retina. Blue points indicate when and where pixels are getting brighter, while red points indicate when and where pixels are getting darker. Notice that the data are not constrained to lie inside frames; data can arrive at any time from any pixel as soon as it changes. The bottom right shows a history of the data from both the silicon retina and the frame-based camera superimposed, illustrating how the silicon retina captures data even between frames of the regular camera, and does not re-capture data about the static background.
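Displaying such a stream on a conventional screen requires collapsing a window of events into an image, which is how visualizations like these are produced. Below is a rough sketch of that step, reusing the hypothetical Event record from the earlier sketch; the window boundaries are imposed purely for display and do not exist in the sensor's output.

```python
import numpy as np

def render_events(events, height, width, t_start, t_end):
    """Collapse a time window of events into an RGB image for display.

    The window is imposed purely for visualization; the sensor itself
    never produces frames. Following the color convention above, blue
    marks pixels that got brighter and red marks pixels that got darker.
    """
    image = np.full((height, width, 3), 255, dtype=np.uint8)  # white background
    for ev in events:
        if t_start <= ev.t < t_end:
            image[ev.y, ev.x] = (0, 0, 255) if ev.polarity else (255, 0, 0)
    return image
```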



The second video shows a spinning pen thrown in the air and recorded with a regular camera and a silicon retina side by side. The video contains 1.2 seconds of data and is slowed down 100 times at the peak of the pen's trajectory. The regular camera (left) shows the pen jumping from one position to another between frames. The silicon retina (center) accurately shows the location of the pen at each point in time. Grey areas indicate that no data was received from the sensor (because the background is constant), white dots show where pixels are getting brighter, and black dots show where pixels are getting darker. On the right we plot a time history of the data from the silicon retina, clearly showing the trajectory of the pen.


How do we use the Silicon Retina?

Each time a pixel in the silicon retina detects a change in the scene, it outputs a brief digital pulse. These pulses can be likened to "spikes", which are digital pulses used by biological neurons for communication. The silicon retina therefore speaks a similar "language" to the brain, making it a natural choice for use in retinal prostheses intended to restore sight for the blind. Although retinal prostheses are being researched by my collaborators, my own research focuses on applications of the sensors to artificial systems.

Just as the silicon retina mimics the biological retina, silicon neurons mimic biological neurons found in the brain. In my work I combine the silicon retina with silicon neurons to develop a fully bio-inspired system in which the retina captures visual data and the neurons compute on that data to extract information about the scene. These neurons perform tasks such as recognizing objects, determining where those objects are located, and detecting how both the sensor and objects in the scene are moving. This information is useful for an unmanned ground or aerial vehicle navigating its environment autonomously. In fact, many argue that the primary reason for the biological brain's existence is to control motion.
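Silicon neuron designs vary, but a common starting point is the leaky integrate-and-fire model. The minimal discrete-time sketch below (with illustrative parameters, not those of any particular chip) shows how such a neuron, driven by weighted events from the retina, fires only when enough input arrives close together in time.

```python
class LIFNeuron:
    """Discrete-time leaky integrate-and-fire neuron (illustrative only).

    Real silicon neurons realize dynamics like these directly in analog
    or digital circuits; this software model just sketches the behavior.
    """

    def __init__(self, threshold=1.0, leak=0.95):
        self.v = 0.0                # membrane potential
        self.threshold = threshold  # firing threshold
        self.leak = leak            # fraction of potential kept per step

    def step(self, input_current):
        """Integrate one time step of input; return True if a spike fires."""
        self.v = self.leak * self.v + input_current
        if self.v >= self.threshold:
            self.v = 0.0            # reset after firing
            return True
        return False

# Feeding the neuron a steady stream of weighted retina events: it stays
# silent until enough input accumulates, then fires and resets.
neuron = LIFNeuron()
spike_train = [neuron.step(0.4) for _ in range(10)]
```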

The reason for using the biologically inspired approach is that it promises to be far more efficient than other modern approaches. We know that humans and other animals can sense and react to their environment almost effortlessly, using very little energy. Biology is roughly 100,000 times more power efficient than even the most efficient modern computers. Through developing our artificial vision system, we hope to take a step closer to achieving the impressive performance of biological systems. We demonstrate our systems by using them to aid unmanned aerial and ground vehicles in navigating their environment.