Nanocamera Operates at the Speed of Light but Only Costs $500

A new nano-camera operates at "the speed of light," and it only costs $500.

The camera has a wide range of possible applications, including medical imaging, collision avoidance in vehicles, and gesture-recognition gaming consoles, a Massachusetts Institute of Technology news release reported.

The camera works on a principle similar to Microsoft's new Kinect: it fires light signals at a scene and measures the reflections that return to its sensor. The new device is unique in that it is particularly resistant to environmental interference, such as fog and rain.

"Using the current state of the art, such as the new Kinect, you cannot capture translucent objects in 3-D," Achuta Kadambi, who worked on the project, said. "That is because the light that bounces off the transparent object and the background smear into one pixel on the camera. Using our technique you can generate 3-D models of translucent or near-transparent objects."

In a traditional "time of flight" camera, a "light signal is fired at a scene, where it bounces off an object and returns to strike the pixel. Since the speed of light is known, it is then simple for the camera to calculate the distance the signal has travelled and therefore the depth of the object it has been reflected from," the news release reported.
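The time-of-flight principle described in the release can be sketched in a few lines. This is an illustration only, not the camera's actual firmware; the pulse timing value is a hypothetical example.

```python
# Speed of light in a vacuum, in metres per second.
C = 299_792_458.0

def depth_from_round_trip(t_seconds):
    """Depth is half the round-trip distance travelled by the light pulse."""
    return C * t_seconds / 2.0

# A pulse returning after roughly 20 nanoseconds implies an object
# about three metres from the camera.
print(round(depth_from_round_trip(20e-9), 3))  # → 2.998
```

Because the timescales involved are nanoseconds, the sensor must resolve extremely short intervals, which is why these devices are described as operating at "the speed of light."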

The problem is that such a device is highly susceptible to semitransparent surfaces and changing environmental conditions, both of which can distort its measurements.

The new device instead uses an encoding technique common in telecommunications to measure the distance the signal has traveled.

"We use a new method that allows us to encode information in time," researcher Ramesh Raskar said. "So when the data comes back, we can do calculations that are very common in the telecommunications world, to estimate different distances from the single signal."
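The idea of recovering multiple distances from one smeared return can be illustrated with a standard telecom trick: transmit a known code and cross-correlate the received signal against it. This is a sketch only; the Barker-13 code, sample positions, and echo amplitudes below are assumptions for illustration, not the researchers' actual modulation scheme.

```python
import numpy as np

# Barker-13 sequence: a classic code whose autocorrelation has a sharp
# peak and sidelobes of magnitude at most 1, making echoes easy to separate.
code = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

# Two reflections smeared into one received signal: a translucent front
# surface arriving at sample 40 and the background behind it at sample 47.
received = np.zeros(120)
received[40:53] += 1.0 * code   # reflection from the front surface
received[47:60] += 0.6 * code   # weaker reflection from the background

# Cross-correlating with the known code produces distinct peaks at the
# two arrival times, even though the echoes overlap in the raw signal.
corr = np.correlate(received, code, mode="valid")
arrivals = sorted(int(i) for i in np.argsort(corr)[-2:])
print(arrivals)  # → [40, 47]
```

Each recovered arrival time maps back to a depth via the speed of light, which is how one pixel can report both a transparent object and the background behind it.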

Ayush Bhandari, a graduate student in the Media Lab, compared the approach to removing blur from a photograph. "People with shaky hands tend to take blurry photographs with their cellphones because several shifted versions of the scene smear together. By placing some assumptions on the model - for example that much of this blurring was caused by a jittery hand - the image can be unsmeared to produce a sharper picture," he said.
