How eye imaging technologies might improve the vision of robots and vehicles

Robots do not have retinas, but the optical coherence tomography (OCT) equipment commonly found in ophthalmologists' clinics may hold the key to letting them perceive and interact with their surroundings more naturally and safely.

Light Detection and Ranging, or LiDAR, is an imaging technology that several robotics companies are incorporating into their sensor packages. The technology, which is currently attracting a lot of attention and investment from self-driving car manufacturers, works much like radar: instead of sending out broad radio waves and listening for reflections, it sends out short pulses of laser light.
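The pulsed, radar-like principle described above can be sketched in a few lines: distance follows from a pulse's round-trip travel time. This is a minimal illustration of the general idea, not any particular vendor's system, and the numbers are purely illustrative.

```python
# Pulsed time-of-flight ranging: a short laser pulse travels to the target
# and back, so the one-way distance is half the round trip at light speed.
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_time_s: float) -> float:
    """Distance to a target from a pulse's measured round-trip time."""
    return C * round_trip_time_s / 2.0

# A pulse returning after ~66.7 nanoseconds corresponds to a target
# roughly 10 meters away.
print(tof_distance(66.7e-9))
```

The tiny times involved are part of why the technique is hard: resolving millimeters requires timing electronics accurate to a few picoseconds.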

Traditional time-of-flight LiDAR has several flaws that make it inappropriate for many 3D vision applications. Other LiDAR systems or ambient sunlight can easily overwhelm the detector since it detects feeble reflected light signals. It also has a low depth resolution and can take an inordinately long time to scan a large area, such as a highway or factory floor. To address these issues, researchers are turning to frequency-modulated continuous-wave (FMCW) LiDAR.

FMCW LiDAR is based on optical coherence tomography (OCT), which biomedical engineers developed in the early 1990s, explains Ruobing Qian, a Ph.D. student in the lab of Joseph Izatt, the Michael J. Fitzpatrick Distinguished Professor of Biomedical Engineering. “However, no one expected self-driving cars or robots to exist 30 years ago, so the focus was on tissue imaging. We’ll have to give up some of its high-resolution capabilities to get the distance and speed we need for these new developing domains.”

In an article published in Nature Communications on March 29th, researchers from Duke University’s Department of Electrical and Computer Engineering show how a few OCT methods can increase previous FMCW LiDAR data throughput by 25 times while maintaining submillimeter depth accuracy.

OCT is the optical equivalent of ultrasound, which works by sending sound waves into objects and measuring how long they take to return. Because light travels far too quickly to time its return directly, OCT instruments instead measure how much the phase of the returning light waves has shifted compared with identical waves that have traveled the same distance but never struck another object.
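The interferometric idea above can be made concrete with a toy calculation: a measured phase shift of phi radians at wavelength lambda implies an extra round-trip path of phi * lambda / (2 * pi), i.e. a depth difference of phi * lambda / (4 * pi). This is my own illustrative sketch of the principle, not the instrument's actual signal processing.

```python
import math

def depth_from_phase(phase_shift_rad: float, wavelength_m: float) -> float:
    """Depth difference implied by a phase shift between the returning
    wave and a reference wave (round-trip path divided by two)."""
    return phase_shift_rad * wavelength_m / (4.0 * math.pi)

# A quarter-cycle (pi/2) phase shift at a 1310 nm wavelength implies a
# depth difference of about 164 nanometers.
print(depth_from_phase(math.pi / 2, 1.31e-6))
```

Note that phase alone only pins down depth modulo half a wavelength; real OCT systems resolve this ambiguity by sweeping the source over many wavelengths, which is exactly the trick FMCW LiDAR inherits.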

FMCW LiDAR uses a similar method with a few differences. The laser beam it sends out sweeps continuously between different frequencies. When the detector gathers light to measure its reflection time, it can distinguish this specific frequency pattern from any other light source, enabling high-speed operation under a wide range of lighting conditions. It then measures any phase change relative to unobstructed beams, which yields significantly more precise distance measurements than conventional LiDAR systems.
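For a linearly chirped laser, the frequency-sweeping scheme described above reduces to a simple relationship: the returning light lags the outgoing light by the round-trip time, so mixing the two produces a "beat" frequency f_b = S * (2d / c), where S is the sweep rate. Inverting that gives the distance. The sweep rate below is an assumed, illustrative value, not a parameter from the paper.

```python
C = 299_792_458.0     # speed of light, m/s
SWEEP_RATE = 1.0e14   # assumed chirp rate S in Hz/s (e.g. 100 GHz per ms)

def fmcw_distance(beat_freq_hz: float,
                  sweep_rate_hz_per_s: float = SWEEP_RATE) -> float:
    """Target distance from the beat frequency between the outgoing
    chirp and its delayed reflection: d = c * f_b / (2 * S)."""
    return C * beat_freq_hz / (2.0 * sweep_rate_hz_per_s)

# At this sweep rate, a measured 6.67 MHz beat corresponds to a target
# about 10 meters away.
print(fmcw_distance(6.67e6))
```

The appeal of this scheme is that distance becomes a frequency measurement rather than a picosecond timing measurement, and stray light at other frequencies simply does not produce a coherent beat.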

According to Izatt, “it’s been incredibly thrilling to see how the biological cell-scale imaging technology we’ve been working on for decades is instantly translatable for large-scale, real-time 3D vision. These are exactly the qualities needed for robots to securely view and interact with humans, or even to replace humans with avatars in live 3D video in augmented reality.”

Most earlier work on LiDAR has relied on rotating mirrors to scan the laser across the landscape. While this strategy works effectively, it is fundamentally limited by the speed of the mechanical mirror, no matter how powerful the laser it uses.

Instead, the Duke researchers use a diffraction grating that operates like a prism, splitting the laser into a rainbow of frequencies that spread out as they move away from the source. Because the original laser is still rapidly sweeping through those frequencies, this translates into sweeping the LiDAR beam much faster than a mechanical mirror can rotate. The system can therefore cover a vast region quickly without sacrificing much depth or location accuracy.
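The beam-steering trick above follows directly from the grating equation: for light hitting a grating head-on, the first-order diffraction angle satisfies sin(theta) = wavelength / pitch, so each wavelength in the sweep exits at a different angle. This sketch uses an assumed groove spacing for illustration; the actual grating in the paper may differ.

```python
import math

GRATING_PITCH_M = 1.6e-6  # assumed groove spacing (~625 lines per mm)

def diffraction_angle_deg(wavelength_m: float,
                          pitch_m: float = GRATING_PITCH_M) -> float:
    """First-order diffraction angle, in degrees, for normal incidence:
    sin(theta) = wavelength / pitch."""
    return math.degrees(math.asin(wavelength_m / pitch_m))

# Sweeping the laser across a 100 nm band swings the output beam through
# several degrees with no moving parts:
for wavelength in (1.26e-6, 1.31e-6, 1.36e-6):
    print(f"{wavelength * 1e9:.0f} nm -> {diffraction_angle_deg(wavelength):.1f} deg")
```

Because the angle is set by the laser's instantaneous frequency, the beam scans as fast as the laser can sweep, which is the key advantage over a spinning mirror.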

While OCT instruments are used to profile microscopic features up to several millimeters deep within an object, robotic 3D vision systems only need to locate the surfaces of human-scale objects. The researchers therefore narrowed OCT’s bandwidth to detect only the strongest signals reflected from objects’ surfaces. This costs the device a small amount of resolution, but gives it far greater imaging range and speed than typical LiDAR.

The resulting FMCW LiDAR system achieves submillimeter localization accuracy with data throughput 25 times greater than in prior demonstrations. The findings show the technique is fast and accurate enough to capture, in real time, the details of moving human body parts, such as a nodding head or a clenching hand.

Izatt added that the vision is for LiDAR-based 3D cameras fast and capable enough to bring 3D vision to all kinds of devices, much as electronic cameras have become ubiquitous. If we want robots and other automated systems to interact with humans naturally and safely, they need to be able to see us in 3D.
