Experts Weigh Impact of Prophesee-Qualcomm Deal


Prophesee this week announced a collaboration to “allow native compatibility” between its Metavision neuromorphic event-based cameras and Qualcomm’s Snapdragon mobile platforms in a multi-year deal to co-develop their tech.

Prophesee’s brain-inspired sensors inherently compress data: each pixel reports only changes in its brightness level. This means the sensors can operate at very high effective frame rates while, for most tasks, consuming very little power and bandwidth. The technology is already routinely used in automation and inspection applications to measure vibrations and to count and track objects.
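The per-pixel change detection behind this compression can be sketched in a few lines. The following is an illustrative simulation only (the function name and threshold are invented for this sketch, not Prophesee's API): each pixel keeps a reference log-intensity and emits an event, with positive or negative polarity, only when the current log-intensity departs from that reference by more than a contrast threshold.

```python
import numpy as np

def events_from_frames(frames, threshold=0.2):
    """Simulate an event sensor over a sequence of intensity frames.

    Emits (x, y, t, polarity) tuples only where a pixel's log-intensity
    has moved by more than `threshold` since its last event. Names and
    constants are illustrative, not any vendor's API.
    """
    log_ref = np.log1p(frames[0].astype(np.float64))  # per-pixel reference level
    events = []
    for t, frame in enumerate(frames[1:], start=1):
        log_now = np.log1p(frame.astype(np.float64))
        delta = log_now - log_ref
        ys, xs = np.nonzero(np.abs(delta) >= threshold)
        for y, x in zip(ys, xs):
            events.append((int(x), int(y), t, 1 if delta[y, x] > 0 else -1))
            log_ref[y, x] = log_now[y, x]  # reset reference only where an event fired
    return events

# A static scene produces no events at all -- this is the source of the
# sensor's inherent data compression.
static = [np.full((4, 4), 100, dtype=np.uint8)] * 10
assert events_from_frames(static) == []
```

A moving edge, by contrast, generates a sparse stream of events only at the pixels it crosses, which is why bandwidth and power scale with scene activity rather than with frame rate.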

The data provided by the sensors can be used to enhance images from a conventional frame-based camera: removing blur where the light is low or the subjects are moving quickly. This is particularly important for small cameras that have less light-gathering power.

Frédéric Guichard, CEO and CTO of DXOMARK, a French company that specializes in testing cameras and other consumer electronics, and that is unconnected with Paris-based Prophesee, told EE Times that the ability to deblur in these circumstances could provide definite advantages.

“Reducing motion blur [without increasing noise] would be equivalent to virtually increasing camera sensitivity,” Guichard said, noting two potential benefits: “For the same sensitivity [you could] reduce the sensor size and therefore camera thickness,” or you could maintain the sensor size and use longer exposures without motion blur.

By combining a conventional frame camera with data from an event-based imager, motion blur can be eliminated. (Source: Prophesee)
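The deblurring idea can be illustrated with a toy one-dimensional calculation. A blurry pixel value is the average of the true intensity over the exposure; the events recorded during the exposure say how the log-intensity moved, so the sharp value at the end of the exposure can be solved for. This is a simplified sketch of the general event-assisted deblurring principle; the contrast threshold, signal, and event counts below are all made up for the example and do not describe Prophesee's actual algorithm.

```python
import numpy as np

c = 0.2  # event contrast threshold (log-intensity step per event), illustrative
true_log = np.array([1.0, 1.2, 1.4, 1.6, 1.8])  # latent log-intensity during exposure
blurry = np.exp(true_log).mean()                # what a frame camera records

# Signed event count from each sample time to the end of the exposure.
# Here the signal rises by exactly one threshold per step, so the counts
# decrease from 4 down to 0.
events_to_end = np.array([4, 3, 2, 1, 0])

# Model: L(t) = L(end) * exp(-c * events from t to end). Averaging over the
# exposure gives blurry = L(end) * mean(exp(-c * events_to_end)), so we can
# invert for the sharp end-of-exposure intensity:
sharp_end = blurry / np.exp(-c * events_to_end).mean()

# sharp_end recovers exp(1.8), the true intensity at the end of the exposure
assert np.isclose(sharp_end, np.exp(true_log[-1]))
```

The point of the sketch is that the frame camera supplies well-exposed absolute intensities while the event stream supplies the fine-grained timing of changes, and combining the two undoes the averaging that causes motion blur.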

Judd Heape, VP for product management of camera, computer vision and video at Qualcomm Technologies, told EE Times that this image enhancement comes at roughly a 20-30% increase in power consumption, to run the extra image sensor and execute the processing.

“The processing can be done slowly and offline because you don’t really care about how long it takes to complete,” Heape added.

A richer feature set

Concept of the dual event/frame sensor camera. (Source: Prophesee)

Event sensors, however, should make other functionalities possible, too.

Tobi Delbruck, a professor at the Institute of Neuroinformatics in Zurich, Switzerland, and founder of Prophesee competitor IniVation, told EE Times that a big group at Samsung was looking at “trying to integrate something like a DVS [event-based camera] into smartphones, and they successfully demonstrated a whole bunch of cool [features] like gesture recognition.”

At the time, Delbruck explained, it wasn’t technically feasible to execute the signal processing required to make an event-based camera work on a phone. But now that the neural accelerators in mobile platforms (such as Qualcomm’s Snapdragon) have become increasingly powerful and efficient, this is no longer a barrier.

Qualcomm’s Heape said he is also aware of, and interested in, these other possibilities.

“We have many, many low-power use cases,” he said. Lifting a phone to your ear to wake it up is one example. Gesture-recognition to control the car when you’re driving is another.

“These event-based sensors are much more efficient for that because they can be programmed to easily detect motion at very low power,” he said. “So, when the sensor is not operating, when there’s no movement or no changes in the scene, the sensor basically consumes almost no power. So that’s really interesting to us.”
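The gating logic Heape describes can be reduced to a very small sketch: the host processor sleeps, and only an event rate above a threshold, i.e. actual motion in the scene, wakes it. The function name and threshold below are hypothetical, invented for this illustration.

```python
def should_wake(events_per_ms: int, rate_threshold: int = 50) -> bool:
    """Hypothetical always-on gating: the host stays asleep and is woken
    only when the event rate from the sensor exceeds a threshold,
    i.e. when something in the scene is actually moving."""
    return events_per_ms > rate_threshold

# Static scene: the sensor emits almost nothing, so the host keeps sleeping.
assert not should_wake(0)
# Sudden motion: a burst of events crosses the threshold and wakes the host.
assert should_wake(400)
```

Because a static scene generates essentially no events, the average power of such a wake-up path is dominated by the sensor alone, which is the property Heape highlights.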

Eye-tracking could also be very useful, Heape added, because Qualcomm builds devices for augmented and virtual reality. “Eye-tracking, motion-tracking of your arms, hands, legs… are very efficient with image sensors,” he said. “In those cases, it is about power, but it’s also about frame rate. We need to track the eyes at like 900 frames per second. It’s harder to do that with a standard image sensor.”

Toward mass production

Heape explained how the collaboration will work: Qualcomm’s OEMs, such as Oppo, Vivo, Xiaomi, OnePlus, Honor, and Samsung, can “purchase a chipset and the software from [Qualcomm] and then, from Prophesee, they would also purchase the image sensor and the software… but they would have both been pre-tested by us.”

Product lines, however, are not being combined. “We’re working together to pre-integrate them before they get incorporated into the final product,” he said.

This highlights another advantage of the collaboration with Qualcomm, one that Delbruck points out: It gives Prophesee access to MIPI (Mobile Industry Processor Interface) integration, making it possible for the company to move into these mobile applications. Licensing this technology is expensive, so it would otherwise be a barrier to entering the mobile market.

Prophesee CEO Luca Verre told EE Times the company is close to launching its first mobile product with one OEM. “The target is to enter into mass production next year,” he said.

However, Delbruck cautioned that an intellectual property battle could get in the way—because there has long been contention about whether the Prophesee camera is too similar to earlier designs, particularly those invented at INI Zurich.

“It’s not an issue at all right now because nothing is in mass production,” he said. “But it could become an issue for them later, as happened with Caltech and the basic APS [Active Pixel Sensor] patent.”
