Why Today’s AR Displays Need Holographic Modulators



The tremendous success of Meta's Ray-Ban AI smart glasses, along with new AI smart glasses from Rokid, RayNeo, Xiaomi, and many other vendors, has demonstrated real demand for these devices. Customers are buying them because, unlike previous attempts at smart glasses, this latest generation is fashionable, lightweight, and augments daily life in a genuinely convenient way: you can listen to music, take photos, and get navigation directions through an audio interface.

The next step to make these devices even more useful (and perhaps the next major mobile computing platform) will be to add an augmented reality (AR) display. However, mass-market adoption of these AR + AI smart glasses will be severely limited by the capabilities of current display technologies, such as DLP, LCoS, and MicroLED.

In addition to being power hungry, bulky, difficult to manufacture, and expensive, these displays definitely do not behave the way the human vision system expects. If the goal is to wear AR + AI smart glasses all day, these human factors must be addressed. 

The human factor challenge for AR + AI smart glasses

DLP, LCoS, and MicroLED displays all rely on waveguide-based optics. Waveguides are thin, transparent optical elements that use total internal reflection to channel light from a small display to the wearer's eye, combining digital images with the real-world view. These waveguide systems create stereoscopic images using one display per eye, but they render everything at a single fixed focal plane, typically about one meter from the eye. This restricts virtual imagery to a single depth, ruling out use cases that need accurate image placement at various depths. For instance, as illustrated in Figure 1, many AR applications, such as navigation, demand multiple depth planes for user cues at different times.

Figure 1: An example AR scenario that requires the display of images at various depth planes. (Source: Swave Photonics)

When a display system presents visuals at only one fixed depth, the resulting experience feels unnatural and uncomfortable. The primary cause is the well-documented vergence-accommodation conflict (VAC), which occurs when the viewer's eyes converge on an image at a particular real-world distance (Figure 2, left) but must focus on a display rendering that image at a different depth (Figure 2, right).

VAC is not only associated with discomfort, including nausea, but also poses a significant challenge for devices intended for all-day wear. Studies have shown that a minimum of three focal planes is needed to achieve visual comfort for users, and the more focal planes, the better. Unfortunately, for waveguide-based AR displays, there is currently no practical way to address this issue.

Figure 2: An illustration of vergence-accommodation conflict (VAC) in an AR display. The user expects to see an image at a given distance (left) but instead accommodates on the display that is rendering it. The difference between the two is the conflict. (Source: Swave Photonics)
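To make the conflict concrete, here is a minimal numerical sketch (an illustration, not from the article) expressing VAC as the diopter mismatch between the distance the eyes converge on and the waveguide's fixed focal plane. The one-meter focal plane matches the typical figure cited above; the object distances are arbitrary examples.

```python
# Rough numerical illustration of vergence-accommodation conflict (VAC).
# The mismatch is commonly expressed in diopters (1 / distance in meters):
# the eyes converge on the virtual object's intended distance, while the
# lens accommodates on the waveguide's fixed focal plane (~1 m here).

FIXED_FOCAL_PLANE_M = 1.0  # typical fixed focus of a waveguide display

def vac_diopters(intended_distance_m: float,
                 focal_plane_m: float = FIXED_FOCAL_PLANE_M) -> float:
    """Diopter mismatch between vergence and accommodation distances."""
    return abs(1.0 / intended_distance_m - 1.0 / focal_plane_m)

for d in (0.5, 1.0, 2.0, 10.0):  # example object distances in meters
    print(f"object at {d:>4.1f} m -> VAC = {vac_diopters(d):.2f} D")
```

Only content placed exactly at the fixed focal plane produces zero mismatch; everything nearer or farther accumulates conflict, which is why a single depth plane cannot serve scenarios like the navigation example in Figure 1.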

Prescription lenses present another challenge. Nearly 4 billion people worldwide wear glasses, and an estimated 75% of adults globally need some form of vision correction. AR displays must accommodate these users, typically by adding custom corrective optics to waveguide-based systems. This adds cost, weight, and complexity to the supply chain, plus serious inconvenience for the end user.

The solution to these essential human factor requirements is holography. Invented by Dennis Gabor in 1947, the same year as the transistor, holography takes a fundamentally different approach from traditional waveguide displays: it reconstructs the wavefront of light instead of projecting a flat image. Because it recreates the same light that would emanate from a real object, the eye perceives natural depth cues and focuses correctly, resolving vergence-accommodation conflict at its source.

However, while holography has appeared in numerous science fiction movies in the 75+ years since its invention, a dynamic holographic display has never been practical to implement.

Why has dynamic holography been so difficult? One of the main reasons is display pixel size. The relationship between field of view (diffraction angle) and pixel size is highly non-linear: as Figure 3 shows, the field of view increases rapidly once the pixel size shrinks below the wavelength of light. This makes sense because each pixel is steering a beam, and it must be smaller than the wavelength of the light it manipulates. The state-of-the-art pixel size for existing display technologies such as DLP, LCoS, and MicroLED is about 2 microns, an order of magnitude too large. This has made dynamic holography impractical.

Figure 3: The relationship between field of view (diffraction angle) and pixel size for red, green, and blue wavelengths of 640 nm, 520 nm, and 440 nm, respectively. Field of view increases rapidly as the pixel size decreases below the wavelength of light. (Source: Swave Photonics)
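The curve in Figure 3 follows directly from the grating equation. As a sketch using standard diffraction math (not vendor data), the maximum half-angle a pixel of pitch p can steer light of wavelength λ is given by sin θ = λ/(2p), so the full field of view is 2θ and saturates at 180° once the pitch drops below half the wavelength:

```python
import math

# Maximum diffraction half-angle of a phase modulator with pixel pitch p:
#   sin(theta) = lambda / (2 * p)   (grating equation at the Nyquist limit)
# Full field of view = 2 * theta. Once p <= lambda / 2 the display can
# steer light over the full +/-90 degrees.

WAVELENGTHS_NM = {"red": 640, "green": 520, "blue": 440}  # as in Figure 3

def fov_degrees(pixel_pitch_nm: float, wavelength_nm: float) -> float:
    """Full diffraction field of view in degrees for a given pixel pitch."""
    s = wavelength_nm / (2.0 * pixel_pitch_nm)
    if s >= 1.0:
        return 180.0  # sub-wavelength pitch: steering over the full hemisphere
    return 2.0 * math.degrees(math.asin(s))

for pitch in (2000, 1000, 500, 250):  # pixel pitch in nanometers
    fovs = ", ".join(f"{color}: {fov_degrees(pitch, wl):6.1f} deg"
                     for color, wl in WAVELENGTHS_NM.items())
    print(f"pitch {pitch:>4} nm -> {fovs}")
```

At the 2-micron pitch of today's displays, the field of view for red light is under 20 degrees; at a 250-nm pitch it opens up dramatically, which is the regime the next section describes.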

Recent advances in semiconductor-based holographic modulators have made dynamic holography feasible in compact form factors. By using a phase change material as the pixel, the pixel size can be made smaller than the wavelength of light for the first time. A phase change material switches between its crystalline and amorphous phases through rapid heating, which in turn changes its optical properties, such as the refractive index. Phase change materials have long been used as the storage medium in rewritable DVDs and in embedded non-volatile semiconductor memory.
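As a back-of-the-envelope illustration (the index contrast and film thickness below are assumed placeholder numbers, not device specifications), the phase modulation available from such a pixel scales with the refractive-index change Δn and the film thickness t, via Δφ = 2πΔn·t/λ for a single pass:

```python
import math

# Illustrative only: optical phase shift produced by switching a phase
# change pixel between its amorphous and crystalline states. Single-pass
# phase through a film of thickness t and refractive index n is
#   phi = 2 * pi * n * t / lambda,
# so the modulation depth is proportional to the index change delta_n.

def phase_shift_rad(delta_n: float, thickness_nm: float,
                    wavelength_nm: float) -> float:
    """Single-pass phase change (radians) from a refractive-index shift."""
    return 2.0 * math.pi * delta_n * thickness_nm / wavelength_nm

# Placeholder values for illustration -- not actual device parameters.
delta_n = 1.5        # assumed amorphous-to-crystalline index contrast
thickness_nm = 200   # assumed film thickness
for wavelength_nm in (640, 520, 440):
    phi = phase_shift_rad(delta_n, thickness_nm, wavelength_nm)
    print(f"{wavelength_nm} nm: {phi / math.pi:.2f} * pi radians")
```

A large index contrast in a film only a couple hundred nanometers thick is what lets a sub-wavelength pixel impose a useful phase shift on visible light.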

Accordingly, these phase change nano pixels can be fabricated on standard CMOS foundry processes, making a dynamic holographic display very low cost and scalable. This is a photonics approach with CMOS semiconductor economics. 

Figure 4: Dynamic holographic display with 250-nm phase change nano pixels. (Source: Swave Photonics)

Holographic displays use computational diffraction to form images, trading optical complexity for computational power that scales with Moore's law. Holography allows true 3D image placement and dynamic depth control, addressing both vergence-accommodation conflict and prescription correction without extra optics.
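As a rough sketch of what depth control and prescription correction "without extra optics" can mean computationally (an illustration under assumed parameters, not Swave's actual algorithm), a quadratic lens phase written to the pixels sets the depth at which content appears, and a wearer's prescription is simply another diopter term folded into the same profile:

```python
import numpy as np

# Minimal sketch: a quadratic "lens" phase term places a hologram's image
# at a chosen depth, and the wearer's prescription is absorbed by adding
# its diopter power to the same term -- no physical corrective optics.
#   phi(x, y) = -pi * (x^2 + y^2) * P / lambda,  with P the power in diopters

PITCH_M = 250e-9       # pixel pitch, matching Figure 4
WAVELENGTH_M = 520e-9  # green
N = 1024               # modulator modeled as N x N pixels (illustrative size)

def lens_phase(depth_m: float, prescription_diopters: float = 0.0):
    """Phase profile focusing at depth_m, with a prescription folded in."""
    power = 1.0 / depth_m + prescription_diopters  # total power in diopters
    coords = (np.arange(N) - N / 2) * PITCH_M
    x, y = np.meshgrid(coords, coords)
    phi = -np.pi * (x**2 + y**2) * power / WAVELENGTH_M
    return np.mod(phi, 2 * np.pi)  # wrapped phase written to the pixels

# The same content at two depths, for a wearer needing a -1.0 D correction:
near_plane = lens_phase(depth_m=0.5, prescription_diopters=-1.0)
far_plane = lens_phase(depth_m=5.0, prescription_diopters=-1.0)
```

Because the lens term is recomputed per frame, depth placement and correction become software parameters rather than pieces of glass.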

Hundreds of millions, or even billions, of phase change nanoscopic pixels can now be integrated on a tiny CMOS chip. These dynamic holographic displays can deliver the true 3D imagery the human vision system expects, with high optical efficiency, thin form factors, and lower power consumption, all critical for mass-market adoption of AI + AR smart glasses.

