
It seems any expert who can spell “AI” has an opinion on its potential impact. There are countless predictions out there, many made with precision and confidence, and they are often contradictory.
Depending on who you listen to, AI will cause widespread disruption and unemployment, especially at entry and lower-middle levels; open up new vistas and ways of working that leave us with hardly any work to do; or make us all work harder just to stay in place…you get the picture. Whatever answer you want, you can find someone who has provided it.
I’ll jump in and give you my prediction on the impact of AI, with a two-part answer. First, I don’t know, and second, neither does anyone else.
If you look back at the track record of predictions about how past technical advances would unfold, one thing is clear: most of these predictions either underestimate or overestimate the reality, and most miss the actual nature of the change that these advances spur.
AI and analog: Round 1
Initially, I thought of doing a “thought experiment” about analog design and AI, going beyond general analog considerations. But then it made more sense to look at some of the specific stages of analog design, from ICs to circuits and systems, all the way to final documentation.
However, I soon realized that it was a swamp. There were so many perspectives, so many considerations, and so many exceptions that it would take a lengthy treatise rather than a modest blog to even begin to highlight the possibilities. The only meaningful possibility I could think of was using AI to help a beleaguered designer with “best” component selection.
For example, this might be the task of choosing an op amp that fits the application priorities from among the dozens of vendors and thousands of models. Going further, AI might even help with some trade-off decisions (“show me an op amp that has 10% more dissipation than my stated maximum, if it gives me a 20% improvement in noise”).
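A trade-off query like that can be expressed as a simple scoring rule. Below is a minimal Python sketch of that logic; the part numbers, specs, and thresholds are all hypothetical, and a real AI assistant would search vendor databases rather than a three-entry list:

```python
# Hypothetical op-amp candidates; names and specs are illustrative only.
parts = [
    {"name": "OA-101", "dissipation_mw": 5.0, "noise_nv": 8.0},
    {"name": "OA-202", "dissipation_mw": 5.4, "noise_nv": 6.2},
    {"name": "OA-303", "dissipation_mw": 7.0, "noise_nv": 5.0},
]

def shortlist(parts, max_dissipation_mw, slack=0.10, noise_gain=0.20):
    """Accept parts within the dissipation budget, plus parts up to
    `slack` (10%) over budget if their noise is at least `noise_gain`
    (20%) better than the best in-budget candidate."""
    in_budget = [p for p in parts if p["dissipation_mw"] <= max_dissipation_mw]
    best_noise = min(p["noise_nv"] for p in in_budget)
    relaxed = [
        p for p in parts
        if max_dissipation_mw < p["dissipation_mw"] <= max_dissipation_mw * (1 + slack)
        and p["noise_nv"] <= best_noise * (1 - noise_gain)
    ]
    return in_budget + relaxed

picks = shortlist(parts, max_dissipation_mw=5.0)
print([p["name"] for p in picks])
```

The point is not the three-line scoring rule itself, but that an AI assistant could apply such relaxations conversationally across thousands of parts and many interacting specs at once.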
AI and analog: Round 2
I then asked myself if it would make sense to instead look at AI and analog from the opposite direction: how can AI help analog-centric systems—meaning those with real-world front-end sensors—do a better job or perhaps implement innovative architectures.
My question was answered when I came across a project from researchers at the University of California, Davis. They used a different approach to miniaturization of a spectrometer that reduced its size to the scale of a grain of sand. This compact spectrometer-on-a-chip is designed for integration into portable devices. Instead of separating light into a spectrum physically, the system relies on computational reconstruction.
Conventional spectrometers rely on dispersive elements such as diffraction gratings or prisms to spatially separate light into its constituent wavelengths. But this approach requires long optical path lengths and bulky designs to resolve individual wavelengths. The need to spatially disperse the light makes it challenging to miniaturize these delicate and expensive systems, rendering them unsuitable for portable applications.
In contrast, so-called reconstructive spectrometers use a unique set of numerous but compact photoresponsive detectors to directly encode the complex spectral information, which is later extracted using advanced computational algorithms. The team leveraged recent advances in machine learning and computational power, enabling further miniaturization toward a chip-scale design with reduced manufacturing cost (Figure 1).

Figure 1 Working mechanisms of spectrometers: conventional spectrometers with uniform detector arrays disperse the light spatially using diffraction gratings, which requires long path lengths and a bulky design (a); reconstructive spectrometers use unique photodetectors that capture minute variations in the incident light spectrum, with the spectral information then reconstructed using machine learning algorithms (b).
The chip replaces traditional optics with an array of 16 silicon detectors, each tuned to respond slightly differently to incoming light. Together, these detectors capture overlapping signals that encode the original spectrum, and they can provide wider bandwidth because staggered, tailored sensors cover each slice of the spectrum. The process is similar to having multiple sensors that each sample different elements of a complex signal, with the full picture emerging only after computational analysis.
The analysis is performed using AI: recovering an unknown spectrum from the detector outputs is what is known as an inverse problem. The spectral reconstruction from the spectrometer’s photon-trapping structures is performed by a fully connected neural network that solves this inverse problem; an outline of the training and reconstruction process is shown in Figure 2.
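To see the shape of the inverse problem, here is a minimal Python/NumPy sketch of the detector encoding and of the matrix pseudo-inversion baseline that Figure 2 compares against the neural network. The response matrix and spectrum below are synthetic stand-ins, not the paper’s data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_detectors, n_bins = 16, 64   # 16 detectors, 64 spectral bins (illustrative)

# Simulated response matrix: each row is one detector's
# wavelength-dependent responsivity.
A = rng.random((n_detectors, n_bins))

# Synthetic "unknown" spectrum: a single narrow peak.
x_true = np.exp(-0.5 * ((np.arange(n_bins) - 30) / 2.0) ** 2)

# Forward model: each detector output is the spectrum weighted by
# that detector's response curve (y = A @ x).
y = A @ x_true

# Reconstruction by Moore-Penrose pseudo-inversion, the baseline
# method the paper compares against its neural-network solver.
x_hat = np.linalg.pinv(A) @ y
```

With 16 measurements and 64 unknowns the system is underdetermined, so pseudo-inversion returns only a minimum-norm estimate consistent with the measurements; this is why a trained neural network, which learns the structure of realistic spectra, can reconstruct sharper, more accurate profiles.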

Figure 2 Neural network model for spectral reconstruction: an outline of the network’s training and reconstruction process (a); training and validation losses plotted against epoch, showing convergence of the model (b). The model is trained for 2,000 epochs, with the loss function converging around 0.03. Spectral reconstruction is compared using matrix pseudo-inversion (c), a linear combination of Gaussian functions (d), and the neural network model (e).
The neural network model outperforms the other two methods in reconstructing the spectral profile of a 3-nm full width at half maximum (FWHM) laser peak. The root-mean-square error (RMSE) and Pearson’s R value (a correlation coefficient) for the neural network model are 0.046 and 0.87, respectively, indicating high accuracy in spectral reconstruction.
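For reference, those two figures of merit are computed as follows. The spectra below are synthetic toy data, so the resulting values will not match the paper’s 0.046 and 0.87:

```python
import numpy as np

def rmse(x, y):
    """Root-mean-square error between two equal-length spectra."""
    return float(np.sqrt(np.mean((x - y) ** 2)))

def pearson_r(x, y):
    """Pearson correlation coefficient between two spectra."""
    return float(np.corrcoef(x, y)[0, 1])

# Toy example: a narrow true peak and a slightly perturbed reconstruction.
bins = np.arange(64)
true = np.exp(-0.5 * ((bins - 30) / 2.0) ** 2)
recon = true + 0.02 * np.sin(bins / 3.0)   # illustrative reconstruction error

print(rmse(true, recon), pearson_r(true, recon))
```

Lower RMSE means smaller point-by-point error; a Pearson’s R closer to 1 means the reconstructed shape tracks the true spectral profile.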
The training process involves learning the complex spectral encoding between the photocurrent of photon-trapping structure-enhanced photodetectors and their corresponding spectral information by back-propagating the loss function.
Their detailed modeling, analysis, and experimental results also demonstrated that this approach provided superior noise tolerance compared to traditional spectrometers despite the low photon intensity and small capture area. The fascinating story is presented in a highly readable paper “AI-augmented photon-trapping spectrometer-on-a-chip on silicon platform with extended near-infrared sensitivity” published in Advanced Photonics.
I’ll be honest: When I first saw this paper, my first, if somewhat cynical, thought was that this was just an attempt to dress up an old analog signal-chain technique with an AI “glow.” There are two basic ways to implement a precision sensor-based path. First, use top-grade components and circuit techniques, such as matched resistors, to cancel errors to the extent possible. Second, use lesser components and simply calibrate out the inaccuracies.
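That second approach, in its simplest form, is a two-point gain/offset calibration. A minimal sketch, with hypothetical ADC counts and reference voltages:

```python
def two_point_cal(raw_lo, raw_hi, ref_lo, ref_hi):
    """Derive gain and offset so that corrected = gain * raw + offset
    maps the two measured raw readings onto their known references."""
    gain = (ref_hi - ref_lo) / (raw_hi - raw_lo)
    offset = ref_lo - gain * raw_lo
    return gain, offset

# Hypothetical raw ADC readings taken against known 0 V and 2.5 V references.
gain, offset = two_point_cal(raw_lo=12.0, raw_hi=4050.0, ref_lo=0.0, ref_hi=2.5)

# Correct a mid-scale reading with the derived coefficients.
corrected = gain * 2031.0 + offset
```

The neural-network approach in the paper generalizes this idea from two fitted coefficients to a learned, highly nonlinear mapping over many detectors at once.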
But as I continued to read their paper, I saw that the neural network method added a new level of sophistication and ability to work through inherent weakness in the design and components to deliver an impressive result.
Where do you see AI helping, if at all, in the design cycle of an analog circuit or system? One answer, as this project shows, is by allowing new topologies for sensor-based systems that were previously not viable or practical.
Related Content
- The Wright Brothers: Test Engineers as Well as Inventors
- Precision metrology redefines analog calibration strategy
- Optical combs yield extreme-accuracy gigahertz RF oscillator
- Achieving analog precision via components and design, or just trim and go
The post What’s the impact of AI on analog design appeared first on EDN.