It is no secret that digital oscilloscopes are sampled data instruments and do not acquire a continuous record of the input signal. We know from sampling theory that an input signal, properly sampled at greater than twice the signal bandwidth, can be recovered from the acquired samples. So, how do the samples stored in the acquisition memory get converted into a continuous signal? Also, how can measurements made on sampled data values be accurate? Finally, how can we measure time intervals smaller than the sampling period? The answer to all of these questions is simple: interpolation!
Display interpolation
Interpolation is the addition of computed sample points between the acquired signal samples. This increases the effective sample rate but does not improve the bandwidth of the acquired signal. The effect of interpolation is to fill in the gaps in the waveform as shown in Figure 1.
Figure 1 The top trace shows a signal rendered using only the real sample points. The lower trace shows the same signal with interpolation turned on. The interpolated points fill in the gaps between the real sample points, which are highlighted as intensified dots.
Most digital oscilloscopes offer a choice of two display interpolation processes: linear or sin(x)/x interpolation. The interpolation method is generally selected in the input setup. In the oscilloscope used in this example, interpolation is controlled individually for each input channel; in other oscilloscopes, interpolation is global, affecting all acquisition channels. Linear interpolation basically assumes that a straight line connects the real samples. This can be implemented by convolving a triangular window function with the signal, for instance using an appropriately configured digital filter.
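As a rough illustration, here is a minimal sketch of that approach in Python, assuming NumPy is available; the function name and the 10:1 factor are illustrative, not any oscilloscope's actual implementation.

```python
import numpy as np

def linear_interpolate(samples, factor):
    """Insert (factor - 1) computed points between each pair of real samples."""
    # Zero-stuff: place each real sample every 'factor' points.
    stuffed = np.zeros(len(samples) * factor)
    stuffed[::factor] = samples
    # Triangular kernel, 2*factor - 1 points wide, peaking at 1.
    kernel = np.concatenate((np.arange(1, factor + 1),
                             np.arange(factor - 1, 0, -1))) / factor
    # Convolving with the triangle draws straight lines between real samples.
    return np.convolve(stuffed, kernel, mode="same")

# Example: up-sample a coarsely sampled sine wave by 10:1.
coarse = np.sin(2 * np.pi * np.arange(16) / 16)
fine = linear_interpolate(coarse, 10)
```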
Sin(x)/x interpolation convolves a sin(x)/x function with the signal. The sin(x)/x, or SINC, function in the time domain has the frequency spectrum of a low pass filter, as seen in Figure 2.
Figure 2 The sin(x)/x function in the time domain (upper trace) has a low pass filter response in the frequency domain (lower trace).
The bandwidth of the sin(x)/x frequency response is the reciprocal of the period of the oscillation in the sin(x)/x function. Since convolution in the time domain is multiplication in the frequency domain, the sin(x)/x interpolation is basically a low pass filtering operation.
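A comparable sketch of sin(x)/x interpolation follows, again assuming NumPy; the truncated, Hann-windowed sinc kernel is an illustrative choice, not the filter any particular oscilloscope uses.

```python
import numpy as np

def sinc_interpolate(samples, factor, taps=8):
    """Insert (factor - 1) sin(x)/x-interpolated points between real samples."""
    stuffed = np.zeros(len(samples) * factor)
    stuffed[::factor] = samples
    # Sinc kernel spanning 'taps' original sample periods on each side,
    # tapered with a Hann window so the truncation is graceful.  Its
    # spectrum is a low pass filter at the original Nyquist frequency.
    n = np.arange(-taps * factor, taps * factor + 1)
    kernel = np.sinc(n / factor) * np.hanning(len(n))
    return np.convolve(stuffed, kernel, mode="same")

# Example: the same 10:1 up-sampling as the linear case above.
coarse = np.sin(2 * np.pi * np.arange(16) / 16)
fine = sinc_interpolate(coarse, 10)
```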
Both the linear and sin(x)/x interpolation methods become more accurate as the ratio of the sample rate to the signal bandwidth, the oversampling ratio, increases. Interpolation always improves as the sample rate is raised for a given bandwidth. There are, however, some differences in performance. Linear interpolation works very well when the oversampling ratio is at least ten-to-one. Examples showing linear interpolation with different oversampling ratios are shown in Figure 3.
Figure 3 Examples of the performance of the linear interpolator on a 500 MHz sine wave with oversampling ratios of 20:1 (top left), 10:1 (middle left), 5:1 (bottom left), 2:1 (top right). The persistence display (center right) of the 2:1 case shows that it is still a sinewave.
While not visually ‘pretty’, all versions are technically correct. If infinite display persistence is turned on, the discontinuous-looking waveforms will trace out the original sine wave as varying phases of the signal are sampled. Using persistence to view a history of multiple acquisitions is a useful operating hint when dealing with sampled waveforms with low oversampling ratios.
Sin(x)/x interpolation works very well with oversampling ratios greater than two-to-one. It does have issues if the oversampling ratio drops below two-to-one, as shown in Figure 4.
Figure 4 Comparing linear (top traces) and sin(x)/x interpolation (bottom traces) on a step function with a 27 ns rise time at sampling rates of 250 MS/s (left traces) and 25 MS/s (right traces).
The step function is a lower-frequency signal that has high-frequency components due to the transition in the middle. The 27 ns rise time of the step corresponds to a nominal bandwidth of 13 MHz. Both interpolation methods work fine at the 250 MS/s sampling rate, approximately 20:1 oversampling. The 25 MS/s rate, with a sample period of 40 ns per point, is slightly less than a 2:1 oversampling ratio. The linear interpolator has only a single sample on the edge and will not define the rise time correctly, but the waveshape is basically correct. The sin(x)/x interpolator is operating below the Nyquist limit and shows pre-shoot and overshoot that are not really on the waveform, an effect called “Gibbs ears”. So, it is important to keep an eye on the sampling rate and make sure it exceeds the Nyquist rate of twice the signal bandwidth when using any interpolator.
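These oversampling ratios can be checked with the common 0.35/rise-time bandwidth rule of thumb; the following is a quick back-of-the-envelope sketch, not a calculation taken from the oscilloscope itself.

```python
rise_time = 27e-9                       # 27 ns step rise time
bandwidth = 0.35 / rise_time            # ~13 MHz nominal bandwidth

for sample_rate in (250e6, 25e6):       # 250 MS/s and 25 MS/s
    ratio = sample_rate / bandwidth
    above_nyquist = sample_rate > 2 * bandwidth
    print(f"{sample_rate/1e6:.0f} MS/s: oversampling ~{ratio:.1f}:1, "
          f"above Nyquist rate: {above_nyquist}")
# 250 MS/s -> ~19:1 (fine); 25 MS/s -> ~1.9:1 (below the 2:1 limit)
```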
Interpolation math function
The oscilloscope used in this article also offers interpolation as a math function. The math function version includes linear, sin(x)/x, and cubic interpolation. Cubic interpolation fits a third-order polynomial between samples. Its computational speed is intermediate between sin(x)/x and linear interpolation. The interpolation math function allows a user-selectable interpolation factor of between 2 and 50 interpolated samples between acquired sample points. Figure 5 shows an example of a 5:1 interpolation using the math function.
Figure 5 The controls for the interpolator math function, set up to increase the number of samples by a factor of five using a cubic interpolator.
The interpolator math function offers greater flexibility, with a wider range of up-sampling factors and controls to customize the interpolation filter. Unlike the input-channel interpolator, the math function allows viewing both the input and output of the interpolator simultaneously to check for a proper response.
The interpolation math function lets users increase the number of samples in a waveform. This can be useful before applying the signal to a digital filter whose cutoff frequency is a function of the sampling rate. It is also useful in characterizing waveform measurements, as discussed in the next section.
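To get a feel for what the cubic option does, here is a minimal sketch that uses SciPy's CubicSpline to stand in for the third-order polynomial fit; the function name and the 5:1 factor are illustrative, not the oscilloscope's internal implementation.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def cubic_upsample(samples, sample_period, factor):
    """Return (times, values) with 'factor' times as many points per interval."""
    t = np.arange(len(samples)) * sample_period
    spline = CubicSpline(t, samples)            # third-order fit between samples
    t_fine = np.arange((len(samples) - 1) * factor + 1) * (sample_period / factor)
    return t_fine, spline(t_fine)

# Example: 5:1 up-sampling of a 50 MS/s record, as in Figure 5.
coarse = np.sin(2 * np.pi * 1e6 * np.arange(100) * 20e-9)   # 1 MHz sine, 20 ns steps
t_fine, fine = cubic_upsample(coarse, 20e-9, 5)
```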
Measurement Interpolation
Timing measurements in oscilloscopes are performed by finding the times at which the waveform crosses a voltage threshold. The time between crossings of the same slope yields a period measurement. Similarly, the difference in crossing times between edges of opposite slopes gives a width measurement. In many cases, the rise time of the signal is very fast, and with a sampling rate of, say, 20 GS/s there are only a few samples on the edge. Simply drawing a line between the samples around the threshold is the most obvious choice for finding the crossing; however, this can lead to large errors when the samples are not symmetrically spaced on either side of the threshold. Interpolation is used internally during measurements to locate measurement threshold crossings with a precision much better than the sample period. The measurement process uses a dual interpolation: cubic interpolation adds samples between the acquired samples, and the threshold crossing time is then found by linear interpolation between the two interpolated samples on either side of the threshold, as shown in Figure 6.
Figure 6 Using cubic interpolation in combination with linear interpolation to increase the time resolution of the internal timing measurements in a digital oscilloscope.
Time is measured at the point where the waveform amplitude crosses a predefined threshold. Samples are spaced at the sample interval (50 ps at 20 GS/s in this example). Cubic interpolation is applied to the waveform, followed by linear interpolation between the points nearest the crossing to find the exact time of the threshold crossing. The resultant measurement has much greater time resolution than the raw samples spaced at the sampling period. Cubic interpolation is used because it combines accurate sample insertion with greater computational speed than sin(x)/x interpolation.
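A simplified sketch of that dual-interpolation measurement follows, assuming SciPy; the 16:1 densification factor and function names are illustrative, not the oscilloscope's internal values.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def crossing_time(samples, sample_period, threshold, factor=16):
    """Time of the first rising crossing of 'threshold', in seconds."""
    t = np.arange(len(samples)) * sample_period
    # Step 1: cubic interpolation adds samples between the acquired points.
    t_dense = np.arange((len(samples) - 1) * factor + 1) * (sample_period / factor)
    y_dense = CubicSpline(t, samples)(t_dense)
    # Step 2: find the pair of interpolated points straddling the threshold...
    i = np.nonzero((y_dense[:-1] < threshold) & (y_dense[1:] >= threshold))[0][0]
    # ...and linearly interpolate between them for the crossing time.
    frac = (threshold - y_dense[i]) / (y_dense[i + 1] - y_dense[i])
    return t_dense[i] + frac * (sample_period / factor)

# Example: 50% crossing of a fast edge sampled at 20 GS/s (50 ps period).
edge = np.tanh(np.linspace(-3, 3, 12))          # only a few samples on the edge
print(crossing_time(edge, 50e-12, 0.0))
```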
Timebase interpolator
A less familiar but even more important interpolator is the one that measures the sub-sample time delay between the trigger event and the sample clock. Generally, the trigger event is asynchronous with the oscilloscope’s sample clock, so the sampling phase, or horizontal offset, of each acquisition is random. If you were to histogram the time from the trigger to the first sample, it would exhibit a uniform distribution between zero and one sample period. Because of this random horizontal offset, a persistence display of multiple waveforms shows all possible locations of the sample points, as was shown in Figure 3.
A stable triggered display requires that each acquired waveform trace be aligned to the trigger point at exactly the same time location. For a timebase with no trigger delay offset, the trigger location is usually at time zero. The time difference between the trigger and the sample clock is measured using a device called a time-to-digital converter (TDC), basically a high-resolution counter. This time delay is the horizontal offset of the waveform. When the waveform is displayed, the horizontal offset is used to line up the triggers from multiple acquisitions. Figure 7 shows six acquisitions of a complex waveform.
Figure 7 Six acquisitions of an ultrasonic waveform (top grid) are each horizontally expanded using the zoom traces to show the horizontal offset for each trace in the lower grid. The labels Z1 through Z6 point to the real sample points before each trigger which is marked by the cursor at t=0.
The area around the trigger was expanded using horizontal zoom to see the range of variation in the horizontal offsets of the six acquisitions. The sample period is 20 ns (50 MS/s). For the six acquisitions, the horizontal offset varied between 2.5 ns and 17.7 ns before the trigger at t = 0, within the one-sample-period range previously discussed. The time resolution of the TDC depends on the specific oscilloscope model and is related to the oscilloscope’s maximum sample rate. The oscilloscope specification that summarizes TDC performance is “trigger and interpolator jitter”. For high-performance oscilloscopes, that specification is typically less than 2 ps rms. Oscilloscope designers have improved this using software-assisted triggering, reducing the specification to less than 0.1 ps. The use of the TDC along with software-assisted triggering makes precise measurement of time-related events like jitter possible. Without the TDC hardware and software, time measurement resolution would be limited to the sampling period.
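A conceptual sketch of how the measured horizontal offset aligns acquisitions is shown below, assuming NumPy; the uniform random offset stands in for the asynchronous trigger, and all names and values here are illustrative rather than taken from a real instrument.

```python
import numpy as np

SAMPLE_PERIOD = 20e-9                     # 20 ns at 50 MS/s, as in Figure 7
rng = np.random.default_rng(seed=1)

def acquire(n_samples=1000):
    """Fake acquisition: the trigger is asynchronous to the sample clock."""
    # A real oscilloscope measures this offset with the TDC; here it is
    # simply drawn from a uniform distribution over one sample period.
    offset = rng.uniform(0.0, SAMPLE_PERIOD)
    t = np.arange(n_samples) * SAMPLE_PERIOD - offset   # sample times relative to the trigger
    return offset, np.sin(2 * np.pi * 1e6 * t)          # 1 MHz stand-in signal

# Aligning six acquisitions: each sample's display time is its index times the
# sample period minus the measured offset, so the trigger always lands at t = 0.
for _ in range(6):
    offset, samples = acquire()
    display_times = np.arange(len(samples)) * SAMPLE_PERIOD - offset
```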
Conclusion
Interpolation is an extremely useful tool in an oscilloscope. It fills the gaps in sampled data records and is usually applied to improve measurement accuracy or display interpretation.
Arthur Pini is a technical support specialist and electrical engineer with over 50 years of experience in electronics test and measurement.