On Incremental Sigma-Delta Modulation with Optimal Filtering
Sam Kavusi, Student Member, IEEE, Hossein Kakavand, Student Member, IEEE, and Abbas El Gamal, Fellow, IEEE
The paper presents a quantization-theoretic framework for studying incremental Σ∆ data conversion systems. The framework makes it possible to efficiently compute the quantization intervals and hence the transfer function of the quantizer, and to determine the mean square error (MSE) and maximum error for the optimal and conventional linear filters for first and second order incremental Σ∆ modulators. The results show that the optimal filter can significantly outperform conventional linear filters in terms of both MSE and maximum error. The performance of conventional Σ∆ data converters is then compared to that of incremental Σ∆ with optimal filtering for bandlimited signals. It is shown that incremental Σ∆ can outperform the conventional approach in terms of signal to noise and distortion ratio (SNDR). The framework is also used to provide a simpler and more intuitive derivation of the Zoomer algorithm.
Index Terms: Sigma-Delta (Σ∆), incremental A/D converter, optimal filter, transfer function, time-domain analysis.
Manuscript received March 03, 2005; revised August 31, 2005. This work was partially supported under DARPA Microsystems Technology Office Award No. N66001-02-1-8940.
S. Kavusi, H. Kakavand, and A. El Gamal are with the Information Systems Laboratory, Electrical Engineering Department, Stanford University, Stanford, CA 94305 USA (e-mail: email@example.com; firstname.lastname@example.org).
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS-I: REGULAR PAPERS
Σ∆ data converters, also known as scalar predictive quantizers or oversampling analog-to-digital converters, are widely used in audio and communication systems. These systems can achieve very large dynamic range without the need for precise matching of circuit components. This is especially attractive for implementation in scaled technologies where transistors are fast but not very accurate. They are also among the most power efficient ADC architectures.
Exact analysis of Σ∆ data converters is difficult due to their highly nonlinear structure. As a result, optimal decoding (filtering and decimation) algorithms are generally not known, although there has been much work on decoding schemes. Conventionally, linear models with quantization noise modeled as additive white noise have been used. This noise model, however, was shown to be quite inaccurate for coarse quantization. Candy derived a more accurate approximation of the noise power spectral density, showing that it is not white. Subsequently, Gray derived the exact spectrum and determined the optimal MSE linear filter for constant inputs. Gray's framework was later used in multiple studies. This type of exact analysis, however, is too complex for most practical systems, and the conventional linear method is still used and then verified and tweaked using time-domain simulations.
Recently, Σ∆ data converters have become popular in sensing applications such as imaging and accurate temperature sensing. In such applications, the input signal is very slowly varying with time, and as a result can be viewed as constant over the conversion period. Of particular interest is the special case of incremental Σ∆, where the integrator is reset prior to each input signal conversion. In this case, the optimal filter with respect to the mean-squared error criterion, denoted the Zoomer, was found by Hein et al. Their approach, however, does not naturally lead to a full characterization of the quantizer transfer function or the determination of the MSE or maximum error, especially for second and higher order modulators. Knowledge of the exact transfer
function of the Σ∆ converter can enable the system designer to achieve the system requirements with a lower oversampling ratio. Moreover, as noted by Markus et al., Σ∆ data converters with high absolute accuracy are often required in instrumentation and measurement. Bounds on the maximum error of incremental Σ∆ using some conventional linear filters were derived there, but there is still a need to derive the maximum error of the optimal filter.
In this paper, we introduce a time-domain, quantization-theoretic framework for studying incremental Σ∆ quantization systems. The framework allows us to determine the quantization intervals and hence the transfer function of the quantizer, and to precisely determine the MSE and maximum error for the optimal filter. For constant inputs, we demonstrate significant improvements in both MSE and maximum error over conventional linear filters. These results imply that an incremental Σ∆ with optimal filtering can achieve the same performance as that of conventional Σ∆ systems at a lower oversampling ratio and hence lower power consumption. We show that incremental Σ∆ can outperform the conventional approach in terms of SNDR, in addition to not having the idle-tone artifacts of conventional Σ∆. Using our framework, we are also able to compare the performance of conventional Σ∆ data converters to that of incremental Σ∆ with optimal filtering for bandlimited input signals. We also use our framework to provide a simpler and more intuitive proof of the optimality of the Zoomer algorithm.
In the following section, we introduce our framework and use it to study first-order Σ∆ quantization systems. In Section III, we extend our results to incremental second-order Σ∆ quantization systems. In each case, we compare the performance of the optimal filter to that of linear filters. In Section IV, we discuss potential applications of our results to sensor systems and compare the performance of the incremental Σ∆ modulator using optimal filtering to conventional Σ∆ systems using linear filtering for sinusoidal inputs. We also show that incremental Σ∆ does not suffer from the problem of idle tones.
II. INCREMENTAL FIRST-ORDER Σ∆

A block diagram of a Σ∆ quantization system with
constant input is depicted in Figure 1. The figure also provides the correspondence between the terminology used in the circuit design literature and the quantization-theoretic literature. The Σ∆ modulator itself corresponds to the encoder in the quantization-theoretic setting. It outputs a binary sequence of length m that corresponds to the quantization interval that the input x belongs to. The set of such binary sequences is the index set in the quantization-theoretic setting. The filter, which corresponds to the decoder, provides an estimate x̂ of the input signal x. Given an index, the optimal decoder outputs the centroid, i.e., the midpoint, of the quantization interval corresponding to the index. Figure 2 is an example that illustrates the terminology used to characterize a quantization system. Note that this correspondence applies to all Σ∆ data converters, and while a Σ∆ data converter system performs oversampling internally, the entire system can be viewed as a Nyquist-rate data converter. Also note that in practice a Σ∆ data converter system produces an estimate that has finite precision and offset and gain mismatch relative to the ideal estimate x̂. Finally, note that the quantization framework in Figure 1 refers to the Σ∆ data conversion system and not to the coarse quantizer used in the Σ∆ modulator itself.
Fig. 1. Block diagram of a Σ∆ quantization system with constant input. The terms under the figure provide the corresponding object names in quantization theory.
In this section we study the discrete-time first-order Σ∆ modulator (see Figure 3). For simplicity we focus on the single-bit case. However, our results can be easily extended to multi-bit Σ∆ modulators. Without loss of generality, we assume that the comparator threshold value is 1 and the input signal x ∈ [0, 1]. The input to the modulator is integrated, thus creating a ramp with a slope proportional to the constant input. At time n = 1, 2, . . . , m, the ramp sample u(n) is compared to the threshold value of 1, and the output b(n) = 1 if u(n) > 1; otherwise b(n) = 0. If b(n) = 1, a one is subtracted from the ramp value. Thus in effect the modulator is predicting the ramp value by counting the number of ones and subtracting off its prediction via the feedback loop. This operation is illustrated in Figure 4,
Fig. 2. Illustration of the quantization-theoretic terminology. x1, . . . , x4 are the transition points; [x1, x2), . . . , [x3, x4) are the quantization intervals that correspond to the different indices generated by the encoder, and should be optimally decoded to (x1 + x2)/2, . . . , (x3 + x4)/2.
where the output of the integrator u(n) is plotted versus time for a fixed input value.
The output of the integrator is thus given by
u(n) = x − b(n−1) + u(n−1) = nx − ∑_{i=1}^{n−1} b(i).
We shall assume that the integrator is reset to zero at the beginning of conversion, i.e., u(0) = 0, which is the case in incremental Σ∆.
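As a concrete illustration, the loop described above can be simulated in a few lines of Python. This is a sketch under the stated assumptions (single-bit quantizer, comparator threshold of 1, integrator reset to zero); the function name `modulate` is ours, not from the paper. The final assertion checks the closed form u(n) = nx − ∑ b(i) against the simulated integrator state.

```python
def modulate(x, m):
    """First-order incremental sigma-delta loop for a constant input
    x in [0, 1]; returns the length-m bit sequence and the final
    integrator value u(m)."""
    u, b_prev, bits = 0.0, 0, []
    for _ in range(m):
        u = u + x - b_prev        # u(n) = x - b(n-1) + u(n-1)
        b = 1 if u > 1 else 0     # single-bit quantizer, threshold 1
        bits.append(b)
        b_prev = b
    return bits, u

x, m = 0.37, 16
bits, u_final = modulate(x, m)
# Closed form: u(m) = m*x - sum_{i=1}^{m-1} b(i)
assert abs(u_final - (m * x - sum(bits[:-1]))) < 1e-9
```

Running the loop for a few sample inputs confirms that the integrator output is the input ramp nx minus the running count of past output ones, as in the equation above.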
In conventional Σ∆ data converters, the binary sequence generated by the modulator is decoded using a linear filter, such as a counter or a triangular filter, to produce an estimate of the input signal. Such linear filters, however, are not optimal and result in significantly higher distortion compared to the optimal filter. The optimal filter, in the form of the Zoomer implementation, was derived by Hein et al. McIlrath developed a nonlinear iterative filter that achieves significantly better performance compared to linear filters.
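To make the contrast concrete, the sketch below compares a simple counting (linear) filter with an interval-intersection decoder in the spirit of the optimal filter: each output bit b(n) constrains the constant input x to one side of the threshold (1 + s)/n, where s is the sum of the earlier output bits, and the decoder returns the midpoint of the surviving interval. This is our illustrative reconstruction under the assumptions of the previous section, not the Zoomer code from the paper; the function names are ours.

```python
def modulate(x, m):
    """First-order incremental sigma-delta (integrator reset, threshold 1)."""
    u, b_prev, bits = 0.0, 0, []
    for _ in range(m):
        u = u + x - b_prev            # u(n) = x - b(n-1) + u(n-1)
        b = 1 if u > 1 else 0
        bits.append(b)
        b_prev = b
    return bits

def interval_decode(bits):
    """Intersect the per-sample constraints: b(n) = 1 iff n*x - s > 1,
    where s is the sum of earlier output bits; return (lo, hi)."""
    lo, hi, s = 0.0, 1.0, 0
    for n, b in enumerate(bits, start=1):
        t = (1 + s) / n               # threshold on x implied by sample n
        if b:
            lo = max(lo, t)           # b(n) = 1  =>  x > (1 + s)/n
        else:
            hi = min(hi, t)           # b(n) = 0  =>  x <= (1 + s)/n
        s += b
    return lo, hi

x, m = 0.37, 64
bits = modulate(x, m)
lo, hi = interval_decode(bits)
counting_estimate = sum(bits) / m     # conventional linear (counting) filter
midpoint_estimate = (lo + hi) / 2     # decode to the quantization-interval midpoint
```

By construction the true input always lies in the decoded interval [lo, hi], and decoding to the midpoint typically yields a tighter estimate than the count, which is the qualitative gap between linear and optimal filtering studied in this paper.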
Fig. 3. Block diagram for the first-order Σ∆ modulator. D refers to the delay element and Q refers to the one-bit quantizer.
We are now ready to introduce our framework for studying incremental Σ∆ data converters. Note that to completely characterize a quantization system, one needs