Adaptive quantization methods


Transcript of Adaptive quantization methods

Page 1: Adaptive quantization methods

A Seminar on Adaptive Quantization Methods

Presented By

Mahesh Pawar

Page 2: Adaptive quantization methods

Adaptive Quantization

• Linear quantization
• Instantaneous companding: SNR is only weakly dependent on the signal variance σx² for large μ-law compression (μ ≈ 100–500)
• Optimum SNR: minimize the quantization error variance when the amplitude distribution of x[n] is known, which leads to a non-uniform distribution of quantization levels.
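To illustrate that weak dependence, here is a minimal sketch (my own illustration, not from the slides) of μ-law companding with μ = 255 ahead of an 8-bit uniform quantizer; the measured SNR stays within a few dB while the input level varies by 26 dB:

```python
import numpy as np

MU = 255.0  # mu-law compression parameter (illustrative choice)

def mu_compress(x):
    """Logarithmic compressor: c(x) = sign(x) * ln(1 + mu|x|) / ln(1 + mu)."""
    return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

def mu_expand(y):
    """Inverse of mu_compress."""
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(MU)) / MU

def quantize_uniform(y, n_bits=8):
    """Fixed uniform quantizer on [-1, 1]."""
    levels = 2 ** n_bits
    idx = np.clip(np.floor((y + 1) / 2 * levels), 0, levels - 1)
    return (idx + 0.5) * 2 / levels - 1

rng = np.random.default_rng(0)
for sigma in (0.01, 0.05, 0.2):          # input levels spanning 26 dB
    x = np.clip(rng.normal(0, sigma, 50_000), -1, 1)
    xhat = mu_expand(quantize_uniform(mu_compress(x)))
    snr = 10 * np.log10(x.var() / ((x - xhat) ** 2).mean())
    print(f"sigma = {sigma}: SNR = {snr:.1f} dB")
```

A fixed linear quantizer's SNR, by contrast, falls one-for-one in dB as the input level drops.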

Quantization dilemma: we want to choose the quantization step size Δ large enough to accommodate the maximum peak-to-peak range of x[n], yet at the same time small enough to minimize the quantization error.
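The dilemma can be seen numerically. A minimal sketch (illustrative, not from the slides): a 3-bit uniform quantizer on unit-variance Gaussian samples, with a step size that is too small (overload error dominates), moderate, and too large (granular error dominates):

```python
import numpy as np

def uniform_quantize(x, step, n_bits=3):
    """Midrise uniform quantizer: 2**n_bits levels, clipping outside range."""
    levels = 2 ** n_bits
    idx = np.floor(x / step) + levels // 2     # shift so indices start at 0
    idx = np.clip(idx, 0, levels - 1)          # clipping = overload region
    return (idx - levels // 2) * step + step / 2

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 100_000)              # unit-variance input

for step in (0.1, 0.5, 2.0):
    err_var = (x - uniform_quantize(x, step)).var()
    print(f"step = {step}: error variance = {err_var:.4f}")
```

The intermediate step size wins: the small step clips the signal's peaks, while the large step wastes resolution on range the signal rarely uses.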

Page 3: Adaptive quantization methods

Solutions to the Quantization Dilemma

Adaptive Quantization:
• Solution 1: let Δ vary to match the variance of the input signal: Δ[n]
• Solution 2: use a variable gain, G[n], followed by a fixed quantizer with step size Δ: keep the signal variance of y[n] = G[n]x[n] constant.

Case 1: Δ[n] proportional to σx[n]
Case 2: G[n] proportional to 1/σx[n]
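A sketch of Case 2 (illustrative code; the block length is my assumption): estimate the local amplitude per block and apply G[n] = 1/σ, so the variance of y[n] = G[n]x[n] stays near one even when the input variance changes by a factor of 100:

```python
import numpy as np

rng = np.random.default_rng(1)
# Non-stationary input: a quiet segment followed by a loud one.
x = np.concatenate([rng.normal(0, 0.2, 4000), rng.normal(0, 2.0, 4000)])

block = 400                                # adaptation block length (assumption)
y = np.empty_like(x)
for start in range(0, len(x), block):
    seg = x[start:start + block]
    sigma = seg.std() + 1e-12              # local amplitude estimate
    y[start:start + block] = seg / sigma   # Case 2: G[n] proportional to 1/sigma

print("variance of y, quiet half:", round(float(y[:4000].var()), 3))
print("variance of y, loud half :", round(float(y[4000:].var()), 3))
```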


Page 4: Adaptive quantization methods

Types of Adaptive Quantization

• Feed-forward: adaptive quantizers that estimate σx[n] from the input signal itself.
• Feedback: adaptive quantizers that adapt the step size, Δ, on the basis of the quantized signal, or equivalently the codewords c[n].
• Instantaneous: amplitude changes reflect sample-to-sample variations in x[n]: rapid adaptation.
• Syllabic: amplitude changes reflect syllable-to-syllable variations in x[n]: slow adaptation.
• Adaptive quantization with a one-word memory.
• Switched quantization.

Page 5: Adaptive quantization methods

Feed Forward Adaptation

Variable Step Size
• Assume a uniform quantizer with step size Δ[n].
• x[n] is quantized using Δ[n]: both c[n] and Δ[n] need to be transmitted to the decoder.
• If c′[n] = c[n] and Δ′[n] = Δ[n] (no errors in the channel), the decoder output matches the quantizer output exactly.
• We do not have x[n] at the decoder to estimate Δ[n]: Δ[n] must be transmitted as side information.
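A feed-forward sketch along these lines (illustrative; the block length, the 3-bit quantizer, and the choice Δ = 4σ/2^B spanning roughly ±2σ are my assumptions): the encoder estimates σ per block from the input itself and transmits the step size alongside the codewords:

```python
import numpy as np

def quantize(x, step, n_bits=3):
    """Return integer codewords for a midrise uniform quantizer."""
    levels = 2 ** n_bits
    return np.clip(np.floor(x / step) + levels // 2, 0, levels - 1).astype(int)

def dequantize(idx, step, n_bits=3):
    levels = 2 ** n_bits
    return (idx - levels // 2) * step + step / 2

def feedforward_encode(x, block=128, n_bits=3):
    """Per block: estimate sigma from the INPUT, set Delta proportional to it,
    and emit (codewords, step).  The step is the side information."""
    codes, steps = [], []
    for start in range(0, len(x), block):
        seg = x[start:start + block]
        step = max(4.0 * seg.std() / 2 ** n_bits, 1e-9)  # span roughly +/- 2 sigma
        codes.append(quantize(seg, step, n_bits))
        steps.append(step)
    return codes, steps

def feedforward_decode(codes, steps, n_bits=3):
    """The decoder cannot estimate Delta[n] itself; it uses the received steps."""
    return np.concatenate([dequantize(c, s, n_bits) for c, s in zip(codes, steps)])

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 0.1, 2048), rng.normal(0, 1.0, 2048)])
codes, steps = feedforward_encode(x)
xhat = feedforward_decode(codes, steps)
snr = 10 * np.log10(x.var() / ((x - xhat) ** 2).mean())
print(f"feed-forward adaptive, 3 bits: SNR = {snr:.1f} dB")
```

Note that the quiet and loud halves of the input get very different step sizes, which is exactly what a fixed quantizer cannot do.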


Page 6: Adaptive quantization methods

Feed Forward Quantizer

• The time-varying gain is G[n]; both c[n] and G[n] need to be transmitted to the decoder.
• Ideally c′[n] = c[n] and G′[n] = G[n].
• G[n] cannot be estimated at the decoder: it has to be transmitted.
• Feed-forward systems estimate σx[n] and then make Δ or the quantization levels proportional to σx[n], or make the gain G[n] inversely proportional to σx[n].

Page 7: Adaptive quantization methods

Feed Backward Adaptive Quantization

• There is no need to send side information.
• The sensitivity of the adaptation to changing signal statistics is degraded, however, since only the output of the quantization encoder, rather than the original input, is used in the statistical analysis.

Page 8: Adaptive quantization methods

Adaptive Quantization with a One-Word Memory (Jayant Quantizer)

In backward adaptive quantization we do not have the input values available when adapting the quantizer, so it would seem that we must observe the quantizer output for a long time before adapting. Nuggehally S. Jayant at Bell Labs showed that we do not need to observe the quantizer output over a long period of time: the step size can be adjusted after observing a single output. Jayant named this approach "quantization with one word memory"; the quantizer is better known as the Jayant quantizer.

Mathematically, the adaptation process can be represented as

Δ[n] = M(l(n−1)) · Δ[n−1]

where l(n−1) is the quantization interval at time (n−1) and M(l) is a fixed multiplier assigned to that interval: multipliers for the inner intervals are smaller than one (the step shrinks after small outputs) and those for the outer intervals are greater than one (the step grows after large outputs).
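A sketch of the Jayant update (the multiplier values and initial step here are illustrative placeholders, not Jayant's optimized tables): after each sample the step is multiplied by M(l), with M < 1 for inner levels and M > 1 for outer ones, so the step tracks the signal level with no side information:

```python
import numpy as np

def jayant_quantize(x, n_bits=3, delta0=0.5):
    """Backward-adaptive (Jayant) midrise quantizer with one-word memory."""
    half = 2 ** (n_bits - 1)                 # magnitude levels per sign
    # One multiplier per magnitude level: < 1 shrinks the step after small
    # outputs, > 1 expands it after large ones.  Illustrative values only.
    M = np.linspace(0.8, 2.0, half)
    delta, xhat, steps = delta0, [], []
    for sample in x:
        l = min(int(abs(sample) / delta), half - 1)   # magnitude level l(n)
        sign = 1.0 if sample >= 0 else -1.0
        xhat.append(sign * (l + 0.5) * delta)
        steps.append(delta)
        # Delta[n] = M(l(n-1)) * Delta[n-1], clamped to avoid run-away
        delta = min(max(delta * M[l], 1e-6), 1e6)
    return np.array(xhat), np.array(steps)

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0, 0.05, 3000), rng.normal(0, 1.0, 3000)])
xhat, steps = jayant_quantize(x)
print("mean step, quiet half:", round(float(steps[:3000].mean()), 4))
print("mean step, loud half :", round(float(steps[3000:].mean()), 4))
```

A decoder receiving only the codewords (sign and level l) can run the identical update and stay in step with the encoder, which is why no side information is needed.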

Page 9: Adaptive quantization methods

Figure -Output levels for the Jayant quantizer.

Page 10: Adaptive quantization methods

Switched Quantization

• This scheme has shown improved performance even when the number of quantizers in the bank, L, is two.

• As L → ∞, the switched quantizer converges to the adaptive quantizer.

Fig. : Switched Quantization
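A minimal sketch of switched quantization with L = 2 (the step sizes and block length are my assumptions): the encoder tries each fixed quantizer in the bank on every block, keeps the best one, and sends its index as one bit of side information:

```python
import numpy as np

def uq(x, step, n_bits=3):
    """Fixed midrise uniform quantizer."""
    levels = 2 ** n_bits
    idx = np.clip(np.floor(x / step) + levels // 2, 0, levels - 1)
    return (idx - levels // 2) * step + step / 2

def switched_quantize(x, bank, block=256, n_bits=3):
    """Per block, pick the quantizer in the bank with the smallest error;
    the chosen index is cheap side information (log2 L bits per block)."""
    out, choices = np.empty_like(x), []
    for start in range(0, len(x), block):
        seg = x[start:start + block]
        cands = [uq(seg, s, n_bits) for s in bank]
        best = int(np.argmin([((seg - c) ** 2).mean() for c in cands]))
        out[start:start + block] = cands[best]
        choices.append(best)
    return out, choices

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(0, 0.1, 2048), rng.normal(0, 1.0, 2048)])
bank = [0.05, 0.5]                     # L = 2 step sizes (assumption)
xhat, choices = switched_quantize(x, bank)
snr = 10 * np.log10(x.var() / ((x - xhat) ** 2).mean())
print(f"switched (L=2), 3 bits: SNR = {snr:.1f} dB")
```

With a larger bank, the selected step size approximates the continuously adapted Δ[n] ever more closely, which is the convergence the slide describes.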

Page 11: Adaptive quantization methods

Thank You


Page 14: Adaptive quantization methods

Fig. : Forward Adaptive Quantizer


Page 15: Adaptive quantization methods

References

Paper
• A. Gersho, "Quantization," IEEE Communications Magazine, September 1977.

Book
• K. Sayood, Introduction to Data Compression.