# Vector Quantization in Speech Coding: Variable Rate, Memory and Lattice Quantization

[Doctoral thesis]

This dissertation focuses on different quantization methods in speech coding.

*Variable rate quantization* has the potential to substantially reduce the bit rate requirements in speech coding. Because the speech signal is highly time-varying and the ear's sensitivity is not uniform, variable rate coding can lead to performance gains in applications that do not require a fixed bit rate. To quantify these potential gains, different classes of speech signals are studied, both from an information theoretic and a perceptual perspective. A quality criterion for variable rate quantization is proposed, and it is shown that the criterion leads to the lowest possible average distortion under fairly general conditions. We also propose a coder design technique, where the quality criterion is employed to assemble a set of coder modes for a multi-mode variable rate speech coder.
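The basic gain mechanism of variable rate coding can be illustrated with a toy example (not from the thesis): when the quantizer indices of a Gaussian source are not equally likely, entropy coding of the indices brings the average rate below the fixed rate of the same quantizer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: a 3-bit uniform scalar quantizer applied to a
# Gaussian "speech parameter".  Fixed-rate coding spends log2(8) = 3 bits on
# every index; variable-rate (entropy) coding can approach the index entropy,
# which is lower because the Gaussian pdf makes the central cells far more
# likely than the outer ones.
x = rng.standard_normal(100_000)
edges = np.linspace(-3.0, 3.0, 9)                 # 8 quantization cells
idx = np.clip(np.digitize(x, edges) - 1, 0, 7)

p = np.bincount(idx, minlength=8) / idx.size
entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))   # achievable average rate

print("fixed rate   : 3.000 bits/sample")
print(f"entropy bound: {entropy:.3f} bits/sample")
```

The gap between the two rates is exactly what a variable rate coder can convert into lower distortion at the same average bit budget.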

*Memory quantization* is a category of quantization where information from past quantization instants is exploited to reduce the distortion. Parameters in speech coding are typically correlated from one quantization instant to the next, and this correlation makes memory quantization advantageous. An extension of memory VQ systems with a fixed memoryless VQ, denoted *safety-net* VQ, is proposed. This safety-net VQ is shown to improve the performance for quantization of several parameter sets occurring in speech coding, for transmission over both noisy and noiseless channels. Furthermore, we propose a memory VQ scheme based on variable-length coding of quantizer indices, where the bit allocation of a dual-stage quantizer is dynamically adapted.
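The safety-net idea can be sketched in a few lines (a toy model, not the thesis design): a predictive VQ quantizes the prediction residual, a memoryless codebook quantizes the frame directly, and for each frame the encoder keeps whichever reconstruction is better. The codebooks, source model and outlier process below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def nearest(codebook, v):
    """Return (index, squared error) of the codevector nearest to v."""
    d = np.sum((codebook - v) ** 2, axis=1)
    i = int(np.argmin(d))
    return i, float(d[i])

# Hypothetical 2-D "speech parameter" track: strongly correlated frames
# (AR(1) with coefficient a) plus occasional outlier frames, which are
# exactly where a purely predictive VQ breaks down.
n, dim, a = 2000, 2, 0.9
x = np.zeros((n, dim))
for t in range(1, n):
    x[t] = a * x[t - 1] + 0.3 * rng.standard_normal(dim)
    if rng.random() < 0.02:                       # rare outlier frame
        x[t] += 4.0 * rng.standard_normal(dim)

# Toy random codebooks; a real design would train them (e.g. with LBG).
cb_pred = 0.5 * rng.standard_normal((16, dim))    # for prediction residuals
cb_safe = 2.0 * rng.standard_normal((16, dim))    # memoryless safety net

prev_p = np.zeros(dim)      # memory of the predictive-only coder
prev_s = np.zeros(dim)      # memory of the safety-net coder
err_p = err_s = 0.0
for v in x:
    # Predictive-only coder: quantize the residual v - a*prev.
    ip, _ = nearest(cb_pred, v - a * prev_p)
    prev_p = a * prev_p + cb_pred[ip]
    err_p += float(np.sum((v - prev_p) ** 2))

    # Safety-net coder: try both branches, keep the better reconstruction
    # (one extra bit per frame would signal the choice to the decoder).
    jp, ep = nearest(cb_pred, v - a * prev_s)
    js, es = nearest(cb_safe, v)
    rec = (a * prev_s + cb_pred[jp]) if ep <= es else cb_safe[js]
    prev_s = rec
    err_s += float(np.sum((v - rec) ** 2))

print(f"predictive only : {err_p / n:.4f} per frame")
print(f"with safety net : {err_s / n:.4f} per frame")
```

The safety net helps in two ways: it caps the damage on outlier frames, and by keeping the decoder memory sane it also prevents error propagation into the following frames, which is why it is attractive over noisy channels as well.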

*Lattice quantization* is a technique where the regular structure of a *lattice* is incorporated in a vector quantizer. The regularity can be exploited to reduce storage and processor capacity requirements. We propose a training algorithm for the design of lattices for vector quantization, which optimizes the quantization performance of the lattice, as given by the *normalized second moment*, G. Several new structures, which improve on previously known values of G, are found. Theory for high-rate lattice quantization of Gaussian variables is also presented, leading to design rules and to new insights into the performance of lattice quantization. Furthermore, lattice quantization is generalized by allowing global modifications of the quantizer while keeping a locally lattice-similar structure. It is shown that generalized lattice quantization of Gaussian variables can achieve performance comparable to that of pdf-optimized quantization, while retaining some of the advantages of conventional lattice quantization.
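The figure of merit G can be estimated numerically for any lattice given its generator matrix. A minimal Monte Carlo sketch (not the thesis algorithm): sample uniformly over one fundamental cell, map each sample to its nearest lattice point, and normalize the mean squared error by the cell volume. For reference, the integer lattice Z² has G = 1/12 ≈ 0.0833 and the hexagonal lattice A₂ attains the 2-D optimum G = 5/(36√3) ≈ 0.0802.

```python
import numpy as np

rng = np.random.default_rng(2)

def second_moment(basis, n_samples=200_000):
    """Monte Carlo estimate of the normalized second moment of a lattice.

    G = E||x - Q(x)||^2 / (n * V^(2/n)), where Q maps x to the nearest
    lattice point and V = |det(basis)| is the Voronoi cell volume.
    Rows of `basis` are the generator vectors.
    """
    n = basis.shape[0]
    vol = abs(np.linalg.det(basis))
    # Uniform samples over one fundamental parallelepiped; since the
    # quantization error is periodic in the lattice, this gives the same
    # error statistics as sampling one Voronoi cell.
    x = rng.uniform(0.0, 1.0, size=(n_samples, n)) @ basis
    # Candidate lattice points around the origin cell; offsets -1..2 per
    # coordinate are enough to contain the true nearest neighbour here.
    offs = np.array(np.meshgrid(*[range(-1, 3)] * n)).T.reshape(-1, n)
    best = np.full(n_samples, np.inf)
    for o in offs:
        d = np.sum((x - o @ basis) ** 2, axis=1)
        best = np.minimum(best, d)
    return best.mean() / (n * vol ** (2.0 / n))

G_Z2 = second_moment(np.eye(2))                                  # ≈ 1/12
G_A2 = second_moment(np.array([[1.0, 0.0], [0.5, np.sqrt(3) / 2]]))

print(f"G(Z^2) ≈ {G_Z2:.5f}")
print(f"G(A2)  ≈ {G_A2:.5f}")
```

Minimizing exactly this quantity over the generator matrix is what a lattice training algorithm aims at; a lower G translates directly into lower granular distortion at a given rate.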

This record was created 2006-09-19. Last modified 2016-02-01.

CPL Pubid: 1040