Lossless predictive coding

Objective of the project

  1. Generate Huffman codewords using Huffman coding, to be transmitted to the decoder.
  2. Compare and analyze 7 fixed, linear Differential Pulse Code Modulation (DPCM) predictors to find out which one achieves the best compression ratio.
  3. Compare the reconstructed image against the original image to verify that the compression is lossless.

Introduction

As we know, there is a strong correlation between spatially adjacent pixels. The aim of predictive coding is to remove this redundancy between neighboring pixels by encoding only the residual between the actual and the predicted value (only the new information). In other words, a pixel is coded as the difference between its actual value and a predicted value computed from previously decoded values. As a result, the compression ratio depends on the variance of the image, the level of quantization of the difference values and the quality of the prediction.
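Expressed as formulas (with x denoting the actual pixel value, x̂ the value predicted from previously decoded pixels, and e the residual):

Transmitted residual: e = x - x̂

Reconstruction at the decoder: x = x̂ + e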

After predictive coding, we can reduce the original file size by applying Differential Pulse Code Modulation (DPCM) encoding and building a Huffman entropy codebook before transmitting the image. The decoder uses the Huffman codebook to first perform Huffman entropy decoding, followed by DPCM decoding to derive the reconstructed image.

Experimental result

Differential Pulse Code Modulation (DPCM) Encoding

Encode the original Lena512.pgm image into DPCM values using the following 7 fixed, linear predictor methods:

  • A
  • (A + C)/2
  • (A + B + C)/3
  • A + B - C
  • A + (B - C)/2
  • B + (A - C)/2
  • (A + B)/2

Using the 7 formulas above, we compute the difference between each current pixel and its predicted value. The resulting predictor (residual) table is sent to the next step, the entropy encoder.
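A minimal Python sketch of this encoding step is given below. It assumes the usual neighbor layout for these predictors (A = pixel to the left, B = pixel above, C = pixel above-left), integer division for the averaging, and that missing neighbors at the image border are treated as 0; these details and the function names are assumptions for illustration, not taken from the report.

    # Sketch of the seven fixed linear predictors; A = left, B = above, C = above-left (assumed layout).
    def predict(method, A, B, C):
        if method == 1: return A
        if method == 2: return (A + C) // 2
        if method == 3: return (A + B + C) // 3
        if method == 4: return A + B - C
        if method == 5: return A + (B - C) // 2
        if method == 6: return B + (A - C) // 2
        if method == 7: return (A + B) // 2

    def dpcm_encode(img, method):
        """Return the table of residuals (actual minus predicted) for a 2-D list of pixel values."""
        rows, cols = len(img), len(img[0])
        residuals = [[0] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                A = img[r][c - 1] if c > 0 else 0                    # left neighbor
                B = img[r - 1][c] if r > 0 else 0                    # neighbor above
                C = img[r - 1][c - 1] if (r > 0 and c > 0) else 0    # neighbor above-left
                residuals[r][c] = img[r][c] - predict(method, A, B, C)
        return residuals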

The results show that the DPCM predictor B + (A - C)/2 achieves the best compression ratio.

Entropy encoder (Huffman coding):

The following steps show how the Huffman code is generated:

  • Step 1: Retrieve the output of the DPCM encoding.
  • Step 2: Sort the DPCM values by their probability of occurrence, in descending order.
  • Step 3: Assign a Huffman codeword to each value according to its probability.

For example:

The probabilities of each pair of child nodes are added to create their parent node. This merging of probabilities continues until the root node, with final probability 1.0, is reached, as shown above. Hence, the Huffman coding table is formed.

A bit is assigned to every branch: the ‘0’ bit is assigned to the left sub-tree of every node and the ‘1’ bit to the right sub-tree of every node.

Average length per symbol = 1×0.6+2×0.2+3×0.1+3×0.1= 1.6 bits/symbol
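A minimal Python sketch of this construction is shown below, applied to the four-symbol example above (probabilities 0.6, 0.2, 0.1, 0.1). The symbol names s1..s4 are placeholders, and which branch receives ‘0’ versus ‘1’ at each merge is a convention; any consistent choice gives the same codeword lengths and the same 1.6 bits/symbol average.

    import heapq, itertools

    def huffman_codebook(probabilities):
        """Repeatedly merge the two least probable nodes; '0' is prefixed on the
        first popped branch and '1' on the second branch of every merge."""
        tie = itertools.count()          # tie-breaker so equal probabilities never compare dicts
        heap = [(p, next(tie), {sym: ""}) for sym, p in probabilities.items()]
        heapq.heapify(heap)
        while len(heap) > 1:
            p0, _, left = heapq.heappop(heap)        # least probable node
            p1, _, right = heapq.heappop(heap)       # second least probable node
            parent = {s: "0" + c for s, c in left.items()}
            parent.update({s: "1" + c for s, c in right.items()})
            heapq.heappush(heap, (p0 + p1, next(tie), parent))   # child probabilities added to form the parent
        return heap[0][2]

    probs = {"s1": 0.6, "s2": 0.2, "s3": 0.1, "s4": 0.1}
    codes = huffman_codebook(probs)
    avg_length = sum(probs[s] * len(codes[s]) for s in probs)    # 1*0.6 + 2*0.2 + 3*0.1 + 3*0.1 = 1.6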

In our experiment, since the optimal predictor is B + (A - C)/2, the codebook is generated for this predictor. The DPCM values range from -75 to 87. The table below shows the first 8 of the 162 distinct DPCM values in the Huffman codebook.

RESULT DISCUSSIONS: (Optimal Predictor B + (A - C)/2, Encoding Part)

Average bits/pixel = Compressed size (bits) / (columns × rows) = Compressed size / (512 × 512)

After generating the 162 codewords, the compressed image occupies 902428 bits. The derived average is approximately 3.442 bits/pixel.

Compression ratio = Original image size / Compressed image size

Compression ratio = 8 bits / (Average bits/pixel) (image resolution: 8 bits per pixel)

The original image (Lena512.pgm picture) is a fixed 8 bits/pixel. Therefore, the compression ratio is 8/3.442=2.3239 by using Huffman codes.
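The same figures can be reproduced with a few lines of arithmetic (the numbers are taken directly from the experiment above):

    compressed_bits = 902428                       # total size of the Huffman-coded image
    pixels = 512 * 512                             # Lena512.pgm resolution
    avg_bits_per_pixel = compressed_bits / pixels  # ~3.442 bits/pixel
    compression_ratio = 8 / avg_bits_per_pixel     # ~2.324 (original image is 8 bits/pixel)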

To write the compressed file in binary format, every 8 adjacent bits of the Huffman bit stream are read and converted into an integer. These integers are written to the file and are read back later for Huffman decoding.
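A minimal sketch of this packing step is shown below, assuming the Huffman output is held as a string of '0'/'1' characters; zero-padding of the final partial byte is an assumption, not something stated in the report.

    def pack_bits(bitstream):
        """Convert every group of 8 adjacent bits into one integer (byte) for the output file."""
        out = []
        for i in range(0, len(bitstream), 8):
            group = bitstream[i:i + 8].ljust(8, "0")   # zero-pad the last group if incomplete (assumption)
            out.append(int(group, 2))
        return bytes(out)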

Entropy decoder (Huffman Decoding):

The Huffman decoder uses the compressed image to regenerate the DPCM table.

  1. The integers in the compressed file are read and converted back into a sequence of binary bits.
  2. Bits are read one at a time and appended to the current code; as long as the accumulated bits do not match any codeword, reading continues until a match is found in the codebook. The corresponding DPCM value for that codeword is then retrieved.
  3. Step 2 is repeated until the bit sequence is exhausted. This regenerates the DPCM table, which is then passed on to DPCM decoding.
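A minimal sketch of this matching loop is given below. The two codewords in the sample codebook are assumptions, chosen only to be consistent with the worked example that follows; the real codebook holds all 162 codewords.

    def huffman_decode(bitstream, codebook):
        """Extend the current prefix one bit at a time until it matches a codeword,
        emit the corresponding DPCM value, then start a new prefix."""
        values, prefix = [], ""
        for bit in bitstream:
            prefix += bit
            if prefix in codebook:           # a prefix code guarantees a unique, unambiguous match
                values.append(codebook[prefix])
                prefix = ""
        return values

    codebook = {"11011100": 87, "11011101": 86}             # assumed codewords, consistent with the example below
    huffman_decode("110111001101110011011101", codebook)    # -> [87, 87, 86]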

For example:

After converting the integers back into a sequence of binary bits, the encoded bit stream of the first few pixels is: 110111001101110011011101

According to the Huffman lookup table, the corresponding DPCM values are: 87 87 86

DPCM Decoding:

After decoding DPCM, the reconstructed image is retrieved.
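A minimal sketch of this reconstruction for the optimal predictor B + (A - C)/2 is given below, using the same (assumed) neighbor layout and border handling as the encoder sketch earlier. Because the decoder forms exactly the same prediction from already-reconstructed pixels and adds back the transmitted residual, every pixel is recovered bit-for-bit, which is what makes the scheme lossless.

    def dpcm_decode(residuals):
        """Rebuild the image from the residual table with predictor B + (A - C)//2,
        where A, B, C are already-reconstructed neighbors (left, above, above-left)."""
        rows, cols = len(residuals), len(residuals[0])
        img = [[0] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                A = img[r][c - 1] if c > 0 else 0
                B = img[r - 1][c] if r > 0 else 0
                C = img[r - 1][c - 1] if (r > 0 and c > 0) else 0
                img[r][c] = residuals[r][c] + (B + (A - C) // 2)
        return img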

From the images above, it can be concluded that the compression is lossless: there is no difference between the original image and the reconstructed one, either visually in the images shown or in the DPCM values before encoding and after decoding.

Project Discussion

Analyzing Different Predictors for Lena.pgm

From the diagram above, we can see that the compression ratio obtained with one neighboring pixel (Predictor 1: A) is lower than with two or three neighboring pixels, and predictors that use 2 neighboring pixels give a lower compression ratio than those that use 3.

Analyzing Different Predictors for other images:

To further evaluate the performance of the predictors, 4 different images were chosen and the compression ratio of each predictor was computed for them.

From the diagrams above, the compression ratio of Predictors 3 to 6 ((A + B + C)/3, A + B - C, A + (B - C)/2 and B + (A - C)/2) is higher than that of the rest. Since there is a strong correlation between adjacent pixels, any predictor that exploits this correlation well produces a good compression ratio.

As a result, the methods that use only 1 or 2 neighboring pixels cannot exploit this correlation well.

For example:

In the first method, Predictor A is used (only 1 neighboring pixel). The encoded values depend only on the previous value in the same row, so the values of the first column cannot be predicted; in other words, this predictor does not use the correlation between neighboring pixels appropriately.


On the other hand, images whose grey levels are concentrated in the center region of the scale have a lower compression ratio than those whose grey levels spread over the entire white-grey-black scale.

Since the conducted experiments pertain only to .pgm format images (greyscale images), we were unable to determine the compression ratio for color images (RGB, YCbCr, HSV color spaces).

Conclusion

From all the experiments and diagrams above, we can assert that there is no single predictor that achieves the best compression ratio for every image, because different images need different predictors to achieve better compression results.

In general, since there is a strong correlation between spatially adjacent pixels, any predictor that exploits this correlation between adjacent pixels produces a good compression ratio. In our experiments, Predictors 3 to 6 produce better compression than the rest.

In practice, static Huffman coding is popular and easy to implement, but it does not achieve the theoretically optimum number of bits per symbol, because every Huffman codeword must be an integer number of bits long. For example, if the probability of a symbol is 0.9, the optimum codeword length is about 0.15 bits, whereas Huffman coding must use 1 bit. Moreover, if the symbol probabilities are unknown or not stable (the source changes), dynamic Huffman coding should be chosen, but its implementation is considerably more complicated.
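The 0.15-bit figure follows from the information content of the symbol: optimum codeword length = -log2(0.9) ≈ 0.152 bits, which Huffman coding must round up to a whole bit.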

On the other hand, to achieve non-integer code lengths and probabilities derived in real time, arithmetic coding is a good alternative. However, arithmetic coding is slow because it requires many multiplications (and, in some cases, divisions).
