Evaluation Of An Error Control Codec Information Technology Essay

The assignment's objective is to design and evaluate an error control codec. The aim is to demonstrate Hamming code theory in practice.

The first part presents the design of an encoder and its simulation. From the encoder simulation we can see how the code words are generated and when a codeword is valid.

The decoder's purpose is to recover the codeword from the received word. To accomplish this, syndrome decoding is used. The design and simulation of the decoder are shown in Answer 2.

Finally, a codec is designed with the addition of XOR gates to introduce errors. The main reason for this is to understand why a Hamming code can detect two errors but correct only one.

Introduction to Hamming linear block codes

Noise causes errors (data distortion) during transmission, so a received message may contain bit errors. A repercussion of noise is the alteration of one or more bits of a codeword; since signals have binary form, altering a bit means inverting its state. Some examples of noisy communication channels are: a) an analogue telephone line over which two modems communicate digital information, and b) a disk drive. There are two kinds of solutions that can achieve perfect communication over an imperfect noisy communication channel: the physical solution and the system solution. Physical modifications increase the cost of the communication channel.

Information theory and coding theory offer an alternative approach: we accept the given noisy channel as it is and add communication systems to it, so that we can detect and correct the errors introduced by the channel. As shown in figure 1, we add an encoder before the channel and a decoder after it. The encoder encodes the source message s into a transmitted message t, adding redundancy to the original message in some way. The channel adds noise to the transmitted message, yielding a received message r. The decoder uses the known redundancy introduced by the encoding system to infer both the original signal and the added noise.

Figure 1 – Error correcting codes for the binary symmetric channel [1]

The only cost of the system solution is the computational requirement at the encoder and decoder.

Error detection and error correction

In order to make error correction possible, the bit errors must first be detected. When an error has been detected, correction can be obtained by: a) the receiver asking for repeated transmission of the incorrect codeword until a correct one has been received → Automatic Repeat Request (ARQ), or b) using the structure of the error correcting code to correct the error → Forward Error Correction (FEC). Forward Error Correction is used for this assignment.

Figure 2 – The main methods to introduce error correction coding: error detection followed by Automatic Repeat Request, or Forward Error Correction using either block codes or convolutional codes

Linear block codes

“Linear block codes are a class of parity check codes that can be characterized by the (n, k) notation. The encoder transforms a block of k message digits into a longer block of n codeword digits constructed from a given alphabet of elements. When the alphabet consists of two elements (0 and 1), the code is a binary code comprising binary digits (bits).”[4] In the current assignment, only binary linear block codes are considered.

The output of an information source is a sequence of binary digits “0” or “1” (since we are discussing binary codes). In block coding, this binary information sequence is segmented into message blocks of fixed length. “Each block can represent any of 2^k distinct messages. The channel encoder transforms each k-bit data block into a larger block of n bits, called code bits. The (n-k) bits, which the channel encoder adds to each data block, are called redundant or parity bits. Redundant or parity bits carry no information. Such a code is referred to as an (n, k) code.” [5] The encoding result is the codeword.


“Any generator matrix of an (n, k) code can be reduced by row operations and column permutations to the “systematic form”.” [6] We call a code systematic when the first k digits (information or message bits) of the code word are exactly the same as the message block and the last n-k digits are the parity bits, as shown below.

Figure 3 – An (n, k) systematic block code: the first k digits of the n-digit codeword are the message (information) bits and the remaining n-k digits are the redundant (parity) bits

Encoding and Decoding of Linear Block Codes

The generator matrix is a matrix of basis vectors. The generator matrix G for an (n, k) block code can be used to generate the appropriate n-digit codeword from any given k-digit data sequence. The H and corresponding G matrices for the current assignment block code (6, 3) are shown below:

H is the parity check matrix

G is the generator matrix

The first three columns correspond to the data bits and the remaining three columns to the parity bits. “Systematic code words are sometimes written so that the message bits occupy the left hand portion of the codeword and the parity bits occupy the right hand portion. This reordering has no effect on the error detection or error correction properties of the code.”[4] “Study of G shows that on the left of the dotted partition there is a 3×3 unit diagonal matrix and on the right of the partition there is a parity check section. This part of G is the transpose of the left hand portion of H. As this code has a single error correcting capability, dmin, and hence the minimum codeword weight, must be 3. As the identity matrix has a single one in each row, the parity check section must contain at least two ones. In addition to this constraint, rows cannot be identical.” [7] The parity check bits are selected so that they are independent of each other.

The Hamming distance between two code words is defined as the number of bits in which they differ. The weight of a binary codeword is defined as the number of ones it contains (the number of its nonzero bits).

The codeword is given by multiplying the data bits by the generator matrix. For the binary field, modulo-2 addition is performed with the EXOR operation and modulo-2 multiplication with the AND operation.

The parity check equations are shown below. If the components of the received word satisfy these equations, then the received word is a valid codeword. These equations can be written in matrix form as Hc = 0, where c is the codeword arranged as a column vector.
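For this (6, 3) code the parity equations can be read off from the code word table in Answer 1 (Table 1 below): P1 = D1 ⊕ D2, P2 = D1 ⊕ D3 and P3 = D2 ⊕ D3. The following MATLAB sketch illustrates the encoding and the matrix-form parity check; the G and H matrices are written out from those tabulated code words, so they should be taken as an illustrative reconstruction, and the variable names are arbitrary.

% Reconstructed (6,3) generator and parity check matrices in
% [D1 D2 D3 P1 P2 P3] order (an assumption based on the code words
% listed in Table 1).
G = [1 0 0 1 1 0;
     0 1 0 1 0 1;
     0 0 1 0 1 1];        % identity part carries the data, right part the parity
H = [1 1 0 1 0 0;         % row 1 checks P1 = D1 + D2
     1 0 1 0 1 0;         % row 2 checks P2 = D1 + D3
     0 1 1 0 0 1];        % row 3 checks P3 = D2 + D3

m = [1 0 1];              % example 3-bit message
c = mod(m * G, 2)         % modulo-2: AND for multiplication, EXOR for addition
                          % gives c = [1 0 1 1 0 1], matching Table 1
check = mod(H * c', 2)    % all-zero column, so c is a valid code word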

The syndrome

Let c be a code vector which was transmitted over a noisy channel. At the receiver we may obtain a corrupted vector r, and the decoder must recover c from r. The decoder computes

S = Hr

where S is called the syndrome and r is the received vector (arranged as a column vector).


If S is nonzero, then r is not a code word. The syndrome is the result of a parity check performed on r to determine whether r is a valid member of the codeword set. If r is a member, the syndrome S has the value 0. If r contains detectable errors, the syndrome has some nonzero value. In the case of FEC, the decoder will then take action to locate the errors and correct them.

“No column of H can be all zeros, or else an error in the corresponding codeword position would not affect the syndrome and would be undetectable.

All columns of H must be unique. If two columns of H were identical, errors in these two corresponding codeword positions would be indistinguishable.” [4]

Hamming code can correct a single bit error and detect two bit errors assuming no correction is attempted.
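As a worked illustration of syndrome decoding, the MATLAB sketch below (using the same reconstructed H, so an illustrative assumption) flips one bit of a valid code word, computes the syndrome, matches it against the columns of H and inverts the offending bit:

H = [1 1 0 1 0 0; 1 0 1 0 1 0; 0 1 1 0 0 1];   % reconstructed parity check matrix
c = [0 1 0 1 0 1];                  % code word for message 010 (Table 1)
e = [0 0 0 0 1 0];                  % single-bit error in position 5
r = mod(c + e, 2);                  % received word

S = mod(H * r', 2);                 % 3x1 syndrome
if any(S)
    % For a single error the syndrome equals the column of H at the error
    % position, so find that column and invert the corresponding bit.
    [~, pos] = ismember(S', H', 'rows');
    r(pos) = 1 - r(pos);            % controlled inversion
end
isequal(r, c)                       % returns 1: the error has been corrected

This is the software counterpart of the 3-to-8 line syndrome decoder and the XOR gates used for controlled inversion in the hardware decoder.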

Answers to assignment questions

Task 1

Design the encoder for a (6,3) Hamming single error correcting codec using the “interleaved” [P1P2D1P3D2D3] format. You can implement your parity generation using XOR gates. Simulate your circuit to check for correct operation.

Answer 1

“An encoder is a device used to change a signal (such as a bitstream) or data into a code. The code may serve any of a number of purposes such as compressing information for transmission or storage, encrypting or adding redundancies to the input code, or translating from one code to another. This is usually done by means of a programmed algorithm, especially if any part is digital, while most analog encoding is done with analog circuitry.” [3] The encoder creates the codeword as a combination of information and parity bits.

Interleaving is a way of arranging data in a non-contiguous way in order to increase performance and avoid burst errors. In our case the interleaved format is used to protect the data bits from consecutive (burst) errors.

Figure 4 – Encoder design for a (6,3) Hamming single error correcting codec

Since the encoder is for a (6,3) Hamming single error correcting codec, there are 3 information bits and 3 parity bits. Thus, the encoder generates 2^k code words, where k is the number of information bits: 2^3 = 8.

The H and G matrices for the (6, 3) Hamming code are as given in the previous section.

All possible code words

Message x G    Codeword    Weight
[000] x G      000000      0
[001] x G      001011      3
[010] x G      010101      3
[100] x G      100110      3
[011] x G      011110      4
[101] x G      101101      4
[110] x G      110011      4
[111] x G      111000      3

Table 1 – All possible code words

The minimum distance is dmin=3
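Table 1 can be reproduced with a few lines of MATLAB (again assuming the reconstructed G); the snippet also confirms dmin, since for a linear code the minimum distance equals the smallest nonzero codeword weight:

G = [1 0 0 1 1 0; 0 1 0 1 0 1; 0 0 1 0 1 1];
msgs = dec2bin(0:7, 3) - '0';        % all 2^3 = 8 possible 3-bit messages
codewords = mod(msgs * G, 2);        % one code word per row
weights = sum(codewords, 2);         % Hamming weight of each code word
disp([codewords weights])            % reproduces Table 1
dmin = min(weights(weights > 0))     % smallest nonzero weight, expected 3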

Figure 5 – Encoder simulation

Checking if c=(D1D2D3P1P2P3) is a codeword

The EXOR gate (⊕) is a logic gate that gives an output of ‘1’ when exactly one of its inputs is ‘1’.

X1 (Input)   X2 (Input)   Y1 (Output)
0            0            0
0            1            1
1            0            1
1            1            0

Table 2 – Truth table of EXOR Gate

All three parity check equations evaluate to zero, therefore c is a valid codeword.

Task 2

Design the decoder for a (6,3) Hamming single error correcting codec using the “interleaved” [P1P2D1P3D2D3] format. You can use a 3-to-8 line decoder for syndrome decoding and XOR gates for the controlled inversion. Simulate your circuit to check for correct operation.

Answer 2

A decoder is a device which does the reverse of an encoder, undoing the encoding so that the original information can be retrieved. The same method used to encode is usually just reversed in order to decode. [3]

The decoder tries to recover the correct data word from the received codeword. This means that the detection and correction of errors in the codeword take place here.

Figure 6 – Decoder design for a (6,3) Hamming single error correcting codec

Decode the received codeword


Figure 7 – Decoder simulation

r is the received word: 111000

The syndrome computed from r is zero, therefore r is a valid code word.
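A quick software cross-check (an illustrative sketch using the reconstructed H in the non-interleaved [D1 D2 D3 P1 P2 P3] order, not the exact wiring of Figure 6):

H = [1 1 0 1 0 0; 1 0 1 0 1 0; 0 1 1 0 0 1];
r = [1 1 1 0 0 0];          % received word from the decoder simulation
S = mod(H * r', 2)          % syndrome is [0; 0; 0], so r is a valid code word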

Task 3

Join your encoder to decoder and add an XOR gate with an “input” in each bit transmission line to allow you to introduce errors into the transmission. Simulate your circuit and check that it can cope with the six single errors as expected.

Answer 3

Figure 8 – Codec design

Figure 9 – Six single errors

As shown in the figure above, the codec can cope with all six single errors. This is possible because:

Message x G    Encoder    Codeword    Weight
[000] x G      →          000000      0
[001] x G      →          001011      3
[010] x G      →          010101      3
[100] x G      →          100110      3
[011] x G      →          011110      4
[101] x G      →          101101      4
[110] x G      →          110011      4
[111] x G      →          111000      3

Table 3 – All possible code words and their Hamming weight

“The minimum distance of a linear block code is defined as the smallest Hamming distance between any pair of code words in the code.”[5]

The minimum distance is dmin=3.

The error correcting capability t of a code is defined as the maximum number of guaranteed correctable errors per codeword and is given by

t = ⌊(dmin − 1)/2⌋

where t is the error correcting capability. For dmin = 3 we get t = 1, so all single-bit error patterns are correctable.

“In general, a t-error correcting (n, k) linear code is capable of correcting a total of 2^(n-k) error patterns.”[4]
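The same check can be made in software. The MATLAB sketch below (an illustrative model of the codec, using the reconstructed matrices) injects each of the six single-bit errors into one code word and verifies that syndrome decoding restores it:

G = [1 0 0 1 1 0; 0 1 0 1 0 1; 0 0 1 0 1 1];
H = [1 1 0 1 0 0; 1 0 1 0 1 0; 0 1 1 0 0 1];
c = mod([1 1 1] * G, 2);                 % transmitted code word 111000

for pos = 1:6
    e = zeros(1, 6); e(pos) = 1;         % one of the six single errors
    r = mod(c + e, 2);                   % word seen by the decoder
    S = mod(H * r', 2);                  % nonzero syndrome flags the error
    [~, errpos] = ismember(S', H', 'rows');
    r(errpos) = 1 - r(errpos);           % controlled inversion
    fprintf('error in bit %d corrected: %d\n', pos, isequal(r, c));
end

All six error positions produce a distinct nonzero syndrome, so every single error is corrected.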

Task 4

By experimenting with your implemented codec, examine the effect, in terms of additional errors, of (i) all 15 double errors, (ii) all 20 triple errors, (iii) all 15 quadruple errors, (iv) all 6 quintuple errors, (v) the single sextuple error. Note. You only need consider one of the 8 possible input data words. Why?

Answer 4

(i)

Figure 10 – 15 double errors

(ii)

Figure 11 – 20 triple errors

(iii)

Figure 12 – 15 quadruple errors

(iv)

Figure 13 – 6 quintuple errors

(v)

Figure 14 – The single sextuple error

Since the error correcting capability is t = 1 and the codec always attempts a correction, error patterns with two or more errors are decoded incorrectly (typically a further bit is inverted), which explains the results above.
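The Task 4 experiment can also be repeated exhaustively in software (an illustrative model, as before): inject every error pattern of a given weight into one code word and count the bit errors remaining after decoding. Only one of the eight data words needs to be considered because the code is linear, so the residual errors depend only on the error pattern and not on which code word was sent.

G = [1 0 0 1 1 0; 0 1 0 1 0 1; 0 0 1 0 1 1];
H = [1 1 0 1 0 0; 1 0 1 0 1 0; 0 1 1 0 0 1];
c = mod([1 1 1] * G, 2);                     % any code word gives the same result

for nerr = 2:6
    patterns = nchoosek(1:6, nerr);          % all error patterns of weight nerr
    residual = zeros(size(patterns, 1), 1);
    for k = 1:size(patterns, 1)
        e = zeros(1, 6); e(patterns(k, :)) = 1;
        r = mod(c + e, 2);
        S = mod(H * r', 2);
        [~, pos] = ismember(S', H', 'rows');
        if pos > 0                           % syndrome matches a single-bit error,
            r(pos) = 1 - r(pos);             % so the decoder inverts that bit
        end
        residual(k) = sum(r ~= c);           % bit errors left after decoding
    end
    fprintf('%d injected errors: %d patterns, mean residual bit errors %.2f\n', ...
            nerr, size(patterns, 1), mean(residual));
end

With two or more injected errors the decoded word always differs from the transmitted one: the decoder either miscorrects (inverting a further bit) or, when the error pattern is itself a code word, does not detect the errors at all.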

Task 5

Calculate the post codec probability of a code word being in error, A(n), for each of the five categories examined in Task 4. Then calculate the overall number of errors per 6 bit word, Eav, given by the following model based on the binomial distribution

as a function of the channel bit error probability p. Plot the decoded error probability as a function of p. Over what range of p do you conclude that this codec is useful and why?

Answer 5

A(n) = 1 − (number of corrected error patterns / total number of error patterns with n errors)

A(n) is always 1 except when the codec detects and corrects the error, i.e. for a single error (n = 1), where A(1) = 1 − 6/6 = 0 (and trivially A(0) = 0).

Using MATLAB for the plot:

p = 0:0.01:1;
Eav = 15*p.^2.*(1-p).^4 + 20*p.^3.*(1-p).^3 + 15*p.^4.*(1-p).^2 + 6*p.^5.*(1-p) + p.^6;
pd = Eav/6;
plot(p, pd)
xlabel('Bit error probability (p)')
ylabel('Decoder error probability Pd(p)')
grid on

Figure 15 – Plot of decoder error probability (pd) as a function of p
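To help judge the range of p over which the codec is useful, the uncoded bit error probability p can be overlaid on the same axes (an addition to the script above, not part of the original plot); the codec is beneficial roughly where the decoded curve lies below this reference line, which is the case for small values of p:

p = 0:0.01:1;
Eav = 15*p.^2.*(1-p).^4 + 20*p.^3.*(1-p).^3 + 15*p.^4.*(1-p).^2 + ...
      6*p.^5.*(1-p) + p.^6;
pd = Eav/6;
plot(p, pd, p, p, '--')                    % decoded curve and uncoded reference
legend('Decoder error probability Pd(p)', 'Uncoded bit error probability p', ...
       'Location', 'northwest')
xlabel('Bit error probability (p)')
ylabel('Error probability')
grid on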

Conclusions

Parity bits must be added for error detection and correction.

Hamming distance is the criterion for error detection and correction.

Error detection can be done with the addition of one parity bit, but error correction needs more parity bits (Hamming code).

Hamming code can detect 2 bit errors assuming no correction is attempted.

Hamming code can correct only a single bit error.

The ability to correct single bit errors comes at a cost, but one that is lower than sending the entire message twice. Moreover, simply sending a message twice does not accomplish error correction.
