A Brief History Of Microprocessor Development

Introduction

This assignment looks at the history of computer development, which is often described in reference books in terms of the generations of computers that these devices are central to. Each generation of computers is characterised by a major technological advance that fundamentally changed the way computers perform and operate, resulting in smaller, cheaper, more powerful, and more efficient and reliable devices than their predecessors. Below I outline how this applies to the different generations of microprocessor development.

The microprocessor made the advent of the microcomputer possible. Before this, electronic central processing units (CPUs) were typically built from bulky discrete switching devices, and later from small-scale integrated circuits containing the equivalent of only a few transistors. By integrating the processor onto one or a very few large-scale integrated circuit packages, each containing the equivalent of thousands or even millions of discrete transistors, the cost of processing power was greatly reduced. Since the advent of the microprocessor in the mid-1970s, it has become the most prevalent implementation of the CPU.

The evolution of microprocessors has been known to follow what is termed ‘Moore’s Law’. This law suggests that the complexity of an integrated circuit, with respect to minimum component cost, doubles roughly every 24 months. The generalisation has proven true since the early 1970s. From their beginnings as the drivers for calculators, this continuous increase in power led to the dominance of microprocessors over every other form of computer; every system, from the largest mainframes of that era to the smallest handheld computers, now uses a microprocessor at its core.

A microprocessor is a single chip integrating all the functions of a central processing unit (CPU) of a computer. It includes all the logical functions, data storage, timing functions and interaction with other peripheral devices. In some cases, the terms ‘CPU’ and ‘microprocessor’ are used interchangeably to denote the same device. Like every genuine engineering marvel, the microprocessor too has evolved through a series of improvements throughout the 20th century. A brief history of the device along with its functioning is described below.

Working of a Microprocessor

It is the central processing unit (CPU) which coordinates all the functions of a computer. It generates timing signals and sends and receives data to and from every peripheral used inside or outside the computer. The commands required to do this are fed into the device in the form of current variations, which are converted into meaningful instructions by the use of Boolean logic expressions. The processor's work falls into two categories, arithmetic/logical functions and control functions, handled by the arithmetic and logic unit (ALU) and the control unit respectively. This data is communicated through groups of wires called buses: the address bus carries the ‘address’ of the location with which communication is needed, while the data bus carries the data being exchanged.
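To make the bus idea concrete, here is a minimal C sketch (not a model of any particular processor) in which a 256-byte memory is accessed through hypothetical bus_read and bus_write helpers that stand in for address-bus and data-bus traffic:

#include <stdio.h>
#include <stdint.h>

/* Hypothetical 8-bit data bus and 8-bit address bus: 256 addressable locations. */
static uint8_t memory[256];

/* The CPU places an address on the address bus and receives the value over the data bus. */
static uint8_t bus_read(uint8_t address)
{
    return memory[address];
}

/* The CPU places an address on the address bus and a value on the data bus to store it. */
static void bus_write(uint8_t address, uint8_t data)
{
    memory[address] = data;
}

int main(void)
{
    bus_write(0x10, 42);                                  /* store 42 at address 0x10     */
    printf("read %u\n", (unsigned)bus_read(0x10));        /* fetch it back over the buses */
    return 0;
}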


Microprocessor Types

Microprocessors are categorised into several broad types. They are:

CISC (Complex Instruction Set Computers)

RISC (Reduced Instruction Set Computers)

VLIW (Very Long Instruction Word Computers)

Superscalar processors

Microprocessor Evolution

The invention of the transistor in 1947 was a significant development in the world of technology. John Bardeen and Walter Brattain, together with William Shockley, received the 1956 Nobel Prize in Physics “for their researches on semiconductors and their discovery of the transistor effect.” A transistor could perform the function of the large, bulky components used in the computers of the early years. Soon it was found that the work of many such components could be performed by a group of transistors arranged on a single platform. This platform, known as the integrated circuit (IC), turned out to be a crucial achievement in computing and brought about a revolution in the use of computers. Jack Kilby of Texas Instruments (TI) was later honoured with the Nobel Prize for his invention of the IC, which paved the way for microprocessor development. Robert Noyce of Fairchild made a parallel development in IC technology and was awarded a patent on his device.

ICs proved that complex functions could be integrated onto a single chip with greatly improved speed and storage capacity. Both Fairchild and Texas Instruments began the mass manufacture of commercial ICs in the early 1960s. Finally, Intel’s Ted Hoff and Federico Faggin were credited with the design of the first microprocessor.

The world’s first microprocessor was the 4-bit Intel 4004, released in 1971. Next in line was the 8-bit 8008 microprocessor, developed by Intel in 1972 to perform more complex functions than the 4004.

This started a new era in computer applications. The huge mainframe computers were scaled down to much smaller devices that were relatively cheap. Earlier, computer use had been limited to large organisations; with the development of the microprocessor, new computer systems trickled down to the common man. The next processor in line was Intel’s 8080, with an 8-bit data bus and a 16-bit address bus, and it was amongst the most popular microprocessors of all time.

At the same time as Intel was manufacturing its processors, the Motorola Corporation developed its own 6800 in competition with Intel’s 8080. Faggin left Intel and formed his own firm, Zilog, which launched a new microprocessor, the Z80, in 1976. It was far superior to the previous two and offered direct competition to the large corporations.


Intel then developed the 8086, which still serves as the base model for the latest advancements in the microprocessor family. It was largely a complete processor, integrating all the required features. Motorola’s 68000 was one of the first microprocessors to use microcoding in its instruction set. Both lines were later developed into 32-bit architectures. Similarly, many players such as Zilog, IBM and Apple succeeded in getting their own products into the market. However, Intel held a commanding position in the market right through the microprocessor era.

The 1990s saw the large-scale application of microprocessors in personal computers developed by Apple, IBM and Microsoft. The decade witnessed a revolution in the use of computers, which by then had become household items, and the market expanded further as microprocessors found use in industry at all levels. Intel brought out its ‘Pentium’ processor, one of the most popular processors in use to date, which has been developed into a family of excellent processors pushing into the 21st century.

Microprocessor Architectures

Two dominant computer architectures exist for designing microprocessors and microcontrollers: the Harvard architecture and the von Neumann architecture. Both consist of four major subsystems: memory, input/output (I/O), the arithmetic/logic unit (ALU), and the control unit (diagrammed in Figures 1a and 1b). The ALU and control unit operate together to form the central processing unit (CPU). Instructions and data are held in high-speed memories called registers within the CPU. These components interact to complete the execution of instructions.

Figure 1a: Harvard architecture (the central processing unit connects through separate paths to input-output, data memory, and program memory, i.e. the instruction set).

Figure 1b: von Neumann architecture (the central processing unit connects to input-output and to a single memory holding both program and data).

Executing instructions with either architecture follows the fetch/decode/execute cycle. Instructions are fetched from program random access memory (RAM) into instruction registers. The control unit then decodes the instruction and sends it to the ALU. The ALU performs the appropriate operation on the data and sends the result back to the control unit for storage. The efficiency of the fetch/decode/execute cycle is highly dependent upon the architecture of the system.
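Purely as an illustration, the following C sketch simulates the fetch/decode/execute cycle for an invented accumulator machine; the opcodes, the two-byte instruction encoding and the tiny program are assumptions made up for the example, not any real instruction set.

#include <stdio.h>
#include <stdint.h>

enum { OP_LOAD = 0, OP_ADD = 1, OP_HALT = 2 };     /* invented opcodes */

int main(void)
{
    /* Program memory: each instruction is an opcode byte followed by an operand byte. */
    uint8_t program[] = { OP_LOAD, 5, OP_ADD, 7, OP_HALT, 0 };
    uint8_t pc = 0;        /* program counter         */
    int accumulator = 0;   /* single working register */

    for (;;) {
        uint8_t opcode  = program[pc];              /* fetch  */
        uint8_t operand = program[pc + 1];
        pc += 2;

        switch (opcode) {                           /* decode */
        case OP_LOAD: accumulator  = operand; break;          /* execute */
        case OP_ADD:  accumulator += operand; break;
        case OP_HALT: printf("result = %d\n", accumulator); return 0;
        }
    }
}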

Organization of the subsystems differs in the two architectures. The von Neumann architecture allows for one instruction to be read from memory or data to be read/written from/to memory at a time. Instructions and data are stored in the same memory subsystem and share a communication pathway or bus to the CPU.

The Harvard architecture, by contrast, consists of separate pathways for interaction between the CPU and memory. The separation allows instructions and data to be accessed concurrently. Also, a new instruction may be fetched from memory at the same time another one is finishing execution, allowing for a primitive form of pipelining. Pipelining reduces the average time per instruction (improving throughput), but main memory access time remains a major bottleneck in the overall performance of the system.
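A back-of-the-envelope comparison in C (assuming, purely for illustration, that a fetch and an execute each take one clock cycle) shows why even this primitive overlap helps:

#include <stdio.h>

int main(void)
{
    int n = 100;              /* instructions to run (illustrative figure)        */
    int sequential = n * 2;   /* no overlap: fetch (1 cycle) + execute (1 cycle)  */
    int pipelined  = 1 + n;   /* after the first fetch, one instruction completes */
                              /* per cycle because fetch and execute overlap      */
    printf("sequential: %d cycles, pipelined: %d cycles\n", sequential, pipelined);
    return 0;
}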


In order to reduce the time required to fetch an instruction or data item, a fast memory called a cache may be used (Figure 2). The cache exploits the principle of locality: when a piece of data or an instruction is fetched, the same item, or items stored nearby, will likely be accessed in the near future. Thus, instead of the CPU spending a large amount of time accessing program or data memory (main memory) for every piece of data or instruction, it can check the cache first and go to main memory only if the desired item is not in the cache. Access time may be measured in clock cycles.

Figure 2: Hierarchy of memory (registers, then cache, then RAM, in order of increasing storage size and decreasing speed and cost).
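The check-cache-first idea can be sketched in C with a tiny, invented direct-mapped cache in front of a 256-byte main memory; the sizes and the read_byte helper are assumptions for illustration, not a model of real cache hardware.

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define CACHE_LINES 16                  /* tiny direct-mapped cache (sizes invented) */

static uint8_t main_memory[256];

static struct {
    bool    valid;
    uint8_t tag;                        /* full address kept as the tag, for simplicity */
    uint8_t data;
} cache[CACHE_LINES];

/* Check the cache first; fall back to (slower) main memory only on a miss. */
static uint8_t read_byte(uint8_t address)
{
    unsigned line = address % CACHE_LINES;
    if (cache[line].valid && cache[line].tag == address) {
        return cache[line].data;                  /* cache hit                    */
    }
    uint8_t value = main_memory[address];         /* cache miss: slow main memory */
    cache[line].valid = true;                     /* keep it for the next access  */
    cache[line].tag   = address;
    cache[line].data  = value;
    return value;
}

int main(void)
{
    main_memory[42] = 7;
    /* First read misses and fills the cache; the second read hits. */
    printf("%u %u\n", (unsigned)read_byte(42), (unsigned)read_byte(42));
    return 0;
}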

RISC stands for reduced instruction set computer. It is a design philosophy which tries to speed up the execution of instructions: the observation is that most programs use only a small set of simple instructions, so the processor is optimised to execute these quickly, and each available instruction may execute in the same amount of time. Design features implemented in standard CPUs to aid the coding process are often left out. Common RISC microprocessors include AVR (Advanced Virtual RISC), ARM, PIC, and MIPS. In contrast, complex instruction set computers (CISCs) execute many operations in one instruction; for example, one instruction may encode a load, an arithmetic operation, and a store. This design philosophy caters to higher-level programming languages and complex addressing modes. The larger instructions take longer to decode and execute, but programs are smaller as a result and require main memory to be accessed less frequently.
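The contrast can be sketched in C, with generic, invented assembly mnemonics in the comments (they do not belong to any real instruction set): a CISC-style machine might combine the memory reads, the add and the write-back into one instruction, whereas a RISC-style machine splits the same work into separate load, add and store instructions.

#include <stdio.h>

int main(void)
{
    int mem[2] = { 3, 4 };      /* two hypothetical memory locations */
    int src = 0, dst = 1;

    /* CISC style: a single complex instruction may read both operands from
       memory, add them and store the result, e.g. something like ADD [dst], [src]. */
    mem[dst] = mem[dst] + mem[src];
    printf("after CISC-style add: %d\n", mem[dst]);

    /* RISC style: the same work as separate simple instructions, each working
       on registers (generic mnemonics shown in the comments).                  */
    int r1 = mem[src];          /* LOAD  r1, [src]  */
    int r2 = mem[dst];          /* LOAD  r2, [dst]  */
    r2 = r2 + r1;               /* ADD   r2, r2, r1 */
    mem[dst] = r2;              /* STORE r2, [dst]  */
    printf("after RISC-style add: %d\n", mem[dst]);

    return 0;
}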

The RISC AVR core architecture is an example of a practical implementation of the Harvard architecture. Figure 3 shows a block diagram of the AVR core architecture. As seen in the diagram, the AVR architecture contains separate memory for data and program instructions. Data is stored in SRAM while program instructions are stored in In-System Reprogrammable Flash memory. Instructions are executed with one level of pipelining, which allows for one instruction to be pre-fetched while another is executing. Instructions are thus executed every clock cycle.
