The Memory Management Of Operating Systems Information Technology Essay

The following documentation is divided into two sections. The first section deals with the memory management of an operating system; the operating system chosen for this assignment is Fedora. This section starts with an introduction to Fedora and to memory management. The memory management techniques covered in this section are virtual memory and garbage collection.

The second section of this document contains details of various modern microprocessors. This includes an introduction to microprocessors and a comparison of the microprocessors found in devices such as laptops, desktops, servers and embedded systems. The last part of this section explains the trends that affect performance and design in modern microprocessors.

Section 1

Operating System

1.0 Introduction

Fedora is an open source operating system built on the Linux kernel. The first release of Fedora was on 16th November 2003 as Fedora Core 1. The latest stable version was released on 2nd November 2010 as Fedora 14, codenamed Laughlin. Fedora is developed by a community of developers collectively known as the Fedora Project, founded by Warren Togami in December 2002 and sponsored by Red Hat.

The next version of Fedora (Fedora 15, codenamed Lovelock) is scheduled for release on 17th May 2011. Its planned features include:

• Implementation of GNOME 3

• Implementation of systemd

• Replacement of OpenOffice.org by LibreOffice

At the time of writing, Fedora ranks third among the most popular Linux-based operating systems, with Ubuntu first, followed by Mint.

(Fedora 14 screenshot)

1.1 Memory Management

Memory management is the field of computer science that develops techniques to manage computer memory efficiently. Fundamentally, memory management involves allocating portions of memory to programs at their request, and then freeing that memory so it can be reused. A good memory management technique maximizes processing efficiency. Memory management is a compromise between quantity (available Random Access Memory) and performance (access time).

A good memory management system must carry out the following tasks (a toy allocator illustrating the allocate-and-free cycle follows the list):

• Allocation of blocks of memory space to different tasks

• Allowing the sharing of memory

• Protection of memory space in use, to prevent one user from changing a task carried out by another user

• Optimization of the available memory
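To make the allocate-and-free cycle concrete, here is a toy first-fit allocator. It is a minimal sketch with invented names (ToyAllocator, alloc, free_block), not how Fedora's kernel allocator actually works.

```python
# Toy first-fit allocator: hands out blocks from a fixed-size arena
# and returns freed blocks to a free list for reuse.
# Purely illustrative -- real kernels use far more sophisticated schemes.

class ToyAllocator:
    def __init__(self, size):
        # The free list starts as one big block: (offset, length).
        self.free = [(0, size)]

    def alloc(self, n):
        """Return the offset of a free block of n units, or raise MemoryError."""
        for i, (off, length) in enumerate(self.free):
            if length >= n:
                # Carve n units off the front of this free block.
                if length == n:
                    del self.free[i]
                else:
                    self.free[i] = (off + n, length - n)
                return off
        raise MemoryError("out of memory")

    def free_block(self, off, n):
        """Return a block to the free list so it can be reused."""
        self.free.append((off, n))
        # Merge adjacent blocks to fight fragmentation.
        self.free.sort()
        merged = [self.free[0]]
        for o, l in self.free[1:]:
            prev_off, prev_len = merged[-1]
            if prev_off + prev_len == o:
                merged[-1] = (prev_off, prev_len + l)
            else:
                merged.append((o, l))
        self.free = merged

a = ToyAllocator(100)
p = a.alloc(30)      # allocate 30 units
a.free_block(p, 30)  # free them so they can be reused
```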

Different operating systems implement different memory management techniques, and their performance therefore varies. Some of the techniques used by Fedora are as follows:

• Virtual memory

• Garbage collection

• Swapping

• Memory hierarchy

• Overcommit accounting

• Out-of-memory (OOM) handling

• Drop caches

1.2 Virtual Memory

Virtual memory is one of the most commonly used memory management techniques in modern computers. It was developed for multitasking kernels to address the problem of insufficient RAM when multiple programs run simultaneously. Virtual memory lets the operating system identify areas of RAM that have not been used for a while and copy them onto the hard disk, so that only the instructions and data currently needed by the processor are kept in RAM. The operating system carries this out by creating a temporary file (known as a swap or exchange file) on the hard disk when RAM space runs short. This frees space in memory and allows new applications to be loaded, because the least recently used areas of RAM have been moved to the hard disk.

So virtual memory effectively extends the user's primary memory by treating part of the hard disk as if it were additional RAM.
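On a Linux system such as Fedora, the balance between physical RAM and the swap area can be inspected through /proc/meminfo. The sketch below is one minimal way to read it; MemTotal, MemFree, SwapTotal and SwapFree are standard /proc/meminfo fields, but the script itself is only illustrative.

```python
# Read RAM and swap figures from /proc/meminfo on a Linux system.
# Values in /proc/meminfo are reported in kB.

def meminfo():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.strip().split()[0])  # drop the "kB" suffix
    return info

m = meminfo()
print("RAM total :", m["MemTotal"] // 1024, "MB")
print("RAM free  :", m["MemFree"] // 1024, "MB")
print("Swap total:", m["SwapTotal"] // 1024, "MB")
print("Swap free :", m["SwapFree"] // 1024, "MB")
```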

(Virtual memory)

Fedora implements virtual memory because it is a multitasking, multiuser operating system. These features require protection between processes and the ability to execute several processes simultaneously even when their combined size exceeds the primary memory available in the system. The virtual memory technique meets these requirements.

1.2.1 Advantages and Disadvantages

Advantages of Virtual Memory

1. Virtual memory allows the system to function as if it had more RAM than it actually does, so the system can run more active applications at a given time, because virtual memory increases the amount of usable primary memory.

2. A program or process can run on a system even when there is not enough primary memory for it. Virtual memory achieves this by copying areas of RAM that were not recently used to the hard disk, freeing RAM for the program.

3. Since hard disk space is much cheaper than RAM space, users need not spend a lot of money upgrading their RAM.

Disadvantages of Virtual Memory

1. There will be a significant loss of system performance if the system relies too heavily on virtual memory, because the operating system must constantly swap information between RAM and the hard disk. The read/write speed of a hard disk is much slower than that of RAM, and hard disk technology does not allow quick access to small pieces of data at a time. RAM is faster because it is built from integrated circuits, whereas a hard disk is built on magnetic technology with mechanical access, which is much slower.

This can be avoided by making sure the system has enough RAM installed to handle the user's day-to-day tasks. Such a setup will ensure the system functions properly.

2. Implementing virtual memory requires allocating a certain portion of hard disk space for its use, leaving less hard disk space for the user. For example, if a system has a 20GB hard disk and 2GB of it is allocated for virtual memory, the user cannot use that 2GB, as it is reserved for virtual memory.

This problem can be solved by using a hard disk large enough for the user's requirements, so that reserving a portion of it for virtual memory does not leave the disk short of space.

3. The system might become unstable because of the constant swapping of information between the hard disk and RAM.

1.3 Garbage collection

Garbage collection is a form of automated memory management in which the operating system removes objects, data or other regions of memory that are no longer in use by the system or a program. This technique is necessary for the operating system to function well, because the memory available in a system is always finite, and failure to remove unwanted data results in a significant loss of performance and unnecessary use of memory.

In Fedora, garbage collection is primarily broken down into three stages:

• Pruning

• Trashing

• Deleting

1.3.1 Pruning

In this stage, the operating system identifies unwanted objects, builds or data and detaches them from certain tags according to the garbage collection policies set by Fedora. These policies allow rules based on tag, signature and package.

No objects, builds or data are deleted in this stage; they are only identified here, and are deleted in the later stages.

1.3.2 Trashing

This is the stage in which the system revisits the objects, builds or data that were untagged in the pruning stage. These objects, builds or data are then tagged with a 'trashcan' tag, which marks them for deletion. The garbage collector sends an object, build or data item for deletion only if it meets the following requirements:

1. The object, build or data item has been untagged for at least 5 days

2. There is no protection key signature on the object, build or data item

1.3.3 Deleting

This is the final stage of garbage collection. All the objects, builds or data are examined one last time for any mistakes in their tags, and an item is usually deleted only after it has carried the trashcan tag for longer than the grace period (4 weeks by default).
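The three stages described above can be summarised in a small sketch. This is a hypothetical model of the prune/trash/delete flow, not the actual Fedora garbage collection tool; the Build class and tag names are invented, while the 5-day and 4-week thresholds come from the text.

```python
# Hypothetical sketch of the three-stage garbage collection flow
# described above (prune -> trash -> delete). Not the real Fedora tool.

UNTAG_DAYS = 5   # minimum days untagged before trashing (from the text)
GRACE_DAYS = 28  # trashcan grace period, 4 weeks by default (from the text)

class Build:
    def __init__(self, name, signed=False):
        self.name = name
        self.signed = signed     # protected by a signature key?
        self.tags = {"release"}  # tags currently attached
        self.days_untagged = 0
        self.days_in_trash = 0

def prune(build, policy):
    """Stage 1: detach tags according to policy; nothing is deleted."""
    if policy(build):
        build.tags.discard("release")

def trash(build):
    """Stage 2: tag long-untagged, unsigned builds with 'trashcan'."""
    if not build.tags and not build.signed and build.days_untagged >= UNTAG_DAYS:
        build.tags.add("trashcan")

def delete(build):
    """Stage 3: delete only after the grace period in the trashcan."""
    return "trashcan" in build.tags and build.days_in_trash > GRACE_DAYS

b = Build("kernel-oldbuild")
prune(b, policy=lambda build: True)  # the policy marks this build unwanted
b.days_untagged = 6
trash(b)
b.days_in_trash = 30
print(delete(b))  # True: past the grace period, safe to delete
```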

Section 2

Computer System Architecture

2.0 Introduction

A microprocessor, also known as a logic chip, is an integrated circuit that contains all or most of the central processing unit (CPU) of a computer on a single chip. A microprocessor is primarily designed to perform logical and arithmetic operations.

(AMD Athlon Processor)

Microprocessors were introduced in the early 1970s and were first used in electronic calculators, which performed their calculations in Binary Coded Decimal (BCD). These were 4-bit microprocessors, and they were soon used in devices such as printers, terminals and various kinds of automation.

The first commercially developed general purpose microprocessor was introduced by Intel. Called the Intel 4004, it was also the first complete CPU on a chip. It was built with silicon gate technology, which increased the number of transistors on each microprocessor and in turn the speed of computation. It had a clock speed of 108KHz and 2,300 transistors, with ports for input/output (I/O), RAM and Read Only Memory (ROM). The Intel 4004 was able to execute about 92,000 instructions per second, which works out to an instruction cycle of roughly 10.8 microseconds (1/92,000 of a second).
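The instruction-cycle figure follows directly from the quoted instruction rate; the short calculation below is just a worked check of that arithmetic.

```python
# Instruction cycle time of the Intel 4004, derived from its quoted
# throughput of roughly 92,000 instructions per second.
instructions_per_second = 92_000
cycle_microseconds = 1e6 / instructions_per_second
print(f"{cycle_microseconds:.2f} microseconds per instruction")  # ~10.87, commonly rounded to 10.8
```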

The microprocessor acts as the artificial brain of a computer. It receives and issues instructions to the other components in a computer. All of a microprocessor's work is based on logic, achieved through the following three components, which form its main features:

1. Set of digital instructions

2. Bandwidth

3. Clock speed

2.1 Growth of microprocessor

Today, in the digital age, only a negligible number of electronic gadgets lack a microprocessor. This is because of the rapid development in this field. These gadgets now perform a wide variety of advanced tasks, which is only possible because of the microprocessors implemented in them.

The rapid development of fields such as automobiles, weather forecasting, medicine, communication and space technology can be credited to the development of the microprocessor, owing to its ability to make quick and reliable decisions.

Microprocessors have also made the automation of various difficult manual jobs possible, resulting in greater speed, efficiency and accuracy in many aspects of our lives. The potential of the microprocessor is still immense, as there is plenty of room for further development.

2.2 Microprocessor design strategy

A microprocessor's design and architecture vary depending on its manufacturer and the requirements of the machine it will serve. Some of the design strategies are as follows:

• Complex Instruction Set Computers (CISC)

In this architecture, most of the work is done by the microprocessor itself: a single instruction can carry out several low-level operations, such as loading from memory and performing arithmetic.

Example: Motorola 68k processors

• Reduced Instruction Set Computers (RISC)

This is a CPU architecture in which more of the work is done by software (chiefly the compiler), keeping the instruction set simple. This keeps the load on the processor low and allows each instruction to execute faster.

Example: AMD 29k processors.

2.3 Microprocessors in different devices

Based on the different machines they power and the tasks they carry out, microprocessors can be broadly classified into:

• Desktop microprocessors

• Laptop microprocessors

• Server microprocessors

• Embedded system microprocessors

2.3.1 Desktop Microprocessors

A desktop is a general purpose personal computer which is intended to be used in a single location.

(Desktop PC)

As desktops are used in a single location, there is a constant supply of power to the system. Most desktop cases are also efficiently ventilated to minimize temperature rise, and there is enough space to install cooling devices. With no constraints on power, ventilation or space, desktop microprocessors are designed primarily for high performance.

As desktop microprocessors are designed primarily for performance, they have the following features

(compared to laptop and embedded system microprocessors)

• Higher number of transistors, higher maximum temperature

• Larger die size (physical surface area on the wafer)

• Higher processor frequency

• Larger supported cache size

• Higher CPU multiplier

• Higher bus/core ratio

2.3.2 Laptop Microprocessors

A laptop is a general purpose personal computer intended for mobile use. A laptop has most of the desktop's hardware integrated into it, including a keyboard, display, speakers and a touchpad. A laptop is powered from the mains through an AC adapter, but it also carries a rechargeable battery so that it can function without mains power until the battery drains. Modern laptops also include a wireless adapter, camera, microphone, HDMI port, touchscreen and a GSM SIM slot for better communication and user experience.

(A general purpose laptop)

As laptops are meant to be mobile, they are designed to be compact, with all the hardware and peripheral devices integrated together, which makes them easy to carry from place to place.

Because laptops are very compact and built to run on battery most of the time, their processors must address a wide range of issues, including ventilation, power management and performance.

Power management is an important issue, as laptops should use their batteries as efficiently as possible. A laptop processor should therefore use less power than a desktop or server microprocessor. It should also generate minimal heat, since all the components are packed into a very compact space and excess heat would damage the hardware.

As laptop microprocessors are designed primarily for power efficiency, they have the following features

(compared to desktop and server microprocessors)

• Fewer transistors

• Minimal temperature increase

• Very small die size (physical surface area on the wafer)

• Lower processor frequency

• Moderate supported cache size

• Lower CPU multiplier

• Lower bus/core ratio

2.3.3 Server Microprocessors

A server is a single computer, or a group of computers and software, that links numerous computers and/or electronic devices together. There is a wide variety of servers, such as mail servers, database servers, web servers, enterprise servers and print servers.

Examples of servers include Dell's PowerEdge line and the HP Superdome, a high-end server from HP. Servers have dedicated operating systems developed for them, such as the Windows Server family and Ubuntu Server.

The Intel Itanium 9300 is a microprocessor dedicated to enterprise servers, and it is one of the most advanced processors available today. It contains more than two billion transistors on a single die, supports up to four cores per die and carries 24MB of on-die L3 cache.

(A typical server room)

A server performs a wide range of tasks and services for numerous clients, so raw performance is not the only important aspect of its microprocessor. The microprocessor must also address issues such as redundancy, availability, scalability and interoperability. To achieve all this, a server microprocessor should have the following attributes:

• High frequency operation

• Simultaneous multithreading

• Optimized memory system

• Large cache

• Sharing of caches

• Excellent remote access service

• Non-uniform memory access

• Clustering

• Excellent power management

As server microprocessors are designed primarily for performance, they have the following features

(compared to embedded system, desktop and laptop microprocessors)

• Highest number of transistors

• Largest die size (physical surface area on the wafer)

• Highest processor frequency

• Very large supported cache size

• Very high CPU multiplier

• Higher bus/core ratio

2.3.4 Embedded system microprocessor

An embedded system is a computer system designed to perform only one or a few dedicated tasks, often with real-time constraints. These systems are generally embedded as subsystems of a larger system. Applications of embedded systems range from small devices such as microwave ovens and watches to aircraft electronics and communications.

As the areas in which embedded systems operate vary across fields, the microprocessors used in these systems vary too. Still, there are certain essential areas that the microprocessor should address:

• Response time

• Cost

• Portability

• Power management

• Fault tolerance

Motorola's 68k family of processors, which were popular in personal computers and workstations in the 1980s and early 1990s, are now widely used in embedded systems.

Attributes of a microprocessor in a typical embedded system are

(compared to server, laptop and desktop microprocessors)

• Lowest number of transistors

• Small die size (physical surface area on the wafer)

• Lowest processor frequency

• Little or no cache

• Low CPU multiplier

• Very low bus/core ratio

2.4 Microprocessor trends

Since Intel introduced the first microprocessor in 1971, the development of microprocessors from the 1970s to the present day has been mind-boggling.

(Table: development of microprocessors over the years)

We can infer from the table above that performance is one of the features that increases as new microprocessors are developed. This is due to the ever-increasing performance requirements of both software and hardware.

A microprocessor's performance can be increased by:

1. Increasing the number of cores

A core is the part of the processor that reads and executes instructions. Processors initially had only one core. Multicore processors were developed because traditional multiprocessing could not meet the increasing demand for performance. As the number of cores increases, the operating system can allocate applications to different cores, which results in better performance and a better multitasking experience.

Even though the number of cores per processor keeps increasing, which should enhance performance, in many cases the expected gain is not evident. This is because much software cannot exploit multi-core processors, as it was not designed to run on them. The problem can be solved by updating such software to be compatible with multicore processors. Amdahl's law, sketched after this list, quantifies how much serial software limits the gain.

2. Implementing a high speed cache bus

The cache bus is a dedicated bus that the processor uses to communicate with cache memory. A high speed cache bus increases performance because it reduces the time needed to read or modify frequently accessed data.
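As noted under item 1, extra cores often fail to deliver proportional gains. A standard way to quantify this is Amdahl's law, a textbook result not named in the original text: if only a fraction p of a program can run in parallel, n cores can never speed it up by more than 1/((1-p) + p/n). A minimal sketch:

```python
# Amdahl's law: speedup on n cores when a fraction p of the work
# can be parallelised and the rest (1 - p) stays serial.

def amdahl_speedup(p, n):
    return 1 / ((1 - p) + p / n)

for cores in (1, 2, 4, 8, 16):
    # A program that is only 50% parallelisable barely benefits
    # beyond a few cores -- matching the observation above.
    print(cores, "cores ->", round(amdahl_speedup(0.5, cores), 2), "x speedup")
```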

Microprocessor design is another crucial area of development. The design trend is that microprocessors keep getting smaller while the number of transistors in each microprocessor keeps increasing.

According to Moore's law, formulated by Gordon Moore, the number of transistors in a microprocessor doubles roughly every 18 months while the size of the processor remains the same. This has been achieved by shrinking the transistors, and the prediction has held for the last three decades. At present, however, transistor size and count are approaching a saturation point beyond which further shrinking would cause electric current leakage. To address this, scientists believe that new materials should be used instead of silicon, which calls for investment in nanotechnology research.
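The doubling rule is easy to turn into numbers. The sketch below projects transistor counts from the Intel 4004's 2,300 transistors using the 18-month doubling period stated above; it is an idealised projection, not a precise historical series.

```python
# Project transistor counts under the 18-month doubling rule stated
# above, starting from the Intel 4004 (2,300 transistors, 1971).
# Illustrative only: real chips deviate from this idealised curve.

start_year, start_count = 1971, 2300
doubling_period_years = 1.5  # 18 months

for year in (1971, 1980, 1990, 2000, 2010):
    doublings = (year - start_year) / doubling_period_years
    projected = start_count * 2 ** doublings
    print(year, f"{projected:,.0f} transistors (projected)")
```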

Conclusion

Information technology has created a revolution. Today, most organizations cannot function efficiently without an IT department.

In this document, we have discussed memory management and microprocessors, and emphasized the importance of each in a computer.

Memory management is a crucial part of any operating system, because poor memory management can cause the operating system to fail. An operating system without appropriate memory management techniques would also leave the system crashing constantly.

The document contains an elaborate explanation of virtual memory and garbage collection.

The microprocessor, on the other hand, is the most crucial piece of hardware in a system. It is considered the brain of the system, as it controls and coordinates all the other parts. A computer will not function if there is no microprocessor installed on the motherboard.

The latter section of this document explains the emergence of the microprocessor and how microprocessors became a necessity in the modern world. The differences between the microprocessors of various machines, such as desktops, servers, laptops and embedded systems, are also covered. The last part of that section describes the major trends affecting microprocessor performance and design.

Frequently Asked Questions

Some of the frequently asked questions are

1. Why is memory management important?

Memory management is important because it ensures that the memory installed in the system is managed efficiently, which in turn ensures efficient performance of the system.

2. State some commonly used memory management techniques

Virtual Memory

Swapping

Garbage Collection

3. Why is power management important in laptop microprocessor?

As laptops are designed to be used on the move, they are powered by rechargeable batteries which hold limited power. Therefore, a laptop microprocessor should be power efficient to maximize the laptop's usage time.

4. What is Moore's law?

Moore's law is an observation by Gordon Moore which states that the number of transistors in a microprocessor doubles roughly every 18 months while the size of the processor remains the same.

Limitations and Extension

1. Limitation:

The system slows down because it relies too heavily on virtual memory.

Extension:

There will be a significant loss of system performance when the system relies heavily on virtual memory, because data stored on the hard disk takes much longer to access than data in RAM. This happens when there is insufficient RAM. It can be solved by closing some applications or, if the user needs those applications to run simultaneously, by installing more RAM in the system.

2. Limitation:

The number of transistors that can be added to a microprocessor will eventually reach a saturation point, beyond which no more transistors can be added without consequences such as current leakage.

Extension:

It is true that, at some point in the future, the number of transistors per processor will hit its maximum, because the size of a transistor cannot be reduced indefinitely. To address this problem, motherboards that support multiple processors should be developed.

Appendices

(Fedora Logo)

(Fedora 14 screenshot)

(Virtual Memory Representation - shows how virtual memory works)

(An AMD 64 Athlon Processor)

(Desktop Computer)

(Laptop)

(Servers)

(Microprocessor development - image shows the exponential growth of microprocessors)
