The History Of Virtualization Information Technology Essay

Introduction

Virtualization is one of the hottest innovations in the Information Technology field, with proven benefits that push organizations to plan for and implement it rapidly. As with any new technology, managers must carefully analyze how it would best fit in their organization. In this document, we provide an overview of virtualization to help shed light on this quickly evolving technology.

History of Virtualization

Virtualization is Brand New – Again! Although virtualization seems to be a hot new cutting-edge technology, IBM originally used it on its mainframes in the 1960s. The IBM 360/67 running the CP/CMS system used virtualization as an approach to time sharing: each user ran what appeared to be a private 360 machine. Storage was partitioned into virtual disks, called P-Disks, for each user. Mainframe virtualization remained popular through the 1970s.

During the 1980s and 1990s, virtualization largely disappeared. During the 1980s, a couple of products were made for Intel PCs: Simultask and Merge/386, both developed by Locus Computing Corporation, could run MS-DOS as a guest operating system. In 1988, Insignia Solutions released SoftPC, which ran DOS on Sun and Macintosh platforms.

The late 1990s ushered in a new wave of virtualization. In 1997, Connectix released Virtual PC for the Macintosh. Connectix later released a version for Windows and was subsequently bought by Microsoft in 2003. In 1999, VMware introduced its entry into virtualization.

In the last decade, every major server vendor has integrated virtualization into its offerings. In addition to VMware and Microsoft, Sun, Veritas, and HP have all acquired virtualization technology.

How Does Virtualization Work?

In the enterprise IT world, servers are necessary to do many jobs. Traditionally, each machine does only one job, and sometimes many servers are given the same job. The reason is to keep hardware and software problems on one machine from causing problems for several programs. There are several problems with this approach, however. “The first problem is that it doesn’t take advantage of modern server computers’ processing power.” [11] Most servers use only a small percentage of their overall processing capabilities. The other problem is that the servers begin to take up a lot of physical space as the enterprise network grows larger and more complex. “Data centers might become overcrowded with racks of servers consuming a lot of power and generating heat. Server virtualization tries to fix both of these problems in one fell swoop.” [16]

Server virtualization uses specially designed software with which an administrator can convert one physical server into multiple virtual machines. “Each virtual server acts as a unique physical device that is capable of running its own operating system. Until recent technological developments, the only way to create a virtual server was to design special software to ‘trick’ a server’s CPU into providing processing power for several virtual machines. Today, however, processor manufacturers such as Intel and AMD offer processors with the capability of supporting virtual servers already built in. In the virtualized environment, the hardware doesn’t create the virtual servers. Network administrators or engineers still need to create them using the right software.” [11]

In the world of information technology, server virtualization is still a “hot topic.” Because it is still considered a new technology, several companies offer different approaches to it. There are three ways to create virtual servers: full virtualization, para-virtualization, and OS-level virtualization. All three variations share a few common traits: the physical server is always called the host, the virtual servers are called guests, and the virtual servers all behave as if they were physical machines. However, each method uses a different approach to allocating the physical server’s resources to the virtual servers’ needs. [11]

Full Virtualization

The full virtualization method uses software called a hypervisor. The hypervisor works directly with the physical server’s CPU and disk space and acts as the “stage” for the virtual servers’ operating systems. This keeps each server completely autonomous and unaware of the other servers running on the same physical machine. If necessary, the virtual servers can run different operating systems, such as Linux and Windows.

The hypervisor also monitors the physical server’s resources, relaying them from the physical machine to the appropriate virtual server as the guests run their applications. Finally, because hypervisors have their own processing needs, the physical server must reserve some processing power and resources to run the hypervisor application. If not done properly, this can affect overall performance and slow down applications. [11]
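To make that resource-brokering role concrete, here is a minimal, purely illustrative Python sketch. The Host class and its methods are invented for this example and do not correspond to any real hypervisor API; the point is only that the hypervisor reserves some capacity for itself and admits guests against what remains.

```python
# Toy model of a full-virtualization host: the "hypervisor" reserves some
# capacity for itself and allocates the remainder to guest VMs.
# Illustrative only -- no real hypervisor exposes this interface.

class Host:
    def __init__(self, cpus, ram_gb, hypervisor_cpus=1, hypervisor_ram_gb=2):
        # Capacity left over after reserving resources for the hypervisor itself.
        self.free_cpus = cpus - hypervisor_cpus
        self.free_ram_gb = ram_gb - hypervisor_ram_gb
        self.guests = {}

    def start_guest(self, name, cpus, ram_gb, os):
        """Admit a guest only if the remaining host capacity can hold it."""
        if cpus > self.free_cpus or ram_gb > self.free_ram_gb:
            raise RuntimeError(f"not enough host resources for {name}")
        self.free_cpus -= cpus
        self.free_ram_gb -= ram_gb
        # Guests may run different operating systems on the same host.
        self.guests[name] = {"cpus": cpus, "ram_gb": ram_gb, "os": os}


host = Host(cpus=16, ram_gb=64)
host.start_guest("web01", cpus=4, ram_gb=8, os="Linux")
host.start_guest("db01", cpus=8, ram_gb=32, os="Windows")
print(host.free_cpus, host.free_ram_gb)  # capacity still available for more guests
```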

Para-Virtualization

Unlike the full virtualization method, the para-virtualization approach allows the guest servers to be aware of one another. Because each operating system in the virtual servers is conscious of the demands the other guests are placing on the physical server, the para-virtualization hypervisor doesn’t require as much processing power to oversee the guest operating systems. In this way the entire system works together as a cohesive unit. [11]

OS-Level Virtualization

The OS-level virtualization approach doesn’t use a hypervisor at all. Instead, the virtualization capability is part of the host OS, which performs all of the functions of a fully virtualized hypervisor. Because the OS-level approach operates without a hypervisor, it limits all of the virtual servers to one operating system, whereas the other two approaches allow different operating systems on the virtual servers. The OS-level approach is known as a homogeneous environment because all of the guest operating systems must be the same. [11]

With three different approaches to virtualization, the question remains as to which method is the best. This is where a complete understanding of enterprise and network requirements is imperative. If the enterprise’s physical servers all run on the same OS, then the OS-level approach might be the best solution. It tends to be faster and more efficient than the others. However, if the physical servers are running on several different operating systems, para-virtualization or full virtualization might be better approaches.
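As a rough illustration of that decision, the following hypothetical helper encodes the rule of thumb above: a homogeneous set of guest operating systems points toward OS-level virtualization, while a mixed set calls for a hypervisor-based approach.

```python
def suggest_virtualization_approach(guest_oses):
    """Rule of thumb from the discussion above: a homogeneous set of guests
    suits OS-level virtualization; a mixed set needs a hypervisor-based
    approach (full virtualization or para-virtualization)."""
    if len(set(guest_oses)) == 1:
        return "OS-level virtualization (single shared OS, typically fastest)"
    return "full virtualization or para-virtualization (hypervisor required)"


print(suggest_virtualization_approach(["Linux", "Linux", "Linux"]))
print(suggest_virtualization_approach(["Linux", "Windows"]))
```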

Virtualization Standards

Despite the ever-increasing adoption of virtualization, very few standards are actually prevalent in this technology. As the migration to virtualization grows, so does the need for open industry standards, which is why the standards work around virtualization is viewed by several industry observers as a giant step in the right direction. The Distributed Management Task Force (DMTF) currently promotes standards for virtualization management to help industry suppliers implement compliant, interoperable virtualization management solutions.

The strongest standard created for this technology so far addresses the management of virtualized environments, and it was produced by a team building on standards already in place. This standard lowers the IT learning curve and reduces complexity for vendors implementing this support in their management solutions, and its ease of use is what makes it successful. The standard identifies supported virtualization management capabilities, including the ability to (a brief illustration follows the list):

discover and inventory virtual computer systems

manage the lifecycle of virtual computer systems

create, modify, and delete virtual resources

monitor virtual systems for health and performance
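The DMTF profiles themselves are defined in CIM terms rather than as a programming API, but the same four kinds of operations can be illustrated with the libvirt Python bindings. The sketch below assumes the libvirt-python package is installed, a local QEMU/KVM host reachable at qemu:///system, and a hypothetical guest named web01.

```python
import libvirt

conn = libvirt.open("qemu:///system")

# 1. Discover and inventory virtual computer systems.
for dom in conn.listAllDomains():
    print(dom.name(), "running" if dom.isActive() else "stopped")

# 2. Manage the lifecycle of a virtual computer system
#    (start/stop calls shown as examples of lifecycle operations).
dom = conn.lookupByName("web01")   # "web01" is a hypothetical guest name
if not dom.isActive():
    dom.create()                   # start the defined domain
dom.shutdown()                     # ask the guest OS to shut down cleanly

# 3. Create/modify/delete virtual resources: define a new domain from an
#    XML description prepared elsewhere (commented out in this sketch).
# new_dom = conn.defineXML(open("web02.xml").read())

# 4. Monitor virtual systems for health and performance.
state, max_mem_kb, mem_kb, vcpus, cpu_time_ns = dom.info()
print(f"vCPUs={vcpus} memory={mem_kb // 1024} MiB cpu_time={cpu_time_ns} ns")

conn.close()
```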

Virtualization standards are not suffering from poor development but rather from the common IT challenge of pleasing all users. Until virtualization is standardized, network professionals must continue to meet these challenges within a dynamic data center. For example, before the relationship between Cisco and VMware was established, Cisco’s Data Center 3.0 was best described as “scrawny.” $150 million later, Cisco was able to establish a successful integration that allows VFrame to load VMware ESX Server onto bare-metal computer hardware (something that previously could only be done with Windows and Linux) and to configure the network and storage connections that ESX requires.

In addition, Microsoft made pledges only in the Web services arena, where it faces tougher open standards competition. The company’s Open Specification Promise allows “every individual and organization in the world to make use of Virtual Hard Disk Image Format forever,” Microsoft said in a statement. VHD allows the packaging of an application with that application’s Windows operating system. Several such combinations, each in its own virtual machine, can run on a single piece of hardware.

The future standard of virtualization is the Open Virtual Machine Format (OVF). OVF doesn’t aim to replace pre-existing formats, but instead ties them together in a standards-based XML package that contains all the necessary installation and configuration parameters. In theory, this allows any virtualization platform that implements the standard to run the virtual machines. OVF sets some safeguards as well: the format permits integrity checking of the VMs to ensure they have not been tampered with after the package was produced.
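To show the flavor of such a package, here is a small Python sketch that reads a trimmed-down descriptor. The XML below is simplified for readability and is not a complete or schema-valid OVF file; only the envelope namespace is real.

```python
import xml.etree.ElementTree as ET

OVF_NS = "{http://schemas.dmtf.org/ovf/envelope/1}"  # OVF 1.x envelope namespace

# Simplified, illustrative descriptor listing the virtual systems in a package.
descriptor = """
<Envelope xmlns="http://schemas.dmtf.org/ovf/envelope/1">
  <VirtualSystem id="web01">
    <Name>web01</Name>
  </VirtualSystem>
  <VirtualSystem id="db01">
    <Name>db01</Name>
  </VirtualSystem>
</Envelope>
"""

root = ET.fromstring(descriptor.strip())
for vs in root.findall(f"{OVF_NS}VirtualSystem"):
    name = vs.find(f"{OVF_NS}Name")
    print("virtual system in package:", name.text)
```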

Virtualization in the Enterprise – Microsoft’s Approach

“Virtualization is an approach to deploying computing resources that isolates different layers (hardware, software, data, networks, storage) from each other. Typically today, an operating system is installed directly onto a computer’s hardware. Applications are installed directly onto the operating system. The interface is presented through a display connected directly to the local machine. Altering one layer often affects the others, making changes difficult to implement.

By using software to isolate these layers from each other, virtualization makes it easier to implement changes. The result is simplified management, more efficient use of IT resources, and the flexibility to provide the right computing resources, when and where they are needed.”

Bob Muglia, Senior Vice President, Server and Tools Business, Microsoft Corporation

Typical discussions of virtualization focus on server hardware virtualization (which is discussed later in this article). However, there is more to virtualization than just server virtualization. This section presents Microsoft’s virtualization strategy, which illustrates other areas, besides server virtualization, where virtualization can be used in the enterprise infrastructure.

Server Virtualization – Windows Server 2008 Hyper-V and Microsoft Virtual Server 2005 R2

In server virtualization, one physical server is made to appear as multiple servers. Microsoft has two products for virtual servers. Microsoft Virtual Server 2005 R2 was made to run on Windows Server 2003. The current product is Windows Server 2008 Hyper-V, which runs only on 64-bit versions of Windows Server 2008. Both products are hypervisors, a term coined by IBM in 1972. A hypervisor is the platform that enables multiple operating systems to run on a single physical computer. Microsoft Virtual Server is considered a Type 2 hypervisor, which runs within the host computer’s operating system. Hyper-V is considered a Type 1, or “bare-metal,” hypervisor; Type 1 hypervisors run directly on the physical hardware (“bare metal”) of the host computer.

A virtual machine, whether from Microsoft, VMware, Citrix, or Parallels, basically consists of two files: a configuration file and a virtual hard drive file. This is true for desktop virtualization as well. In Microsoft’s products, the configuration is stored in a .vmc file (an XML file in Hyper-V) and the virtual hard drive in a .vhd file. The virtual hard drive holds the OS and data for the virtual server.


Business continuity can be enhanced by using virtual servers. Microsoft’s System Center Virtual Machine Manager allows an administrator to move a virtual machine to another physical host without the end users realizing it. With this feature, maintenance can be carried out without bringing the servers down. Failover clustering between servers can also be enabled. This means that should a virtual server fail, another virtual server could take over, providing a disaster recovery solution.

Testing and development are enhanced through the use of Hyper-V. Virtual server test systems that duplicate the production systems are used to test code. In UCF’s Office of Undergraduate Studies, a virtual Windows 2003 server is used to test new web sites and PHP code. The virtual server and its physical production counterpart have exactly the same software installed, allowing programmers and designers to check their web applications before releasing them to the public.

By consolidating multiple servers to run on fewer physical servers, cost savings may be found in lower cooling and electricity needs, lower hardware needs, and less physical space to house the data center. Server consolidation is also a key technology for “green computing” initiatives. Computer resources are also optimized; for example, CPUs see less idle time. Server virtualization also maximizes licensing: for example, purchasing one Windows Server Enterprise license allows you to run four virtual servers under the same license.
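The licensing point is simple arithmetic. As a hedged sketch based only on the claim above (one Enterprise license covering up to four virtual instances; actual licensing terms vary and should be checked against Microsoft's current rules):

```python
import math

virtual_servers = 10
instances_per_enterprise_license = 4  # per the licensing claim above
licenses_needed = math.ceil(virtual_servers / instances_per_enterprise_license)
print(licenses_needed)  # 3 Enterprise licenses cover all 10 virtual servers
```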

Desktop Virtualization – Microsoft Virtual Desktop Infrastructure (VDI) and Microsoft Enterprise Desktop Virtualization (MED-V)

Desktop virtualization is very similar to server virtualization. A client operating system, such as Windows 7, is used to run a guest operating system, such as Windows XP. This is usually done to support applications or hardware not supported by the current operating system (this is why Microsoft included “Windows XP Mode” in versions of Windows 7). Microsoft’s Virtual PC is the foundation for this desktop virtualization. Virtual PC allows a desktop computer to run a guest operating system (OS), an independent instance of an OS, on top of the host OS. Virtual PC emulates a standard PC hardware environment and is independent of the host’s hardware or setup.

Microsoft Enterprise Desktop Virtualization (MED-V) is a managed, client-hosted desktop virtualization solution. MED-V builds upon Virtual PC and adds features to deploy, manage, and control the virtual images, which can also be updated remotely. The virtual machines run on the client computer. Applications installed in the virtual computer can be listed on the host machine’s Start menu or as desktop shortcuts, giving the end user a seamless experience. MED-V can be very useful for supporting legacy applications that may not be able to run on the latest deployed operating system.

The virtual images are portable, which makes them useful in a couple of scenarios. Employees who use their personal computers for work can now use a corporately managed virtual desktop. This solves a common problem where the personal computer might be running a “home” version of the operating system that does not allow it to connect to a corporate network. It also means that the enterprise makes changes only to the virtual computer and makes no changes to the personal computer’s OS.

The other scenario where portability plays a factor is that the virtual image can be saved to a removable device, such as a USB flash drive. The virtual image can then be run from the USB drive on any computer that has Virtual PC installed. Although Tulloch lists this as a benefit, I also see some problems with this scenario. USB flash drives sometimes get lost, and losing a flash drive in this scenario is like losing a whole computer, so caution should be exercised so that sensitive data is not kept on the flash drive. Secondly, based on personal experience, even with a fast USB flash drive, the performance of a virtual computer running from the flash drive is poor compared to running the same image from the hard drive.

Virtual Desktop Infrastructure (VDI) is server-based desktop virtualization. In MED-V, the virtual image is on the client machine and runs on the client hardware. In VDI, the virtual images are on a Windows Server 2008 server with Hyper-V and run on the server. The user’s data and applications, therefore, reside on the server. This solution is essentially a combination of Hyper-V and Terminal Services (discussed later in this section).

There are several benefits to this approach. Employees can work from any desktop, whether in the office or at home. The client requirements are also very low: using VDI, the virtual images can be delivered not only to standard desktop PCs but also to thin clients and netbooks. Security is enhanced because all of the data is housed on servers in the data center. Finally, administration is easier and more efficient due to the centralized storage of the images.

Application Virtualization – Microsoft Application Virtualization (App-V)

Application virtualization allows applications to be streamed and cached to the desktop computer. The applications do not actually install themselves into the desktop operating system; for example, no changes are made to the Windows registry. This allows for some unusual virtual tricks, such as running two versions of Microsoft Office on one computer, which would normally be impossible.

App-V allows administrators to package applications in a self-contained environment. This package contains a virtual environment and everything that the application needs to run. The client computer is able to execute this package using the App-V client software. Because the application is self-contained, it makes no changes to the client, including no changes to the registry. Applications can be deployed or published through the App-V Management server. App-V packages can also be deployed through Microsoft’s System Center Configuration Manager or standalone .msi files located on network shares or removable media.

App-V has several benefits for the enterprise. There is centralized management of the entire application life cycle. Application deployment is faster because less time is spent on regression testing. Since App-V applications are self-contained, there are no software compatibility issues. You can also provide on-demand application deployment. Troubleshooting is also made easier by using App-V: when an application is installed on a client, it creates a cache on the local hard drive, and if an App-V application fails, it can be reinstalled by deleting the cache file.

Presentation Virtualization – Windows Server 2008 Terminal Services

Terminal Services, which has been around for many years, has been folded into Microsoft’s virtualization offerings. A terminal server allows multiple users to connect; each user receives a desktop view from the server in which to run applications. Any programs run within this desktop view actually execute on the terminal server, and the client receives only the screen view from the server. The strategy is that since applications use resources only on the server, money can be spent on strong server hardware and saved on lighter-weight clients. Also, since the application lives only on the server, the software is easier to maintain: it needs to be updated only on the server and not on all of the clients. Since the application runs on the server, the data can be stored on the server as well, enhancing security. Another security feature is that every keystroke and mouse movement is encrypted. The solution is also scalable and can be expanded to use multiple servers in a farm. Terminal Services applications can also be optimized for both high- and low-bandwidth scenarios, which is helpful for remote users accessing corporate applications over less than optimal connections.

User-State Virtualization – Roaming User Profiles, Folder Redirection, Offline Files

This is another set of technologies that have been around since Windows 95 but have now been folded into the virtualization strategy. A user profile consists of registry entries and folders which define the user’s environment. The desktop background is a common setting that you will find as part of the user profile. Other items included in the user profile are application settings, Internet Explorer favorites, and documents, music, and picture folders.

Roaming user profiles are profiles saved to a server that follow a user to any computer the user logs in to. For example, a user with a roaming profile logs on to a computer on the factory floor and changes the desktop image to a picture of fluffy kittens. When he logs on to his office computer, the fluffy kittens appear on his office computer’s desktop as well.

One of the limitations of roaming profiles is that the profile must be synchronized from the server to the workstation each time the user logs on; when the user logs off, the profile is copied back up to the server. If folders such as the documents folder are included, the downloading and uploading can take some time. An improved solution is to use redirected folders. Folders such as documents and pictures can be redirected to a server location. This is transparent to the user, who still accesses his documents folder as if it were part of his local profile. It also helps with data backup, since it is easier to back up a single server than document folders located on multiple client computers.

Another limitation of roaming user profiles occurs when the server, or network access to the server, is down. Offline Files attempts to address that limitation by providing access to network files even if the server location is inaccessible. When used with Roaming User Profiles and Folder Redirection, files saved in redirected folders are automatically made available for offline use. Files marked for offline use are stored on the local client in a client-side cache, and files are synchronized between the client-side cache and the server. If the connection to the server is lost, the Offline Files feature takes over; the user may not even realize that there has been any problem with the server.
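Conceptually, Offline Files behaves like a read-through cache with an offline fallback. The toy sketch below models that behavior only; it is not how Windows actually implements client-side caching, and all names are invented.

```python
class ServerUnavailable(Exception):
    """Raised by the fetch callable when the file server cannot be reached."""


class OfflineFolder:
    def __init__(self, fetch_from_server):
        self._fetch = fetch_from_server  # callable: filename -> file contents
        self._cache = {}                 # local client-side cache

    def read(self, filename):
        try:
            data = self._fetch(filename)   # server reachable: use the live copy...
            self._cache[filename] = data   # ...and keep the local cache in sync
        except ServerUnavailable:
            data = self._cache[filename]   # server down: serve the cached copy
        return data
```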

Together, Roaming User Profiles, Folder Redirection, and Offline Files are also an excellent disaster recovery tool. When a desktop computer fails, the biggest loss is the user’s data. With these three technologies in place, all the user needs to do is log into another standard corporate-issued computer and resume working. There is no downtime spent trying to recover or restore the user’s data, since it was all safely stored on a server.


Review of Virtualization in the Enterprise

Virtualization can enhance the way an enterprise runs its data center. Server virtualization can optimize hardware utilization. Desktop virtualization can provide a standard client for end users. Application virtualization allows central administration of applications and reduces the chance of application incompatibilities. Presentation virtualization allows central management of applications and lets low-end clients, such as thin clients and netbooks, run software beyond their hardware limitations. User-state virtualization gives users a computing environment that follows them no matter which corporate computer they use.

Benefits and Advantages of Virtualization

Virtualization has evolved into an important platform, used by countless companies both large and small, because of its ability to simplify IT operations and allow IT organizations to respond faster to changing business demands. Although virtualization started out as a technology used mostly in testing and development environments, in recent years it has moved into the mainstream on production servers. While this technology has many advantages, the following are the top five.

Virtualization is cost efficient

Virtualization allows a company or organization to save money on hardware, space, and energy. By using existing servers and disks to deliver more work without adding capacity, virtualization translates directly into savings on hardware requirements. When it is possible to deploy three or more servers on one physical machine, it is no longer necessary to purchase three or more separate machines, which may in fact have been used only occasionally. Beyond one-time expenses, virtualization also saves money in the long run because it can drastically reduce energy consumption: fewer physical machines means less energy is needed to power (and cool) them.

Virtualization is Green

Green IT is not just a passing trend. Eco-friendly technologies are in high demand, and virtualization solutions are certainly among them. As already mentioned, server virtualization and storage virtualization lead to decreased energy consumption, which automatically places them on the list of green technologies.

Virtualization Eases Administration and Migration

When there are fewer physical machines, this also makes their administration easier. The administration of virtualized and non-virtualized servers and disks is practically the same. However, there are cases when virtualization poses some administration challenges and might require some training regarding how to handle the virtualization application.

Virtualization Makes an Enterprise More Efficient

Increased efficiency is one more advantage of virtualization. Virtualization helps utilize the existing infrastructure in a better way. Typically an enterprise uses only a small portion of its computing power; it is not uncommon to see server load in the single digits. Keeping underutilized machines is expensive and inefficient, and virtualization helps deal with this problem as well. Deploying several servers onto one physical machine can increase capacity utilization to 90 percent or more.

Improved System Reliability and Security

Virtualization helps prevent system crashes due to memory corruption caused by software such as device drivers. Intel’s VT-d (Virtualization Technology for Directed I/O) provides methods to better control system devices by defining the architecture for DMA and interrupt remapping, ensuring improved isolation of I/O resources for greater reliability, security, and availability.

Dynamic Load Balancing and Disaster Recovery

As server workloads vary, virtualization provides the ability to move virtual machines that are over-utilizing the resources of one server to underutilized servers. This dynamic load balancing creates efficient utilization of server resources. In addition, disaster recovery is a critical component for IT, as system crashes can create huge economic losses. Virtualization technology enables a virtual image on one machine to be quickly re-imaged on another server if a machine failure occurs.
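A toy version of that load-balancing decision might look like the sketch below. The threshold, data shapes, and function are invented for illustration; real platforms (for example VMware DRS or System Center) use far more sophisticated placement logic.

```python
# Toy version of the dynamic load-balancing idea: suggest moving VMs off hosts
# whose total CPU utilization exceeds a threshold onto the least-loaded host.
# Thresholds, data shapes, and names are invented for illustration.

HIGH = 0.80   # a host above 80% CPU is considered over-utilized

def rebalance(hosts):
    """hosts: {host_name: {vm_name: cpu_fraction_of_one_host}}.
    Returns (vm, source_host, target_host) migration suggestions."""
    moves = []
    load = {h: sum(vms.values()) for h, vms in hosts.items()}
    for src, vms in hosts.items():
        # Move the smallest VMs first until the source drops below the threshold.
        for vm, cpu in sorted(vms.items(), key=lambda kv: kv[1]):
            if load[src] <= HIGH:
                break
            dst = min(load, key=load.get)          # least-loaded host
            if dst != src and load[dst] + cpu <= HIGH:
                moves.append((vm, src, dst))
                load[src] -= cpu
                load[dst] += cpu
    return moves


hosts = {"hostA": {"vm1": 0.45, "vm2": 0.40, "vm3": 0.10},
         "hostB": {"vm4": 0.20}}
print(rebalance(hosts))   # e.g. [('vm3', 'hostA', 'hostB'), ('vm2', 'hostA', 'hostB')]
```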

Limitations and/or Disadvantages of Virtualization

While one could conclude that virtualization is the perfect technology for any enterprise, it does have several limitations and disadvantages. It is very important for a network administrator to research server virtualization and his or her own network’s architecture and needs before attempting to engineer a solution. Understanding the network’s architecture and needs allows for a realistic approach to virtualization and better judgment of whether it is a suitable solution in a given scenario. Some of the most notable limitations and disadvantages are having a single point of failure, hardware and performance demands, and migration.

Single Point of Failure

One of the biggest disadvantages of virtualization is that there is a single point of failure. When the physical machine, where all the virtualized solutions run, fails or if the virtualized solution itself fails, everything crashes. Imagine, for example, you’re running several important servers on one physical host and its RAID controller fails, wiping out everything. What do you do? How can you prevent that?

The disaster caused by physical failure can, however, be avoided with one of several options for a responsibly designed virtualized environment. The first of these options is clustering. Clustering allows several physical machines to collectively host one or more virtual servers. “They generally provide two distinct roles, which are to provide for continuous data access, even if a failure with a system or network device occurs, and to load balance a high volume of clients across several physical hosts.” [14] In clustering, clients don’t connect to a physical computer but instead connect to a logical virtual server running on top of one or more physical computers. Another solution is to back up the virtual machines with a continuous data protection solution, which makes it possible to restore all virtual machines quickly to another host if the physical server ever goes down. If the virtual infrastructure is well planned, physical failures won’t be a frequent problem. However, this solution does require an investment in redundant hardware, which more or less eliminates some of the advantages of virtualization. [12]

Hardware and Performance Demands

While server virtualization may save money because less hardware is required, allowing a decrease in the number of physical machines in an enterprise, it does not mean that newer and faster computers are unnecessary. These solutions require powerful machines. If the physical server doesn’t have enough RAM or CPU power, performance will suffer. “Virtualization essentially divides the server’s processing power up among the virtual servers. When the server’s processing power can’t meet the application demands, everything slows down.” [11] Therefore, things that shouldn’t take very long could slow down to take hours, or may even cause the server to crash. Network administrators should take a close look at CPU usage before dividing a physical server into multiple virtual machines. [11]

Migration

In current virtualization methodology, it is only possible to migrate a virtual server from one physical machine to another if both physical machines use the same manufacturer’s processors. “For example, if a network uses one server that runs an Intel processor and another that uses an AMD processor, it is not possible to transfer a virtual server from one physical machine to the other.” [11]

One might ask why this is important to note as a limitation. If a physical server needs to be fixed, upgraded, or just maintained, transferring the virtual servers to other machines can decrease the amount of required down time during the maintenance. If porting the virtual server to another physical machine wasn’t an option, then all of the applications on that virtual machine would be unavailable during the maintenance downtime. [11]

Virtualization Market Size and Growth

Market research reports indicate that the total desktop and server virtualization market value grew by 43%, from $1.9 billion in 2008 to $2.7 billion in 2009. Researchers estimate that by 2013, approximately 60% of server workloads will be virtualized. This means that 10 percent of the total number of physical servers sold will be virtualized, and that the average host server will be configured with between 10 and 11 virtual servers [9]. With the downturn in the global economy and the need for better space and hardware utilization, virtualization is poised for huge growth in the near future.

Gartner segments the current virtualization market as follows:

Server virtualization holds the biggest chunk of virtualization software at 48% of market value. This share of the market didn’t change much between 2008 and 2009 [9].

Desktop virtualization is not big yet, at 11% of market value, though the alliance between Microsoft and Citrix promises a good future. The market for virtual desktop PCs is expected to reach $298.6 million in 2009, roughly four times the $74.1 million value of the market in 2008 [9].

Server virtualization infrastructure market revenue is expected to reach $1.1 billion in 2009, an increase of about 22.5% over the $917 million estimated for 2008 [9].

Virtualization market competition

The major players in the software virtualization market are as follows by market segment:

The server virtualization market: VMware remains the market leader in server virtualization with approximately 50% share among enterprise users; Microsoft follows with 26% share [7].

Microsoft is the current market leader in application virtualization with a 15% share, followed by Citrix with 11% and VMware with 7%. However, the growth potential is large, as nearly two-thirds of businesses have not yet deployed application virtualization [7].

Citrix is the market leader in desktop virtualization with a 19% market share, followed by Microsoft with 15% and VMware with 8%. Again the growth potential is large, as over 60% of corporations have not yet begun to virtualize their desktop environments [7].

Virtualization Technology Users and Skills

Business and Home Users

Currently, most businesses use virtualization to better utilize their underused server hardware, but in the near future many more will use it as a means to save money on space and hardware costs. Some examples of business uses include:

WAN optimization: Deploying virtual servers on large local disks in branch offices to keep client-server traffic local as much as possible by hosting applications locally. This contributes greatly to boosting WAN speed and performance.

Improving server performance through load balancing: Web, infrastructure, and application servers can become overloaded with data traffic, which puts pressure on the servers’ hardware (processors, storage, and memory). Server clustering can be used to spread the traffic load across more than one server and to provide a failover mechanism, but the cost of redundant hardware is high. With the processing power of today’s hardware, a large cluster of virtual servers can instead be installed on a small hardware cluster, providing a solution that is fail-safe and more cost effective [8].

Desktop virtualization: An emerging trend is the use of desktop virtualization to provide a less costly solution to the increasing number of workstations needed to access servers.


Investments and skills needed for virtualization

To start using virtualization, investment is needed in the following components:

Virtualization software: Virtualization software is the main component; it takes hardware resources such as RAM, CPU, and hard disk space and forms fully functioning virtual machines that each operate like a single computer. Each virtual machine has its own complete system, so there is little conflict with adjacent environments. Most virtualization programs work by applying a thin software layer on the host machine or operating system. This layer includes a virtual machine monitor, or hypervisor, which allocates hardware resources in a dynamic and transparent manner. Each machine has access to the hardware resources it needs on an as-needed basis [6].

Virtualization infrastructure: Sometimes companies need to virtualize more than just their servers. This is done by virtualizing the client’s infrastructure across not only servers but networking and storage devices as well. By virtualizing storage and server infrastructure, companies can move away from a vertical approach to a more integrated IT environment. IBM, for example, is a leader in developing and installing virtualization infrastructures [10].

Investment in licenses for server operating systems and other needed software: Running more virtual machines means acquiring many more software licenses to cover the new virtual machines.

Building and maintaining a virtualized infrastructure needs training and skills in:

Constructing and configuring the virtual machines: Setting up a virtual server might sound easy, but many specifications of the virtual machine’s hardware have to be determined beforehand, because changing them afterwards is very difficult. For example: is a single hard drive or a redundant RAID array needed, how many CPUs (single or dual core), what kind of network cards are needed, and so on (a small configuration sketch follows this list).

Maintaining the virtual machines: Again, this might sound easy, but the virtual machines depend on the stability of the virtualization software they reside on, so if this software is not maintained and backed up, disaster could occur.

Securing the virtual machines: Obviously, anyone who can access the host can access all of the virtual machines residing on it, so security is a big issue here. Virtualization software is equipped with access permission mechanisms to limit access to each individual machine separately, but many doubt that this is enough to secure virtual machines.
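As promised in the first item above, here is a small sketch of the kind of up-front specification that has to be pinned down before a virtual machine is built. The VMSpec structure and its field names are invented for illustration; real tools record the same decisions in hypervisor-specific configuration files or XML.

```python
from dataclasses import dataclass, field


@dataclass
class VMSpec:
    name: str
    vcpus: int = 2                  # single- vs. multi-core decided up front
    ram_gb: int = 4
    disk: str = "single"            # e.g. "single" or "raid1"
    nics: list = field(default_factory=lambda: ["virtio"])


web01 = VMSpec(name="web01", vcpus=4, ram_gb=8, disk="raid1",
               nics=["virtio", "virtio"])
print(web01)
```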

Why Virtualization Now?

Virtualization is a revolutionary leap in technology, providing many benefits that can reduce cost while increasing service and efficiency in an organization (Figure 1).

Virtualization hides the physical nature of a server and its resources, including the number and identity of individual servers, processors, and operating systems, from the software running on them. There are many reasons to use server virtualization. For example, deploying virtualization helps an organization reduce the cost of its infrastructure. Related benefits of reducing cost are savings on hardware, environmental costs, and the management and administration of the server infrastructure. For instance, virtualization enables you to run multiple systems on high-performance hardware, so instead of five machines running at 10% utilization, you run one machine at 50% utilization. There are also savings in power, space, and the facility’s cooling system.
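The consolidation arithmetic in that example is worth spelling out; the numbers below simply follow the five-machines-at-10% illustration in the text and ignore hypervisor overhead.

```python
hosts_before = 5
utilization_before = 0.10
busy_capacity = hosts_before * utilization_before   # 0.5 of one host's capacity

hosts_after = 1
utilization_after = busy_capacity / hosts_after
print(f"{hosts_after} host at {utilization_after:.0%} utilization")  # 1 host at 50%
```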

In addition to the cost savings, virtualization enables users to react better to changing market conditions by reducing the time required to deploy new servers and applications. The main reason is that virtual machines are essentially files rather than physical devices, and as files they can be replicated, copied, and deployed much faster than physical devices.

Server consolidation is one of the biggest reasons to use virtualization. Instead of dedicating each server to a single application, it allows organizations to eliminate multiple individual servers and maximize available resources by loading several different applications onto the same server. “Industry analysts report that between 60 percent and 80 percent of IT departments are pursuing server consolidation projects. It’s easy to see why: By reducing the numbers and types of servers that support their business applications, companies are looking at significant cost savings” [1].

Moreover, virtualization is considered one of the best environments for programmers to develop and test applications. This is called a segregation solution: it allows developers to write code and run it on different operating systems. Because virtualization lets you run multiple operating systems, developers can work on a single workstation and write code that runs in many different environments, and testing becomes much easier. Additionally, virtualization gives companies a way to practice redundancy without purchasing additional hardware.

Another reason for going with virtualization is the ease of deploying applications: organizations can treat application suites as appliances by “packaging” and running each in a virtual machine. This technique increases data center dynamics by allowing flexible pooling of resources. In addition, it provides more security, with limited administrator access to servers. Virtualization is also a strong solution for backup and recovery.

Figure 1: The Impact of Virtualization on Managing Application Performance (source: http://www.technewsworld.com/images/article_images/66061-chart1-small.png)

Case Study:

“The IT staff for the Chancellor’s office at California State University (CSU) was facing difficult management issues for the 75 applications and 600 desktops it supports” [2]. Each user has different needs, such as downloads, uploads, administrator access, different versions of applications, and sometimes different operating systems. All these needs caused server problems, sometimes resulting in server downtime and weak management. CSU decided to apply virtualization techniques, which solved the problems and provided many benefits. The benefits can be summarized as simplification and reduced application deployment time. Virtualization also improves desktop security by minimizing the need for local administrator access for applications. In addition, it enables multiple application versions to run on the same computer, and it enhances disaster recovery for applications.

New Virtualization Usage Techniques

Virtualization has many benefits and can also be used indirectly to improve other techniques. The following are some newer techniques that use virtualization:

Qubes: An open-source operating system designed to provide strong security for desktop computing. It is based on Xen, sandboxing, and application virtual machines.

WAN optimization: Blue Coat Systems describes the concept of how to add virtualization technology to WAN optimization and WAN acceleration systems.

Migration: Refers to moving a server environment from one place to another. It is now possible to migrate a virtual server from one physical machine in a network to another, even if the two machines have different processors, as long as both processors come from the same manufacturer.

Virtualization Alternatives:

Virtualization and its alternatives share the same goals: reducing cost, enhancing performance, and improving efficiency. What makes them different is how they are implemented and how they achieve those goals.

Software as a Service (SaaS): SaaS is a model that provides access to an application over a network connection, rather than installing it within a company or organization.

Cloud computing: Refers to accessing computing resources that are typically owned and operated by a third-party provider on a consolidated basis in one, or usually more, data center locations [3]. Because a cloud data center can include the servers and storage of multiple physical data centers, it provides a larger pool of resources for applications to share than would be provided by simple server virtualization [4].

Service-Oriented Architectures (SOA): SOA is a software design and development methodology that componentizes applications into modular services that are then assembled in various ways to promote customization to worker needs and reuse of common software elements [3].

Intel’s Atom CPU: Virtualization aims to share hardware among multiple operating systems by giving each OS a slice of the hardware. Intel’s Atom CPU, by contrast, gives each OS a small server rather than a small slice. This can reduce storage and network costs and increase speed.

Physicalization: Physicalization involves building smaller servers out of low-power processors, tailoring them to the amount of computing resources which can be efficiently used by one OS instance, so that a one-to-one correspondence between hardware and instances can be maintained, thereby bypassing the management overhead and licensing costs of virtualization [5].

Virtualization Cost:

The cost of implementing virtualization varies based on the size of the organization, the type of virtualization, and the technology and vendor chosen. Some vendors, such as VMware, have developed cost calculators, which are available on their sites. However, because organizations have more complex dimensions than the available variables in the calculators, these cannot be relied upon completely for calculation of cost and ROI. Eric Perkins, chief technology officer at Cyberklix, a security and networking tool provider in Chicago, states, “…when figuring out how to calculate ROI, the suggested tactic is to take a top-down approach. Start with the biggest and most important factors and then systematically refining the model for your own organization based on individual circumstances.” [18]

The best ROI calculations need to take TCO (total cost of ownership) into account. The following tips are suggested by the Virtual Data Center [19] (a simple calculation sketch follows the list):

Do ROI calculations before starting virtualization projects

Include hardware, software AND labor in your cost estimate

Identify “hidden costs” such as staff training, software or hardware upgrades, changes to disaster recovery plans, and considerations for downtime. For instance:

Virtualization requires administrators of the virtualized technology to have sharpened skills in performance management. This becomes a critical skill set as virtualization uses more capacity.

Greg Shields, author of The Shortcut Guide to Selecting the Right Virtualization Solution, recommends purchasing tools that monitor network traffic and provide alerts when it rises above acceptable levels. The cost and implementation of such tools should be considered in a virtualization strategy.

Use case studies to show real cost-savings in other organizations.
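As referenced above, here is a deliberately simple sketch of the kind of top-down ROI comparison those tips describe. Every figure and the formula itself are placeholders to be replaced with an organization's own hardware, software, labor, and hidden costs; it is not a substitute for a full TCO model.

```python
def simple_roi(current_annual_cost, virtualization_capex,
               virtualized_annual_cost, hidden_costs, years=3):
    """Compare the status quo with a virtualized replacement over `years`."""
    baseline = current_annual_cost * years
    virtualized = (virtualization_capex + hidden_costs
                   + virtualized_annual_cost * years)
    savings = baseline - virtualized
    return savings / virtualized      # return on the money actually spent


roi = simple_roi(current_annual_cost=200_000,    # hardware, power, admin labor
                 virtualization_capex=120_000,   # hosts, licenses, shared storage
                 virtualized_annual_cost=90_000,
                 hidden_costs=30_000)            # training, DR plan changes, tools
print(f"3-year ROI: {roi:.0%}")                  # roughly 43% with these placeholders
```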

Here are some questions which management should answer when considering costs in their planning for implementation of virtualization [17]:

What benefits do you expect?

What is the current state and expected growth?

How does this fit in with your disaster recovery plan?

How can this help reduce your licensing costs?

How flexible is the solution?

What are the costs for training for administrators?

Key stakeholders should be included in the discussion and documentation of these questions in order to choose the best solution to implement and be prepared to best manage costs over time.

Conclusion:

As we have shown above, virtualization is a technological advancement that is changing the way enterprises manage their hardware, software, networks, and applications. When managed well, the benefits of this technology outweigh the disadvantages, providing return on investment, reducing costs, and simplifying administration.
