Horizontal and vertical handover in wireless and cellular environment

Chapter 1

Introduction

In the 1890s, when Marconi saw wireless telegraphy become a practical reality, he also saw the commercial possibility of a system of telegraphy that is absolutely free from the limitations of wires. The invention of the transistor roughly half a century later made it possible for everyone to communicate while on the move. Today, it is purely a matter of convenience: to make and receive calls at your leisure, any place and any time. This thesis report on “Horizontal and Vertical Handover in Wireless and Cellular environment – Prone to Failures” reflects a paradigm shift towards the next generation of wireless and cellular environments, aiming to ensure seamless connectivity and a better quality of service (QoS).

The two major modern telecommunication networks are the mobile cellular network and the wireless local area network (WLAN). WLANs provide high data rates over a small coverage area, whereas cellular networks provide a limited data rate over a large coverage area. Hence, to overcome this limitation of cellular networks, WLANs can be used in hotspot areas. In a combined cellular/WLAN network, the major challenge is to provide the best quality of service and to keep the system available while the mobile user moves randomly from one network to the other. Telecommunications has prospered from the integration of these two heterogeneous networks, which has therefore attracted significant research.

Two types of handoff take place in an integrated network: horizontal handoff between cells of the same network and vertical handoff between different networks. Handoff decision, radio link transfer and channel assignment are the three typical stages of a handoff [1]. In an integrated cellular/WLAN network, vertical handoffs occur in two directions: from WLAN to the cellular network and from the cellular network to WLAN. When a user served by the cellular network enters the coverage area of a WLAN, it is desirable to hand the connection over to the WLAN to obtain larger bandwidth. On the other hand, when a user served by the WLAN moves towards the cellular network, the WLAN coverage ends abruptly, leading to unwanted voice and data call dropping and thus degrading the quality of service (QoS). To keep the communication seamless, the user must be switched to the cellular network before the WLAN link breaks, while keeping the dropping probability of ongoing calls low.
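Purely as an illustration of this "switch before the link breaks" idea, and not as part of the ISH/ISHQ schemes analysed later, the short C++ sketch below shows one common way such a WLAN-to-cellular trigger can be expressed: hand over when the WLAN received signal strength drops below a threshold and the cellular signal is stronger by a hysteresis margin, so that ping-pong handovers are avoided. The threshold and margin values are assumptions made for the example.

#include <iostream>

// Illustrative-only WLAN-to-cellular handover trigger based on received
// signal strength (RSS). Threshold and hysteresis values are assumptions.
struct HandoverTrigger {
    double wlanExitThresholdDbm;  // below this, the WLAN link is considered unreliable
    double hysteresisDb;          // margin to avoid ping-pong between networks

    // Returns true when the mobile should be handed over to the cellular
    // network before the WLAN link actually breaks.
    bool shouldHandoverToCellular(double wlanRssDbm, double cellularRssDbm) const {
        bool wlanWeak = wlanRssDbm < wlanExitThresholdDbm;
        bool cellularUsable = cellularRssDbm > wlanRssDbm + hysteresisDb;
        return wlanWeak && cellularUsable;
    }
};

int main() {
    HandoverTrigger trigger{-80.0, 5.0};  // assumed values for illustration
    std::cout << std::boolalpha
              << trigger.shouldHandoverToCellular(-85.0, -70.0) << "\n"   // true: hand over
              << trigger.shouldHandoverToCellular(-60.0, -70.0) << "\n";  // false: stay on WLAN
    return 0;
}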

This report discusses two vertical handoff schemes for integrated cellular/WLAN networks: integrated service-based handoff (ISH) and integrated service-based handoff with queuing capabilities (ISHQ) [2]. These schemes take into account a comprehensive set of system properties, such as dynamic information on the ongoing calls, user mobility, vertical handoffs in both directions and various other features of voice and data services. Mean queue length and quality of service are the two most important attributes of these schemes; the mean queue length provides a mathematical basis for understanding and predicting the system's behaviour.

In the proposed schemes, a code division multiple access (CDMA) cellular network and an IEEE 802.11e WLAN are considered. A simulation model is developed for the M/M/1 and M/M/C queuing models using OPNET, a very powerful simulation tool, and simulations are run to obtain performance results. These results are compared with analytical modelling results. In addition, two M/M/1 queuing models are virtualized as one machine; the performance of this virtualized machine is obtained by simulation and compared with analytical results.

1.1 A brief summary of the remaining chapters

Chapter 2 gives the required outline of integrated systems. The basic structure of the system is explained in detail. Horizontal and vertical handovers are introduced and explained for the integrated system, and a brief description is given of previous research in the same field. ISH and ISHQ, the two proposed vertical handover schemes, are then explained, along with a brief discussion of why the proposed schemes are better than the previous ones. Performance modelling and the reasons for preferring simulation modelling are also given. The basics of OPNET and the advantages of using OPNET Modeler are explained, and an introduction is given to the M/M/1 and M/M/C queuing models, along with virtualization and its benefits.

Chapter 3 explains the research and modelling of M/M/1 and M/M/C systems in OPNET, and describes the simulation methodology. The modelling and simulation of two M/M/1 systems as a single virtualized machine is explained, and the scenario for virtualization is given in this chapter.

Finally, Chapter 4 presents conclusions on the results obtained, the graphs presented and the comparison with analytical results. It also highlights future work in this area.

Chapter 2

2.1 Introduction to Integrated Systems:

The existing demand for higher data rates and seamless connectivity has led to the integration of the two major communication networks. Because of the complementary characteristics of the wireless local area network and the cellular network, their integration has become a promising trend for future-generation wireless networks. 802.11n and 3G are the latest widely used technologies in WLANs and mobile networks respectively. Considered individually, each technology has advantages as well as disadvantages. By combining their advantages, namely the wide coverage area of 3G and the high data transfer rates of WLAN, the quality of service can be improved.

Integrating WLANs into cellular networks provides users in “hot-spot” areas with high-speed wireless access, and when a user is outside a hot-spot coverage area, the cellular network can be used. However, this is not a simple process to implement.

In Figure 1, there are two areas: area A, covered only by the cellular network, and area B, covered by both the cellular network and the wireless local area network (WLAN). Area B contains WLAN access points, while area A contains base stations of the cellular network. In the study of [Modelling and Analysis of Hybrid Cellular/WLAN System with Integrated Service-Based Vertical Handoff Schemes], a CDMA cellular system is considered together with an IEEE 802.11 network. The bandwidths of the cellular network and the WLAN are Cc and Cw respectively. In the integrated system, the total system resources are shared by both data and voice calls. Particular channels are assigned to voice calls, which therefore have higher priority than data calls in accessing the system resources; the bandwidth left unused by the current voice calls is shared by the ongoing data calls. Vertical and horizontal handoffs are also prioritized, in the same way as voice and data calls. To prioritize the handovers, a channel reservation approach is used in the cellular network: vertical handover voice calls are given the highest priority among the handover calls by limiting the number of accepted horizontal handover calls whenever the number of voice calls is greater than or equal to a threshold. [2]

2.2 Horizontal and Vertical Handovers in Integrated Systems:

An essential element of cellular communications is handoff. This chapter of the thesis gives a summary of handover and mobility among heterogeneous networks. When a mobile station moves from one wireless cell to another, a handoff (or handover) takes place. Handoffs are classified as horizontal (intra-system) and vertical (inter-system); vertical handoff arises where wireless networks of different types overlap. The problem of vertical mobility can be illustrated with a simple tale. Consider a relay team of a mouse and a rabbit whose task is to carry a carrot as fast as they can. The mouse can go far, but it cannot carry heavy loads. The rabbit is restricted to its fence; inside the fence it can carry a heavy load of carrots, but at the fence it has to hand the carrots over to the mouse. The rabbit represents a WLAN and the mouse a cellular network. The event in which the carrot load is exchanged between rabbit and mouse is a handoff, more precisely a vertical handoff; it corresponds to the transition from WLAN to cellular network and vice versa. The carrot represents the payload carried by the mouse (cellular network) and the rabbit (WLAN), where payload means data in bits or bytes that forms part of larger files. To make communication seamless, the problem is to make the best of the mouse and the rabbit carrying the carrot together. In a vertical handover, the access technology changes.

In a horizontal handover, a mobile node moves from one access point to another within a single-technology network. For example, a mobile user of a GSM network who moves from one base station to a base station in another area makes a horizontal handover; a horizontal handover is therefore the traditional handover. In other words, the difference between vertical and horizontal handover is that the former changes the access technology while the latter does not [3].

When combined with the notions of micro and macro mobility, handovers can be divided into subclasses: vertical and horizontal micro mobility, and vertical and horizontal macro mobility. Vertical micro mobility is a handover within the same domain using different wireless technologies. Horizontal micro mobility is a handover within the same domain using the same wireless technology. Vertical macro mobility is a handover between different domains using different wireless technologies, and horizontal macro mobility is a handover between different domains using the same wireless technology.

The handoff process can be characterized as hard handoff or soft handoff. If the existing connection is broken before the new one is made, it is referred to as a hard handoff (break before make), whereas in a soft handoff the mobile is connected to both base stations for some time before the handoff is completed (make before break).

Efficient handoff algorithms are a cost-effective way to enhance the capacity and QoS of cellular systems. In a horizontal handover, the main concern is to maintain the ongoing service even if the IP address changes because of the movement of the mobile node; this is done either by dynamically updating the changed IP address or by hiding the change of IP address. A vertical handover takes place when the mobile node moves across heterogeneous access networks. In a vertical handover the access technology changes along with the IP address, as the mobile node moves across different networks that use different access technologies. In such cases, the main issue is to maintain the ongoing service even when there is a change not only in the IP address but also in the network interface, QoS characteristics, etc. [4]

Parameter            Horizontal handover    Vertical handover
IP address           Changed                Changed
Access technology    No change              Changed
Network interface    No change              Can be changed
QoS parameters       No change              Can be changed

Table 2.1: Parameters in Horizontal and Vertical handover.

The main capabilities of Vertical handovers over Horizontal handovers are:

1. Vertical handovers use different access technology.

2. Vertical handovers use multiple network interfaces.

3. Multiple IP addresses are used in Vertical handovers.

4. QoS parameters can be changed in Vertical handovers and multiple parameters are used.

5. Multiple network connections are used in Vertical handovers.

2.3 Previous research work:

“Science is not a mechanism but a human progress; and not a set of findings but the search for them.” – J. Bronowski

Many vertical handoff schemes have been studied and proposed for the integrated cellular/WLAN network. One of the proposed models is the WLAN-first handoff scheme (WFH) [5]. In this scheme, ongoing data and voice calls are always transferred to the WLAN whenever WLAN bandwidth and coverage are available, and data and voice calls originating in the overlapping cellular/WLAN area are directed to the WLAN. This scheme is simple, but vertical handoff calls and horizontal voice and data handoff calls are treated in the same way, and the scheme was proposed for integrated voice and data services without sufficiently considering the different supporting abilities of the integrated cellular/WLAN network. The handoff schemes studied in [6] and [7], like most existing handoff schemes, take account of voice services only, and resource allocation is based on user mobility levels, where both macro and micro cells cover the two-tier area continuously. Furthermore, no significant difference between the two handoffs was observed in the analysis of system performance when a “service-dependent call admission control scheme” was introduced [8]. A few other schemes are applicable only to indoor environments where only voice calls are considered; in these, the dropping probability of vertical handoffs is minimised by installing a simple handoff trigger node in the transition region of the integrated network. Practical routing protocols are used to implement multi-hop ad hoc relaying handoff (MARH) for vertical handoffs from WLAN to the cellular network, but this method introduces more complexity and transmission delay, and vertical handoffs from the cellular network to the WLAN are not considered [9].

2.4 ISH and ISHQ Handoff Schemes:

Service-dependent handoff schemes are introduced for the integrated cellular/WLAN system. Depending on whether a vertical handoff request can be queued or not, two schemes are proposed: integrated service-based handoff (ISH) and integrated service-based handoff with queuing capabilities (ISHQ) [2]. These two schemes are explained in detail in the sections below.

a. ISH Scheme:

In the proposed ISH scheme, the only voice calls that can request admission to the cellular network in area A are voice calls originating in area A and handoff voice calls from its neighbouring cells. An originating voice call from area A is accepted while the number of occupied voice channels is below the corresponding admission threshold and is rejected otherwise; likewise, a handoff voice call from a neighbouring cell is accepted only while its admission threshold has not been reached. A voice call from area B first tries to access the cellular network and is accepted if capacity is available there; if the cellular occupancy lies between the relevant thresholds, the call is directed to the WLAN, which accepts it if it has free capacity and rejects it otherwise. For ongoing voice calls moving into area B, no handoff is executed. A vertical handoff takes place when a voice call that was accepted by the WLAN moves out of area B; such a vertical handoff voice call is accepted by the cellular network if free capacity is available.

A data call originating in area B is allowed to access the WLAN if the WLAN has free capacity and is rejected otherwise. Similarly, a data call moving into area B is handed over to the WLAN if the WLAN can accommodate it, and remains in the cellular network otherwise. A data call originating in area A, or a data vertical handover request to the cellular network, is accepted if the cellular network has free capacity and is rejected otherwise.
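To make the structure of these rules easier to see, the sketch below casts the ISH data-call decisions as simple compare-against-capacity tests. It is an illustration only: the counters and capacity limits are hypothetical placeholders chosen for the example, not the thresholds and notation defined in [2].

#include <iostream>

// Illustrative-only model of ISH-style admission decisions for data calls.
struct IshAdmission {
    int wlanDataCalls;
    int cellDataCalls;
    int wlanCapacity;   // hypothetical WLAN data capacity
    int cellCapacity;   // hypothetical cellular data capacity

    // Data call originating in area B: admitted to the WLAN only if the WLAN
    // still has spare capacity, otherwise rejected.
    bool admitDataOriginatingInB() {
        if (wlanDataCalls < wlanCapacity) { ++wlanDataCalls; return true; }
        return false;
    }

    // Data call moving into area B: handed over to the WLAN if possible,
    // otherwise it simply remains in the cellular network.
    bool handoverDataIntoB() {
        if (wlanDataCalls < wlanCapacity) { ++wlanDataCalls; return true; }
        return false;
    }

    // Data call originating in area A, or a data vertical handover request
    // towards the cellular network: admitted only if the cell has capacity.
    bool admitDataToCell() {
        if (cellDataCalls < cellCapacity) { ++cellDataCalls; return true; }
        return false;
    }
};

int main() {
    IshAdmission ish{0, 0, 2, 1};  // hypothetical capacities: 2 WLAN, 1 cellular
    std::cout << ish.admitDataOriginatingInB() << " "   // 1: accepted by the WLAN
              << ish.admitDataOriginatingInB() << " "   // 1: accepted by the WLAN
              << ish.admitDataOriginatingInB() << "\n"; // 0: WLAN full, rejected
    return 0;
}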

b. ISHQ Scheme:

Voice calls are handled in the same way in both proposed schemes, ISHQ and ISH. The ISHQ handover scheme adds two queues, Qc and Qw, each with a finite capacity. The queue Qc holds vertical handover requests of data calls towards the cellular network, and the queue Qw holds vertical handoff requests and originating data calls entering the WLAN from area B. Three outcomes are possible, depending on the state of the system when a request arrives. If the cellular network has a free channel, a data vertical handoff request to the cellular network is accepted immediately. If all channels are occupied, the request is queued in Qc, and if Qc is already full when the request arrives, the request is dropped. Requests in Qc are served on a first-in, first-out (FIFO) basis as soon as a channel becomes available, and a queued request is deleted from Qc once the handover has been completed. Similarly, if the WLAN has free capacity, data vertical handoff requests to the WLAN and data calls originating in area B are processed by the WLAN; if not, these requests are queued in Qw, and when Qw is full on arrival the requests are blocked. A data call request is deleted from Qw either when the handoff request is granted or when the mobile user leaves area B before actually acquiring a channel.
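A minimal sketch of the queuing behaviour that distinguishes ISHQ from ISH is given below: when the target network has no free channel, a data vertical handoff request is placed in a bounded FIFO queue (such as Qc) instead of being rejected, and is dropped only if that queue is already full. The queue capacity and the way waiting requests are removed are illustrative assumptions, not the exact rules of [2].

#include <cstddef>
#include <queue>

// Illustrative-only model of one ISHQ queue (for example Qc, holding data
// vertical handoff requests towards the cellular network).
class HandoffQueue {
public:
    explicit HandoffQueue(std::size_t capacity) : capacity_(capacity) {}

    // Called when the target network has no free channel: queue the request
    // (FIFO) unless the queue is already full, in which case it is dropped.
    bool enqueue(int requestId) {
        if (pending_.size() >= capacity_) return false;  // request dropped
        pending_.push(requestId);
        return true;
    }

    // Called whenever a channel becomes free: serve the oldest waiting
    // request, if any, and remove it from the queue.
    bool serveNext(int& requestId) {
        if (pending_.empty()) return false;
        requestId = pending_.front();
        pending_.pop();
        return true;
    }

    // A request is also removed if its user leaves area B before a channel
    // becomes available (simplified here as dropping the head of the queue).
    void dropHead() { if (!pending_.empty()) pending_.pop(); }

private:
    std::size_t capacity_;
    std::queue<int> pending_;
};

int main() {
    HandoffQueue qc(2);             // hypothetical capacity of Qc
    qc.enqueue(1); qc.enqueue(2);
    bool dropped = !qc.enqueue(3);  // queue full: third request is dropped
    int id = 0;
    qc.serveNext(id);               // a channel frees up: request 1 is served
    return (dropped && id == 1) ? 0 : 1;
}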

Figure 2 shows the traffic flows of the proposed handover schemes. The traffic into the cell consists of voice calls originating in area A, voice calls from area B, horizontal handover voice calls from the neighbouring cells, vertical handoff voice requests from the WLAN, data calls from area A and data vertical handover requests from the WLAN, each with its own arrival rate. The traffic into the WLAN consists of data calls originating in area B, data vertical handover requests from area A and voice calls that overflow from the cellular network, again each with its own arrival rate.

2.5 Performance Modeling:

Performance evaluation can be classified into performance modelling and performance measurement. Performance measurement is possible if the system is available for measurement and all parameters are accessible. When the actual system has no test points for detailed measurement, or when the actual system is not available at all, performance modelling is used. Both simulation models and analytical models can be used for performance modelling; simulation models are further classified according to the mode and level of detail of the simulation.

2.6 Why Simulation?

“Simulation refers to the imitation of a real world system over time” – Banks, Carson and Nelson, 1996.

Simulation is a tool for evaluating the performance of a system, existing or proposed, under diverse configurations of interest and over long periods of real time. It is used before an existing system is changed or a new one is built, in order to reduce the chance of failure, to eliminate bottlenecks and to prevent over- or under-utilization of resources. The lack of accuracy of analytical models for design decisions is one of the main reasons for using simulation techniques; simulation can be used to analyse complex models at any level of detail. Once a simulation model has been developed, it can answer a wide range of “what if” questions about the real-world system. [11]

2.7 Introduction to OPNET:

Optimized Network Engineering Tools (OPNET) is a very powerful network simulator. Its main purposes are evaluating performance and availability and optimizing cost. OPNET Modeler offers various tools for designing a model, simulating it, mining the resulting data and carrying out analyses of the different alternatives. In OPNET Modeler, a wide variety of interconnected networks can be simulated. Simulating with OPNET gives an experience close to building networks in the real world and helps in understanding the various layering techniques and protocols. OPNET simulates the behaviour of the system by modelling each event that occurs in it and processing that event as defined by the user. A hierarchical strategy is employed by OPNET to organize the models used to build the desired network [10], and it provides programming tools to define the various packet formats used in protocols.

Models in OPNET are hierarchical. At the lowest level, the behaviour of a protocol is encoded by a state diagram with embedded C-style code. At the middle level, separate objects called modules perform functions such as generating, buffering, transmitting and receiving packets; these modules are created, modified and edited in the Node editor, and a node model is formed when modules are connected together. At the highest level, node objects are connected by links to form a network model.

2.8 Advantages of OPNET:

OPNET has a number of advantages over analytical modelling for analyzing performance of the system.

1. The basic concept of simulation is easy to comprehend over analytical methods.

2. Simulation in OPNET is more credible because the behaviour is compared to that of the real system.

3. Once the OPNET model is ready and validated, one can run it on various different sets of parameter values to generate the performance measures in each of the cases.

4. OPNET models are very flexible in the range of systems that can be simulated and in the level of detail that can be obtained.

5. In OPNET, fewer simplifying assumptions are used and hence true characteristics of the system under study are captured.

6. Bottlenecks in information flow can be identified, and options for increasing flow rates can be tested.

7. Time can be controlled in OPNET. The system can therefore be operated for several months or years of experience in a matter of seconds, which allows the user to look at longer time horizons; the process can also be slowed down.

8. The greatest strength of OPNET is its ability to work with new and unfamiliar situations, and hence can answer “what if” questions.

2.9 Queuing Models:

A queuing model is used to represent a real queuing system so that the behaviour of the queue can be analysed mathematically. Queuing models are described using Kendall's notation:

A/B/C/K/M/Z

Where:

A denotes the inter arrival time distribution

B denotes the service time distribution

C denotes number of servers in the system

K denotes capacity of the system

M denotes the calling population

Z denotes the service discipline.

In general, K = ∞, M = ∞ and Z is first-in, first-out (FIFO).

M/M/1 Queue:

The name “M/M/1” comes from Kendall's notation for general queuing systems: the first symbol denotes the packet arrival distribution, the second the service-time distribution, and the number gives the number of servers. The M's indicate Markovian (memoryless) processes: packet arrivals and departures form Poisson processes, and for Poisson arrivals the inter-arrival times follow an exponential distribution. M/M/1 therefore refers to negative-exponential inter-arrival and service times in a single-server system.

The simplest queuing system consists of two major components, a queue and a server, together with two attributes, the inter-arrival time and the service time. Customers arrive randomly from an infinite source with a given mean inter-arrival time, and each customer is served for a random amount of time with a given mean service time. The service times are independent and exponentially distributed.
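For reference, the standard steady-state M/M/1 results used later to validate the OPNET output (for example the analytical column of Table 4.1) can be computed directly. The sketch below is a minimal illustration assuming a stable queue (arrival rate strictly less than service rate); the function and variable names are chosen here for clarity and are not taken from the OPNET model.

#include <iostream>
#include <stdexcept>

// Steady-state M/M/1 measures: utilization rho = lambda/mu, mean number in
// the system L = rho/(1 - rho), and mean response time W = L/lambda
// (Little's law).
struct MM1Result { double rho, meanInSystem, meanResponseTime; };

MM1Result mm1(double lambda, double mu) {
    double rho = lambda / mu;
    if (rho >= 1.0) throw std::runtime_error("unstable: lambda must be < mu");
    double L = rho / (1.0 - rho);
    return {rho, L, L / lambda};
}

int main() {
    // Arrival rate 5 jobs/sec, service rate 10 jobs/sec: L = 1.0,
    // matching the analytical value in Table 4.1.
    MM1Result r = mm1(5.0, 10.0);
    std::cout << "rho=" << r.rho << " L=" << r.meanInSystem
              << " W=" << r.meanResponseTime << "\n";
    return 0;
}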

M/M/C Queue:

The M/M/C model is a multi-server queuing model in which arrivals follow a Poisson process, service times are exponentially distributed, the waiting queue is infinite and the population of users is infinite. An M/M/C queue consists of a first-in-first-out (FIFO) buffer into which packets arrive according to a Poisson process, and C processors, called servers, that retrieve packets from the infinite buffer at a specified service rate. The performance of an M/M/C queue depends on three major parameters: packet size, packet arrival rate and the capacity of the servers. The queue will grow indefinitely if the combined effect of the average packet size and the average packet arrival rate exceeds the total service capacity. By varying packet arrival rates, packet sizes and service capacities, the behaviour of the queuing system can be clearly observed.

λ, 1/μ and C represent the mean packet arrival rate, the mean packet size and the server capacity, respectively.
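Similarly, the analytical mean number of packets in an M/M/C system can be obtained from the standard Erlang C formula. The sketch below is a small illustration of that calculation (it is not OPNET code); under the assumption that the "mean queue length" reported in Chapter 4 is the mean number in the system, it reproduces values such as the analytical entries of Table 4.2.

#include <cmath>
#include <iostream>

// Erlang C probability that an arriving job has to wait in an M/M/c queue,
// with offered load a = lambda/mu and per-server utilization rho = a/c.
double erlangC(int c, double a) {
    double sum = 0.0, term = 1.0;           // term holds a^k / k!
    for (int k = 0; k < c; ++k) { sum += term; term *= a / (k + 1); }
    // here term == a^c / c!
    double rho = a / c;
    return term / (term + (1.0 - rho) * sum);
}

// Mean number of packets in an M/M/c system (queue plus servers):
// L = a + ErlangC * rho / (1 - rho).
double mmcMeanInSystem(int c, double lambda, double mu) {
    double a = lambda / mu, rho = a / c;
    return a + erlangC(c, a) * rho / (1.0 - rho);
}

int main() {
    // 5 servers, service rate 20 jobs/sec each, arrival rate 90 jobs/sec:
    // prints approximately 11.36, matching the analytical value in Table 4.2.
    std::cout << mmcMeanInSystem(5, 90.0, 20.0) << "\n";
    return 0;
}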

2.10 Virtualization

Virtualization is a promising approach for consolidating multiple services onto a smaller number of resources. It is used to improve the efficiency and availability of the resources and applications of a system; under the “one server, one application” model, system resources are typically underutilized. Virtualization allows multiple virtual machines to run on a single physical machine, with the resources of that single computer shared by all of the virtual machines. The virtual environments are also called virtual private servers, guests or emulations. Server virtualization can be achieved by three popular approaches: virtualization at the OS layer, the paravirtual machine (PVM) method and the virtual machine model.

Virtualization at the OS layer is different from the paravirtual machine and virtual machine methods: the guests and the host use the same OS. The host runs a single kernel as the core and provides the OS functionality to the guests, and common libraries and binaries can be shared on the same physical machine. With this method, one OS-level virtual server can host up to thousands of guests at the same time. [12]

The virtual machine method is based on the host/guest paradigm. In this approach the guest operating system can run without any modifications, and the administrator can create guests that use different operating systems; the guest is unaware of the OS used by the host.

The PVM model, like the virtual machine model, is based on the host/guest paradigm [13]. Unlike the virtual machine model, however, the PVM model actually changes the guest OS code; this changing of guest OS code is known as porting. Both virtual machines and paravirtual machines can run multiple operating systems.

2.11 Why Virtualization?

Virtualization allows the user to reduce IT costs while boosting the efficiency and utilization of assets and ensuring their flexibility. The benefits of adopting virtualization are [15]:

1. More can be obtained from the existing resources: the legacy “one server, one application” model can be abandoned, and all common infrastructure resources can be pooled together through server consolidation.

2. Availability of applications and hardware can be increased for improved business continuity: applications and data in the virtual environment remain highly accessible, with little delay.

3. Operational flexibility can be achieved. Better management of resources is possible along with better application usage and faster server provisioning.

4. Can run multiple operating systems on a single machine.

5. Disaster recovery and dynamic load balancing are possible. Virtualization provides the ability to move virtual machines that are over-utilizing the resources of one server to under-utilized servers; this is known as dynamic load balancing and leads to efficient utilization of the resources. Virtualization technology also enables a virtual image on one machine to be re-imaged on another server if a failure occurs.

6. Security is the other major advantage of virtualization. By compartmentalizing the environments, one is able to select the guest OS and the tools that are most appropriate for each environment, and because of this isolation an attack on one virtual machine does not compromise the others.

Chapter 3

Research and Modelling

3.1 Modelling and simulation of M/M/1 and M/M/C systems in OPNET:

The model requires a means of generating, queuing and serving packets. All of these capabilities are provided by existing modules and models supplied by OPNET. An ideal generator module in OPNET is used to create packets. The M/M/C queue's Poisson arrival process is expressed as the mean number of arriving packets per second; an equivalent way of specifying this process is an exponential distribution of packet inter-arrival times, and this alternative form is used in the model because the generator module characterizes the arrival process by its inter-arrival time distribution. The M/M/C model also requires exponentially distributed service times, which is achieved by assigning exponentially distributed packet lengths to the packets; packet length is an attribute of the generator module. OPNET also provides many queue process models, each with different capabilities. For the M/M/C queue, the packets must be served in FIFO order at a specified rate in bits/second so that the service times are exponentially distributed. The queue process model acb_fifo_ms provides this function. The name acb_fifo_ms indicates the main queuing characteristics: 'a' indicates an active server; 'c' indicates that multiple incoming packet streams can be concentrated into a single internal queue; 'b' indicates that service is bit oriented; 'fifo' denotes the service ordering discipline; and 'ms' indicates that it is a multi-server model. If the model is not multi-server, say M/M/1, the queue model to use is acb_fifo [10]. The OPNET-supplied sink process model is used to dispose of the serviced packets. Packet streams are the paths via which packets are transferred between the generator module, the acb_fifo_ms queue and the sink; these modules take care of sending and receiving the packets on the stream. [14]
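To make the link between these OPNET attributes and the M/M/1 and M/M/C assumptions explicit, the plain C++ sketch below (not OPNET code) shows how exponentially distributed inter-arrival times and packet lengths can be generated by inverse-transform sampling, and how a bit-oriented server with a fixed rate in bits/second then yields exponentially distributed service times. The rate and packet-size values are assumptions made for the example.

#include <cmath>
#include <iostream>
#include <random>

int main() {
    std::mt19937 rng(12345);
    std::uniform_real_distribution<double> uniform(0.0, 1.0);

    const double arrivalRate    = 5.0;      // mean packets per second (assumed value)
    const double meanPacketBits = 1000.0;   // mean packet length in bits (assumed value)
    const double serviceRateBps = 10000.0;  // server capacity in bits per second (assumed value)

    // Inverse-transform sampling: if U ~ Uniform(0,1), then -ln(1 - U)/rate is
    // exponentially distributed with the given rate (mean = 1/rate).
    auto exponential = [&](double rate) { return -std::log(1.0 - uniform(rng)) / rate; };

    double interArrival = exponential(arrivalRate);           // exponential inter-arrival time
    double packetBits   = exponential(1.0 / meanPacketBits);  // exponential packet length
    double serviceTime  = packetBits / serviceRateBps;        // hence exponential service time,
                                                              // mean = meanPacketBits / serviceRateBps
    std::cout << "inter-arrival=" << interArrival
              << " s, service=" << serviceTime << " s\n";
    return 0;
}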

Node editor:

A node editor is used to create node models, which are then used to create nodes within project editor networks. Each module in a node serves a particular purpose, such as creating packets, queuing packets, processing, or transmitting and receiving packets. OPNET models have a modular structure: a node is defined by connecting various modules with packet streams and statistic wires, so that packets and status information can be exchanged between the modules through these connections.

Project Editor:

The project editor is the main working area for creating a network simulation. It is used to create network models, collect statistics from each network object, execute and run the simulation and view the results. Using the specialized editors accessible from the project editor, it is easy to create node and process models, create filters and parameters, and build packet formats.

The Process Model Editor:

A process editor is used to create process models, which control the functionality of the node models in the node editor. Process models are represented as finite state machines (FSMs), created with icons that represent states and lines that show the transitions between the states. Embedded C or C++ code blocks define the operations performed in each state.

The Link Model Editor:

New types of link objects can be created with this editor. Each new link type has its own attribute interface and representation.

The Path Editor:

It is used to create new path objects that define a traffic route. ATM, Frame Relay, MPLS and other logical connections that use a protocol model can use paths to route traffic.

The Probe Editor:

The statistics to be collected are specified in this editor. Different types of statistics can be collected using different probes, including node statistics, link statistics, global statistics and various types of animation statistics.

The Simulation Sequence editor:

Additional simulation constraints can be specified in the Simulation Sequence Editor. Simulation sequences are represented by simulation icons, which control the run-time characteristics of the simulation.

The Analysis Tool:

Additional features such as creating scalar graphs, saving analysis configurations and data for later viewing, and defining templates can be performed using the Analysis Tool.

It is clearly observed from the graphs that in the early stages of the simulation there are large deviations. These are due to the sensitivity of the averages to the very small number of samples collected; as the number of samples increases towards the end of the run, the average values stabilize.

From the simulation, the following quantities can be calculated and the results validated: the mean arrival rate λ, the mean service time 1/μ, the total service rate μC, the server capacity C and the mean delay W, which can be obtained from the queue delay graph. The load-to-capacity ratio ρ and the average queue size (mean queue length) can be measured from the queue size graph. Conclusions from these graphs should only be drawn once the system has reached stability.
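For completeness, the standard queuing relations used in this validation are summarised below, with the symbols as defined above; in particular, Little's law links the mean queue length read from the queue size graph to the mean delay read from the queue delay graph.

\[
\rho = \frac{\lambda}{C\mu}, \qquad
L = \lambda W \ \text{(Little's law)}, \qquad
L_{M/M/1} = \frac{\rho}{1-\rho}, \qquad
W_{M/M/1} = \frac{1}{\mu-\lambda},
\]
where $\lambda$ is the mean arrival rate, $1/\mu$ the mean service time, $C$ the number of servers, $L$ the mean number in the system and $W$ the mean delay.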

3.2 Modelling and simulation of two M/M/1 systems as one machine

To improve the efficiency and availability of the resources and applications of a system, virtualization is used. A queuing model is used to represent a computer system in order to evaluate its performance. Here, two M/M/1 queuing models are considered for virtualization into a single machine.

Case 1

For server 1, let λ1 and μ1 be the mean arrival rate and mean service rate respectively, and for server 2 let λ2 and μ2 be the mean arrival rate and mean service rate. Using simulation techniques, the performance measures mean queue length, throughput, utilization and response time are calculated. This is the case with no virtualization; the results are presented in Chapter 4.

Case 2:

In this case the two M/M/1 servers run on a single machine, under the following conditions: for server 1 the mean arrival rate is λ1 and the mean service rate is less than μ1, and for server 2 the mean arrival rate is λ2 and the mean service rate is less than μ2.

For the two cases, the performance measures mean queue length, throughput, utilization and response time are calculated using simulation techniques; the results are compared between the two cases and also against the analytical results. A scenario is considered in which the virtualized mean service rate is μ/2: the performance measures are calculated for mean service rate μ (no virtualization) and again, for the second queue, for mean service rate μ/2. For different mean arrival rates and mean service rates, the performance measures are calculated, tabulated in the next chapter and clearly explained.
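As a point of reference for the tables in Chapter 4, the analytical values for the no-virtualization case follow directly from the M/M/1 formulas; the virtualized case is left to the analytical model described in the report. The sketch below is a minimal illustration, assuming the measures are defined as P0 = 1 - ρ, utilization = ρ, mean queue length = ρ/(1 - ρ), throughput γ = λ and response time = 1/(μ - λ); with λ = 1 and μ = 4.75 it reproduces the no-virtualization analytical column of Table 4.14.

#include <iostream>

// Analytical M/M/1 measures for the no-virtualization case (assumed
// definitions: P0 = 1 - rho, utilization = rho, mean queue length =
// rho/(1 - rho), throughput = lambda, response time = 1/(mu - lambda)).
// Valid only while lambda < mu.
struct Measures { double p0, utilization, meanQueueLength, throughput, responseTime; };

Measures mm1Measures(double lambda, double mu) {
    double rho = lambda / mu;
    return {1.0 - rho, rho, rho / (1.0 - rho), lambda, 1.0 / (mu - lambda)};
}

int main() {
    Measures m = mm1Measures(1.0, 4.75);   // sigma = 1 job/sec, mue = 4.75 jobs/sec
    std::cout << "P0=" << m.p0                      // 0.789474
              << " util=" << m.utilization          // 0.210526
              << " MQL=" << m.meanQueueLength       // 0.266667
              << " gamma=" << m.throughput          // 1
              << " W=" << m.responseTime << "\n";   // 0.266667
    return 0;
}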

Chapter 4

Testing, Results and Discussions

4.1 Simulation Results and Graphical representation:

To evaluate the performance of the model presented and to show the effectiveness of the simulation technique, systems with 1, 5, 8 and 10 servers are considered, with an infinite queue and the other parameters varied as stated below.

Case 1:

Arrival rate (jobs/sec)   Mean queue length (analytical)   Mean queue length (OPNET simulation)
1                         0.11111                          0.12
2                         0.25                             0.25
3                         0.4285714                        0.42
4                         0.66666                          0.65
5                         1.0                              1.05
6                         1.5                              1.5
7                         2.33333                          2.35
8                         4.0                              4.1
9                         9.0                              9.05

Table 4.1 Simulation results for the M/M/1 system (service rate 10 jobs/sec)

From the graph, one can see that as the arrival rate of jobs increases, the mean queue length (total number of jobs in the system) increases, because it is a single-server system. One can also see that the system becomes unstable when the arrival rate of jobs is very high relative to the service rate of the single server. The relationship between mean queue length, mean arrival rate and the service rate of the servers in a multi-server system is shown in Case 2 [14].
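This observation is consistent with the M/M/1 formula quoted earlier: the mean number in the system grows without bound as the offered load approaches the capacity of the single server, which matches the jump between the last two rows of Table 4.1.

\[
L = \frac{\rho}{1-\rho}, \qquad \rho = \frac{\lambda}{\mu} < 1,
\]
so with $\mu = 10$ jobs/sec, $\lambda = 8$ gives $L = 0.8/0.2 = 4$ and $\lambda = 9$ gives $L = 0.9/0.1 = 9$; as $\lambda$ approaches $\mu$, $L$ grows without bound.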

Case 2:

The relationship between mean queue length and arrival rate for a multi-server system with 5 servers is shown in Figure 4.2, with the arrival rate of the jobs varying and each server serving at the rate of 20 jobs per second. Table 4.2 gives the mean queue length values obtained by both the analytical and the simulation methods for varying job arrival rates.

Arrival rate (jobs/sec)   Mean queue length (analytical)   Mean queue length (OPNET simulation)
10                        0.500002                         0.5
20                        1.000958                         1
30                        1.508631                         1.5
40                        2.039801                         2.05
50                        2.630371                         2.65
60                        3.354227                         3.35
70                        4.381623                         4.4
80                        6.21645                          6.2
90                        11.36244                         11.4

Table 4.2 Simulation results for the M/M/5 system (service rate 20 jobs/sec)

From Figure 4.2, as the number of servers increases, the system tends to be stable. The same is observed in the case of the 8-server system: Table 4.3 gives the mean queue length values for varying job arrival rates with each server serving 30 jobs per second, and the corresponding mean queue length versus mean arrival rate graph is shown in Figure 4.3. In this case too, as the number of servers increases further, the system moves towards stability.

Arrival rate (jobs/sec)   Mean queue length (analytical)   Mean queue length (OPNET simulation)
10                        0.333                            0.35
20                        0.6667                           0.7
30                        1.0                              1.1
40                        0.66666                          0.65
50                        1.33                             1.35
60                        2.003                            2.05
80                        2.6699                           2.65
100                       3.3498                           3.35
110                       3.6988                           3.75
140                       4.8386                           4.8
160                       5.775                            5.75
180                       7.0709                           7.0
200                       9.3300                           9.4
220                       15.556722                        15.5
230                       27.659                           27.7

Table 4.3 Simulation results for the M/M/8 system (service rate 30 jobs/sec)

4.2 Comparing OPNET and Analytical Results

To show the effectiveness of OPNET, systems with 1, 5 and 10 servers are considered, with an infinite queue and the other parameters varied as stated below. The mean queue length is calculated in each case for varying parameters such as the mean arrival rate of jobs. The OPNET results are then compared with the analytical results, and the difference between them is reported as the error. Graphs have been plotted to show the effectiveness of OPNET.

Case 1

Arrival rate (jobs/sec)   Mean queue length (analytical)   Mean queue length (OPNET simulation)   Error (%)
1                         0.11111                          0.12                                   8.108108
2                         0.25                             0.25                                   0
3                         0.4285714                        0.42                                   1.983664
4                         0.66666                          0.65                                   2.490249
5                         1.0                              1.05                                   5
6                         1.5                              1.5                                    0
7                         2.33333                          2.35                                   0.858369
8                         4.0                              4.1                                    2.5
9                         9.0                              9.05                                   0.555556

Table 4.4 Simulation results for the M/M/1 system and the difference in the results (service rate 10 jobs/sec)

From Figure 4.4, it is very clear that the difference between the two performance models is negligible, which clearly shows the effectiveness of the OPNET simulation.

Case 2:

A multi-server system, M/M/10, is considered. The relationship between mean queue length and arrival rate for this multi-server system is shown in Figure 4.5, with the arrival rate of the jobs varying and each server serving at the rate of 20 jobs per second. The effectiveness of OPNET Modeler is clearly seen from the figure. Table 4.5 shows the differences between the OPNET and analytical results.

Arrival rate (jobs/sec)   Mean queue length (analytical)   Mean queue length (OPNET simulation)   Error (%)
10                        0.5                              0.525                                  4.761905
20                        1                                1.015                                  1.477831
30                        1.500001                         1.50004                                0.002618
40                        2.000012                         2.04                                   1.960199
50                        2.500096                         2.505                                  0.195773
60                        3.000496                         3.08                                   2.581299
70                        3.501901                         3.55                                   1.354895
80                        4.005876                         4.1                                    2.295696
90                        4.5154                           4.55                                   0.76044
100                       5.0361                           5.1                                    1.252941
110                       5.5767                           5.67                                   1.645503
120                       6.151949                         6.12                                   0.522039
130                       6.7854                           6.8                                    0.214706
140                       7.517373                         7.5                                    0.231638
150                       8.419834                         8.35                                   0.836331
160                       9.636706                         9.6                                    0.382357
170                       11.502                           11.5                                   0.017391
180                       15.01                            14.95                                  0.401338
190                       25.18613                         25                                     0.744504

Table 4.5 Simulation results for the M/M/10 system and the difference in the results (service rate 20 jobs/sec)

From Table 4.5, it is very clear that the difference between the two performance models is negligible, which clearly shows the effectiveness of the OPNET simulation model. The same is shown in Figure 4.5.

4.3 Simulation results for virtualized system

The simulation results for the performance measures at various arrival and service rates, for the cases with and without virtualization, are presented below.

Servers   Sigma (jobs/sec)   Mue (jobs/sec)   P0     Utilization   MQL (jobs)   Gamma (jobs/sec)   Response time (sec)
1         1                  4.75             0.8    0.23          0.265        1                  0.28
1         1                  4.75             0.8    0.23          0.265        1                  0.28

Table 4.6 Performance measures for no virtualization (sigma = 1 and mue = 4.75).

Servers   Sigma (jobs/sec)   Mue (jobs/sec)   P0       Utilization   MQL (jobs)   Gamma (jobs/sec)   Response time (sec)
1         1                  2.375            0.7333   0.465         0.3911       0.905              0.6222
1         1                  2.375            0.7333   0.465         0.3911       0.905              0.6222

Table 4.7 Performance measures for virtualized servers (sigma = 1 and mue = 4.75).

Servers   Sigma (jobs/sec)   Mue (jobs/sec)   P0     Utilization   MQL (jobs)   Gamma (jobs/sec)   Response time (sec)
1         2                  5.75             0.66   0.356         0.545        2                  0.266667
1         2                  5.75             0.66   0.356         0.545        2                  0.266667

Table 4.8 Performance measures for no virtualization (sigma = 2 and mue = 5.75).

Servers   Sigma (jobs/sec)   Mue (jobs/sec)   P0       Utilization   MQL (jobs)   Gamma (jobs/sec)   Response time (sec)
1         2                  2.875            0.4555   0.8           1.45         1.75               0.9666
1         2                  2.875            0.4555   0.8           1.45         1.75               0.9666

Table 4.9 Performance measures for virtualized servers (sigma = 2 and mue = 5.75).

Servers   Sigma (jobs/sec)   Mue (jobs/sec)   P0     Utilization   MQL (jobs)   Gamma (jobs/sec)   Response time (sec)
1         3                  6.75             0.56   0.46          0.8          3                  0.266667
1         3                  6.75             0.56   0.46          0.8          3                  0.266667

Table 4.10 Performance measures for no virtualization (sigma = 3 and mue = 6.75).

Servers   Sigma (jobs/sec)   Mue (jobs/sec)   P0    Utilization   MQL (jobs)   Gamma (jobs/sec)   Response time (sec)
1         3                  3.375            0.2   0.96          6.88         2.75               2.44
1         3                  3.375            0.2   0.96          6.88         2.75               2.44

Table 4.11 Performance measures for virtualized servers (sigma = 3 and mue = 6.75).

Servers   Sigma (jobs/sec)   Mue (jobs/sec)   P0     Utilization   MQL (jobs)   Gamma (jobs/sec)   Response time (sec)
1         4                  8.75             0.56   0.48          0.855        4                  0.222
1         4                  8.75             0.56   0.48          0.855        4                  0.222

Table 4.12 Performance measures for no virtualization (sigma = 4 and mue = 8.75).

Servers   Sigma (jobs/sec)   Mue (jobs/sec)   P0      Utilization   MQL (jobs)   Gamma (jobs/sec)   Response time (sec)
1         4                  4.375            0.166   0.9898        9.11         3.744              2.477
1         4                  4.375            0.166   0.9898        9.11         3.744              2.477

Table 4.13 Performance measures for virtualized servers (sigma = 4 and mue = 8.75).

From the tabulated results above, it is very clear that the probability of the system being empty (P0) decreases when the servers are virtualized. The results also clearly show that the utilization almost doubles in the case of virtualized servers, that the mean queue length of the system increases considerably after virtualization, and that the response time also increases for the virtualized servers. Hence, one can draw the general conclusion that virtualization can be chosen to improve the utilization of the system and to decrease the probability of the system being idle. These results can be analysed further for higher arrival and service rates.

4.4 Comparing Simulation and Analytical results for virtualized systems

To compare the simulation and analytical modelling results for the virtualized machine, consider the same scenario in which two M/M/1 queues are virtualized as a single machine.

Performance measure        N.V analytical   N.V simulation   Error      V analytical   V simulation   Error
Sigma (jobs/sec)           1                1                           1              1
Mue (jobs/sec)             4.75             4.75                        2.375          2.375
P0 (probability)           0.789474         0.8              1.39417    0.733333       0.7333         0.0045
Utilization                0.210526         0.23             9.004739   0.46222        0.465          0.601445
Mean queue length (jobs)   0.266667         0.265            0.749064   0.389495       0.3911         0.412072
Gamma (jobs/sec)           1                1                           0.902222       0.905          0.307906
Response time              0.266667         0.28             4.868914   0.614992       0.6222         1.172048

Table 4.14 Comparison between analytical and simulation results for virtualization (sigma = 1 and mue = 4.75).

(N.V denotes no virtualization and V denotes virtualization)

The results in Tables 4.14 and 4.15 show that the difference between the values obtained by the analytical and simulation methods is negligible and can therefore be ignored. The results also clearly show that the utilization almost doubles in the case of virtualized servers and that the mean queue length of the system increases considerably after virtualization. Hence, to improve the utilization of the system, and to decrease the probability of the system being idle, one can opt for virtualization.


Performance measure        N.V analytical   N.V simulation   Error      V analytical   V simulation   Error
Sigma (jobs/sec)           4                4                           4              4
Mue (jobs/sec)             8.75             8.75                        4.375          4.375
P0 (probability)           0.56             0.5              10.71429   0.166          0.2            20.48193
Utilization                0.48             0.43             10.41667   0.9898         0.9666         2.343908
Mean queue length (jobs)   0.855            0.76             11.1111    9.11           8.9            2.305159
Gamma (jobs/sec)           4                4                           3.744          3.6            3.846154
Response time              0.222            0.244            9.90991    2.477          2.52           1.735971

Table 4.15 Comparison between analytical and simulation results for virtualization (sigma = 4 and mue = 8.75).

(N.V denotes no virtualization and V denotes virtualization)

Conclusion:

Appendices:

A. Source code for simulation of the M/M/1 queue

The source code for the simulation of an M/M/1 queue is given below. It was provided by Dr. O. Gemikonakli for comparing the analytical and simulation results of the M/M/1 queue.

1. Processes.h

#include <iostream>

#include <stdio.h>

#include <stdlib.h>

#include <math.h>

#include <fstream>

#include <string.h>

#include <conio.h>

#include <time.h>

#include<process.h>

#include<dos.h>

using namespace std;

class Processes

{

public:

double arrivalTime;

double serviceTime;

double serviceStartTime;

double serviceEndTime;

public:

Processes();

//Processes(&p);

double Uniform();

double Exponential(double mean);

};

2. Processes.cpp

#include "Processes.h"

Processes::Processes()

{

arrivalTime = 0.0;

serviceTime = 0.0;

serviceStartTime = 0.0;

serviceEndTime = 0.0;

}

/*

Processes(&p)

{

serviceEndTime = p.serviceEndTime;

}

*/

double Processes::Uniform()

{

double u;

u=0.0;

do{

u = (double)rand()/(double)RAND_MAX;

}while(u==0.0);

return u;

}

double Processes::Exponential(double mean)

{
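// Inverse-transform sampling: for U ~ Uniform(0,1), -log(U)/mean is
// exponentially distributed. Note that the parameter is used as a rate here,
// so the returned samples have mean 1/mean.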

return -log(Uniform())/mean;

}

3. Main.cpp

// Simulation of M/M/1 – No break-downs

#include "Processes.h"

#define NbOfJobs pow(2,16)
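// NbOfJobs: number of jobs to simulate (pow(2,16) = 65536).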

Processes currentJob;

Processes previousJob;

Processes p;

double meanArrivalRate;

double meanServiceRate;

double waitingTime;

double systemTime;

double idleTime;

void init();

double maximumOf(double, double);

void checkResults(int);

void MM1();

void matlab(); // MM1 using matlab's routine

int main()

{

init();

//MM1();

matlab();

cout << endl;

return 0;

}

void init()

{

srand((unsigned)time(NULL));//Seed the random-number generator with current time so that the numbers will be different every time we run.

meanArrivalRate = 2.0;

meanServiceRate = 5.0;

idleTime = 0.0;

waitingTime = 0.0;

systemTime = 0.0;

}

double maximumOf(double a, double b)

{

return (a>b ? a:b);

}

void checkResults(int i)

{

cout << "\n" << i << "\t" << currentJob.arrivalTime << "\t" << currentJob.serviceTime

<< "\t" << currentJob.serviceStartTime << "\t" << currentJob.serviceEndTime << endl;

}
