Properties Of Distributed Systems Information Technology Essay
A distributed system consists of multiple autonomous computers that communicate through a computer network. The computers interact with each other in order to achieve a common goal. A computer program that runs in a distributed system is called a distributed program, and distributed programming is the process of writing such programs.
Distributed computing also refers to the use of distributed systems to solve computational problems. In distributed computing, a problem is divided into many tasks, each of which is solved by one computer.
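To make the task-splitting idea concrete, the following minimal Go sketch models each "computer" as a goroutine that solves one sub-task and reports its result over a channel; the task list and the squaring computation are invented for illustration.

```go
// A minimal sketch of dividing a problem into tasks: each "computer" is
// modelled as a goroutine that receives one task, solves it, and sends the
// result back over a channel.
package main

import "fmt"

func main() {
	tasks := []int{1, 2, 3, 4, 5, 6, 7, 8} // the problem, split into tasks
	results := make(chan int, len(tasks))

	// One worker per task; in a real system each would run on its own node.
	for _, t := range tasks {
		go func(n int) {
			results <- n * n // the per-task computation
		}(t)
	}

	sum := 0
	for range tasks {
		sum += <-results // combine the partial results
	}
	fmt.Println("combined result:", sum)
}
```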
The word distributed in terms such as “distributed system”, “distributed programming”, and “distributed algorithm” originally referred to computer networks where individual computers were physically distributed within some geographical area. The terms are nowadays used in a much wider sense, even referring to autonomous processes that run on the same physical computer and interact with each other by message passing.
While there is no single definition of a distributed system, the following defining properties are commonly used:
There are several autonomous computational entities, each of which has its own local memory.
The entities communicate with each other by message passing.
A distributed system may have a common goal, such as solving a large computational problem. Alternatively, each computer may have its own user with individual needs, and the purpose of the distributed system is to coordinate the use of shared resources or provide communication services to the users.
Other typical properties of distributed systems include:
The system has to tolerate failures in individual computers.
The structure of the system (network topology, network latency, number of computers) is not known in advance, the system may consist of different kinds of computers and network links, and the system may change during the execution of a distributed program.
Each computer has only a limited, incomplete view of the system. Each computer may know only one part of the input.
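The limited-local-view property can be illustrated with a small sketch in which every node knows only its own part of the input and a link to its successor in a ring, yet the system as a whole computes the global maximum purely by message passing; the ring topology and the specific values are assumptions made for the example.

```go
// Each node below knows only its own input value and the channel to its
// successor; the ring as a whole still computes the global maximum.
package main

import "fmt"

func main() {
	values := []int{7, 3, 9, 1, 5} // each node's private part of the input
	n := len(values)

	// links[i] carries messages into node i; together they form a ring.
	links := make([]chan int, n)
	for i := range links {
		links[i] = make(chan int, 1)
	}
	result := make(chan int)

	for i := 0; i < n; i++ {
		go func(i int) {
			local := values[i] // the only input this node knows
			if i == 0 {
				links[1] <- local    // node 0 starts the round with its own value
				result <- <-links[0] // and reports whatever comes back
				return
			}
			m := <-links[i]
			if local > m {
				m = local
			}
			links[(i+1)%n] <- m // forward the running maximum to the successor
		}(i)
	}
	fmt.Println("global maximum:", <-result)
}
```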
PROPERTIES OF DISTRIBUTED SYSTEMS
So far the focus has been on designing a distributed system that solves a given problem. A complementary research problem is studying the properties of a given distributed system.
The halting problem is an analogous example from the field of centralised computation: we are given a computer program and the task is to decide whether it halts or runs forever. The halting problem is undecidable in the general case, and naturally understanding the behaviour of a computer network is at least as hard as understanding the behaviour of one computer.
However, there are many interesting special cases that are decidable. In particular, it is possible to reason about the behaviour of a network of finite-state machines. One example is telling whether a given network of interacting (asynchronous and non-deterministic) finite-state machines can reach a deadlock. This problem is PSPACE-complete,[39] i.e., it is decidable, but it is not likely that there is an efficient (centralised, parallel or distributed) algorithm that solves the problem in the case of large networks.
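The following toy sketch hints at why such questions are decidable for finite-state machines: the joint state space of the composed system is finite, so a reachable deadlock (a joint state with no enabled transition) can be found by exhaustive search, albeit at a cost consistent with the PSPACE-hardness of the general problem. The two interacting machines and their message alphabet are invented for illustration.

```go
// Exhaustive reachability search over the joint states of two asynchronous
// machines connected by a one-slot message buffer.
package main

import "fmt"

// A joint state: the local state of each machine plus the bounded channel.
type state struct {
	m1, m2 string
	ch     string // message in transit from m1 to m2, "" when empty
}

// successors enumerates every enabled transition of the composed system.
func successors(s state) []state {
	var next []state
	// Machine 1: from "idle" it may send a request if the channel is free.
	if s.m1 == "idle" && s.ch == "" {
		next = append(next, state{"waiting", s.m2, "req"})
	}
	// Machine 2: from "ready" it may consume the request...
	if s.m2 == "ready" && s.ch == "req" {
		next = append(next, state{s.m1, "busy", ""})
	}
	// ...but from "busy" it never replies, so {"waiting","busy",""} has no
	// successors, i.e. it is a deadlock.
	return next
}

func main() {
	start := state{"idle", "ready", ""}
	seen := map[state]bool{start: true}
	queue := []state{start}
	for len(queue) > 0 {
		s := queue[0]
		queue = queue[1:]
		succ := successors(s)
		if len(succ) == 0 {
			fmt.Println("reachable deadlock:", s)
			return
		}
		for _, t := range succ {
			if !seen[t] {
				seen[t] = true
				queue = append(queue, t)
			}
		}
	}
	fmt.Println("no reachable deadlock")
}
```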
SECURITY ISSUES IN ADAPTIVE DISTRIBUTED SYSTEMS
In contemporary society, distributed systems have a significant impact on how communication between social, industrial and governmental institutions is achieved. Dealing with the complexity, heterogeneity and dynamics of distributed systems is among the main concerns of the software industry. In the Internet era, the distribution of information and services across different sites is a common and dominant scenario. Hence, accessing information and services on remote sites requires a high level of system quality: acceptable response time (at least “near real-time”) and security mechanisms. These aspects require inherent adaptation of the system to changes in the environment. In the case of ADSs, the challenge of maintaining system quality is even greater. In general, security issues in distributed information systems, whether adaptive or not, are already a serious concern.
There are many types of threats, among them those occurring during communication and those in the form of unauthorized attempts to access stored information. Solutions proposed to address these problems in distributed systems may contribute to the implementation of security mechanisms in ADSs. On the other hand, if a token ring is used to achieve mutual exclusion in data communication, then the loss of the token might result from unauthorized monitoring of the token, a risk that follows directly from the distributed system being adaptive and having a monitoring component.
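As a point of reference, the sketch below shows a token ring for mutual exclusion in its simplest form, with goroutines standing in for nodes: only the current holder of the single token enters its critical section, so an intruder who observes and removes the token can block every node. The ring size and number of rounds are illustrative.

```go
// A minimal token-ring mutual exclusion sketch.
package main

import (
	"fmt"
	"sync"
)

func main() {
	const nodes = 3
	const rounds = 2

	// ring[i] delivers the token to node i.
	ring := make([]chan struct{}, nodes)
	for i := range ring {
		ring[i] = make(chan struct{}, 1)
	}

	var wg sync.WaitGroup
	for i := 0; i < nodes; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			for r := 0; r < rounds; r++ {
				<-ring[i] // wait for the token
				fmt.Printf("node %d in critical section (round %d)\n", i, r)
				ring[(i+1)%nodes] <- struct{}{} // pass the token along the ring
			}
		}(i)
	}

	ring[0] <- struct{}{} // inject the single token; losing it blocks everyone
	wg.Wait()
}
```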
Moreover, data resubmission might be requested by authorized parties that could not receive the data. Such a request might also come from malicious intruders who request resubmission in order to obtain a copy of the data.
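One hedged way to distinguish such requests, not prescribed by the text above, is to have authorized parties authenticate each resubmission request with a keyed MAC; the sketch below uses an HMAC over the request with a shared secret, and the key value and request format are assumptions for illustration.

```go
// Verifying resubmission requests with an HMAC before honouring them.
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"fmt"
)

// sign computes the tag an authorized requester would attach.
func sign(key, request []byte) []byte {
	mac := hmac.New(sha256.New, key)
	mac.Write(request)
	return mac.Sum(nil)
}

// verify checks the tag in constant time before resending any data.
func verify(key, request, tag []byte) bool {
	return hmac.Equal(sign(key, request), tag)
}

func main() {
	key := []byte("shared-secret")      // illustrative only
	req := []byte("resend message #42") // the resubmission request

	tag := sign(key, req)
	fmt.Println("authorized request accepted:", verify(key, req, tag))

	forged := sign([]byte("wrong-key"), req) // an intruder without the key
	fmt.Println("forged request accepted:  ", verify(key, req, forged))
}
```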
The kind of environmental changes that can be monitored in ADSs include, but are not limited to, processor and link failures, changes in communication patterns and frequency, changes in failure rates, and changed application requirements.
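A possible representation of these changes, assumed here purely for illustration, is a small set of typed events that the adaptation subsystem consumes:

```go
// Typed change events mirroring the categories listed above.
package main

import (
	"fmt"
	"time"
)

type ChangeKind int

const (
	ProcessorFailure ChangeKind = iota
	LinkFailure
	CommunicationPatternChange
	FailureRateChange
	RequirementChange
)

type ChangeEvent struct {
	Kind     ChangeKind
	Source   string // which node or link reported the change
	Detected time.Time
	Detail   string
}

func main() {
	events := make(chan ChangeEvent, 1)
	events <- ChangeEvent{LinkFailure, "link-3", time.Now(), "no heartbeat for 5s"}
	e := <-events
	fmt.Printf("adapting to change kind %d from %s: %s\n", e.Kind, e.Source, e.Detail)
}
```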
Security metrics indicate the degree to which security goals such as data confidentiality are being met; they propose actions that should be taken to improve the overall security program, identify the level of risk in not taking a given action, and hence provide guidance in prioritizing actions. They also indicate the effectiveness of the various components of a security program. Developing effective security metrics programs has proven to be very challenging. A number of factors contribute to this: collecting the necessary data is difficult, and there are no well-established, standardized guidelines. Swanson et al. (2003) identified elements that must be considered in defining effective security metrics: metrics must yield quantifiable information, supporting data must be readily obtainable, only repeatable processes should be considered for measurement, and metrics must enable tracking of performance. Voas et al. (1996) propose a security assessment methodology, called adaptive vulnerability analysis (AVA), which provides a relative measure of software security.
The methodology is based on measuring security weaknesses in terms of a predetermined set of threats that are frequently encountered. The resulting metrics may vary with different sets of threats, and hence the methodology is called adaptive. Its major advantages include, among others, its ability to be customized to application-specific classes of intrusions and the fact that it measures dynamic run-time information. The fact that it is based on a predetermined set of threats is among the major limitations of AVA. Payne (2001) proposes a guideline that should be closely followed in the development of a security metrics program.
The guideline consists of several steps: clearly defining security goals and objectives, deciding what metrics to generate and choosing strategies for generating them, creating an action plan, and establishing a formal program review cycle. Following this guidance enables us to clarify the why, what and how of developing security metrics. In the sequel, we focus on the metrics that should be generated to quantify the level of security threat that could arise from monitoring a target system to achieve the level of adaptation necessary to maintain quality of service.
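As a hypothetical example of such a metric, the sketch below computes, per review period, the fraction of monitoring-channel accesses made by unauthenticated parties; the data, the threshold and the period names are invented, but the shape (quantifiable, repeatable, trackable over time) matches the requirements listed above.

```go
// A simple, trackable security metric computed over review periods.
package main

import "fmt"

type Period struct {
	Name           string
	TotalAccesses  int
	UnauthAccesses int
}

// exposure returns the metric value for one review period.
func exposure(p Period) float64 {
	if p.TotalAccesses == 0 {
		return 0
	}
	return float64(p.UnauthAccesses) / float64(p.TotalAccesses)
}

func main() {
	history := []Period{
		{"Q1", 1200, 30},
		{"Q2", 1500, 18},
	}
	const threshold = 0.02 // action is prioritized above this level
	for _, p := range history {
		v := exposure(p)
		fmt.Printf("%s: exposure=%.3f action-needed=%v\n", p.Name, v, v > threshold)
	}
}
```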
ADAPTIVE DISTRIBUTED SYSTEMS
Distributed systems that can evolve their behavior based on changes in their environments are known as Adaptive Distributed Systems (ADSs). Adaptation usually takes place on different sites in a distributed system and needs to be coordinated. Adaptive systems monitor and evaluate their environments and can adapt their own behavior when the environment changes. Adaptive behavior, on the other hand, is the field of science in which the underlying mechanisms of the adaptive behavior of animals, software agents, robots and other adaptive systems are investigated.
The results from adaptive behavior research are exploited for building artificially intelligent adaptive systems. In this case, we envision distributed systems within the context of artificially intelligent adaptive systems, and we therefore believe that research progress in adaptive behavior will affect research in ADSs. That is, the monitoring, change detection and behavior adaptation components of an adaptive distributed system will become more intelligent over time. An ADS comes to know better what is happening in its environment by detecting and evaluating changes in the environment and adjusting its actions to those changes more intelligently.
However, the more intelligent and adaptive a distributed system becomes through its monitoring and other components, the greater the risk that intruders can act more severely in the distributed environment if the monitoring component is taken over by them. In the following paragraphs, we give a brief survey of ADSs. Leonhardt et al. (1998) indicate that security is an issue wherever activity is being tracked, namely by the monitoring system they propose. For that reason, in this work we look into the levels of knowledge a monitoring system might eventually have about its environment while becoming more adaptive, and whether the level of knowledge and the properties of the knowledge being monitored would cause any security issues compared to distributed systems that are not adaptive. Russello et al. (2005) described how adaptation is done through dynamic replication for managing availability in a shared data space. The idea is that if replication is required, the middleware should offer mechanisms that allow the application developer to select from different replication policies that can subsequently be enforced at runtime.
An adaptation subsystem monitors the execution environment and detects when to switch to another replication policy automatically. The conditions monitored are the cost of communication latency and bandwidth, especially when an external monitoring subsystem is used. Silva et al. (2002) developed a generic framework for the construction of ADSs. The model is composed of three main packages. In the monitoring package, system-specific parameters, such as processor utilization, in the various hosts of the distributed system are monitored. This package informs the event detection and notification package whenever the values of such parameters change significantly. In addition, interceptors as used in the CORBA distributed system standards are inserted into the object invocation path. Each time a client invokes a method of an object, the message corresponding to this invocation is intercepted and later re-dispatched to the target object.
Using interceptors, the system can extract useful information from each method invocation, storing it in a log file for analysis by the event detection and notification package. The dynamic configuration package, depending on the type of the event, executes the appropriate algorithm defining the actions that should be taken to adapt the application to the new environment condition. As stated in (Al-Shaer 1998), a monitoring system can be used to detect and report security violations such as illegal logins or attempts at unauthorized access to files. Conversely, we argue that if the monitoring subsystem is taken over by an intruder, it can also be used to cause security violations, since an intruder then holds the very knowledge of login information and file authorizations that was needed to report illegal logins and unauthorized access attempts.
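A rough sketch of the interception-and-reconfiguration idea, using plain Go functions rather than the CORBA interceptor API, is given below; the object, method and event names are invented for illustration.

```go
// Logging interceptor plus an event-driven reconfiguration step.
package main

import (
	"fmt"
	"log"
)

// invocation is a simplified stand-in for an intercepted method call.
type invocation struct {
	object string
	method string
}

// intercept logs the call and then re-dispatches it to the target object.
func intercept(inv invocation, target func(invocation)) {
	log.Printf("intercepted %s.%s", inv.object, inv.method) // fed to event detection
	target(inv)
}

// reconfigure is the dynamic-configuration step: choose an action per event type.
func reconfigure(event string) {
	switch event {
	case "high-cpu":
		fmt.Println("migrating objects to a less loaded host")
	case "link-failure":
		fmt.Println("switching to a different replication policy")
	default:
		fmt.Println("no adaptation needed")
	}
}

func main() {
	intercept(invocation{"Account", "Withdraw"}, func(inv invocation) {
		fmt.Println("dispatched to", inv.object)
	})
	reconfigure("high-cpu")
}
```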