Machine Learning in Medical Applications

Machine Learning (ML) aims to provide computational methods for accumulating, changing, and updating knowledge in intelligent systems, and in particular learning mechanisms that help induce knowledge from examples or data. Machine learning methods are useful in cases where algorithmic solutions are not available, formal models are lacking, or the knowledge about the application domain is poorly defined.

The fact that various scientific communities are involved in ML research has led this field to incorporate ideas from different areas, such as computational learning theory, artificial neural networks, statistics, stochastic modeling, genetic algorithms, and pattern recognition. ML therefore includes a broad class of methods that can be roughly classified as symbolic or subsymbolic (numeric), according to the nature of the manipulation that takes place during learning.

2. Technical discussion

Machine Learning provides methods, techniques, and tools that can help solve diagnostic and prognostic problems in a variety of medical domains. ML is being used to analyze the importance of clinical parameters and their combinations for prognosis, e.g. prediction of disease progression, for the extraction of medical knowledge for outcomes research, for therapy planning and support, and for overall patient management. ML is also being used for data analysis, such as detecting regularities in the data by appropriately dealing with imperfect data, interpreting continuous data used in the Intensive Care Unit, and intelligent alarming that results in effective and efficient monitoring. It is argued that the successful implementation of ML methods can help integrate computer-based systems into the healthcare environment, providing opportunities to facilitate and enhance the work of medical experts and ultimately improve the efficiency and quality of medical care. Below, we summarize some major ML application areas in medicine.

Medical diagnostic reasoning is a very important application area of computer-based systems (Kralj and Kuka, 1998; Strausberg and Person, 1999; Zupan et al., 1998). In this framework, expert systems and model-based schemes provide mechanisms for generating hypotheses from patient data. For example, in expert systems, rules are extracted from the knowledge of experts. Unfortunately, in many cases experts may not know, or may not be able to formulate, what knowledge they actually use in solving their problems. Symbolic learning techniques (e.g. inductive learning by examples) are used to add learning and knowledge-management capabilities to expert systems (Bourlas et al., 1996).
Given a set of clinical cases that act as examples, learning in intelligent systems can be achieved using ML methods that are able to produce a systematic description of those clinical features that uniquely characterize the clinical conditions. This knowledge can be expressed in the form of simple rules, or often as a decision tree. A classic example of this type of system is KARDIO, which was developed to interpret ECGs (Bratko et al., 1989).
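As a minimal, hypothetical illustration of this idea (not the KARDIO system itself, and with invented clinical values), a single-attribute threshold rule can be induced from labeled cases by scanning candidate cut points and keeping the one with the fewest misclassifications:

```python
def induce_rule(cases, labels):
    """Find the single (attribute, threshold) rule of the form
    'if value >= threshold then class = 1' with the fewest errors."""
    best = None  # (errors, attribute_index, threshold)
    for j in range(len(cases[0])):
        values = sorted(set(c[j] for c in cases))
        # candidate thresholds: midpoints between consecutive distinct values
        for a, b in zip(values, values[1:]):
            t = (a + b) / 2.0
            errors = sum((c[j] >= t) != bool(y) for c, y in zip(cases, labels))
            if best is None or errors < best[0]:
                best = (errors, j, t)
    return best

# toy cases: (systolic BP, diastolic BP), label 1 = hypertensive
cases = [[120, 80], [130, 85], [160, 95], [170, 100]]
labels = [0, 0, 1, 1]
best = induce_rule(cases, labels)  # -> (0, 0, 145.0)
```

On this toy data the learner recovers the rule "systolic BP ≥ 145 → class 1" with zero training errors; real systems such as KARDIO induce far richer rule sets and decision trees.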

This approach can be extended to handle cases where there is no previous experience in the interpretation and understanding of medical data. For example, Hau and Coiera (Hau and Coiera, 1997) describe an intelligent system that takes real-time patient data obtained during cardiac bypass surgery and creates models of normal and abnormal cardiac physiology in order to detect changes in a patient's condition. Additionally, in a research setting, these models can serve as initial hypotheses that drive further experimentation.

2.1 Methodology

In this section we propose a new algorithm called REMED (Rule Extraction for MEdical Diagnostic). The REMED algorithm comprises three main steps: 1) attribute selection, 2) selection of initial partitions, and 3) rule construction.

2.1.1 Attributes Selection

For the first step, we consider that in medical practice the collection of datasets is often expensive and time-consuming. It is therefore desirable to have a classifier that can reliably diagnose with a small amount of data about the patients. In the first part of REMED we use simple logistic regression to quantify the risk of suffering the disease with respect to the increase or decrease of an attribute. We always use high confidence levels (>99%) to select attributes that are truly significant and to guarantee the construction of more precise rules. Another important aspect is that, depending on the kind of association established (positive or negative) through the odds-ratio metric, we build the syntax with which each attribute's partition will appear in the rule system. This part of the algorithm is shown at the top of figure 1.
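The statistical screening described above can be sketched as follows. This is a hedged, from-scratch illustration (simple gradient-ascent fitting and a Wald z-test, on invented data), not the authors' actual implementation:

```python
import math

def univariate_logistic(xs, ys, lr=0.1, iters=5000):
    """Fit P(class = 1 | x) = sigmoid(b0 + b1*x) by gradient ascent."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(iters):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p
            g1 += (y - p) * x
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

def wald_z(xs, ys, b0, b1):
    """Wald z-statistic for b1, via the inverse 2x2 Fisher information."""
    i00 = i01 = i11 = 0.0
    for x in xs:
        p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
        w = p * (1.0 - p)
        i00 += w
        i01 += w * x
        i11 += w * x * x
    det = i00 * i11 - i01 * i01
    return b1 / math.sqrt(i00 / det)  # se(b1) = sqrt([I^-1]_{11})

# invented attribute values and disease labels
xs = [1, 2, 3, 4, 5, 6, 7, 8]
ys = [0, 0, 0, 0, 1, 1, 1, 1]
b0, b1 = univariate_logistic(xs, ys)
odds_ratio = math.exp(b1)   # > 1 indicates a positive association
z = wald_z(xs, ys, b0, b1)  # attribute kept if |z| > 2.576 (99% confidence)
```

A positive association (odds ratio > 1) would make the attribute appear in the rules with a ≥ partition, and a negative one with ≤.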


2.1.2 Partitions Selection

The second part of REMED builds on the fact that if an attribute x has been statistically significant in the prediction of a disease, then its mean (the mean of the attribute's values) is a good candidate for the attribute's initial partition. We sort the examples by the attribute's value and, from the initial partition of each attribute, search for the next positive example (class = 1) in the direction of the established association. We then compute a new partition as the average of the found example's value and the value of its predecessor or successor. This displacement is carried out only once for each attribute. This can be seen in the middle part of figure 1.
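For a single attribute, the displacement from the mean to the first positive example can be sketched as below. This is a simplified reading of the step above; initializing the predecessor to the mean itself, and falling back to the mean when no positive example lies beyond it, are our assumptions:

```python
def select_partition(values, labels, positive_assoc=True):
    """REMED step 2 (sketch): start at the attribute mean, scan sorted
    examples in the direction of the association, and average the first
    positive example's value with that of its predecessor."""
    mean = sum(values) / len(values)
    pairs = sorted(zip(values, labels))
    if not positive_assoc:
        pairs = pairs[::-1]        # scan downward for a negative association
    prev_value = mean
    for v, lab in pairs:
        beyond = v >= mean if positive_assoc else v <= mean
        if not beyond:
            continue
        if lab == 1:
            return (v + prev_value) / 2.0
        prev_value = v
    return mean                    # no positive example beyond the mean

# mean is 3.5; first positive beyond it is 5, its predecessor is 4
partition = select_partition([1, 2, 3, 4, 5, 6], [0, 0, 0, 0, 1, 1])  # -> 4.5
```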

2.1.3 Rules Construction

In the last part of the algorithm, we build a simple rule system in the following way: if (e_{i,1} ≥ p_1) and (e_{i,2} ≤ p_2) and … and (e_{i,m} ≥ p_m) then class = 1, else class = 0, where e_{i,j} denotes the value of attribute j for example i, p_j denotes the partition for attribute j, and the relation (≥ or ≤) depends on the attribute-disease association.
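A rule system of this shape can be evaluated with one threshold test per attribute. In the sketch below, '>=' / '<=' strings encode the association established in step 1 (this encoding, and the example values, are our assumptions):

```python
def classify(example, partitions, directions):
    """Conjunction of threshold tests: class 1 only if every test passes."""
    for value, p, d in zip(example, partitions, directions):
        passed = value >= p if d == '>=' else value <= p
        if not passed:
            return 0
    return 1

# two hypothetical attributes: one positively, one negatively associated
label_sick = classify([5.0, 1.0], [4.5, 2.0], ['>=', '<='])     # both tests pass
label_healthy = classify([4.0, 1.0], [4.5, 2.0], ['>=', '<='])  # first test fails
```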

With this rule system we make a first classification. We then try to improve the accuracy of the system by increasing or decreasing the value of each partition as much as possible. For this we apply the bisection method, computing candidate partitions between the current partition of each attribute and the maximum or minimum value of the examples for that attribute. We build a temporary rule system in which the current partition is replaced by each candidate, and classify the examples again. We accept a new partition only if it diminishes the number of false positives (FP) without diminishing the number of true positives (TP). This step is repeated for each attribute until we pass the established convergence threshold of the bisection method or the current rule system can no longer decrease the number of FP (healthy persons diagnosed incorrectly). This part of the algorithm is exemplified at the bottom of figure 1.
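The refinement loop can be sketched as follows: for each attribute we bisect between the current partition and the attribute's extreme value, accepting a midpoint only when it removes false positives without losing true positives. The data and tolerance are invented for illustration:

```python
def refine_partitions(X, y, partitions, directions, tol=1e-3):
    """Sketch of REMED's bisection refinement of the partitions."""
    def classify(row):
        return int(all((v >= p) if d == '>=' else (v <= p)
                       for v, p, d in zip(row, partitions, directions)))

    def counts():
        tp = fp = 0
        for row, lab in zip(X, y):
            if classify(row) == 1:
                if lab == 1:
                    tp += 1
                else:
                    fp += 1
        return tp, fp

    tp0, fp0 = counts()                   # baseline: TP must never drop
    for j, d in enumerate(directions):
        col = [row[j] for row in X]
        lo = partitions[j]
        hi = max(col) if d == '>=' else min(col)
        while abs(hi - lo) > tol:
            mid = (lo + hi) / 2.0
            old = partitions[j]
            partitions[j] = mid
            tp, fp = counts()
            if tp >= tp0 and fp < fp0:    # accept: fewer FP, TP preserved
                fp0 = fp
                lo = mid
            else:                         # reject and shrink the interval
                partitions[j] = old
                hi = mid
    return partitions

# one attribute; initial partition 3.5 produces one false positive (x = 4)
X = [[1], [2], [3], [4], [5], [6]]
y = [0, 0, 0, 0, 1, 1]
refined = refine_partitions(X, y, [3.5], ['>='])  # -> [4.75], FP removed
```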

The goal of REMED is thus to maximize minority-class accuracy at each step: first, selecting the attributes that are strongly associated with the positive class; then stopping the search for the partition that best discriminates the two classes at the first positive example; and finally improving the accuracy of the rule system without diminishing the number of TP (sick persons diagnosed correctly).

3. Machine learning in complementary medicine

3.1 Kirlian effect – a scientific tool for studying subtle energies

The history of the so-called Kirlian effect, also known as the Gas Discharge Visualization (GDV) technique (bioelectrography is a wider term that also includes some other techniques), goes back to 1777, when G.C. Lichtenberg in Germany recorded electrographs of sliding discharge in dust created by static electricity and electric sparks. Later, various researchers contributed to the development of the technique (Korotkov, 1998b): Nikola Tesla in the USA, J.J. Narkiewich-Jodko in Russia, and Pratt and Schlemmer in Prague, until the Russian technician Semyon D. Kirlian, together with his wife Valentina, noticed that through the interaction of electric currents and photographic plates, imprints of living organisms developed on film. In 1970 hundreds of enthusiasts started to reproduce Kirlian photos, and until 1995 the research was limited to a photo-paper technique. In 1995 a new approach, based on CCD video techniques and computer processing of data, was developed by Korotkov (1998a;b) and his team in St. Petersburg, Russia. Their instrument, Crown-TV, can be used routinely, which opens practical possibilities for studying the effects of GDV.


The basic idea of GDV is to create an electromagnetic field using a high-voltage, high-frequency generator. After a threshold voltage is exceeded, the gas around the studied object ionizes and, as a side effect, quanta of light (photons) are emitted. The discharge can therefore be recorded optically by photograph, photo sensor, or TV camera. Various parameters influence the ionization process (Korotkov, 1998b): gas properties (gas type, pressure, gas content), voltage parameters (amplitude, frequency, impulse waveform), electrode parameters (configuration, distance, dust and moisture, macro and micro defects, electromagnetic field configuration), and studied-object parameters (common impedance, physical fields, skin galvanic response, etc.). The Kirlian effect is thus the result of mechanical, chemical, and electromagnetic processes and field interactions. Gas discharge acts as a means of enhancing and visualizing super-weak processes.

Due to the large number of parameters that influence the Kirlian effect, it is very difficult or impossible to control them all, so there is always an element of vagueness or stochasticity in the development of the discharge. This is one of the reasons why the technique has not yet been widely accepted in practice: results did not have high reproducibility. All explanations of the Kirlian effect interpreted fluorescence as the emanation of a biological object. Due to the low reproducibility, the opinion was widespread in academic circles that all observed phenomena were nothing but fluctuations of the corona discharge, without any connection to the studied object. With modern technology, the reproducibility became sufficient to enable serious scientific studies.

Besides non-living objects, such as water and various liquids (Korotkov, 1998b) and minerals, the most widely studied subjects are living organisms: plants (leaves, seeds, etc. (Korotkov and Kouznetsov, 1997; Korotkov, 1998b)), animals (Krashenuk et al., 1998), and of course humans. For humans, the most widely recorded are coronas of fingers (Kraweck, 1994; Korotkov, 1998b) and GDV records of blood samples (Voeikov, 1998). Principal among these are studies of the psycho-physiological state and energy of a human, diagnosis (Gurvits and Korotkov, 1998), reactions to some medicines, reactions to various substances and food (Kraweck, 1994), dental treatment (Lee, 1998), alternative healing treatments such as acupuncture, 'bioenergy', homeopathy, various relaxation and massage techniques (Korotkov, 1998b), GEM therapy, applied kinesiology and flower-essence treatment (Hein, 1999), leech therapy, etc., and even studies of GDV images after death (Korotkov, 1998a). Many studies are currently going on all over the world, and there is no doubt that the human subtle energy field, as visualized using the GDV technique, is highly correlated with a human's psycho-physiological state and can be used for diagnostics, prognostics, therapy selection, and controlling the effects of therapy.

4. Limitations

M. Schurr, from the Section for Minimal Invasive Surgery of the Eberhard-Karls-University of Tuebingen, gave an invited talk on endoscopic techniques and the role of ML methods in this context. He referred to current limitations of endoscopic techniques, which stem from the restricted access to the human body that endoscopy entails. The technical limitations include restrictions on the manual capability to manipulate human organs through a small access, limitations in visualizing tissues, and restrictions in obtaining diagnostic information about tissues. To alleviate these problems, international technology development focuses on new manipulation techniques involving robotics and intelligent sensor devices for more precise endoscopic interventions. It is acknowledged that this new generation of sensor devices contributes to the development and spread of intelligent systems in medicine by providing ML methods with data for further processing. Current applications include suturing in cardiac surgery and other clinical fields.

Several research groups place particular focus on the development of new endoscopic visualizing and diagnostic tools. In this context, the potential of new imaging principles, such as fluorescence imaging or laser scanning microscopy, combined with machine learning methods is very high. The clinical idea behind these developments is the early detection of malignant lesions at stages where local endoscopic therapy is still possible. Technical developments in this field are very promising; however, clinical results are still pending, and ongoing research will have to clarify the real potential of these technologies for clinical use.


Moustakis and Charissis (Moustakis and Charissis, 1999) surveyed the role of ML in medical decision making and provided an extensive literature review of ML applications in medicine that could be useful to practitioners interested in applying ML methods to improve the efficiency and quality of medical decision-making systems. Their work stressed the need to move away from accuracy measures as the sole evaluation criterion for learning algorithms. The issue of comprehensibility, i.e. how well the medical expert can understand and thus use the results of a system that applies ML methods, is very important and should be carefully considered in the evaluation.

5. Improvement & Conclusion

The workshop gave researchers working in the ML field the opportunity to get an overview of current work on ML in medical applications and to gain understanding and experience in this area. Furthermore, young researchers had the opportunity to present their ideas and receive feedback from other researchers in the area. The participants acknowledged that the diffusion of ML methods into medical applications can be very effective in improving the efficiency and quality of medical care, but it still presents problems related to both theory and applications.

From a theoretical point of view, it is important to enhance our understanding of ML algorithms and to provide mathematical justifications for their properties, in order to answer fundamental questions and acquire useful insight into the performance and behavior of ML methods.

On the other hand, some major issues concerning the process of learning knowledge in practice are the visualization of the learned knowledge, the need for algorithms that extract understandable rules from neural networks, and algorithms for identifying noise and outliers in the data. The participants also mentioned other problems that arise in ML applications and should be addressed, such as the control of overfitting and the scaling properties of ML methods, so that they can be applied to problems with large datasets and high-dimensional input (feature) and output (class-category) spaces.

A recurring theme in the participants' recommendations was the need for comprehensibility of the learning outcome, relevance of rules, criteria for selecting ML applications in the medical context, integration with patient records, and a description of the appropriate level and role of intelligent systems in healthcare. These issues are very complex, as technical, organizational, and social issues become intertwined. Previous research and experience suggest that the successful implementation of information systems (e.g., Anderson, 1997; Pouloudi, 1999), and of decision support systems in particular (e.g., Lane et al., 1996; Ridderikhoff and van Herk, 1999), in the area of healthcare relies on the successful integration of the technology with the organizational and social context within which it is applied. Medical information is vital for the diagnosis and treatment of patients, and therefore the ethical issues presented during its life cycle are critical. Understanding these issues becomes imperative as such technologies become pervasive. Some of these issues are system-centered, i.e., related to inherent problems of ML research. However, it is humans, not systems, who can act as moral agents; it is humans who can identify and deal with ethical issues. Therefore, it is important to study the emerging challenges and ethical issues from a human-centered perspective, considering the motivations and ethical dilemmas of the researchers, developers, and medical users of ML methods in medical applications.
