Health and Social Care Essays – red dot system

Introduction

In the frequently frantic and universally pressured world of this country’s hospital A&E departments, mistakes get made. This is a fact of life in any human endeavour. Until recently, the blame culture prevalent within the NHS made certain defensive behaviour patterns among staff almost endemic (Vincent, 1994). It is a characteristic of professional life that you have to take responsibility for your actions: if you take the wrong action, you will be criticised. This defensive attitude was, to a large extent, fostered by the professional health insurers who, worried about paying out large quantities of their funds, demanded secrecy, no apology and a defensive stance from those that they insured (Clinical Services Committee, 1983).

It became apparent to those in a position to take an overview that this state of affairs was actually in nobody’s interest (Barley, 2000). Healthcare professionals were practising defensive medicine, patients were being kept in the dark when mistakes were made and, most important of all, because problems were not examined in an open and constructive way, productive lessons were not learnt. All that was happening was that defensive stances were becoming entrenched.

The advent of the no-blame culture is helping to erode these stances and attitudes (Aldridge and Freeland, 2000). It is allowing the development of practices which may improve the efficiency of our hospitals and provide the patient with a better service.

The red dot system arose as a product of both of these factors. The pressure on A&E department staff is often relentless. The structure of the system means that many decisions are taken by comparatively inexperienced staff members, who are often not the most appropriate people for the decision that needs to be taken. Huge numbers of X-Rays are seen by junior doctors, and initial treatment decisions are made before a senior specialist has a chance to review them. It follows, on any common-sense analysis of the situation, that any measure which could help the decision-making process should be welcomed.

This argument is taken further by Vincent et al. (1988). In the days before the red dot system was seriously considered, Vincent and his colleagues carried out a study of the radiological errors made by junior hospital doctors. They found an error rate of 35% when the X-Ray was assessed by the SHO alone; for errors with a clinically significant impact the rate was 39% (of abnormal films).

The red dot system represents a mechanism to try to address this gap. It involves the radiographer – usually, but not always, the one who has taken the film – giving the clinician some feedback. Radiographers see many thousands of films and are generally very familiar with the structures that they show. Quite apart from their formal training, simply through everyday familiarity and experience they get to know what is normal and what is not. The radiographer is therefore well placed to recognise an abnormality even though they may not fully appreciate its clinical significance. The converse applies to the clinician, who can generally recognise pathology in a patient but may not be so familiar with the X-Ray changes.

The red dot system requires the radiographers to examine the film after it has been ordered by the clinician. If they feel that there is an abnormality they place a self-adhesive red dot on the film to denote that they believe it contains one. Clearly this does not relieve the clinician of the responsibility of examining the film, as the legal responsibility for interpreting it must rest with him. This is only reasonable: even the most experienced radiologist can only report on what he can see on the film, and the full significance of the changes seen can only be assessed by a healthcare professional who has also seen and examined the patient. As we will discuss later, the converse also holds – the absence of a red dot does not imply that there is no abnormality; it only denotes that the radiographer has not seen one.

The red dot system

In a letter to the BMJ, Keith Piper (2003) outlined the case for the red dot system and the radiographer reporting system (see below). The Audit Commission initially suggested in 1993 that radiographers could be trained to interpret certain images, and this was found to be of particular interest in view of the difficulties that some departments were experiencing with the reporting service.

The first accredited course was run in 1994, and many radiographers have since been reporting on primary skeletal X-Rays in A&E departments.

Piper points out that the system is designed to reduce errors in reporting X-Rays. It is ultimately totally reliant on the radiographs being finally reported by a senior radiologist in a timely fashion. Unfortunately, this is not always the case: as Beggs pointed out in 1990, over 20% of UK teaching hospitals did not report on all accident and emergency films.

With specific reference to the red dot system, the letter by Aldridge and Freeland (2000) comments on the system in use in their hospital and, having audited it, presents their results. The system conforms to that outlined in the British Association for Accident and Emergency Medicine’s guidelines (1983). The important facets of their system include:

The rapid return of X-Rays to the requesting clinician

Reporting of X-Rays by a consultant radiologist within 24 hrs.

Telephone recall of patients who have mistakes picked up

The use of the red dot system by the radiographers

The use of such X-Rays for teaching purposes for staff

As far as the audit of the red dot system was concerned, they report that the last audit showed a 1.5% false positive rate and a 2.0% false negative rate, with the rest categorised as true positive or negative results. The authors felt that this represented an excellent approach to what they described as an error-prone activity, reducing mistakes by accident and emergency staff (often junior), increasing patient satisfaction, and reducing long-term patient morbidity and litigation. This letter is a significant piece of evidence as it is written by two clinicians who are clearly anxious to assess the system and to make it work. They appreciate the problems, quantify them and address them by putting safeguards in place to minimise problems. Significantly, they suggest using the red dot system – where it has picked up omissions by the clinical staff – as the basis of teaching for junior staff, in an attempt to further reduce potential problems.

These results should be seen in the context of a study by de Lacey et al. (client to supply date), who considered the accuracy of casualty officers’ interpretation of X-Rays in their departments. They found that the casualty officer’s interpretation compared favourably with that of a radiologist in only 83% of cases. The 17% discrepancy clearly represents a major burden in terms of clinical implications for the patient, financial implications for the hospital and possibly litigation implications for the casualty officer. The study also examined the implications of a delayed reporting system (by the radiologist): restricting reporting to those films about which the casualty officer was unsure, or thought might show an abnormality, reduced the radiologists’ workload by 25%. It clearly follows that any measure likely to increase the efficiency and accuracy of reporting is likely to bring benefits in terms of both economy and reduced patient suffering. We therefore need to examine the premise that the red dot system does exactly that.

These figures are clearly worrying insofar as the 17% discrepancy is a wide margin. The figures still have to be viewed in context, however: although they represent the interpretation of a specialist (the radiologist) compared with that of a non-specialist (the clinician), the paper does not draw any distinction between the experience levels of the two groups. The clinicians may be comparatively inexperienced casualty officers, while the radiologists are probably of consultant grade. If that is the case, then the figures are much less alarming. This point is discussed in detail further on in this piece (Williams et al., 2000), where radiologists in training are compared to radiologists of consultant grade. The point is brought into sharper focus by consideration of the next two papers.


Before we consider this aspect, however, we need to evaluate the accuracy of reporting in the A&E Department environment. Benger and Lyburn (2003) attempted to investigate exactly that. They scrutinised the X-Ray output of an A&E Department over a six-month period (nearly 12,000 films) and identified the films where reporting discrepancies arose between the X-Ray staff and the A&E Department staff. From the 12,000 films they found only 175 discrepancies. In clinical terms, this equated to 0.3% of patients needing a change of management as a result. In all our deliberations on the subject, perhaps it is this that is the real criterion for whether a system works within tolerable limits or not. Different studies may find different discrepancy rates in the interpretation of X-Ray films, but what is of practical value is the actual number of patients who require a change of management as a result. If a minor degree of subluxation of a proximal interphalangeal joint is missed by a casualty officer and subsequently picked up by a radiologist, it will appear on inventories of discrepancies such as those discussed above; in terms of patient care or treatment, it will not make a scrap of difference. This point is made, rather more eloquently and in a different context, by Fineberg (1977) and the Institute of Medicine (1977).

This point should not be taken lightly and indeed, it goes to the core of this piece. Academic studies may show different abnormality detection rates between the different professional groups. While recognising that these are clearly important, they are not the yardstick by which we must judge the red dot system. We have already examined two papers on the subject that have reported differences in abnormality detection at each end of the spectrum – one of 17% and one of 1.5%. We should not be blinded by these figures themselves. What actually matters is the number of patients who have a change of management decision as a result of this discrepancy. The paper quoted above (Benger and Lyburn 2003) is one of the few which actually gives us this information. They quote an observed change of management in only 0.3% of patients which, for any system, is a very tolerable level of error. This is clearly a very fundamental point and one that we need to examine further. The next paper that we should consider looks at exactly this point and examines it in great detail.

Taking a more academic approach, Brealey and Scally (2001) tackle the difficult issue of just how to interpret the findings of a study that purports to evaluate the reading of X-Rays by two or more different professional groups. This is a very technical paper and is included here for the sake of completeness. It examines all of the possible margins for error and bias when reporting a trial. It throws little direct light onto our deliberations here because of its very technical nature, but it would be of considerable importance to anyone who wished to interpret the findings of a major trial independently. The point needs making that trial design can influence the outcome of a trial (and therefore its usefulness) to a great extent. As noted above, the actual figures produced at the end of a trial must be interpreted in the light of its design. Detected differences in readings between two groups of professionals may be of academic interest but, in the context of our examination of the red dot system, they are not nearly as important as a critical examination of the discrepancies which resulted in a change of patient management.

On the direct issue of the red dot system, an almost immediate precursor was reported in the BMJ in 1991 by Renwick et al. They discussed a trial system in which radiographers indicated their diagnoses on the as-yet-unreported X-Rays, in order to guide casualty officers in their decisions. The conclusion of the study was that, because of the high rate of false positives (7%) and the higher rate of false negatives (14%), it was appropriate for radiographers to offer useful advice but to take no more responsibility than that. We shall discuss the issues of false positives and false negatives further on in this piece; clearly they are an inherent problem with the system. It follows that we should, perhaps, address the reasons for these discrepancies and use them as a learning exercise to try to reduce the gap.

In their excellent and concise article, Touquet et al. (1995) address the Ten Commandments of A&E Department radiology. They discuss the red dot system in the following terms:

Inexperienced doctors will inevitably come across injuries that they have never seen before. In these cases it may not be possible to make a diagnosis but you will notice that the films do not look quite right. Good examples of this are lunate and perilunate dislocations of the hand. It is important to seek senior advice and also to listen to the radiographer. Many departments operate a “red dot” system, in which the radiographer flags up an abnormality. An experienced radiographer may be as good as or even better than a junior doctor at interpreting films.

The problem with this system is that the absence of a red dot does not necessarily mean that there is no abnormality. This is important to remember because the final responsibility lies with the doctor, and not the radiographer. Therefore never accept poor quality or inadequate films.

The most salient point of this article is in the last paragraph: the absence of a red dot does not mean the absence of an abnormality, and the liability lies with the doctor, not the radiographer. This is clearly proper. As any experienced healthcare professional will state, any investigation (particularly an X-Ray) is only an adjunct to diagnosis; it is the person clinically in charge of the patient who has to assimilate all the available evidence to make a diagnosis. The radiographer has not examined the patient, and certainly will not have to hand all of the other potential diagnostic aids that are available in a modern A&E Department. It is entirely reasonable to ask for his opinion on an X-Ray film, but it is not reasonable to hold him responsible for its definitive interpretation when he has not seen it in the context of the patient.

This reasoning underpins the legal position on X-Ray interpretation. It would clearly be inappropriate to ask a radiographer for his opinion on a film and then make him responsible for any subsequent management decisions based on that opinion. Some commentators have criticised the red dot system for its lack of apportionment of responsibility to the radiographer. We would suggest that this shows a fundamental lack of appreciation of the problems involved. Radiographers are trained to be experts in taking X-Ray films. They are not, and do not pretend to be, trained in the biological sciences and their application to pathology and human disease processes. It is quite appropriate to ask their opinion in an area of their expertise (the interpretation of the X-Ray film), but it is quite inappropriate to ask them to make clinical management decisions. For this reason, all questions of liability rest with the clinician in charge of the patient, and it is only right that this should be the case.

It is fair to say that some of the views reviewed so far have been old school – necessarily so, as the intention was to document the evolution of the red dot system. It is equally fair to state that we have so far considered the use of the system only in the A&E Department. The truth of the matter is that, in the recent past, the professional status of the radiographer has increased both within the speciality and within the NHS as a whole. Many of the comments made in some of the earlier papers quoted will therefore now seem rather outmoded and inconsistent with the modern experience of working in the NHS.


To redress the balance we shall look at an article from Papworth Hospital by Sonnex et al. (2001). The authors describe a system in use at an acute cardiothoracic unit. Radiographers were asked to assess all the X-Rays taken over a six-month trial period. Those assessed as showing acute changes had a red dot placed on them to denote an abnormality, and these were then reviewed by a radiologist, against whose opinion the success or failure rate was measured.

The figures are rather different from those quoted in the studies of skeletal X-Rays in A&E Departments. The reason is almost certainly that a chest X-Ray is notoriously hard to interpret, even more so when it is a post-operative film. The results were reported as a total sample of 8,614 films, of which 464 (5%) had red dots applied. Over 100 of these were considered inappropriate, and 38 abnormal X-Rays were not picked up. It would appear that radiographers tend to err on the side of caution when reviewing an abnormal chest X-Ray, especially when previous films were not available for comparison. This particular study had a high false positive rate.
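These headline figures can be reduced to simple proportions. The sketch below reworks the Sonnex et al. (2001) numbers quoted above; as an assumption for illustration only, the "over 100" inappropriate red dots is taken as exactly 100, and the radiologist’s report is treated as the reference standard.

```python
# Reworking the Sonnex et al. (2001) chest X-Ray figures quoted above.
# Assumption: "over 100" inappropriate red dots is treated as exactly 100.
total_films = 8614   # all films assessed over the six-month trial
red_dotted = 464     # films flagged by radiographers as showing acute change
inappropriate = 100  # flagged films judged normal on review (illustrative)
missed = 38          # abnormal films that received no red dot

flag_rate = red_dotted / total_films             # share of all films flagged
ppv = (red_dotted - inappropriate) / red_dotted  # flags confirmed abnormal
fp_share = inappropriate / red_dotted            # flags that were wrong

print(f"flag rate: {flag_rate:.1%}")                  # about 5%, as reported
print(f"positive predictive value: {ppv:.1%}")
print(f"false-positive share of flags: {fp_share:.1%}")
```

On these assumptions roughly one flagged film in five was a false alarm, which is what is meant here by a high false positive rate; the 38 misses cannot be converted into a sensitivity figure, because the total number of abnormal films is not given.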

One should not lose sight of the fact that the radiographers concerned were dealing with a different population from the one considered earlier. The patients were generally very ill and often in a post-operative state, making assessment far more critical than for the “colder” X-Rays of the A&E Department, where decisions could often safely be delayed for 24-48 hours. There was therefore perhaps far more pressure on them to report any possible abnormality. It is also appropriate to comment that this was the first stage of a study which then went on to review the radiographers’ performance after a further period of training. One would reasonably anticipate a higher agreement rate after appropriate training.

As we have already seen, the red dot system has evolved in several different variants. The basic premise is the same in each case: how can the potential sources of error caused by inexperience be minimised? A further variant is outlined by Williams et al. (2000), whose paper specifically addresses the cost-effectiveness of the scheme as well as its overall impact on patient management. In this scheme (which was running at the Radcliffe Hospital in Oxford) the original A&E Department films were reviewed by radiologists-in-training. They identified 684 incorrect diagnoses over a one-year period. These were termed red reports and reviewed by a consultant radiologist. During this process 351 missed fractures were detected, with ankle, finger and elbow fractures being the main areas where pathology was missed. Williams also reported 11 instances of missed pathology on chest X-Rays. This amplifies the point made earlier: the radiologists-in-training tended to produce false positives at a rate of about 18% when compared with the subsequent, more expert opinion.

In this particular study, further action was taken by the A&E Department staff in 42% of those cases, although no operative intervention was required in any patient as a result of the missed diagnosis. Despite these figures, it must be noted that these cases form a very small percentage of the X-Rays taken in a busy A&E Department.

False positives and false negatives

We have looked at a number of studies that have compared radiographers’ interpretations of X-Ray films against those of a consultant radiologist, who has generally been used as the gold standard. The difference between the two sets of interpretations is then subdivided into false positives and false negatives. This group is actually the most important: it is firstly an indication of the usefulness of the whole system of red dot reporting, and secondly an indication of how much more training any particular reader of the films (radiographer or casualty officer) has to undergo in order to make fully competent assessments.

A false positive is the situation where the radiographer has identified a problem that is not there; conversely, a false negative is where they have missed pathology that is there. In most of the assessments that we have seen, there are more false positives than negatives. This implies that the radiographers are being over-cautious when confronted with an equivocal film.
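The arithmetic behind these terms is worth setting out once. The sketch below uses hypothetical counts, chosen only so that the per-film rates match the style of the 1.5% false positive and 2.0% false negative figures reported by Aldridge and Freeland (2000); the counts themselves are illustrative and not taken from any study, and the consultant radiologist’s report is treated as the gold standard.

```python
# Hypothetical red dot audit of 1,000 films; radiologist report = gold standard.
# Counts are illustrative only, chosen to mirror the 1.5% / 2.0% per-film
# figures of the kind reported by Aldridge and Freeland (2000).
tp = 180  # red dot placed, radiologist confirms an abnormality
fp = 15   # red dot placed, but the film is normal (false positive)
fn = 20   # no red dot, but the film is abnormal (false negative)
tn = 785  # no red dot, film confirmed normal

total = tp + fp + fn + tn
sensitivity = tp / (tp + fn)  # abnormal films correctly flagged
specificity = tn / (tn + fp)  # normal films correctly left unflagged
fp_rate = fp / total          # false positives as a share of ALL films
fn_rate = fn / total          # false negatives as a share of ALL films

print(f"sensitivity:         {sensitivity:.1%}")  # 90.0%
print(f"specificity:         {specificity:.1%}")
print(f"false positive rate: {fp_rate:.1%}")      # 1.5%
print(f"false negative rate: {fn_rate:.1%}")      # 2.0%
```

Note that the same raw counts yield very different-looking percentages depending on the denominator chosen – all films, abnormal films only, or flagged films only – which is one reason the studies reviewed here are hard to compare directly.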

Several of the papers that we have seen so far have stated (either explicitly or otherwise) that the absence of a red dot does not imply the absence of any pathology. Any common-sense analysis would suggest that this is self-evident. It must also be the case that when two highly trained – but clearly not expert – healthcare professionals look at a film for pathology, they are more likely to arrive at the right answer than one alone.

Brealey et al. (2005) produced a meta-analysis of studies of radiographers’ input in interpreting films and found that radiographers involved in the red dot system of X-Ray reading improved with experience and, with training, acquired an accuracy approaching that of radiologists when dealing with skeletal X-Rays.

The red dot system is designed to utilise the expertise of specially trained radiographers to interpret plain X-Rays. From the evidence presented above we can say that radiographers are clearly more expert at interpreting plain skeletal X-Rays than chest X-Rays or visceral radiographs. The red dot system appears to be a growing movement within the profession. A paper by Brealey et al. (2003) pointed out that between 1968 and 1991 the radiologists’ workload increased by 322% while the number of posts increased by only 213%; as a result, the proportion of films reported within 48 hours fell to 60%. In response to this trend the Royal College of Radiologists decided to endorse radiographers giving indications of pathology on X-Rays. Brealey’s paper examines the initial cohort of radiographers trained under this scheme and found that, statistically, there was no significant difference between the reading of a plain skeletal X-Ray by a radiographer and by a radiologist, which supports the view that the red dot system is viable.

Any examination of this issue would be incomplete without a consideration of the detailed and analytical paper by Friedenberg (2000), provocatively entitled The advent of the supertechnologist. It is particularly relevant to our consideration of the red dot system and the role of the radiographer, as it looks at the background to the whole issue. Friedenberg uses the term skill mix specifically to describe the current trend in medicine away from specialisation and departmentalisation and towards the communal utilisation of expertise from individuals in related fields, to complement or increase the expertise available to patients. He points out that this is not actually a new concept, citing the optician who relieves the workload of the ophthalmologist and the nurse specialist anaesthetist who relieves the anaesthesiologist by performing uncomplicated procedures. He quotes a whole host of paramedical providers who now assist the physician, in most cases without problems.

Loughran et al. (1996a, 1996b, 1992) have specifically looked at the practicality of utilising the skills of the radiographer to better advantage than simply taking the films.

Friedenberg contrasts the difference in practice between the UK and the USA, attributing the complete separation of the roles of radiographer and radiologist in the USA to the fact that radiologists there still operate largely on a fee-per-service basis, whereas in the UK the pressure is primarily on clinicians to become more efficient and to keep costs down.


Friedenberg, interestingly, also examines the evolution of the legal standing of the roles of radiographer and radiologist.

Between 1900 and 1920, there was competition between radiographers and radiologists with regard to the performance of radiography and the interpretation of radiographs. In the middle 1920s in England, radiographers were prohibited from accepting patients for radiography except under the direction of a qualified medical practitioner (quoting Larkin, 1983).

After this the professions grew closer, and by 1971 Swinburne (1971) was suggesting that radiographers could perfectly well separate normal from abnormal films – which, after all, is the basis of the red dot system. As we have discussed earlier, this move then progressed to the first formal appearance of the red dot system, at North Park Hospital in 1985. The first trials of the system found that approximately half of the abnormalities not picked up by the junior casualty officers were detected by the radiographers. The early safeguards were outlined by Loughran (1996) as follows:

1. It is made clear to the referring physician that the report is a technologist’s report. The physician is encouraged to consult the radiologist if there is a lack of clinical correlation.

2. The technologist must consult the radiologist if he or she is in doubt.

3. The physicians, radiologists, and technologists have devised a set of guidelines to create a safe environment for this practice.

4. Initially, the technologist’s practice is monitored on a regular basis. After the technologist is experienced, however, monitoring is no longer performed. Such monitoring should be performed if a new technologist enters this practice.

Interestingly, Loughran also subsequently produced a set of guidelines for the radiographer:

1. The technologist should be confident in his or her report.

2. In cases of doubt, a radiologist’s opinion should be obtained.

3. In such cases, although the report may be issued by the reporting technologist, the consultant’s name should be appended to the report.

4. All reports by a technologist should be clearly designated as a technologist’s report.

5. If the patient re-presents for radiography of the same body part within 2 months, this should be reported by a radiologist.

6. Non-trauma examination findings should be reported by the radiologist.

7. All accident department images in patients who are subsequently admitted as inpatients should be reported by the radiologist.

8. Clinicians are to be advised to consult the radiologist if clinical findings do not match those in the technologist’s report.

9. Regular combined reporting sessions are to be held with the consultant radiologist.

Robinson et al. (1999) define the ideal areas for radiographers and radiologists, drawing the following distinction between cognitive and procedural tasks:

Procedural tasks can be described, defined, taught, and subjected to performance standards that make them transferable to other staff with appropriate training. Cognitive tasks that are related not only to the interpretation of images but also to decisions about differential diagnosis and appropriate choice of further investigations are more difficult.

We have examined the evolution of the red dot system, and there have been moves towards its logical progression: beyond the radiographer simply indicating that there may be a problem, to situations where radiographers who have undertaken further training have developed their skills in other ways as well – but this is beyond the scope of this piece. Perhaps we should leave the last thought to Friedenberg, who envisages the future as the era of the supertechnologist, in which the specialist is left to perform a small number of very highly specialised procedures.

References

1. Jonathan Aldridge, Peter Freeland, (2000) Safety of systems can often be improved BMJ 2000;321:505 ( 19 August )

2. The Audit Commission (1995). Improving Your Image – How to manage Radiology Services More Effectively. London: HMSO.1995

3. Victor Barley, Graham Neale, Christopher Burns-Cox, Paul Savage, Sam Machin, Adel El-Sobky, Anne Savage (2000) Reducing error, improving safety BMJ 2000;321:505 ( 19 August )

4. Beggs I, Davidson JK 1990. A&E reporting in UK teaching departments. Clinical Radiology, 41, 264-267.

5. J R Benger, I D Lyburn (2003) What is the effect of reporting all emergency department radiographs? Emerg Med J 2003; 20:40-43.

6. Benger JR. (2002) Can nurses working in remote units accurately request and interpret radiographs? Emerg Med J. 2002 Jan;19(1):68-70

7. S Brealey, A J Scally (2001) Bias in plain film reading performance studies British Journal of Radiology 74 (2001),307-316

8. S Brealey, D G King, M T I Crowe, I Crawshaw, L Ford, N G Warnock, R A J Mannion, S Ethell,(2003) Accident and Emergency and General Practitioner plain radiograph reporting by radiographers and radiologists: a quasi-randomised controlled trial British Journal of Radiology (2003) 76, 57-61

9. Brealey S, Scally A, Hahn S, Thomas N, Godfrey C, Coomarasamy A. (2005) Accuracy of radiographer plain radiograph reporting in clinical practice: a meta-analysis. Clin Radiol. 2005 Feb;60(2):232-41

10. Brennan TA, Leape LL, Laird NM, Herbert L, Localio AR, Lawthers AG, (1991) Incidence of adverse events and negligence in hospitalised patients: results of the Harvard Medical Practice study. N Engl J Med 1991; 324: 370-376

11. Clinical Services Committee, British Association for Accident and Emergency Medicine. X-ray reporting for accident and emergency departments. London: BAEM, 1983. (Currently under revision.)

12. C K Connolly (2000) Relation between reported mishaps and safety is unclear. BMJ 2000;321:505 ( 19 August )

13. Fineberg HV, Bauman R, Sosman M. (1977) Computerised cranial tomography: effect on diagnostic and therapeutic plans. JAMA 1977;238:224-7. Institute of Medicine (1977) Policy statement: Computed tomographic scanning. Washington DC: National Academy of Sciences.

14. Richard M. Friedenberg, (2000) The Role of the Supertechnologist. Radiology. 2000;215:630-633.

15. Johansson H, Räf L. (1997) A compilation of “diagnostic errors” in Swedish health care. Missed diagnosis is most often a fracture. Lakartidningen 1997; 94: 3848-3850

16. Pia Maria Jonsson, Göran Tomson, Lars Räf, (2000) No fault compensation protects patients in Nordic countries BMJ 2000;321:505 ( 19 August )

17. G de Lacey, A Barker, J Harper and B Wignall An assessment of the clinical effects of reporting accident and emergency radiographs

18. Larkin G. (1983) Occupational monopoly and modern medicine London, England: Tavistock, 1983.

19. Loughran CF, Alltree J, Raynor RB. (1996) Skill mix changes in departments of radiology: impact on radiologists’ workload – reports of a scientific session. Br J Radiol 1996; 69(suppl):129.

20. Loughran CF. (1996) Report of accident radiographs by radiographers. RAD Magazine July 1996; 34(suppl):

21. Loughran CF. (1992) The clinical radiographer [diploma thesis] Keele, England: University of Keele, 1992.

22. Mackenzie R, Dixon AK. (1995) Measuring the effects of imaging: an evaluative framework. Clin Radiol 1995;50:513-4

23. Piper K, Paterson A and Ryan C, (1999). The implementation of a Radiographic Reporting Service for trauma examinations of the skeletal system, in four National Health Service Trusts, Internal Publication Canterbury Christchurch College 1999

24. Keith Piper, (2000) Reporting by radiographers. BMJ, 7 September 2000

25. Renwick IG, Butt WP, Steele B. 1991 How well can radiographers triage x ray films in accident and emergency departments? BMJ. 1991 Apr 27; 302 (6783): 1023-4

26. Robinson PJA, Culpan G, Wiggins M. (1999) Interpretation of selected accident and emergency radiographic examinations by radiographers: a review of 11,000 cases. Br J Radiol 1999; 72:546-551.

27. The Royal College of Radiologists. Medico Legal Aspects of Delegation (FCR/2/93). London: The Royal College of Radiologists, 1993.

28. The Royal College of Radiologists. Staffing and Standards in Departments of Clinical Oncology and Clinical Radiology. London: Royal College of Radiologists, 1993.

29. Sonnex EP, Tasker AD, Coulden RA. (2001) The role of preliminary interpretation of chest radiographs by radiographers in the management of acute medical problems within a cardiothoracic centre. Br J Radiol. 2001 Mar;74(879):230-3

30. Swinburne K. (1971) Pattern recognition for radiographers. Lancet 1971; 1:589-590.

31. Robin Touquet, Peter Driscoll, and David Nicholson (1995) Recent Advances: Teaching in accident and emergency medicine: 10 commandments of accident and emergency radiology BMJ, Mar 1995; 310: 642 – 648

32. Vincent C, Young M, Phillips A. (1994) Why do people sue doctors? A study of patients and relatives taking legal action. Lancet 1994;343:1609-13

33. Vincent CA, Driscoll PA, Audley RJ, Grant DS. (1988) Accuracy of detection of radiographic abnormalities by junior doctors. Arch Emerg Med. 1988 Jun;5 (2): 101-9

34. Williams SM, Connelly DJ, Wadsworth S, Wilson DJ. (2000) Radiological review of accident and emergency radiographs: a 1-year audit. Clin Radiol. 2000 Nov;55(11):861-5
