Improving the Safety of Pediatric Sedation: Human Error, Technology, and Clinical Microsystems


| Study | Sample | When sampled | AE rate | Percentage preventable |
| --- | --- | --- | --- | --- |
| Harvard Medical Practice Study [4, 5] | 30,121 records, 51 hospitals | 1984 | 3.7 % | Majority |
| Quality in Australian Health Care Study [6] | 14,179 records, 28 hospitals | 1992 | 16.6 % | 48 % |
| Utah and Colorado Study [7] | 15,000 records, 13 hospitals | 1992 | 2.9 % | |
| London Study [8] | 500 records, 2 hospitals | 1998 | 10.8 % | 48 % |
| New Zealand Study [9] | 15,000 records, 13 hospitals | 1998 | 12.9 % | 35 % |




Table 30.2
Factors that may increase risk in children undergoing sedation, especially outside the operating room

• Weight-based and off-label use of drugsᵃ
• Changing physiology and dose and drug effect with ageᵃ
• Sedation monitoring systems and scores that vary and change with ageᵃ
• Limited reserves to tolerate dose inaccuraciesᵃ
• Difficulty in maintaining homeostasis because of small size and immature physiologyᵃ
• Congenital conditions and comorbiditiesᵃ
• The increasing number and complexity of sedation cases conducted in childrenᵃ
• Sedation performed under urgency
• Sedation performed in a variety of different locations with no standardized backup or safety equipment
• Sedation performed by a variety of staff, including anesthesiologists, emergency ward staff, cardiologists, nurses, and house officers
• Variability in target depth of sedation and in sedation training


ᵃ Those particularly applicable to pediatric patients




The Need for a Paradigm Shift


The Institute of Medicine in the United States claimed in 2000 that “health care is a decade or more behind other high-risk industries in its attention to ensuring basic safety” and called for a paradigm shift in the quality of patient care [3]. Responding to this call, the 100,000 Lives Campaign, introduced by the Institute for Healthcare Improvement, reported in 2006 the saving of 122,300 lives over a period of 18 months in American hospitals through the implementation of six evidence-based practices [12, 13]. Although this campaign is highly commendable in terms of engaging health-care providers and has led to the further 5 Million Lives Campaign [14], it has also been criticized for being unable to demonstrate which aspects of the intervention were actually effective in achieving the result, or to show that much of the observed reduction in mortality was not due to other influences [12, 15]. While systematic approaches to improving safety have recently been shown to produce benefits in some areas [16–19], the improvement of safety in health care has been uneven, and in many areas little or no improvement has occurred. We suggest that the sedation of children is one such area and that preventable adverse events still occur too often in this context. Medication errors are a leading source of adverse events in pediatric patients [1, 20–23]. Among other things, these contribute to both inadequate sedation and excessive sedation with consequent airway complications, cardiovascular instability, and prolonged recovery. In many settings the management of many of these complications may be almost routine, rendering the complications inconsequential (e.g., by the provision of supplemental oxygen and jaw thrust) [24]. However, this rapid and easy management highlights the importance of many of the system, monitoring, training, and perioperative communication issues that are critical for the safe sedation of children.

Efforts to improve pediatric sedation have focused on many of the issues in Table 30.2, and proxy markers are often used as measures of safety or risk for this purpose. Proxy markers are indicators that are associated with, but occur more frequently than, rare outcomes of interest. They tend to focus on structures and processes rather than outcomes and so tend to be easier to measure [25]. Examples of markers of safety in sedation include the documentation of fasting for solids and liquids, the recording of weight, allergies, consent, risk assessment, and appropriate vital signs including depth of sedation, the presence of appropriate staff, written drug orders, and provision of a discharge handout [26].


Medication Errors


Medication errors (in any age group) may occur through commission or omission [27, 28]. The former involves the wrong drug, the right drug inadvertently repeated (so-called insertion errors), the wrong dose, the wrong route, or the wrong time. In addition, failure to correctly record administered medications may also be considered an error because of the critical importance of an accurate record in planning ongoing patient care [22, 29]. In errors of commission, harm may occur through unintended effects of incorrect actions (e.g., sedation from dexmedetomidine instead of dexamethasone for nausea). In errors of omission, harm may occur through the absence of intended effects (e.g., awareness or unwanted movement during inadequate sedation). The “six rights” of medication administration have been promulgated in response to these known failure modes, namely, the right patient, dose, medication, time, route, and record of the administration [22].
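Expressed in code, the six rights amount to a simple structured check. The sketch below is purely illustrative (its field names and example record are hypothetical and not drawn from any cited system); it simply shows how an administration record might be screened for a missing “right,” such as an undocumented dose.

```python
# Purely illustrative sketch: representing the "six rights" as a simple
# pre-administration check. Field names and the example record are hypothetical.

from dataclasses import dataclass

@dataclass
class AdministrationRecord:
    patient_id_verified: bool   # right patient
    drug_name_verified: bool    # right medication
    dose_verified: bool         # right dose
    route_verified: bool        # right route
    time_verified: bool         # right time
    documented: bool            # right record of the administration

def missing_rights(record: AdministrationRecord) -> list[str]:
    """Return the 'rights' that were NOT satisfied (an empty list means all six were met)."""
    labels = {
        "patient_id_verified": "right patient",
        "drug_name_verified": "right medication",
        "dose_verified": "right dose",
        "route_verified": "right route",
        "time_verified": "right time",
        "documented": "right record",
    }
    return [label for field_name, label in labels.items() if not getattr(record, field_name)]

# Example: everything checked except documentation (an error of omission in the record).
print(missing_rights(AdministrationRecord(True, True, True, True, True, False)))
# ['right record']
```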

Experience from pediatric anesthesia suggests unintentional additional medication doses are the most prevalent drug error, but wrong drug, wrong dose, and wrong route errors are also common; errors with analgesics and antibiotics are particularly common [30, 31]. In intensive care or high dependency units, errors are frequent in both the administration and the prescribing of drugs [32]. In addition, adverse respiratory events arising from sedatives and analgesics often reflect poor choices of drugs and inadequate understanding and application of pharmacology, particularly when using combinations of drugs [33]. For example, respiratory adverse events are more common with fentanyl/ketamine combinations than with ketamine alone [34].

Dosage errors are also particularly common in children [5, 31, 35]. The patient’s growth, maturation, and size are critical determinants of dose. Clearance, the pharmacokinetic parameter dictating maintenance dose, is immature at birth and matures over the first few years of life. Bupivacaine toxicity has occurred in infants receiving continuous regional neural blockade through failure to appreciate immature clearance [36]. Clearance has a nonlinear relationship to weight [37]: when clearance is expressed using a linear function (e.g., L h⁻¹ kg⁻¹), it is highest in the 1- to 2-year-old age band and decreases throughout childhood until adult rates are achieved in late adolescence. Drug doses scaled directly from an adult dose (in mg kg⁻¹) will typically be inadequate. Consequently, propofol, when used as an infusion for sedation in children, requires a proportionately higher dose rate to achieve the same target concentration as in adults [38]. Similarly, the use of remifentanil parameters derived from adult studies for infusions in children results in lower concentrations than anticipated, because clearance expressed per kilogram is higher in children [39].
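To illustrate why clearance expressed per kilogram is higher in children, the sketch below applies the widely used 3/4-power allometric model, assuming a hypothetical adult clearance of 30 L/h in a 70 kg adult and deliberately ignoring the maturation that further limits clearance in infancy.

```python
# Illustrative sketch only: 3/4-power allometric scaling of clearance,
# CL = CL_adult * (weight / 70)**0.75. The adult clearance of 30 L/h is a
# hypothetical placeholder, and maturation of clearance in infancy (which
# lowers clearance in the first years of life) is not modeled.

def scaled_clearance(weight_kg: float, adult_clearance_l_per_h: float = 30.0) -> float:
    """Total clearance (L/h) scaled from a 70 kg adult by the 3/4-power rule."""
    return adult_clearance_l_per_h * (weight_kg / 70.0) ** 0.75

for weight in (5, 10, 20, 40, 70):
    cl_total = scaled_clearance(weight)
    print(f"{weight:>3} kg: CL = {cl_total:5.1f} L/h  ({cl_total / weight:.2f} L/h/kg)")
```

Under these assumptions the per-kilogram clearance roughly doubles between a 70 kg adult and a 5 kg child, which is why infusion rates scaled linearly from adult mg kg⁻¹ doses tend to undershoot the target concentration.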

There is substantial between-subject variability of response to any given dose. Pharmacodynamics has been inadequately studied in children and especially in infants. It follows that reliance on dose is not enough to judge effect reliably, and sedation must be monitored. This is difficult in young children, partially because of a lack of objective measures of effect in this group (e.g., processed EEG); instead it is necessary to rely on observation and on measurement tools (such as sedation scores) based on observation. However, observation may be difficult when children are undergoing certain radiological procedures, such as MRI. This difficulty in assessing sedation increases in children who have preexisting cerebral pathology [40] or behavior disorders [41] or who are very young [42].

The paucity of integrated pharmacokinetic-pharmacodynamic (PK-PD) studies of intravenous sedation in children, particularly sedation involving multiple drugs, predisposes to inadequate or excessive dose. Drug interactions may occur with mixtures used for sedation, but they may also be consequent to longer-term therapy with other drugs. For example, phenobarbital, used for seizure control, induces CYP3A4, an enzyme responsible for ketamine clearance. Thus, the sedative effect of ketamine, which is metabolized by CYP3A4, is reduced in children on long-term phenobarbital therapy [43, 44].

Infants are unable to swallow pills, but pediatric oral formulations are not available for the majority of commercially available medications. When no liquid oral formulation is available, intravenous preparations are often administered orally (e.g., of midazolam or ketamine) without adequate information about their absorption characteristics, hepatic extraction ratio, or the effect of any diluent used to improve palatability; this may lead to inappropriate dosing [45].

Children generally require smaller doses than adults. Because medications are packaged for adult use, dilution is commonly required in pediatric anesthesia. This further predisposes to dosage errors [5, 35], often in the form of tenfold overdoses because mistakes with the decimal place are easy to make [46]. Technique is particularly important in the administration of medications to small children and babies. Some of the intended dose of a medication may easily be retained in the dead space of any part of an intravenous administration set, or in a syringe, with the result that the desired effect may not be obtained. Subsequently, an unintended dose of this medication may be given inadvertently, flushed from the dead space by the later injection of another medication. The effect then may be excessive, untimely and potentially lethal [47]. Apnea, bradycardia, hypotension, and hypotonia have been reported in a premature neonate weighing 1.6 kg after an overdose of morphine, arising from medication unintentionally retained in a syringe [48].
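One systems-level response to decimal-point errors of this kind is a weight-based dose-range check performed before administration. The following is a minimal sketch under stated assumptions: the drug name and the dose limits are invented placeholders, not clinical values.

```python
# Illustrative sketch only: a weight-based dose-range check of the kind that
# could catch a tenfold (decimal-point) error before administration. The drug
# name "examplamide" and its limits are invented placeholders, not clinical values.

HYPOTHETICAL_LIMITS_MG_PER_KG = {"examplamide": (0.05, 0.2)}  # (min, max) per dose

def check_dose(drug: str, dose_mg: float, weight_kg: float) -> str:
    low, high = HYPOTHETICAL_LIMITS_MG_PER_KG[drug]
    dose_per_kg = dose_mg / weight_kg
    if dose_per_kg > high:
        return f"ALERT: {dose_per_kg:.2f} mg/kg exceeds the upper limit of {high} mg/kg"
    if dose_per_kg < low:
        return f"ALERT: {dose_per_kg:.2f} mg/kg is below the lower limit of {low} mg/kg"
    return f"OK: {dose_per_kg:.2f} mg/kg is within the expected range"

# A decimal-point slip (10 mg drawn up instead of 1 mg for a 10 kg child) is flagged:
print(check_dose("examplamide", 10.0, 10.0))  # ALERT: 1.00 mg/kg exceeds ...
print(check_dose("examplamide", 1.0, 10.0))   # OK: 0.10 mg/kg is within ...
```

A check of this kind obviously depends on an accurate weight, which is the subject of the next paragraph.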

Although medications are usually prescribed on a weight basis (e.g., in mg kg−1), children are often not weighed. A survey of 100 children’s notes in a busy emergency department revealed that only 2 % were weighed prior to the prescribing of medication [49]. Twenty-nine percent of physicians’ estimates, 40 % of nurses’ estimates, and 16 % of parents’ estimates differed from actual weight by more than 15 % [50]. The accuracy of methods used to estimate weight also varies [51, 52].
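The 15 % discrepancy threshold used in the survey above suggests a simple comparison of any estimated weight against a measured one; the sketch below is illustrative only, with invented example values.

```python
# Illustrative sketch only: flagging weight estimates that differ from the
# measured weight by more than 15 %, the discrepancy threshold used in the
# survey cited above. The example estimates and measured weight are invented.

def weight_estimate_error(estimated_kg: float, measured_kg: float) -> float:
    """Relative difference between an estimated and a measured weight."""
    return abs(estimated_kg - measured_kg) / measured_kg

measured_kg = 16.0
for source, estimate in {"physician": 14.0, "nurse": 19.5, "parent": 16.5}.items():
    error = weight_estimate_error(estimate, measured_kg)
    flag = "exceeds 15 %" if error > 0.15 else "within 15 %"
    print(f"{source}: {error:.1%} difference ({flag})")
```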

Given the many factors that predispose to medication error in small children (Table 30.2), the importance of monitoring (particularly the degree of sedation) is obvious. Stress should also be placed on protocols (e.g., for measuring weight) and training (e.g., in the differences of PK-PD pharmacology between children and adults). Finally, guidelines, technology, and equipment need to be suitable for children rather than simply adapted from adult applications.


The Clinical Microsystem as a Unit of Analysis


A clinical microsystem is a group of “clinicians and staff working together with a shared clinical purpose to provide care for a population of patients” [53]. Understanding the operation of the clinical microsystem that delivers pediatric sedation is the key to identifying aspects for improvement. The elements of this microsystem include the patients, the clinicians, support staff, information technology, supplies, equipment, and care processes—and elements may be spread over various locations within the organization or beyond into the community. Certain roles, such as the person administering sedatives, may be held by individuals from different professional groups from instance to instance. The training of these individuals, and the approaches and standards used by them, may differ. In addition, sedation occurs in a variety of locations, which contributes to the variation in the staff available to perform the sedation, the equipment used, and the available safety and backup systems. This variation in location creates risks that do not apply to a team that performs in a fixed location, such as an operating room. In an operating room, the team typically has a designated number of defined roles filled from specified professional groups (such as nurses, anesthesiologists, and surgeons). Equipment tends to be reasonably standardized, and the way in which the members of the team perform their duties and interact with each other is relatively formalized.

To understand the operation of a clinical microsystem, the first step is to identify the personnel and other components that comprise the microsystem and then map the functional relationships of each to the others. Such a map can then be used as a guide to collect information on the operation of the microsystem and to identify gaps between how the microsystem is intended to operate and how it actually does operate. Strengths should be identified as well as weaknesses. The concept of “positive deviance” is that in any domain a few individuals facing risk will follow “uncommon, beneficial practices” and therefore experience better outcomes than their counterparts [54]. Once identified, these positively deviant strengths can be formalized, shared, and promoted more widely. The ultimate goal is to find ways to improve the connections between the elements of the microsystem, enhance its performance, and promote better outcomes [55–57].

Although the clinical microsystem seems likely to be a useful unit of analysis for the purposes of improving clinical safety during pediatric sedation, it is also necessary to consider the nature of its constituent parts, namely, humans and technology, and the way these interact. The complexity of technology used in health care today and the psychological determinants of human error remain important and underappreciated factors in the genesis of poor clinical outcomes.


People Versus Systems


Traditionally, safety in medicine has largely depended on the resolve and vigilance of individual clinicians to anticipate and avoid dangerous outcomes. Such an approach to safety has been called the person-centered approach, because all responsibility for safety rests on the shoulders of the individuals in the workplace [58, 59]. For the majority of the time, the person-centered approach works reasonably well in most organizations. Even in error-prone environments, skilled personnel can often perform adequately or even very well, finding inventive and creative ways to keep operational activities within desired limits despite deficiencies in technical and organizational aspects of their environment [60]. People should not be expected to perform like machines, which execute the same tasks repeatedly without deviation. Indeed, recovery from an unexpected event or other departure from the routine is one of the strengths of human intelligence (and a weakness of machines) and is a key feature of the avoidance of adverse events in complex endeavors. However, personal resolve to avoid bad outcomes is not sufficient: simply deciding to avoid error is, on its own, doomed to failure. In work environments where perfect performance is required every time and where error may lead to devastating consequences, the person-centered approach is insufficient to guarantee the requisite levels of safety and performance in the long term.

An important consequence of the person-centered approach is that the search for the reasons that things go wrong is typically not expanded further than those individuals immediately involved in the accident. All clinicians, no matter how resolved, will sooner or later make errors—simply because they are human and error is a statistically inevitable concomitant of being human [58, 59, 61]. Under the person-centered approach, when clinicians make mistakes, as they inevitably will, they are typically blamed for their carelessness and told to try harder to avoid error. Typically, little or no effort is made to identify the features of the system that predisposed or contributed to the error. This leaves such features active in the environment to precipitate similar errors in the future. Reason has called these features “passive errors” [58] or “latent factors” [62]. In the ultimate person-centered response, eliminating (e.g., through dismissal) the person who made the error simply sets up the replacement person for the same error to happen again. All medical systems contain many features that can only be described as accidents waiting to happen, and the relentless increase in the complexity of medical technology and treatment means that resolve and vigilance alone are increasingly inadequate to ensure the safety of patients [63–66].


Making Sense of Uncommon Adverse Events


Repeat even a safe activity often enough and eventually an accident will result—this phenomenon has been called the law of large numbers [67, 68]. The simple realization that the probability of an accident or failure can never be absolutely zero is one of the central ideas to come from the study of high-technology systems, including aviation, nuclear power, and space exploration [66, 69, 70]. Health care is a highly developed technological system, and the number and complexity of patients continues to increase year on year. It follows that the number of patients harmed by their procedures must also increase (given a constant, or even slowly decreasing, underlying risk of harm). Thus, even though health care is almost certainly safer today than it has ever been in terms of relative risk (at least in high-income countries), it is causing harm to a record number of patients. However, relative risk estimates or Bayesian inference do not come intuitively to many people when they are required to interpret the occurrence of such adverse events [71]. Humans tend to focus on the total number of bad outcomes, regardless of the associated number of trouble-free outcomes [72–75]—on the numerator alone, rather than the ratio of numerator to denominator. We tend to have a fixed idea of how many plane crashes or medical mishaps are tolerable each year, regardless of the total number of planes in the sky, or patients treated. The current alarm about the safety of health care suggests that the number of patients harmed each year may be approaching the fixed level over which many people will cease to view health care as safe (Fig. 30.1). A further consequence of the law of large numbers is the fact that an adverse event of any particular type will not be seen often, or at all, by any particular clinician—thus, clinical impressions can be considerably biased in relation to the true rate and importance of the adverse event [76]. Such bias can lead either to an underestimate (if the adverse event has never been encountered—the clinician perhaps believing that this is because his or her practice is better than average) or to an overestimate (if the clinician has been unfortunate enough to have had perhaps two or three bad experiences with the adverse event). Quantifying the true rate of any infrequent adverse event requires a systematic approach. It is trivial to estimate statistically the sample size needed to gain a reasonable estimate of any particular low-incidence phenomenon: such studies often require data collection from thousands of patients, which can be prohibitive. Both these consequences of the law of large numbers, that is, not considering the denominator and the bias present in clinical impressions, impede the development of effective algorithms for dealing with uncommon adverse events and present a significant challenge for evidence-based health care. To continue to be viewed as safe, all technologies must become progressively safer with increasingly widespread use. Many aspects of medical technology have so far failed to achieve this.
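To make the arithmetic behind the law of large numbers concrete, the sketch below computes the probability of at least one adverse event as caseload grows, using an arbitrary per-case risk rather than any measured sedation statistic.

```python
# Illustrative sketch only: the probability of at least one adverse event rises
# toward certainty as caseload grows, even when the per-case risk is tiny. The
# per-case risk used here is an arbitrary example, not a measured sedation rate.

per_case_risk = 1e-5  # hypothetical probability of a serious event per case

for n_cases in (1_000, 100_000, 1_000_000):
    p_at_least_one = 1 - (1 - per_case_risk) ** n_cases
    print(f"{n_cases:>9,} cases: P(at least one event) = {p_at_least_one:.3f}")
```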



Fig. 30.1
What is considered safe is generally perceived as a fixed level of accidents for any particular technology (reproduced from Webster CS. Why anesthetizing a patient is more prone to failure than flying a plane. Anaesthesia. 2002;57:819–820, with permission from John Wiley and Sons)

One of the most promising approaches to the improvement of safety in health care involves the adoption of what has been called the systems approach [64, 77–79]. This differs from the person-centered approach in that it widens the focus of safety initiatives from the individual to include the “system” in which individuals work and emphasizes the elimination of unsafe aspects of equipment, procedures, work environments, and organizations. There are good examples of changes to particular aspects of systems that have dramatically improved safety, such as the inclusion of anti-hypoxic devices in modern anesthetic machines to prevent the omission of oxygen [80]. However, many of the straightforward opportunities for simple improvements through engineering innovations have been taken, and further implementation of the systems approach in health care will increasingly depend on a deeper understanding of the nature of human error, the factors that engage humans in changing behavior, and the way specific health-care systems fail. Critically, this better understanding will need to be followed through to the redesign of specific unsafe features within health-care systems.


The Nature of Human Error


Human errors are not random events. Their nature in any particular circumstance, and even the frequency with which they occur, can be predicted to a large degree through an understanding of the underlying mechanisms of human psychology [58, 81–86]. The capacities of our cognitive faculties are finite and imperfect. We can absorb, store, and process only a small portion of the information or stimuli in the world at any given time. We often act on “autopilot” without being consciously aware of many of the details of our actions, yet remain distractible. In addition, our memories are selective and dynamic. We remember certain events better than others on the basis of their significance to us as individuals, our recent similar experience, or the task we were engaged in at the time. Even once committed to memory, information in our heads changes over time, and recall can be partial and slow. Most of these limitations, far from being shortcomings, are in fact coping mechanisms honed by millions of years of human evolution [85, 87, 88]. Likewise, being able to carry out sequences of behavior in an automatic manner, without being consciously aware of the individual actions that make each up, allows us to perform more than one action at a time and frees up limited cognitive resources to monitor life-threatening or otherwise important events in the environment. For example, while engrossed in reading a book, we remain able to react appropriately to developing circumstances around us, such as noticing that the house is on fire. The upside of the nature of our cognitive faculties is that we perform quickly, often creatively, and typically very well for the vast majority of the time [89]. The downside is that under certain circumstances, we can be predisposed to make particular types of error [58, 83].


Error Types¹


Psychologist James Reason, drawing on the work of Jens Rasmussen in particular, has defined a theoretical framework called the generic error-modeling system (GEMS) by which human behavior and errors can be classified [58, 62, 90, 91]. In the GEMS, human behavior is seen as being controlled by either conscious or automatic processes or a mixture of these two control modes (Fig. 30.2). Such control modes lead to three relatively distinct forms of human behavior. The three forms of human behavior also lead to three general classes of human error.



Fig. 30.2
The three modes of human performance (in clouds) and their relationship to the control modes and situations in which they are employed (adapted from Reason [58, 62]). Many attempts to improve safety in health care simply call for clinicians to pay more attention to their work, but fully conscious control of routine work is a mode of performance that is not sustainable in human nature (this imaginary zone in human performance is indicated by the question mark). We must look elsewhere for better and more effective methods of safety improvement


Knowledge-Based Errors (or Errors of Deliberation)


At the highest level of conscious awareness, the conscious control mode is slow, prone to error, requires effort, and operates sequentially (i.e., it deals with one thing at a time) [58, 62]. However, it can deal with completely novel and complex problems and is a primary source of human knowledge. The increased cognitive effort required when learning a new task appears to be directly reflected in the physiological activity of the brain. Novelty requires a “full-brain” conscious response, resulting in a large increase in brain activity [92]. In contrast, a familiar situation where an existing skill or rule can be applied results in little increased brain activity yet leads to smoother behavioral performance. Typically, we resort to the conscious control mode only when our stock of existing rules has become exhausted. This is not because we are mentally lazy, but because in most circumstances reasoning from first principles, using the conscious control mode, would take much too long. In addition, the operation of the conscious control mode (or the process of deliberation [85]) is probably the most error-prone human control mode. Furthermore, this process is often based on an incomplete or inaccurate “knowledge base”; some of this knowledge may reside in our minds and be amenable to training, but much of it is in the world, including in other people’s minds. Thus, faulty decisions often reflect mental models that are subtly out of line with reality. This is the source of the term “knowledge-based errors,” but in fact this phenomenon can promote rule-based errors as well. In addition, human deliberation suffers from a number of known biases, including confirmation bias (arriving at a conclusion and then adapting the facts to fit it), frequency bias (using the first information that comes to mind), and similarity bias (attempting to solve two superficially similar, but different, problems in the same way) [58, 85]. Attempts to remove or mitigate such biases have been made, most recently through a process called cognitive debiasing, which proposes a suite of educational and mentally reflective initiatives aimed at “recalibrating” the mind in order to improve clinical tasks such as diagnosis [93, 94]. All such initiatives, however, start with gaining a better understanding of human psychology.


Rule-Based Errors


Rule-based behavior is the next level down in terms of the degree of conscious awareness required for the execution of a behavior—using the intermediary or mixed control mode (Fig. 30.2) [58, 62]. Acting in a rule-based way typically involves the conscious recognition of a familiar set of circumstances and the application of a learned rule. Applying an existing rule is much faster and less effortful than deliberation, and the majority of decisions in health care involve the application of rules in this way. Appropriately, the bulk of education in health care focuses on the acquisition of a very large rule base. Rule-based errors typically involve either the misinterpretation of a set of circumstances and hence the application of a good rule in the wrong situation or the application of a bad or inadequate rule that is thought to suffice. As an individual’s repertoire of rules increases, with ongoing education and experience, he or she becomes more expert and is able to apply an appropriate rule in a much larger number of circumstances. Thus, an expert is likely to be equipped with a much greater, and typically more reliable, resource of rules than a novice [85] and will need to resort to deliberation (i.e., actively reasoning from first principles) less often.


Skill-Based Errors


The unconscious control mode is fast (often reflex-like), efficient, but rigid. It is the control system that allows “automatic” or skill-based behavior and comprises a collection of highly learned, frequently used routines or skills. Skill-based behavior tends to be so well learned that once started a sequence will often run through to completion without much further involvement from conscious awareness, for example, tying shoelaces or signing your name. In addition, the recognition of subtle cues and patterns by experts is often done at the unconscious level, leading to masterly and rapid performance that the individual often has difficulty explaining after the fact other than in terms of intuition, often stating simply that “they just knew” [84].

Experts have a large repertoire of skill-based behaviors, which allow them to perform at higher levels of efficiency than novices. Skill-based performance allows multitasking while requiring the least cognitive effort of any form of human performance. A novice will often labor over a single task that an expert can perform in seconds, and simultaneously with other tasks, simply because the novice has yet to acquire the ability to perform the task at the skill-based level [58, 62].

Without skill-based behavior, few of us would be able to perform even the simplest of everyday tasks, yet ironically skill-based expertise can also predispose us to make certain errors [95]. The ability to drive to work by an accustomed route while mentally planning the morning’s activities is usually advantageous. However, if your workplace has recently changed, it is possible to find yourself halfway to the old, familiar address before realizing that you are traveling in entirely the wrong direction. Errors like these do not usually matter, because under normal conditions there is time to compensate for them—recovering from error is one of the greatest strengths of human intelligence [89, 96]. However, in certain error-intolerant environments, such as health care, typical everyday errors can lead to disaster so quickly that there is no time to prevent the consequences. The ability of a clinician to administer a drug while simultaneously calling further treatment instructions to an assistant in an emergency is a situation where the advantages of skill-based behavior may make the difference between life and death. However, such circumstances may also predispose a clinician to administer the wrong drug if drugs are poorly labeled and are used in an environment with inadequate safeguards. Novices are less likely to make such drug errors, simply because they do not possess the skill base with which to perform many of the actions involved at the unconscious level. However, a novice is likely to respond too slowly to provide effective patient care in a life-and-death emergency.

Two of the commonest categories of skill-based error are slips, in which an expert correctly performs a well-learned skill in incorrect circumstances (e.g., injection of the wrong drug), and lapses, in which an expert misses a step in a well-learned and otherwise correctly executed skill sequence because of momentary interruption from the environment or concurrent tasks (e.g., a busy clinician failing to record the administration of a drug) [29, 95, 97]. Both kinds of errors occur because the expert is able to perform skill-based behaviors largely unconsciously. Therefore, unlike performance of the rule-based type, greater expertise does not reduce the chance of error in skill-based performance. It is little appreciated that experts, in fact, can be expected to commit more slips and lapses than novices simply because they have a larger skill base at their disposal [58].


Technical Errors


A further kind of skill-based error common in health care has been described by Runciman and colleagues as the technical error [98]. A technical error can occur when the correct rule is employed, when no slip or lapse occurs, but where the desired outcome is not achieved because of a mismatch between the required technical skill and the applied technical skill. In the placement of an epidural catheter, for example, the tip may be inserted too far, resulting in the complication of dural tap, or it may not enter the epidural space at all, resulting in no anesthetic effect. The primary factor contributing to such technical errors is variability of patients and of physicians. During the insertion of an epidural catheter, the physiology of some patients may make insertion more difficult than others and some physicians are more skilled than others. Physicians also have good and bad days. If the difficulty of a particular patient is beyond the skill of a particular physician on the particular day, a dural tap or failed insertion may occur. Whether this is an error or not is a normative matter. If the epidural was one that a reasonable practitioner could usually have achieved, then, arguably it was a technical error. However, some tasks in medicine, including some epidural insertions, are technically impossible for the vast majority of practitioners with contemporary equipment and techniques. It seems unreasonable to refer to failure in these circumstances as error. Error should not be judged primarily by the outcome but by the process involved in its commission. Many anesthesiologists will know the feeling, in realizing that they have performed a dural tap (in this example), that they somehow just got it wrong—that they made a technical error. Golf provides a good illustration of this idea. No golfer living today would classify failing to get a hole in one from a distance of 150 m as a technical error, but many would readily relate to the idea that an uncharacteristic slice into the rough of a drive from the tee was a technical error.

The challenge of patient variability should not be underestimated. Unlike many high-technology endeavors where a great deal of standardization is possible, health care clearly must contend with the subtle physical variations and abnormal anatomies that exist in individuals—differences that are often unknown and unknowable before the procedure has begun. This is quite a different situation than with a manufactured artifact, such as an aircraft, where its exact structure and function can be known and where these details are documented. As Atul Gawande has put it, “a study of forty-one thousand trauma patients in the state of Pennsylvania—just trauma patients—found that they had 1,224 different injury-related diagnoses in 32,261 unique combinations. That’s like having 32,261 kinds of airplane to land” [99]. Furthermore, unlike aircraft, none of these 32,261 unique trauma cases came with a manual.


Exhortation and Protocols


Despite these complexities, typically little training or education on the psychology of error or the nature of human behavior is provided during a health-care career. Efforts aimed at reducing error in health care often involve exhortation to be more careful at worst or the creation of new safety procedures and protocols at best [59, 100, 101]. Both these approaches to error reduction focus on the individual clinician and so are consistent with the person-centered approach. This view holds that all error is due to forgetfulness, inattention, poor motivation, carelessness, negligence, and recklessness [102]—paying more attention or following often lengthy safety protocols is therefore expected to stop error. Exhortation alone to be more careful, particularly with respect to skill-based performance, is equivalent to asking clinicians to perform all their duties with the conscious control mode. However, fully conscious control of routine behavior is a human performance mode that is not sustainable for anything more than very short periods, especially when individuals are required to possess a skill base related to the tasks they are being asked to perform. In Fig. 30.2 this imaginary zone in human performance is indicated with a question mark.


The Effects of Fatigue


Physical and mental fatigue increase with sleep deprivation, and increased fatigue leads to increased likelihood of the occurrence of the error types previously mentioned [103, 104]. Humans also experience a normal circadian cycle in sleepiness through the 24-h day, increasing in late afternoon (from 2 PM to 6 PM) and early morning (from 2 AM to 6 AM), periods during which performance can be impaired [105, 106]. For example, the circadian nadir of human performance has been implicated in a number of notorious industrial accidents such as the Bhopal chemical plant accident in 1984 that killed 3,787 people; the Chernobyl nuclear reactor accident in 1986, which it has been estimated may eventually kill 27,000 people internationally through cancer; and the Three Mile Island nuclear reactor accident in 1979 (discussed in more detail later) [70].

Much of the research into the effects of fatigue involves test tasks, notably the psychomotor vigilance task (PVT), administered over short periods in a quiet room with no distractions—conditions that have little in common with the work of an anesthesiologist. Furthermore, increased mental effort and the effects of adrenaline may counter the effects of fatigue, at least temporarily, and so some doubt remains over whether the risk of error during anesthesia is necessarily increased by moderate degrees of sleep deprivation [104, 107, 108]. Evidence that fatigue impairs surgical performance is also less than clear [109]. On the other hand, some participants in a simulation-based study of anesthesia residents fell asleep for brief periods [110], and 48.8 % of respondents to a survey of Certified Registered Nurse Anesthetists had witnessed a colleague asleep during a case [111]—events that seem hard to defend. Other studies in health care have demonstrated increased risk of significant medical errors, adverse events, and attentional failures associated with fatigue [108, 112–114]. For example, on the basis of 5,888 h of direct observation, interns working traditional schedules involving multiple extended-duration shifts (≥24 h) per month have been found to make 20.8 % more serious medication errors and 5.6 times more serious diagnostic errors than when working without extended-duration shifts [115]. It is also relevant that Dawson has shown that shifts of 16 h or more are associated with reductions in performance equivalent to the effects of alcohol intoxication as legally defined [116]. However, the causes of human fatigue are not confined to the workplace, and it is also unclear that all recommended fatigue countermeasures are effective in improving patient care. For example, reducing the work hours of residents has resulted in more handovers of care, and these in themselves are a known source of patient risk due to communication failure [117, 118]. Attempts to reduce working hours for clinicians have been made in various countries, but, in many, current hours worked remain higher than in other safety-critical industries such as the aviation industry [114, 119]. Furthermore, limitations to residents’ hours of work are more common than limitations to the hours that senior doctors may from time to time be asked to work [120]. In general, though, some reasonable limits on work hours are appropriate. Strategic napping may also be effective in bringing relief from fatigue [106, 119], and facilities should be provided to allow this.


Human Factors and the Culture of Safety


In recent years there has been growing interest in the adoption of the “safety culture” of the aviation industry in anesthesia, and the analogy of the anesthetist as the “pilot” of his or her patient has become well known [121, 122]. The aviation industry in the United States began adopting systematic approaches to improving safety in the 1920s when the first laws were passed to require that aircraft be examined, pilots licensed, and accidents properly investigated. The first safety rules and navigation aids were then introduced. The first aviation checklist was introduced following the crash of the Boeing Model 299 in 1935, killing two of the five flight crew, including the pilot, Major Ployer Hill [99, 123]. The Model 299 was a new, more complex aircraft than previous models, and during the more involved process of flight preparation, Major Hill omitted a critical step—he forgot to release a catch, which on the ground locked the aircraft’s control flaps. Once in the air this mistake rendered the aircraft uncontrollable. The crash investigators realized that there was probably no one better qualified to fly the aircraft than Major Hill and that despite this the fatal error was still made. Some initially believed that the new aircraft was too complicated to be flyable. Given the circumstances of the accident, the investigators realized that further training would not be an effective response to prevent such an event from occurring again. Thus, the idea of a checklist emerged: a simple reminder list of critical steps that had to occur before the aircraft could leave the ground. With this checklist in use, the Model 299 (and later versions of it) remained in safe operation for many years.

A teamwork improvement system called Crew Resource Management (CRM), primarily focused on nontechnical skills such as communication in the cockpit, followed checklists in aviation in the early 1980s [75, 123]. Aviation checklists have subsequently been applied to many other routine and emergency aspects of aircraft operation and are today organized hierarchically in a binder such that in an uneventful flight only the topmost checklist is required. However, if operating conditions deviate from the routine, the checklist hierarchy forms a decision tree through which additional relevant checklists are brought to bear on each abnormal set of conditions, for example, managing an engine fire [99, 124]. In this way checklists coordinate the actions of those in the cockpit with each other and with members of the wider microsystem of aircraft operation, including members of the cabin crew, aircraft traffic control personnel, and through traffic control, other aircraft. It should be emphasized, however, that checklists do not substitute for training and expertise; they are simply a form of aide-memoire to assist in making training and expertise more effective. The ongoing training of pilots is itself a model for safety improvement that health care is only now beginning to adopt.
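A hierarchical checklist of this kind can be represented as a simple decision tree in which abnormal conditions branch into additional lists. The sketch below is a generic illustration only; its items are invented placeholders rather than real aviation or sedation checklists.

```python
# Illustrative sketch only: a hierarchical checklist in which routine conditions
# use the top-level list and abnormal conditions branch into sub-checklists.
# All items and condition names below are invented placeholders.

from dataclasses import dataclass, field

@dataclass
class Checklist:
    name: str
    items: list[str]
    branches: dict[str, "Checklist"] = field(default_factory=dict)  # condition -> sub-checklist

def run(checklist: Checklist, conditions: set[str]) -> None:
    """Print the top-level items, then any sub-checklists triggered by active conditions."""
    print(f"-- {checklist.name} --")
    for item in checklist.items:
        print(f"  [ ] {item}")
    for condition, sub in checklist.branches.items():
        if condition in conditions:
            run(sub, conditions)

routine = Checklist(
    "Pre-procedure (routine)",
    ["Confirm patient identity", "Confirm weight recorded", "Confirm monitoring attached"],
    branches={
        "difficult airway anticipated": Checklist(
            "Difficult airway branch",
            ["Confirm airway adjuncts available", "Confirm senior help identified"],
        )
    },
)

run(routine, conditions={"difficult airway anticipated"})
```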

Today, much technical and nontechnical flight training occurs in sophisticated immersive flight simulators. The result of this on-going program of training in human factors relevant to flying is an enviable safety record for the aviation industry. Commercial air travel is now by far the safest form of transportation by distance—resulting in only 0.05 deaths per billion kilometers traveled, compared with 3.1 and 108 deaths per billion kilometers traveled by car and motorcycle transportation, respectively [123]. It is worth noting that even the latter risks are much lower than that of anesthesia. This can be seen if the risk of death attributable to anesthesia is assumed to be 1 in 200,000 cases (and we believe this to be an optimistic estimate) [125, 126], and both this and the rates for road transportation are converted to a time basis. People are generally much more likely to die in a road accident than during an anesthetic, but that is because of the relative exposures to these risks, rather than to the rates of risk themselves.
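As a rough illustration of the time-basis conversion described above, the sketch below combines the rates quoted in the text with two assumed values (an average driving speed and an average case duration) chosen only to show the arithmetic.

```python
# Illustrative sketch only: converting the quoted per-distance and per-case
# risks to a common time basis. The average driving speed and average case
# duration are assumptions made purely to show the arithmetic.

car_deaths_per_km = 3.1 / 1e9               # 3.1 deaths per billion km (quoted above)
assumed_car_speed_km_per_h = 60             # assumption
car_deaths_per_hour = car_deaths_per_km * assumed_car_speed_km_per_h

anesthesia_deaths_per_case = 1 / 200_000    # assumed estimate quoted above
assumed_case_duration_h = 2                 # assumption
anesthesia_deaths_per_hour = anesthesia_deaths_per_case / assumed_case_duration_h

print(f"Car travel: {car_deaths_per_hour:.2e} deaths per hour of exposure")
print(f"Anesthesia: {anesthesia_deaths_per_hour:.2e} deaths per hour of exposure")
print(f"Ratio (anesthesia / car): {anesthesia_deaths_per_hour / car_deaths_per_hour:.0f}x")
```

With these assumptions, the risk per hour under anesthesia works out roughly an order of magnitude higher than the risk per hour of car travel, which is the sense in which road risks are described above as lower than those of anesthesia.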


Simulation and Safety


Modern manikin-based simulators were first introduced in health care in the 1960s and have since been used primarily for technical skills training such as airway management and life support. In the 1980s, more immersive simulation environments incorporating such manikins were developed and training began to include crisis management during rare events and the safety of care [127]. A version of CRM for anesthesia was first promoted in the early 1990s, but nontechnical skills training for complete clinical teams, including surgical staff, is (surprisingly) a recent innovation [128, 129]. The slower uptake of simulation in health care probably reflects the greater technical challenge of simulating the human body and its various responses to health-care interventions. Considerable realism can be achieved today [130, 131], but a key deficit in anesthesia simulation lies in the fact that the simulators require an operator. Although some of the physiologic models are impressive on their own, there is a long way to go before a simulator will automatically respond to the interventions of anesthesia in the way a healthy patient does, let alone the way patients with various pathologies might do. Again, this reflects the fact that anesthesia, involving human patients, is much more complex than aviation, in which pilots expect to work with standardized and fully functional aircraft. Certainly weather varies, but if safety is in doubt, flights are deferred. With many acute patients, the avoidance of risky conditions is not possible. Furthermore, although there is emerging evidence of the transfer of learning in clinical simulators to the real world, much work needs to be done to assess the validity of many aspects of health-care simulation [131, 132]. While flight simulators have for many years been sufficiently immersive and realistic that a pilot trained entirely in the simulator can step into a real aircraft and fly it without further training, it will be many years before simulation in health care reaches this level of sophistication.


Teamwork and Communication


An additional challenge for modern health care is that its multi-professional nature hinders the changing of work culture and increases the risk of poor teamwork and communication failure [133–135]. Communication strategies used by hospital personnel have not kept pace with the increasing complexity of care and have changed little, if any, in decades. A clinical team is often composed of a disparate set of individuals from different schools of training with different skill sets and world views who must somehow work together to bring about a successful outcome for a unique patient with a unique presentation—and this is likely to be particularly the case during sedation outside the operating room. As a consequence, observational research in health care demonstrates that failures in teamwork and communication are relatively common, particularly when handing over patient care from one health-care team to another and when a patient is receiving multidisciplinary care involving a number of professional groups simultaneously [133, 136, 137]. Furthermore, the communication that does occur during multidisciplinary care often happens in silos, that is, within a professional group rather than between groups. Professional silos manifest an unwillingness to speak up to challenge others, a lack of engagement in team decision making, and poor agreement on shared goals [133, 138]. Poor communication of this sort has been associated with compromised patient safety, increased rates of procedural errors, patient harm, significant additional costs, and workplace dissatisfaction [56, 139]. However, team processes can be improved. A recent systematic review of 28 qualifying papers reports on team processes such as communication, coordination, leadership, and nontechnical skills; from 66 comparisons of a team process variable with a performance variable, 40 (61 %) were found to be significantly related [140]. Of the 11 studies reporting team process interventions, 7 (66 %) showed significant improvements after the intervention.

Salas et al. [141] have proposed a model for teamwork based on empirical evidence from teams across diverse organizations that is informative in efforts to improve teamwork in pediatric sedation. Five dimensions of effective teamwork are described: team orientation, team leadership, mutual performance monitoring, backup behavior, and adaptability. These dimensions are underpinned by three coordinating factors: mutual trust, closed loop communication, and shared mental models within the team.

Team orientation is probably the most important factor. Mutual trust and shared mental models are unlikely to occur if the people providing sedation for diverse procedures in children, and the different proceduralists with whom they are working, do not even identify as a team. Lack of team orientation is a substantial barrier to improvement, and there would be great value in the simple step of getting all relevant practitioners together and obtaining agreement that the care for pediatric patients undergoing sedation actually warrants the formation of an explicit team that works together to standardize and improve their equipment and processes [142, 143].

Leadership is interesting in this context. In the clinical setting, leadership will need to be dynamic depending on the issue in question and the training and experience of the practitioners involved. If present, an anesthesiologist would be expected to lead the management of a crisis that developed during a procedure, for example, but decisions about aspects of the procedure itself are more likely to be initiated by the proceduralist. An agreed approach is required to ensure that the best decisions are made and this requires discussion and consensus building away from the demands of managing patients. This raises the important question of the overall leadership of the team. There is obviously a need for regular meetings of the team members to discuss approaches, set expectations, agree on needed equipment, and adopt guidelines, among many other important aspects of practice. There is no particular reason for such a leader to be an anesthesiologist, a surgeon, or a member of any other particular group—the role here is really one of coordination and consensus building.

An effective way to build teamwork is to provide training for the whole team in communication and other nontechnical skills. As previously discussed, simulation provides a powerful tool for doing this. Briefing sessions of the whole team at the beginning of every clinical session are very helpful to plan the day and to ensure that mental models are indeed shared in respect of anticipated problems and the plans for dealing with them. Not only do such sessions improve safety, they also greatly improve the flow and efficiency of the day. Debriefing at the end of each session is also valuable. This can be very brief and should focus on what went well and what opportunities for improvement were noticed.

If patients are regularly transferred at the end of procedures to postanesthetic care rooms, high dependency rooms, or even wards, attention should be paid to standardization of the process of handover or handoff. The work of de Leval and his group has resulted in important gains in safety and efficiency when taking patients from the operating room to the intensive care unit [144]. Similar gains are likely in the context of pediatric sedation.

Some team process improvements may be enhanced by the adoption of process tools. The World Health Organization (WHO) Safe Surgery Checklist was specifically designed to promote better communication and enhance teamwork. Some of the benefits that have been demonstrated with its use were found in categories not specifically targeted by checklist items [17]. The authors of the checklist have speculated that these additional benefits may be due to the more global effects of better team communication engendered by the act of carrying out the steps of the checklist itself, including individual team members introducing themselves by name [99]. This has two advantages. It promotes directed communication in which people are addressed by name. It also activates people; once a person has spoken, he or she is more likely to speak again. This increases the likelihood of speaking up if an error is noticed.


The Nature of System Failures


The complexity and design of systems are also significant contributors to human error. Complexity theory asserts that some systems behave in ways that are inexplicable on the basis of only a knowledge of the systems’ individual components—that is, the behavior of the whole depends on more than a knowledge of its parts [57, 145]. Typical examples of such complex systems are living organisms, stock markets, and the weather. Socio-technological systems contain human operators or workers as vital components in their everyday function and are thus distinguished from purely technological systems that are capable of essentially automatic operation [3, 83, 146, 147]. Specific work environments, clinical microsystems, or large-scale technological systems can be understood as complex socio-technological systems in this sense. Despite this, health care remains one of the last industries to adopt the kind of systematic approach to safety that has proved successful in many other high technologies [66, 69, 121, 148–150].


Characteristics of Safe and Unsafe Systems


In Charles Perrow’s Normal Accidents Theory, a “normal accident” is one that occurs in a complex system through the unanticipated interaction of multiple failures. The complexity of the system both predisposes to the occurrence of simultaneous multiple failures and masks the many potential ways in which such individual failures may interact in a dangerous way [66]. Perrow also suggests that the function of any system can be classified along two dimensions: interaction and coupling. A task or process can be said to have complex interaction between parts if there are many alternative subtasks at any point in its completion, or linear interaction if it comprises a set of fixed steps carried out in rigid sequence. The coupling dimension describes the extent to which an action in the task or process is related to its consequences. A system is tightly coupled if consequences occur immediately after an action. Hence, tightly coupled systems result in more accidents because minor mistakes, slips, or lapses can become serious accidents before they can be corrected. A loosely coupled system is more forgiving of error and allows greater opportunity for an error to be corrected in time to avoid serious consequences [151]. These two dimensions form Perrow’s interaction/coupling space with which human activities can be classified [66].

For example, baggage handling by airlines is a relatively safe organizational activity because it is both loosely coupled and has linear interaction between parts (bottom left quadrant of Fig. 30.3). That is, a bag tends to progress through a fixed number of independent steps on the way to being delivered to its owner, and there are many opportunities to correct mistakes in the process. Furthermore, the consequences of failure are typically irritating and correctable rather than catastrophic. At the opposite side of the interaction/coupling space, a nuclear power plant by comparison is potentially dangerous because it has both complex interaction and tight coupling between parts or subsystems (top right quadrant in Fig. 30.3). Errors in the operation of a nuclear power plant may very quickly lead to dangerous outcomes. In addition, complex interaction makes the system inherently more difficult to control because such complexity increases the chance that unanticipated system interactions may cause the system to spontaneously depart from the desired path of operation. While it is widely understood that nuclear power plants are complex and tightly coupled, it is less well appreciated that health-care systems also fall into the most dangerous quadrant of the interaction/coupling space (the upper right-hand quadrant) and have similar characteristics [152, 153]. In fact, health care is probably more challenging than nuclear power plants, because it combines tightly coupled elements with loosely coupled elements and varies from simple through complicated to complex and indeed chaotic (or dynamical) [99]. On the other hand, the potential for truly catastrophic consequences on a grand scale is larger with nuclear power plants. Human beings are complex (physiological and psychological) systems and so appear on the extreme high end of the complexity dimension. A normal awake patient would fall on the loose side of the midline of the coupling dimension because of the homeostatic and self-regulating subsystems of the body. However, a human being undergoing anesthesia or sedation is a decidedly more tightly coupled system than a fully awake individual, necessitating close monitoring and an array of techniques to maintain the patient’s safety. Consequently a sedated patient migrates to a location within the interaction/coupling space significantly closer to the most potentially dangerous top corner (Fig. 30.3)—a zone in closer proximity to a nuclear plant than an aircraft.