Cardiorespiratory monitoring is an integral part of clinical care in the neonatal intensive care unit (NICU) setting. Preterm infants have exceptionally unstable respiratory control, with a high incidence of apnea, bradycardia, and desaturation events during the first few months of life22,72 because of both immaturity of the central nervous system and susceptibility to disease and infection. A high percentage of infants in the NICU setting require respiratory support, including supplemental oxygen, continuous positive airway pressure (CPAP), and mechanical ventilation. Recent trials and changes in clinical practice have focused on decreasing the duration and extent of respiratory support to minimize the incidence of chronic lung disease.91 These aggressive weaning protocols require the presence of stable spontaneous ventilation and oxygenation to be successful. During this period, ideal respiratory monitoring should have the capability of detecting both central and obstructive apnea, with accurate continuous measures of oxygenation to minimize both hyperoxic and hypoxic exposure. Regardless of monitor settings, nursing records continue to underestimate the true incidence of cardiorespiratory events.101 A further reduction in recorded events is anticipated as response times lengthen owing to increasing patient-to-nurse ratios and the growing number of single-patient rooms. Therefore future state-of-the-art bedside monitoring should include accurate identification of apnea, bradycardia, and intermittent hypoxemia events, with electronic storage of high-resolution, long-term, minimally processed raw waveforms and overall summary variables for retrospective review by the clinical care provider. This chapter includes a brief history, the current modes, and the future directions of hemodynamic, blood gas, and respiratory monitoring used in the NICU setting.
The first electrocardiogram in humans was recorded in approximately 1870 by Muirhead using a siphon recorder, followed by Waller in 1887 using a capillary electrometer. In 1903, Einthoven developed an improved model known as the string galvanometer, producing precise and reproducible electrocardiogram (ECG) recordings. He assigned the letters P, Q, R, S, and T to the deflections of the ECG waveform and was awarded the 1924 Nobel Prize in Physiology or Medicine “for his discovery of the mechanism of the electrocardiogram.”64 The use of these recordings is commonplace in any hospital setting, with extracted parameters ranging from basic heart rate to more extensive analyses of ECG arrhythmias. More sophisticated pattern recognition algorithms of heart rate variability, such as spectral and Poincaré analyses, were examined in the 1970s; however, these tools have yet to be implemented into clinical practice. During each heartbeat, an electrical impulse originates in the sinoatrial node, is propagated through the atrial musculature to the atrioventricular node, and then disperses throughout the ventricles. In essence, the heart can be viewed as a dipole, because excited myocardium is negatively charged with respect to myocardium at rest. The small alterations in voltage generated by the heart can be measured at the body surface with electrodes placed on the chest. The most common type of electrode used in the NICU is the silver-silver chloride, foil-based, recessed or floating electrode. Each electrode contains a highly conductive electrolyte gel with a composition that varies by manufacturer. Electrode placement is important in acquiring a signal of adequate resolution, especially in neonates with a limited surface area. Baird et al.5 in 1992 found the optimal placement to be the right mid-clavicle and the xiphoid (Figure 39-1).
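The heart rate variability analyses mentioned above operate on the sequence of beat-to-beat R-R intervals extracted from the ECG. The following sketch uses standard time-domain and Poincaré-plot descriptors (SDNN, RMSSD, SD1, SD2); the interval series is hypothetical and not drawn from this chapter:

```python
import math

def hrv_summary(rr_ms):
    """Basic heart rate variability statistics from R-R intervals (milliseconds)."""
    n = len(rr_ms)
    mean_rr = sum(rr_ms) / n
    # SDNN: standard deviation of all R-R intervals (overall variability)
    sdnn = math.sqrt(sum((x - mean_rr) ** 2 for x in rr_ms) / (n - 1))
    # Successive differences feed RMSSD and the Poincare descriptors
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    # Poincare plot of (RR_n, RR_n+1): SD1 captures short-term variability,
    # SD2 long-term; both are derived from SDNN and RMSSD (approximate identity)
    sd1 = math.sqrt(rmssd ** 2 / 2)
    sd2 = math.sqrt(max(2 * sdnn ** 2 - rmssd ** 2 / 2, 0.0))
    return {"mean_rr": mean_rr, "sdnn": sdnn, "rmssd": rmssd, "sd1": sd1, "sd2": sd2}

# Hypothetical slowly drifting neonatal R-R series (~150 beats/min)
rr = [400, 402, 405, 408, 410, 408, 405, 402]
stats = hrv_summary(rr)
```

A series dominated by beat-to-beat alternation would instead show SD1 exceeding SD2, which is one reason these descriptors are studied as markers of autonomic maturity.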
Repeated probe application should be avoided because it can compromise ECG signal integrity owing to poor adhesiveness of the electrodes and increase the chance of skin damage, including high transepidermal water losses,15 in extremely preterm infants with fragile skin. The ability of the electrodes to detect small electrical changes with each heartbeat requires the application of a current to the chest wall. The recommendations of acceptable current limits by the American Heart Association (AHA) cover two aspects of electrical safety. The first entails the amount of current allowable in a patient-connected lead that can flow through the myocardium without inducing ventricular fibrillation. The second aspect pertains to the allowable chassis leakage current that flows through the patient to ground. The AHA recommends that currents be limited to 10 µA through patient leads and less than 100 µA, with an optimal level of 10 µA, for chassis leakage current.52 The initial goal of simple heart rate and basic rhythm determination has expanded to identification of specific arrhythmias, such as prolonged QT interval, and algorithms of heart rate variability, such as spectral and wavelet analyses and Poincaré plots, among others. Alterations in heart rate variability and prolongation of the QT interval have been shown to be associated with prematurity100 and infant morbidity, including sudden infant death.32 Although these sophisticated analyses have been limited to the research arena, the ability to detect such alterations in an automated fashion may play a role in future clinical care. The first attempt to use sound from an external source for medical diagnostics was recorded in 1761 by Leopold Auenbrugger, who used percussion as a diagnostic tool in heart disease. Almost two centuries later, the implementation of echocardiography began in 1953 when Inge Edler and Hellmut Hertz met to discuss the use of ultrasound for heart investigation.
Initial setbacks included the inability to produce frequencies high enough for measuring the very short distances involved. After borrowing the first ultrasonic reflectoscope, designed for nondestructive material testing, from a local shipyard, Hertz was able to observe pulsatile echo signals. Later that year, the first echocardiograms were recorded, followed by implementation as a routine clinical diagnostic tool in 1954.26 Application of the Fick principle is considered the gold standard of cardiac output monitoring in a research setting. This method states that cardiac output can be calculated by dividing the pulmonary oxygen uptake by the arteriovenous oxygen concentration difference. For carbon dioxide, the pulmonary carbon dioxide exchange is divided by the venoarterial CO2 concentration difference. The modified CO2 Fick method can be used in neonates but requires frequent blood sampling; because of this limitation, there are currently no studies validating these methods in neonates.20 Doppler-based cardiac output measurements can vary widely and should be limited to trend monitoring.20 During transthoracic echocardiography (TTE), measures of left ventricular output, right ventricular output, or superior vena cava flow can be obtained. Validation data with TTE have been limited to transthoracic left ventricular output measures, which have been shown to be comparable with pulmonary artery thermodilution and O2 Fick methods.20 A variation of thoracic electrical impedance, electrical velocimetry, includes surface ECG electrodes placed on the forehead, left side of the neck, left mid-axillary line at the level of the xiphoid process, and left thigh. During application of a small alternating electrical current through the thorax, changes in voltage are measured during periods of systole and diastole.
Stroke volume (SV) is then determined using the following equation: SV = Vept × νLVET × LVET, where Vept (mL) is the volume of electrically participating tissue derived from body mass and height, νLVET (s−1) is the ohmic equivalent of mean aortic blood velocity during left ventricular ejection, and LVET (s) is the left ventricular ejection time. Recent data have shown electrical velocimetry to be a comparable mode of measuring left ventricular output in neonates when compared with echocardiography,65 although variation among individuals was seen with both techniques. Measurements of circulatory pressure were documented in the eighteenth century by Stephen Hales.10 In 1828, Poiseuille won the gold medal of the Royal Academy of Medicine for his doctoral dissertation pertaining to the use of a mercury manometer for the measurement of arterial blood pressure. The idea of a noninvasive sphygmomanometer was recorded by Vierordt in 1855, followed by modifications by Marey and others. In 1896, Riva-Rocci reported the method upon which our present-day technique is based. The size and placement of the cuff can affect accurate measurement of blood pressure; a cuff that is too narrow or applied loosely may result in falsely high readings. The American Heart Association recommends a cuff width of approximately 40% of the limb circumference. The two modes of indirect blood pressure monitoring are the auscultatory and oscillometric methods. The auscultatory method entails rapid inflation of the cuff, followed by slow deflation while listening for distal Korotkoff sounds with a stethoscope. This method, most commonly used in adults, is limited by the inaudible frequency range of arterial sounds in neonates, intra-observer variability, and disturbance to the patient. The oscillometric method is more often used in newborn intensive care units. In this method, cuff pressure is rapidly inflated to above systolic pressure.
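The three quantities defined above combine as a simple product whose units check out (mL × s−1 × s = mL). The sketch below illustrates this with purely hypothetical values, not patient data or manufacturer constants:

```python
def stroke_volume_ml(vept_ml, nu_lvet_per_s, lvet_s):
    """Electrical velocimetry stroke volume: SV = Vept * nu_LVET * LVET.

    vept_ml        -- volume of electrically participating tissue (mL)
    nu_lvet_per_s  -- ohmic equivalent of mean aortic blood velocity (1/s)
    lvet_s         -- left ventricular ejection time (s)
    """
    return vept_ml * nu_lvet_per_s * lvet_s

# Illustrative values only
sv = stroke_volume_ml(vept_ml=50.0, nu_lvet_per_s=0.2, lvet_s=0.18)
heart_rate = 150  # beats/min
cardiac_output_ml_min = sv * heart_rate  # left ventricular output = SV x HR
```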
As the pressure is slowly released, small pulsations can be detected as the cuff approaches systolic pressure. When the cuff pressure decreases to below systolic pressure, the oscillations increase in magnitude because of blood flowing into the artery. Ultimately a maximum oscillation point will be reached, corresponding to mean arterial pressure, followed by a decline as the cuff pressure decreases to baseline (Figure 39-2). In critically ill premature infants, oscillometric blood pressure measurements have been shown to have good agreement with arterial catheter values, although accuracy is greatly diminished in infants with a mean arterial pressure less than or equal to 30 mm Hg.92 Although the use of a cuff does not allow for continuous monitoring of blood pressure, many systems have the ability to provide automated transient readings. The multitude of diseases associated with prematurity frequently necessitates oxygen therapy as a component of clinical care. Even during periods of supplemental oxygen intended to stabilize baseline oxygenation, severity of illness compounded by immature respiratory control quite often leads to respiratory instability presenting as rapid intermittent hypoxemia events.24 Therefore, blood gas measurements are useful for estimating baseline levels of oxygenation but cannot quantify the multitude of short desaturation events that commonly occur in this patient population.22 This has led to the implementation of noninvasive continuous estimates of oxygenation such as pulse oximetry and transcutaneous monitoring. The concept that oxygen saturation could be calculated from the ratios of absorption of red and infrared light by blood and tissue was first conceived in the early 1940s.84 This was followed by the use of pulsatile variations in light as a measure of arterial oxygen saturation, initiated in the early 1970s by Takuo Aoyagi.
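The maximum-oscillation principle can be sketched as a search over the deflation envelope. Identifying systolic and diastolic pressures from fixed fractions of the peak amplitude is a common manufacturer approach, but the 50%/80% ratios and all numeric values below are illustrative assumptions, not a specific device's algorithm:

```python
def oscillometric_estimate(cuff_pressures_mmhg, oscillation_amplitudes):
    """Estimate (systolic, mean, diastolic) pressure from a cuff deflation.

    MAP is taken as the cuff pressure at maximum oscillation amplitude;
    systolic and diastolic are the pressures where the amplitude reaches
    an assumed fraction of that maximum on either side of the peak.
    """
    i_max = max(range(len(oscillation_amplitudes)),
                key=oscillation_amplitudes.__getitem__)
    map_est = cuff_pressures_mmhg[i_max]
    a_max = oscillation_amplitudes[i_max]
    # Systolic: scanning from high cuff pressure down, amplitude ~50% of peak
    sys_est = next(p for p, a in zip(cuff_pressures_mmhg, oscillation_amplitudes)
                   if a >= 0.5 * a_max)
    # Diastolic: scanning from low cuff pressure up, amplitude ~80% of peak
    dia_est = next(p for p, a in reversed(list(zip(cuff_pressures_mmhg,
                                                   oscillation_amplitudes)))
                   if a >= 0.8 * a_max)
    return sys_est, map_est, dia_est

# Hypothetical deflation from 80 to 20 mm Hg with its oscillation envelope
pressures = [80, 75, 70, 65, 60, 55, 50, 45, 40, 35, 30, 25, 20]
amplitudes = [0.2, 0.4, 0.8, 1.2, 1.8, 2.4, 3.0, 2.6, 2.0, 1.4, 0.9, 0.5, 0.3]
sbp, mbp, dbp = oscillometric_estimate(pressures, amplitudes)
```

Here the peak amplitude occurs at 50 mm Hg, so that value is reported as the mean arterial pressure, consistent with the envelope in Figure 39-2.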
During an attempt to measure cardiac output using a dye dilution method—whereby photocells detect light of specified wavelengths as it passes through the blood—Aoyagi used an ear oximeter to avoid invasive arterial blood sampling. The signal transmitted from the ear oximeter exhibited pulsatile variations that precluded accurate measurement of cardiac output. Aoyagi devised a way to filter these oscillations by subtracting out a pulse signal detected at 900 nm, corresponding to the infrared range of the light spectrum, but found that he was only intermittently successful. In retrospect, this was most likely caused by changes in oxygen saturation, because oxygen desaturation increases infrared light transmission while decreasing red light transmission. The failure to consistently filter out the pulsatile variations, or “noise” components, of the dye curves led to the idea of measuring these dynamic changes in light transmission to compute a noninvasive estimate of arterial oxygen saturation. The first commercially available ear oximeter was marketed in 1975 by Aoyagi and Nihon Kohden, followed in 1977 by the Minolta Camera Company’s development of a fingertip pulse oximeter probe. Initial interest and use were limited to pulmonary function laboratories until Jack Lloyd, founder of Nellcor Incorporated, recognized its potential as a noninvasive technology for measuring oxygenation in unstable or severely ill patients.84 The principle of pulse oximetry is based on the Beer-Lambert law, which states that the concentration of an absorbing substance in solution can be determined from the intensity of the light transmitted through the solution.87 Applying this concept, pulse oximetry relies on the light absorption characteristics of deoxygenated and oxygenated hemoglobin in the red (600-750 nm wavelength) and infrared (850-1000 nm wavelength) light spectrum ranges, respectively (Figure 39-3).
The oximeter probe comprises two light-emitting diodes (LEDs), each emitting light of a specified wavelength (660 and 940 nm) through the capillary bed of the infant’s extremity. A photodiode detector on the opposing side of the extremity measures the intensity of the light passing through at each wavelength, which reflects the amount absorbed by tissue and by venous and arterial blood. Oxygen saturation values can be extrapolated from this measurement by exploiting the relatively small arterial pulsatile changes, also known as the plethysmogram waveform, with each heartbeat. This is accomplished by measuring the ratio of the transmitted light owing to the arterial pulsatile component (pulsatile) to the transmitted light owing to the constant component of the signal (constant); that is, tissue and baseline blood in tissue (Figure 39-4). This ratio is calculated separately for both the red and infrared waveform signals. The ratio of the red (pulsatile/constant component at 660 nm) to the infrared signal (pulsatile/constant component at 940 nm) can then be converted to a measure of oxygen saturation. The advantages of pulse oximetry are ease of use, fast response time, and continuous measurement of oxygen saturation. The probe requires no heating or calibration by the user and is routinely placed on the palm of the hand or sole of the foot. In sick infants with intravenous lines or heparin locks precluding access to these extremities, recent data have suggested the wrist or ankle as an adequate alternative site.68 The rapid response time and continuous measurement capabilities make oximetry the ideal modality for detection of intermittent hypoxemia, which often occurs in preterm infants.22 Accuracy of pulse oximetry is dependent on multiple factors, including range of oxygenation, probe position, motion and ambient light interference, low perfusion, skin pigmentation, variations in hemoglobin, and calibration algorithms.
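The red/infrared "ratio of ratios" calculation described above can be sketched as follows. The linear mapping Spo2 ≈ 110 − 25R is a commonly cited textbook approximation for illustration only; commercial devices use empirically derived calibration curves:

```python
def spo2_from_ratio(ac_red, dc_red, ac_ir, dc_ir):
    """Ratio-of-ratios pulse oximetry estimate.

    AC terms: arterial pulsatile component at each wavelength.
    DC terms: constant component (tissue and baseline blood).
    """
    r = (ac_red / dc_red) / (ac_ir / dc_ir)
    # Illustrative linear approximation; real oximeters use calibration tables
    spo2 = 110.0 - 25.0 * r
    return max(0.0, min(100.0, spo2))

# When the red and infrared pulsatile fractions are equal, R = 1 and the
# estimate is 85% -- consistent with the ambient-light failure mode in which
# displayed saturation trends toward 85%
estimate = spo2_from_ratio(ac_red=0.02, dc_red=1.0, ac_ir=0.02, dc_ir=1.0)
```

A smaller red pulsatile fraction relative to infrared (R < 1) yields a higher saturation estimate, matching the absorption characteristics of oxygenated hemoglobin.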
In general, pulse oximetry has been shown to provide reliable estimates of oxygen saturation during periods of normoxia, but accuracy deteriorates as hypoxemia worsens (i.e., <70%). Improper probe placement and ambient light interference can result in either falsely high or falsely low values of Spo2.71,96 Under conditions of excessive ambient light interference, the Spo2 will either trend toward a value of 85%—the Spo2 value at which the ratio of red to infrared light absorption equals 1—or fail completely, displaying a value of 0. Displayed values of 0 can also occur because of motion artifact, a common occurrence with early model pulse oximeters. Therefore, proper probe placement should include direct opposition of the emitter and detector to minimize an optical shunt, and covering of the extremity to reduce ambient light interference. Improved technology has implemented various filtering algorithms to minimize loss of signal and accompanying nuisance alarms,53 the most successful being developed by Masimo Corporation (Irvine, CA).41,79 The procedure begins with detection of all optical density ratios that correspond to oxygen saturations of 1% to 100% and computation of the reference signal for each optical density ratio (Figure 39-5). The output power of the adaptive noise canceler is measured for each reference signal, followed by identification of the appropriate peak in the Discrete Saturation Transform algorithm that corresponds to the largest Spo2 value. The saturation algorithm is independent of recognition of a clean pulse, giving it a distinct advantage over pulse oximetry systems using that criterion as a prerequisite for calculation of arterial oxygen saturation.
Additional factors affecting Spo2 accuracy include dark skin pigmentation and low perfusion (i.e., periods of hypothermia, low cardiac output, or vasoconstriction), which may result in delayed waveform recognition and underestimation of oxygen saturation levels.71,96 Various types of hemoglobin can contribute to deviations in both the displayed value of Spo2 and clinical interpretation of oxygen content delivery. Differences between pulse oximeter display values can be attributed to whether the instrument is displaying functional Spo2 or fractional Spo2,94 where functional Spo2 = HbO2/(HbO2 + Hb) × 100 and fractional Spo2 = HbO2/(HbO2 + Hb + COHb + MetHb) × 100. These values typically differ by 2%, the value equivalent to the levels of COHb, MetHb, and other dysfunctional hemoglobins in healthy, nonsmoking adults. In the presence of fetal hemoglobin, because of its high affinity for oxygen, a normally clinically acceptable level of Spo2 may not translate to adequate oxygen delivery to the tissue; this effect is noted by a left shift in the oxygen dissociation curve.86 Blood transfusions during the first week of life will reduce the proportion of fetal hemoglobin and should also be taken into consideration when oxygen therapy is being regulated.21 Finally, calibration procedures, including motion artifact algorithms, software versions, and mathematical extrapolations at low ranges of Spo2, can vary by manufacturer and affect the displayed Spo2 value. Recent multicenter trials randomizing infants to lower oxygen saturation targets to decrease the incidence of retinopathy of prematurity (ROP) revealed a calibration artifact in the Masimo SET Radical that artificially reduced the frequency of displayed saturations of 87% to 90% and shifted readings toward higher oxygen saturation values.49,90 Revised software implemented in 2009 corrected this bias.
Encompassing all of the factors that can affect accuracy, it is not surprising that measures of bias and precision have been shown to vary widely among manufacturers.85 Nonetheless, numerous studies have reported on the accuracy of pulse oximeters in pediatric populations,28,41 with improvements to within plus or minus 2% for arterial oxygen saturations greater than 70%.82 An understanding of monitor settings is imperative to avoid periods of hyperoxia and hypoxia and to minimize nuisance alarms. Three pulse oximeter parameters directly affect patterns of oxygenation—alarm threshold, alarm duration, and waveform averaging time. Although there are no existing standards for alarm thresholds, because the optimal oxygen saturation target has yet to be identified,89,90 a high alarm setting of 95% for infants receiving supplemental oxygen is generally accepted to avoid hyperoxic exposure. Low alarm settings are conventionally set between 80% and 85%, with an increased interest in avoidance of both sustained hypoxia and short intermittent oscillations in oxygenation. Alarm time delays can range from 0 to 15 seconds depending on the manufacturer. A long time delay and an increased averaging time are most often used to minimize nuisance alarms. The averaging time is probably the most misunderstood parameter on the pulse oximeter display. Conceptually, a longer averaging time will minimize oscillations in the Spo2 waveform by averaging the current data point with previous Spo2 values within a specified window, most often ranging from 2 to 16 seconds. Common clinical settings trend toward the longest averaging time to minimize nuisance alarms.
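The attenuating effect of the averaging window can be illustrated with a simple trailing moving average over hypothetical Spo2 samples (one sample per second; the signal and window lengths are illustrative, not a specific monitor's implementation):

```python
def moving_average(samples, window_s):
    """Trailing moving average over a 1-Hz sample stream."""
    out = []
    for i in range(len(samples)):
        lo = max(0, i - window_s + 1)
        out.append(sum(samples[lo:i + 1]) / (i + 1 - lo))
    return out

# Hypothetical signal: baseline 95% with a brief 6-second drop to 80%
signal = [95] * 10 + [80] * 6 + [95] * 10

short_avg = moving_average(signal, 2)   # nadir reaches the true 80%
long_avg = moving_average(signal, 16)   # nadir substantially blunted
print(min(short_avg), min(long_avg))    # 80.0 89.375
```

With the 16-second window, the displayed nadir never crosses a low alarm threshold of 85%, so the event would go unreported despite a true desaturation to 80%.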
However, longer averaging times can erroneously under-report both event severity30 and the incidence of short events less than 10 seconds in duration, and falsely over-report the occurrence of prolonged desaturation events greater than 20 seconds in duration.97 Therefore the ideal monitor settings should include a short averaging time, to accurately detect the incidence and severity of true desaturation events, and a long monitor alarm delay, if needed, to minimize nuisance alarms.

Since the late 1700s, it has been known that human skin breathes, taking in oxygen and giving off carbon dioxide. In the early 1950s, initial attempts to exploit that observation with skin electrode measurements failed because the surface Po2 would rapidly fall to zero when the skin was covered. However, Baumberger and Goodfriend found that inducing vasodilatation and increased blood flow by heating the skin caused the Po2 at the skin surface to increase to a level that approximated arterial Pao2.6 In 1972, heated surface electrodes were introduced by two German groups,25,45 and by the late 1970s, transcutaneous monitoring had become widely accepted in neonatal clinical care.83
Biomedical Engineering Aspects of Neonatal Cardiorespiratory Monitoring
Hemodynamic Monitoring
Electrocardiogram
History
Principle of Operation
Cardiac Output
History
Principle of Operation
Blood Pressure
History
Principle of Operation
Blood Gas Monitoring
Oxygenation
Pulse Oximetry
History
Principle of Operation
Transcutaneous Monitoring of Oxygen
History