Training for intrapartum emergencies is a promising strategy to reduce preventable harm during birth; however, not all training is clinically effective. Many myths have developed around such training. These principally derive from misinformed beliefs that all training must be effective, cheap, independent of context and sustainable.
The current evidence base for effective training supports local, unit-based and multi-professional training, with appropriate mannequins, and practice-based tools to support the best care. Training programmes based on these principles are associated with improved clinical outcomes, but we need to understand how and why that is, and also why some training is associated with no improvements, or even deterioration in outcomes.
Effective training is not cheap, but it can be cost-effective. Insurers have the fiscal power to incentivise training, but they should demand the evidence of clinical effect; aspiration and proxies alone should no longer be sufficient for funding, in any resource setting.
Highlights
- Misconceptions exist that all obstetric emergency training programmes are effective.
- Effective training is not cheap, but it can be cost-effective.
- Local, multi-professional simulation training is evidence-based.
- Improvements in clinical outcomes should be the principal end point of training.
- Parallel process evaluation is useful.
Introduction
Improving maternal and perinatal care, and reducing preventable intrapartum harm in particular, is a global priority. Improved training for intrapartum care is at least part of the potential solution; however, we must ensure that training is both effective and sustainable.
More and better training for obstetric emergencies has been an almost ubiquitous recommendation in national reports identifying preventable intrapartum harm across continents, in reports of increasing litigation costs, in national guidelines and in clinical jeremiads.
These recommendations have generated a huge variety of local solutions and courses, with variable degrees of evaluation: from aspirational association with general educational principles and ‘hope’ of a positive effect, through local tests of knowledge and skill, up to clinical evaluation of outcomes. An excellent and extremely comprehensive review of the current landscape of obstetric emergency training courses and their impact has recently been published, and we will not repeat it here. In this article, we review some of the myths that have grown up around training for obstetric emergencies, so that maternity carers can engage with the most clinically useful and cost-effective training to provide the best possible outcomes for mothers and babies across the world.
Myth 1: Training must be effective
The history of evidence-based obstetric care is littered with well-intentioned, biologically plausible interventions, which when robustly investigated turn out to do more harm than good, for example, X-ray pelvimetry for previous cephalo-pelvic disproportion and high-dose vitamins C and E to prevent pre-eclampsia, amongst others. Training should be similarly evaluated.
All obstetric emergency training programmes are well intentioned, and most are based on national or international guidance, but there are now important and robust data where training was not associated with improved clinical outcome, or it was associated with an increase in perinatal morbidity: the rate of neonatal injury doubled in the decade after training was introduced in Oxford, and a recent cluster randomised trial from the Netherlands demonstrated a trebling of hypoxic neonatal injury in units allocated to training compared with control units with no training. These are alarming and counter-intuitive data, but they are extremely important.
There are also reports where training was associated with decreases in knowledge and confidence from immediately post course to 6 weeks post course for many of the emergency situations measured.
Furthermore, this is no less the case in low-resource settings, where the failure of most studies to underpin their results with adequate evidence precludes valid pronouncements on the effectiveness of the courses described.
Therefore, training is not magic, and nor is it automatically effective. Furthermore, the success of training depends on keeping mothers and babies safe, and not on achieving improvements in written test scores. We need a mature debate about the active ingredients of effective training.
A review of effective training for obstetric emergencies published in 2009 concluded that many of the courses then reviewed had common features: institution-level incentives to train, multi-professional training of all staff in their units, teamwork training integrated with clinical teaching and the use of high-fidelity simulation models.
These themes have been reiterated in two more recent reviews of obstetric training, one of which concluded that all maternity and neonatal health professionals should attend in-service training sessions. Furthermore, on-site ‘in-house’ training with low-tech, highly realistic models is more readily implementable than off-site training at simulation centres, and training integrated into institutional clinical governance and quality-improvement initiatives is likely to have better results. Finally, there must be some form of quality assessment of the training to ensure that it meets minimum standards.
Notably, multi-professional training has also been identified in similar training programmes outside obstetrics, in quality-improvement programmes and as a feature of high-reliability organisations both within and outside medicine. Therefore, the local and multi-professional elements are likely to be important, if not essential, components of effective training.
There are also subtleties within the broad recommendations of local training for all staff that should be considered, notably frequency of training and its planning.
The Simulation and Fire drill Evaluation (SaFE) study in the UK investigated knowledge and performance after training for shoulder dystocia (SD), postpartum haemorrhage (PPH) and eclampsia, and improvements were maintained for at least 12 months after training. Moreover, training programmes associated with improvements in clinical outcome have all mandated annual training. Therefore, annual training is probably a reasonable target.
Unannounced simulation in clinical settings has been proposed, with potential advantages of decreasing required resources, increasing realism and affordability, and widening multidisciplinary team participation. However, these benefits appear to be relative to training in simulation centres, and they resonate with the previously discussed local, multi-professional models rather than with the unannounced scheduling itself. Moreover, when unannounced simulation was evaluated in an obstetric setting, a significant minority of staff considered it stressful and unpleasant, midwives more frequently so. Furthermore, planning and implementation of unannounced simulations were deemed time consuming and challenging.
Training for obstetric emergencies is not always effective; however, it is well intentioned. There is a nascent evidence base for training, and we should use this to inform future training programmes. Currently, the evidence supports local, multi-professional training for all staff annually. Other models of training should be required to provide robust evidence of positive effect before they are adopted or funded by national bodies.
Myth 2: Training is knowledge transfer
Evidence about how best to improve the safety and outcomes of healthcare exists, but the challenge of implementing evidence-based practice at the point of care remains.
Many theoretical frameworks for the translation and implementation of knowledge into practice exist. These frameworks have provided useful heuristic tools to understand the complex process of implementation/translation that clinical teams often term ‘training’.
Effective training is more than knowledge transfer: clinical knowledge improvement does not always translate into changes in practice. There is an emerging body of evidence that has identified the use of tools and artefacts to act as prompts for action in the workplace, and of microsystem design to facilitate best practice for staff at the point of care: ‘make the right way, the easy way’, for example, checklists and stickers. However, stickers alone are unlikely to be useful: one common mistake of an oversimplified ‘checklist’ story is the assumption that a technical solution (checklists) can solve an adaptive (sociocultural) problem. Stickers and tools need to be implemented by and during training.
Cardiotocography (CTG) stickers that summarise national guidelines into a simple stick-on format have been successfully introduced into practice, with an associated 50% reduction in the rate of 5-minute Apgar scores of <7 and of hypoxic ischaemic encephalopathy in a UK unit. However, the sticker itself does not magically improve outcomes; all staff in the unit need to be trained annually to use the sticker, its use should be mandated for all staff whenever a CTG is reviewed, and other contrary tools and systems should be discontinued. Finally, the use of stickers should be ‘policed’ using notes audits, and the effect on outcomes such as low Apgar scores should be measured.
Stickers have been recommended in Sweden, and where they have been introduced as part of a multi-professional training programme, there have been significant reductions in the number of infants born in poor condition in both the US and Australia.
Other tools such as eclampsia boxes and maternity-specific early warning charts have also been introduced, all of which require teaching and training to use them.
Effective training is likely to be more about training teams to use tools, boxes and checklists, than direct transfer of new knowledge.
Myth 3: Training is independent of context
There is proof of principle that some training for obstetric emergencies is associated with improvements in outcome. However, there is a dearth of data on the effect of local context.
Most of the studies evaluating clinical outcomes after training have used a hospital unit as the unit of investigation, but there are more robust methods, including cluster randomised studies and stepped-wedge designs, that will help us understand the effect of training at scale. The Training Obstetrische Spoed Teams Interventie (TOSTI) study in the Netherlands is an excellent example of a robust cluster randomised design that has already informed the evidence base for training. Other robust studies are currently being conducted.
Moreover, the implementation context is important. Successfully scaling up training across a network of units is challenging, and there are many examples of failure. Even within seemingly successful models of training implementation, there are significant variations in outcome improvement at the unit level.
This is likely to be because clinicians and implementers rarely understand the social processes and mechanisms that produced the outcomes, at least partly because they are unaware of the essential requirements that make implementation successful. Therefore, mistakes may be made: when the ‘active ingredients’ are poorly understood, efforts and resources are misdirected.
The stepped-wedge study of obstetric emergency training in Scotland, the Trial of Hands-on Interprofessional Simulation Training for Local Emergencies (THISTLE) study, has a parallel process evaluation: THISTLE-Plus, which aims to identify the features of context that are necessary for programmes to work, and the strategies that can be used to create those conditions of context where they do not already exist – recognising that context and programme are often in dynamic interaction.
A group of qualitative researchers from the UK is conducting a study to develop a programme theory of one successful programme: Practical Obstetric Multi-Professional Training (‘PROMPT’). The aim of the study is to characterise the training within its original host context and to understand the active components of the PROMPT intervention, in particular: what are the components or ‘active ingredients’ of PROMPT; what are the mechanisms through which it works in contributing to improvements in culture, practice and safety; what are the contextual clinical, leadership, managerial, organisational and cultural factors that have contributed to the success of PROMPT; and what is needed to replicate programme success elsewhere and to avoid the problems of ‘cargo-cult’ implementation?
Training programmes are increasingly recognised as complex interventions, and the research agenda should move away from the current gold standard of single-unit interrupted time-series reports towards robust clinical trials, including parallel evaluations of process and context. This has been summarised recently in a clinical paper as progressing from ‘Does the training work?’ to ‘How and why does the training work?’. Clinicians often do not have the research tools or experience to investigate the ‘how and why’, and therefore could, and should, work more closely with social scientists to address these important questions.
Myth 4: Training is cheap
Effective training is not cheap. Furthermore, the costs of training are usually borne locally by the obstetric department, whereas the benefits of improved intrapartum outcomes are felt in areas of the health system outside maternity care. Therefore, a whole system approach is required to incentivise effective training using existing financial levers.
Training locally in clinical units is very likely to be cheaper than training in simulation centres; however, local training is not without cost. Although there are expenses associated with training materials, models and venues, the main cost of training is staff time: releasing staff to provide a faculty, and releasing staff to be trained. Few programmes have been costed formally, but in one UK training programme associated with improvements in outcomes, slightly more than 400 multi-professional (midwife, anaesthetist, obstetrician and healthcare assistant) staff days were required to train a large UK maternity department.
The effect of sustained training may also improve over time. In one long-term study of training for SD, there was a 70% reduction in brachial plexus injury after 4 years of training, and permanent injury was eliminated after more than a decade of training. Over 85% of staff were trained annually during a period of 12 years. Clearly, this requires a significant investment by the institution, both to organise and sustain the training and to release staff from clinical duties to attend it.
The costs of training are similar in low-resource settings, but training staff on-site eliminates travel, accommodation and hotel venue hire costs. A policy of no per-diem payments can also reduce the cost of training.
Training can, however, still be cost-effective. There are examples where litigation payments have been reduced in parallel to improvements in maternity outcomes: one UK group have identified improvements in perinatal outcomes after training that have been associated with a 91% reduction in litigation payments.
There are similar reports from the US: one group reported improvements in perinatal outcomes, and they observed that the national obstetric claims experience (claims/10,000 births) was approximately 20% higher than that seen in their system. Another group described a parallel reduction in poor intrapartum outcomes and in the number of reserved claims per birth, which decreased at a rate of approximately 20% per policy year. In a further US paper, the average annual compensation payments decreased from $27,591,610 between 2003 and 2006 to $2,550,136 between 2007 and 2009, in association with a decrease in sentinel events.
Finally, a recent study from Victoria in Australia describes improvements in neonatal outcomes after training that were associated with reductions in litigation costs. An insurer funded the training and has also calculated the reduction in litigation claims: the reduction in litigation costs was over 20 times the cost of the training, the benefits to families and society notwithstanding.
Effective training is not cheap, but it can be very cost-effective. Insurers are perfectly placed to pump-prime implementation, and to use their fiscal power to incentivise best practice and outcomes.