Health-care organisations are required to monitor and measure the quality of their maternity services, but measuring quality is complex, and no universal consensus exists on how best to do it. Clinical outcomes and process measures that matter to stakeholders should be measured, ideally in standardised sets that allow benchmarking. A holistic interpretation of quality should also reflect patient experience, ideally integrating it with outcome and process measures into a balanced suite of quality indicators. Dashboards enable reporting of trends in adverse outcomes to stakeholders, staff and patients, and they facilitate targeted quality improvement initiatives. The value of such dashboards depends upon high-quality, routinely collected data subjected to robust statistical analysis.
Moving forward, we could and should collect a standard, relevant set of quality indicators from routinely collected data and present these in a manner that facilitates ongoing quality improvement, both locally and at regional/national levels.
Highlights
- Quality care includes patient satisfaction, best outcomes and fewest interventions.
- Clinical outcomes and patient experience should be prioritised over process measures.
- A standardised set of QIs should be produced from routinely collected data.
Introduction
Purchasers, policymakers and patients are rightly demanding greater accountability for the money spent and for the quality of care provided.
This is particularly the case for maternity care, which could and should be safer. The National Health Service Litigation Authority (NHSLA) in England published 10 years of maternity claims, with >5000 claims from 2000 to 2009, expected to cost up to £3.1 billion. This represents a £600 litigation surcharge for each and every infant born in that decade, and this has risen to £700 for each baby in the current decade. Previous scandals have led to calls for health-care organisations to develop and implement robust systems to measure and monitor the quality of their maternity services, and the UK government has mandated regular reporting of health-care quality indicators (QIs).
However, the measurement of quality is difficult: quality is multifaceted, and we must ensure that measurement is broad enough to include what is important to all stakeholders, and not merely what can easily be measured.
Finally, although health services can be awash with data, there is very little guidance on how best to analyse and present information to staff and, in particular, to other stakeholders, including patients.
This article will discuss definitions of quality, quality measures and QIs, the use of maternity dashboard systems for monitoring quality and performance, and the importance of patient contributions with regard to maternal perceptions of quality of care.
How do we define quality of care?
Recent years have seen unprecedented efforts to measure health-care quality, and the methodological and pragmatic complexities of these efforts have led to major debates: which ‘dimensions’ of quality to measure; whether to focus on processes or outcomes; which outcomes to prioritise (traditional clinical outcomes or more patient-centred ones); and, perhaps most important, how to link measurement to action through policy, professional and management levers.
The Health Foundation has identified that many current systems for the measurement of quality are rather one-dimensional: ‘what we currently measure is not how safe healthcare systems are now but how harmful they have been in the past’.
This is no less a problem in maternity care, where there have been a number of calls for a comprehensive approach to the measurement of quality, one that reflects the multiple perspectives involved in maternity care, including those of staff.
Process measures
Process (e.g. caesarean section (CS) rate) and system (e.g. size of unit) measures are commonly employed in quality measurement, at least partly because they are easy to measure. There is also an implicit assumption that the hospitals that perform best on selected process measures will have the best health outcomes.
Recently, this assumption has been challenged in maternity care, and a US research group has demonstrated that although process measures may be associated with adverse outcomes, the hospitals that performed best on those measures did not have the best risk-adjusted rates of obstetric morbidity.
We are not suggesting that process measures are invalid or should not be measured and reported; process measures may provide valuable insight into a hospital service, and they can usefully be combined with clinical QIs to provide a balanced set of measures.
Clinical QIs
The use of a suite of clinical indicators or outcomes is one way to measure the quality of a clinical service. Historically, maternal mortality rate was used as the earliest measure of the quality of obstetric care. This remains a crude but important indicator, still employed today in international comparisons. However, the steep decline in maternal deaths over the last few decades in the UK, and many developed countries, limits its value.
A number of quality measurement outcome tools have been proposed to improve accountability and information sharing in maternity care. These include the Adverse Outcome Index (AOI; the percentage of deliveries with one or more specific adverse events), the Weighted Adverse Outcome Score (WAOS) and the Severity Index (SI), which describes the severity of the outcomes. However, they do not appear to have been widely implemented.
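For illustration, a minimal sketch of how such indices could be computed from per-delivery adverse event data is given below (in Python). The event list and weights are placeholders rather than the published values, and the calculations follow the commonly cited definitions: the AOI as the percentage of deliveries with at least one adverse event, the WAOS as the total weighted score averaged over all deliveries, and the SI as the total weighted score averaged over affected deliveries only.

```python
# Illustrative only: placeholder adverse events and weights, not the published set.
ILLUSTRATIVE_WEIGHTS = {
    "maternal_death": 750,
    "uterine_rupture": 100,
    "unplanned_maternal_icu_admission": 65,
    "blood_transfusion": 20,
    "third_or_fourth_degree_tear": 5,
}

def delivery_scores(deliveries):
    """One summed weight per delivery; 0 where no adverse events occurred."""
    return [sum(ILLUSTRATIVE_WEIGHTS[e] for e in events) for events in deliveries]

def adverse_outcome_index(deliveries):
    """Percentage of deliveries with one or more adverse events (AOI)."""
    affected = sum(1 for events in deliveries if events)
    return 100.0 * affected / len(deliveries)

def weighted_adverse_outcome_score(deliveries):
    """Mean weighted adverse outcome score across all deliveries (WAOS)."""
    scores = delivery_scores(deliveries)
    return sum(scores) / len(scores)

def severity_index(deliveries):
    """Mean weighted score among deliveries with at least one adverse event (SI)."""
    affected = [s for s in delivery_scores(deliveries) if s > 0]
    return sum(affected) / len(affected) if affected else 0.0

# Example: five deliveries, two with adverse events.
example = [[], ["blood_transfusion"], [], ["uterine_rupture", "blood_transfusion"], []]
print(adverse_outcome_index(example))           # 40.0
print(weighted_adverse_outcome_score(example))  # 28.0
print(severity_index(example))                  # 70.0
```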
Legal claim analyses (LCAs) provide an important but narrow perspective on adverse clinical outcomes, and they could be used as part of a portfolio of indicators; however, by their nature, they suffer from a significant lag time, which hinders timely feedback into clinical services. Clinical outcome measures are appealing, but there can be issues with appropriate case-mix or population risk adjustment, and at least one group of surgeons has asked the NHS to reconsider the publication of mortality rates. Certainly, CS rates in the UK vary with different population demographics. Maternity risk managers have also highlighted the lack of accurate population risk adjustment as a significant obstacle to the measurement of clinical quality.
Problems of appropriate risk adjustment notwithstanding, effective quality monitoring relies on the identification of appropriate QIs based on high-quality data. Ideal QIs should be relevant to the area of care being monitored, measurable using routinely collected data and alterable by best practice. National best practice guidance has been published to help teams devise and employ good QIs within the UK health-care setting.
Although many QIs have been proposed and are in use in maternity care, there is no standardised, uniformly agreed set of indicators. Many calls have been made for a standard set of QIs, both internationally and in the UK. However, the current lack of structure and rigour has resulted in enormous variation in the QIs monitored and the definitions used: 290 clinical indicators were identified within 96 clinical categories, with up to 18 different definitions, across four sets of nationally recommended intrapartum QIs from the UK, Australia, the USA and Canada. Moreover, in one UK region comprising 10 maternity units, there were 352 different QI definitions covering 37 different QIs, with up to 39 different definitions for a single indicator. This variation is clearly unnecessary and should be streamlined; there is an urgent requirement for a national and international core set of maternity QIs.
Suites of indicators have been developed using robust methodologies: systematic review and Delphi panels. The USA has also developed a National Quality Forum Perinatal Care Core Measure Set that includes five very limited quality measures, which would appear to be relatively unambitious in UK practice.
Once a set of QIs has been selected, it is imperative that they are analysed using robust statistical methods. Unfortunately, this may not always be the case; in one review of a single UK health region, the overwhelming majority of units used arbitrary thresholds for adverse outcomes, and there was no benchmarking. A number of researchers have recommended the cumulative sum control chart (CUSUM) as the most appropriate method to monitor the relatively low-frequency adverse outcomes seen in health care and maternity care. Further guidance is urgently required to inform alert thresholds for adverse outcomes.
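As an illustration of the approach, a minimal sketch of one common CUSUM formulation for binary adverse outcomes (a log-likelihood-ratio, or Bernoulli, CUSUM) is given below. The baseline rate, the rate the chart is tuned to detect and the alert threshold are illustrative values that would need to be agreed locally; this is a generic formulation rather than the specific scheme used in the cited studies.

```python
import math

def bernoulli_cusum(outcomes, p0, p1, h):
    """Log-likelihood-ratio CUSUM for a binary adverse outcome.

    outcomes : iterable of 0/1 flags (1 = adverse outcome), in delivery order
    p0       : acceptable (baseline) adverse outcome rate
    p1       : unacceptable rate the chart is tuned to detect
    h        : decision threshold; lower values alert sooner but false-alarm more
    Returns the running CUSUM values and the index of the first alert, if any.
    """
    w_event = math.log(p1 / p0)                  # added for each adverse outcome
    w_no_event = math.log((1 - p1) / (1 - p0))   # (negative) added otherwise
    s, path, alert_at = 0.0, [], None
    for i, adverse in enumerate(outcomes):
        s = max(0.0, s + (w_event if adverse else w_no_event))
        path.append(s)
        if alert_at is None and s >= h:
            alert_at = i                          # first delivery triggering an alert
    return path, alert_at

# Example: monitoring low Apgar scores with an illustrative 1% baseline rate,
# tuned to detect a doubling to 2%; a cluster of events triggers the alert.
outcomes = [0] * 150 + [1, 0, 0, 1, 0, 1] + [0] * 50
path, alert_at = bernoulli_cusum(outcomes, p0=0.01, p1=0.02, h=2.0)
print(alert_at)  # index of the delivery at which the chart first signals
```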
Overall, clinical indicators that are measurable and alterable with best practice are essential to the useful measurement of quality, and there is at least one example from maternity care that demonstrated monitoring of QIs to be both feasible and beneficial: an adverse trend in infants born with a low Apgar score was identified, thereby allowing for timely corrective action and improvement in perinatal outcomes.
Patient-reported outcome measures
Quality measures must also have a direct relevance to patients’ lives, including their experience of and satisfaction with the care they receive.
Satisfaction also depends on the values placed on different biomedical outcomes, which can vary widely between different cultures and individuals. For example, CS may be the preferred mode of delivery amongst a studied population of Brazilian women, but it is conversely perceived as a highly undesirable outcome amongst certain sub-Saharan African populations.
Various surveys and tools exist to evaluate these patient perceptions of a service, and since October 2013, all NHS-funded maternity services have asked patients a single question: how likely they would be to recommend the service they have received to friends or family who needed similar care or treatment (the Friends and Family Test).
The UK’s Care Quality Commission (CQC) conducts triennial surveys of maternity service users in the UK. Its most recent survey collated the experiences of over 23,000 women who had a live birth between January and March 2013. The report measured quality issues for patients, centred on their physical care both antenatally and postnatally, care of their babies, attention to pain management and discharge arrangements, as well as the professionalism and competence of staff.
Ideally, patient-reported measures, such as results from the CQC’s survey of women’s experiences of maternity care, would be integrated with, and provide additional context for, a holistic interpretation of numerical indicators.
Data quality
‘Garbage in = garbage out’ was originally coined to describe how computers uncritically process whatever data they are given; it is equally relevant to the measurement of quality.
Data quality is key to the meaningful measurement of quality, whether QIs or process measures, and the dangers of poor data were recently highlighted after the publication of the report Patterns of Maternity Care in English NHS Hospitals, which identified 11 performance indicators to compare performance between NHS maternity units. The data were derived from the NHS Hospital Episode Statistics (HES) system, which has significant problems with data completeness; key data fields such as gestational age and birthweight were missing in over 20% of records. An accompanying editorial concluded that HES data cannot be used to undertake this kind of analysis.
Other authors have highlighted concerns regarding the accuracy of HES data; in 2009–2010, there were 17,000 recorded male inpatient admissions to UK obstetric services, which seems unlikely. Moreover, HES does not capture neonatal data.
However, local databases in the UK contain most of the data missing from HES, and they are amongst the most accurate data sets in the NHS, with over 94% agreement with the case notes in some analyses. Therefore, it would seem appropriate to aggregate local databases into higher-order data sets to measure and, importantly, benchmark quality between units. This has proved feasible across the 10 maternity units of a whole NHS region.
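A minimal sketch of how such unit-level extracts might be pooled into a regional data set and benchmarked is shown below, using Python/pandas. The column names (delivery_id, caesarean, apgar5_below_7) are hypothetical; real local databases will have different schemas and would need careful mapping onto agreed QI definitions before comparison.

```python
import pandas as pd

# Hypothetical unit-level extracts; real systems would load these from each
# unit's maternity database using an agreed, shared set of QI definitions.
unit_a = pd.DataFrame({
    "delivery_id": [1, 2, 3, 4],
    "caesarean": [0, 1, 0, 0],
    "apgar5_below_7": [0, 0, 0, 1],
})
unit_b = pd.DataFrame({
    "delivery_id": [1, 2, 3],
    "caesarean": [1, 1, 0],
    "apgar5_below_7": [0, 0, 0],
})

# Pool into a single regional data set, tagging each row with its unit.
region = pd.concat(
    [unit_a.assign(unit="unit_A"), unit_b.assign(unit="unit_B")],
    ignore_index=True,
)

# Benchmark comparable rates per unit, e.g. CS rate and the proportion of
# infants with a 5-minute Apgar score below 7.
benchmark = region.groupby("unit").agg(
    deliveries=("delivery_id", "count"),
    cs_rate=("caesarean", "mean"),
    low_apgar_rate=("apgar5_below_7", "mean"),
)
print(benchmark)
```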
Good-quality routinely collected data are available across the NHS, and they should be harnessed to avoid costly and unnecessary manual data entry and duplication.
Finally, a core Maternity Services Data Set has been proposed by the Health and Social Care Information Centre in the NHS ( http://www.hscic.gov.uk/maternityandchildren/maternity ), which, at least in theory, is mandatory from May 2015. However, it has yet to be implemented, and it will take time to achieve data coverage of sufficient accuracy and quality for use. There will also be insufficient historical data for comparison, at least initially. Therefore, local data will remain important for the foreseeable future.
Presentation
Presentation of information to stakeholders is an essential part of quality measurement, but there is a dearth of data to inform best practice.
Graphical displays and tools to represent health outcomes date back at least to 1858, when Florence Nightingale employed a graphical display (a polar-area diagram) to present her finding that the majority of deaths in military hospitals were due to poor sanitation rather than battlefield casualties. This revolutionised the care provided in military hospitals in the Crimea, and the use of visual data tools and displays is equally powerful in modern health-care systems.
Clinical dashboards are frequently proposed, and they facilitate this process within UK maternity settings. A maternity dashboard was first described in UK practice in 2005 for a hospital, after several preventable maternal deaths, to help measure and manage what was described as serious clinical underperformance. In response to this, the Chief Medical Officer’s report into intrapartum deaths recommended that dashboards be piloted at several sites nationwide, to monitor standards of care in maternity units.
In 2008, the Royal College of Obstetricians and Gynaecologists (RCOG) recommended that all maternity units implement a dashboard ‘to plan and improve their maternity services’. Within this guidance, the RCOG included an example dashboard, which utilised a red–amber–green (RAG) colour coding system to alert users to changes in rates or frequencies of selected events and QIs, against locally agreed standards, on a monthly basis.
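A minimal sketch of this kind of RAG rating against locally agreed thresholds is shown below; the indicator names and threshold values are purely illustrative and are not taken from the RCOG example dashboard.

```python
def rag_rating(value, goal, upper):
    """Green at or below the locally agreed goal, amber between the goal and
    the upper threshold, red once the upper threshold is breached."""
    if value <= goal:
        return "green"
    if value <= upper:
        return "amber"
    return "red"

# One month of indicators as (observed %, goal %, upper threshold %);
# all figures are hypothetical local values.
monthly = {
    "caesarean_section_rate": (27.5, 25.0, 30.0),
    "pph_over_1500ml_rate": (2.1, 2.0, 3.0),
    "third_fourth_degree_tear_rate": (3.4, 3.0, 4.0),
}
for indicator, (value, goal, upper) in monthly.items():
    print(f"{indicator}: {value}% -> {rag_rating(value, goal, upper)}")
```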
Although recommended for use in all UK maternity settings by the RCOG, there are very few data in the literature relating to maternity dashboard use and development. There is an optimistic description of the implementation of a maternity dashboard in a London teaching hospital and a much more guarded survey of dashboards across the South West NHS region.
A recent description of the feasibility of a simple dashboard using a standardised set of QIs appears to show great promise for a practical and pragmatic solution to the collection, measurement and presentation of clinical and process indicators. However, there were no patient-reported outcome measures (PROMs), and more research is definitely required in this important area.