© Springer International Publishing AG 2017
Christina A. Di Bartolo and Maureen K. Braun, Pediatrician's Guide to Discussing Research with Patients, DOI 10.1007/978-3-319-49547-7_5


5. Incorporating Research into Healthcare Decisions



Christina A. Di Bartolo (1) and Maureen K. Braun (2)


(1)
The Child Study Center, NYU Langone Medical Center, New York, New York, USA

(2)
Department of Pediatrics, The Mount Sinai Hospital, New York, New York, USA

 



 




Keywords
Implementation barriers · Patient characteristics · Evidence-based practice · Research–practice gap · Communication strategies



Overview


Researchers and physicians alike have long recognized that the conclusions of clinical research should have some practical applications. While basic science seeks to gain knowledge for its own sake, clinical research aims to contribute to improved wellbeing in individuals and populations. Still debated is the extent to which findings from research trials can be appropriately and feasibly applied in clinical settings [1]. Specific questions in this ongoing dialogue rest on the merits of particular findings, what information can be interpreted as evidentiary, and how that evidence is to be applied and integrated within existing practices [1].


Evidence-Based Practice


The questions imply a distinction between treatments developed and tested in research studies and the practice of using research-backed therapies in clinical care. More than just a subtlety to be inferred, this distinction is a real one, marked by specific terminology. Within controlled research trials, the therapies, techniques, interventions, or medications that produced statistically and clinically significant changes are called evidence-based treatments (EBT) [1]. Alternatively, the broader clinical practice that maintains up-to-date knowledge of research findings and incorporates them into practice when deemed valuable within the context of individual patient needs, values, and clinical presentation is called evidence-based practice (EBP) [1]. Clinicians can engage in evidence-based practice without using a specific EBT. Consider a physician who keeps apprised of recent research developments in muscle hypotonia without prescribing a specific new EBT for a patient, who is already responding well to the current course of treatment. This clinician delivers evidence-based care by considering EBTs through the lens of a specific patient. Other doctors may want to deliver specific EBTs but cannot due to feasibility or logistical barriers.

On the other hand, clinicians may utilize an EBT without necessarily practicing in an evidence-based fashion. In this case, consider a physician who prescribes a new, evidence-based medication without considering whether or not the patient might achieve positive outcomes with a previously established treatment. Providing evidence-based practice requires that physicians make educated decisions, not simply dispense new treatments without question or at the urging of pharmaceutical representatives. Evidence, expertise, and patient characteristics are the hallmarks of evidence-based practice [1].

As a practice conceptualization, EBP began in the 1970s with British epidemiologist Archibald Cochrane [2]. He determined that women entering pre-term labor were not appropriately treated with corticosteroids because systematic reviews had not synthesized the results of research trials into meaningful clinical guidelines [2]. The needless deaths of thousands of premature babies spurred the creation, in his name, of the Cochrane Collaboration in 1993 [2]. The collaboration's aim, from its inception to the present day, is to assist clinical decision-making by creating and updating publicly available systematic reviews of the latest and most reliable research findings for health practitioners [3]. This approach of intentionally remaining abreast of research findings with the express purpose of delivering optimal care ushered in the era of evidence-based practice. Empirical support for evidence-based practice shows that clinical care incorporating treatments from rigorous studies improves patient outcomes by 28% compared to practices derived from tradition [2].

The past 20 years have shown the concept of evidence-based practice to be vastly influential in a number of areas [4]. A Medline search places the first usage of "evidence-based practice" in 1992, with 600 more appearances 5 years later [5]. The sharp increase continued, with over 1,000 results from 1995–2000 alone [5]. Clearly, evidence-based practice gained traction and wide acceptance among those who publish, primarily researchers. Impressions among practitioners are not as easy to quantify. Studies have found that positive impressions of using research findings to inform care are widespread among practitioners and policymakers in the fields of medicine, nursing, psychology, and others [4]. Evidence also exists for a similarly positive impression in the field of pediatrics. One study of pediatricians and pediatric nurses found that these medical professionals displayed moderate to good scores with regard to their attitudes toward evidence-based practice [6].

When performed as intended, evidence-based practice bridges the gap between researchers and physicians. Researchers and doctors regularly observe the same phenomena, but their different perspectives quite frequently lead to misunderstanding and disagreement. Like the proverbial blind men examining parts of an elephant, those feeling the tail perceive a rope, those feeling the legs a tree, and so on. Physicians criticize researchers for being out of touch with true clinical presentations, studying disease in a way that is not practically helpful, and producing treatments that cannot be delivered due to logistical constraints: time, money, apparatus, infrastructure, and the like. Researchers voice exasperation when physicians, as a result of entrenched practices, continue to deliver ineffective care that lacks a basis in evidence.

While not completely unfounded, mutual critiques between the world of research and applied medicine drive a wedge between natural partners in the quest for better care. The magnitude of philosophical differences between researchers and physicians appears smaller when removed from the academic setting. Famous psychology researcher Alan Kazdin wryly notes that the personal lives of researchers and physicians show their tacit acknowledgement of the merits of their counterparts' approaches [1]. He points out that rarely would a researcher who has fallen ill eschew a treatment with insufficient evidence of efficacy, particularly if no evidence-based treatment exists for the ailment from which they suffer. Similarly, physicians who find themselves ill commonly begin researching their condition to learn more about possible treatment options, and their scientific backing, when approaching their own healing. In this phenomenon, researchers and clinicians display significant agreement regarding evidence-based practice despite sometimes finding themselves at odds over the scientific method and specific EBTs.

Researchers and physicians both acknowledge individual variability in their work, albeit in different fashions. Researchers measure the extent to which a treatment works for participants of differing characteristics with moderators. Moderators are participant characteristics that can be measured at the start of a study. After observing treatment effects, researchers examine the moderator variables to assess whether the treatment's impact on outcomes differed for participants with different values of those characteristics. For example, a study may uncover that a sleep training intervention produces better outcomes for families reporting low levels of stress before implementing the intervention in their homes than it does for high-stress families. Without referring to them as moderators, physicians regularly address these same characteristics in their practice. Doctors consider clinical variables that they expect to either promote or hinder the effects of their prescribed treatment. In the sleep training example, a mother with concerns about her child's sleep asks her pediatrician for guidance. Before recommending sleep training, the pediatrician assesses conversationally whether the mother's stress levels will likely allow her to implement the procedure as designed. Comments indicative of high stress lead the physician to conclude that sleep training may not help the family in their current state and may in fact stress them further. The physician accordingly makes a different recommendation that is more likely to be successful in addressing the presenting problem.
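In statistical terms, a moderator is usually tested as an interaction between treatment assignment and the baseline characteristic. Below is a minimal sketch using hypothetical sleep-training data (all variable names and values are invented for illustration); the interaction coefficient estimates how much the treatment effect changes as baseline stress rises.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical trial data: families are randomized to sleep training or not,
# and family stress is measured before the intervention begins.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),       # 1 = sleep training, 0 = control
    "baseline_stress": rng.normal(0, 1, n),   # standardized stress score at intake
})
# Simulate an outcome in which the benefit of treatment shrinks as stress rises.
df["sleep_improvement"] = (
    0.8 * df["treatment"]
    - 0.5 * df["treatment"] * df["baseline_stress"]
    + rng.normal(0, 1, n)
)

# The treatment-by-stress interaction term is the moderation effect.
model = smf.ols("sleep_improvement ~ treatment * baseline_stress", data=df).fit()
print(model.params["treatment:baseline_stress"])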

Many clinical decision-making models involve a component of incorporating physician expertise and individual patient characteristics, such as clinical presentations, comorbidities, preferences, and values [2]. While these are important considerations, they are presumably employed in all manner of sensible healthcare decision-making. In this chapter, we focus on the considerations for decision-making more directly relevant to incorporating research findings into practice. Specifically, this chapter addresses the practical and theoretical impediments to EBP as well as its facilitators. We will provide communication strategies for physicians to follow when holding discussions about research and EBP with patients.


Practical Impediments


Many physicians approve of the concept of evidence-based practice in theory. Indeed, incorporating new knowledge into a professional knowledge base already established through experience is one of the more interesting aspects of practicing medicine [7]. Despite this theoretical acceptance, doctors encounter a number of factors impeding full implementation of evidence-based practice in reality.

Although the consideration of scientific evidence is a widely accepted practice, clinicians do not formulate their decisions on evidence alone. A deeply ingrained method for decision-making relies on the consensuses that emerge within fields and communities of experts [7]. It is possible for a consensus view to form as a result of formal reviews of compilations of studies [7]. Practically, however, the constantly expanding body of knowledge hampers the integration of new findings into previous consensus decisions. As a result, real-world physicians rely on an informal network of other practicing clinicians to determine whether or not a consensus is building around a particular treatment [7]. This network effect can be observed in regional differences in malpractice litigation. Plaintiffs who experienced adverse outcomes are more likely to sue if they received procedures considered outside the standard of care. Comparing medical malpractice lawsuits across different regions shows how the standard of care varies from locale to locale, presumably driven by these networks [7]. The community standards on which physicians rely can be combined with a more systematic approach. However, this synthesis requires endeavors, such as journal clubs, that can only be created and sustained through physician time and effort [7]. While clinicians may want to practice evidence-based care, more easily accessible non-research sources of information often drive clinical decision-making.

Another practical factor deterring the implementation of evidence-based treatments in clinical settings is the lack of long-term follow-up results in many research studies. When safe and effective methods for treating an illness exist, a lack of evidence regarding long-term outcomes of newer methods means that the older treatment is more likely the safer one [7]. Balancing the interest and optimism in new treatments against reliable and established means of care presents an additional mental calculation for physicians who consider which treatment to prescribe. Ironically, the long-term outcomes of their own patients on the pre-established treatment are also typically not measured in a systematic way. Clinicians can collect data on their own patients (called an n-of-1 trial), but these efforts require a willing patient and physician to implement [7]. Given the wide body of expertise immediately accessible to a seasoned clinician on previously established treatments, evidence-based treatments are harder to learn about and inherently riskier to implement.
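To make the idea concrete, here is a minimal sketch of how an n-of-1 trial might be summarized, assuming a simple alternating (ABAB) design in which a single patient switches between the established treatment and a new one; all scores and period labels below are hypothetical.

from statistics import mean

# Hypothetical weekly symptom scores (lower is better) from one patient,
# recorded during alternating treatment periods: A = established treatment,
# B = new evidence-based treatment.
periods = [
    ("A", [6, 5, 6, 5]),
    ("B", [4, 4, 3, 4]),
    ("A", [6, 6, 5, 6]),
    ("B", [3, 4, 3, 3]),
]

# Pool the scores by condition and compare the means for this single patient.
a_scores = [s for label, scores in periods if label == "A" for s in scores]
b_scores = [s for label, scores in periods if label == "B" for s in scores]
print(f"Mean symptom score on A: {mean(a_scores):.1f}")
print(f"Mean symptom score on B: {mean(b_scores):.1f}")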

Logistical barriers also hinder the integration of EBTs into existing clinical practice settings. Where administrative support is lacking, physicians have a more difficult time obtaining the resources they may need to pursue evidence-based practice [2]. Insufficient mentors or advocates for EBP and inadequate circulation of knowledge about EBP present key obstacles in light of the consensus-driven effect described above [2, 7]. Even when education and information are available to inform practicing doctors of new EBTs, they are often didactic in nature, which limits uptake [2]. Worse, education for trainees in the area of EBP typically focuses on the research aspect without elaborating how to incorporate those research findings into practice [2].


Practical Facilitators


Despite practical hurdles to the implementation of evidence-based practice, there are facilitators for EBP as well. Some are positive opposites of the above barriers, including administrative support, EBP mentors or advocates, and a better connection between research and clinical practices within a region [2]. Other practical resources, such as time and money, also facilitate EBP [2]. Easily comprehensible and available writing on research also assists integration of EBTs into practice [2].


Theoretical Impediments


In addition to these practical challenges in EBP, there is a conceptual paradox inherent in utilizing evidence-based treatments to make decisions for an individual. Applying study findings to an individual has face validity, as decisions are often made for one person based on what is observed to happen for many people. For example, a traveler attempting to board a train in a foreign country might observe numerous people first going to a ticket window. The traveler can reasonably conclude that based on this observed sample, he as an individual ought to stop at the ticket window as well. Despite this intuitive sense that research results can be applied in a similar fashion, the goals of research and clinical care are vastly different. The goal of research is to acquire knowledge regarding phenomena. In service of this goal, research systematically gathers data on a number of individuals (the sample) to infer conclusions about the whole (the population) [4]. Notice there is no role of the individual in this process. Yet treating the individual is the goal of clinical care, and the paradox is formed.

EBP presents a special challenge for patient-centered care. First, clinical research studies tend to be disease-centered, not patient-centered [5]. Most recruitment efforts for clinical studies mention the disease of interest on the flyer, subway ad, or email blast. For example: "Does your child have difficulty breathing? If so, you may qualify for a clinical treatment study at our Asthma Center." Research does not recruit people as such, but rather samples of people who share a disease in common. This is because research commonly follows a biomedical approach, wherein a disease exists within a patient. Disease is considered an objective reality. By contrast, illness is a patient's subjective experience of feeling unwell. Patient-centered care focuses on patients experiencing illness within their specific psychological and social context [5]. Two individuals may suffer from the same disease, but one may consider himself ill while the other does not.

The distinction between disease and illness is made regularly in clinical practice but rarely addressed in research. Research studies seek to treat the disease, while a patient-centered physician seeks to cure the patient of illness. Evidence-based practice in its current format continues to draw from research studies' conclusions about diseases. Hence, the results are harder to situate within a patient-centered model.

Second, research studies often attempt to minimize differences between individuals [5]. As one example, the randomization process is intended to more equally distribute individual characteristics that may meaningfully interact with the treatment—whether positively or negatively—between the treatment and control conditions. Certain analyses after the data have been collected serve the same function: by covarying for specific baseline features, the analyses seek to show that the results would hold regardless of these individual characteristics. Patient-centered care takes the opposite approach, with individual characteristics often driving key treatment decisions [5]. If a child is sufficiently afraid of needles, his doctor may choose to administer the influenza vaccine via nasal spray rather than injection. The patient’s characteristic drove the clinical choice irrespective of whatever the evidence may show regarding vaccine efficacy as a function of delivery method. This characteristic, in a research study, would be relegated to a discussion of moderator variables and might not even make the main outcomes paper. In clinical practice, this characteristic determined the clinical choice.
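The covariate-adjustment step described above can be made concrete with a small sketch. The data and variable names below are hypothetical; the point is that adding a baseline characteristic to the model checks whether the estimated treatment effect holds regardless of that characteristic.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical randomized trial: assignment is random, and a baseline
# characteristic is measured before treatment begins.
rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),        # 1 = new treatment, 0 = control
    "baseline_severity": rng.normal(0, 1, n),  # measured at enrollment
})
df["outcome"] = (
    1.0 * df["treatment"] - 0.6 * df["baseline_severity"] + rng.normal(0, 1, n)
)

# Comparing the unadjusted and covariate-adjusted estimates shows whether the
# treatment effect is robust to the baseline characteristic.
unadjusted = smf.ols("outcome ~ treatment", data=df).fit()
adjusted = smf.ols("outcome ~ treatment + baseline_severity", data=df).fit()
print("Unadjusted treatment effect:", round(unadjusted.params["treatment"], 2))
print("Adjusted treatment effect:  ", round(adjusted.params["treatment"], 2))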


Theoretical Facilitators


The field at large can take actions to address the paradox of delivering evidence-based treatments to a sample size of one. Performing more meta-analyses of clinical outcomes, particularly those examining effect sizes, provides a more complete picture of clinical outcomes among different individuals [4, 5]. Researchers can also place more weight on individual variability, whether in the form of moderator results in research studies or a clinical appraisal of individual characteristics in a clinical setting [4]. Researchers can also plan their studies to incorporate patient preferences into the design [5]. In addition to the continued performance of randomized controlled trials, these kinds of designs would yield results that more closely reflect real-world outcomes, where it is the norm for patient preferences to have some impact on treatment administration [5].
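As a rough illustration of how a meta-analysis summarizes effect sizes across studies, the sketch below pools three hypothetical standardized mean differences with inverse-variance (fixed-effect) weighting; the study names and numbers are invented for illustration.

# Hypothetical effect sizes (standardized mean differences) and their variances
# from three studies of the same treatment.
studies = [
    {"name": "Trial 1", "effect": 0.40, "variance": 0.04},
    {"name": "Trial 2", "effect": 0.25, "variance": 0.02},
    {"name": "Trial 3", "effect": 0.55, "variance": 0.09},
]

# Inverse-variance weighting: more precise studies count more toward the pooled estimate.
weights = [1.0 / s["variance"] for s in studies]
pooled = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = (1.0 / sum(weights)) ** 0.5
print(f"Pooled effect size: {pooled:.2f} (standard error {pooled_se:.2f})")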

Physicians on an individual level can reduce the discrepancy between the two models of medicine through doctor–patient communication [5]. The relationship and information shared between doctor and patient serve as the true bridges across the gulf between the principles of EBP and patient-centered care [5]. A doctor's first question during an office visit is, "What brings you in today?" Taking care to listen to the specific answer is the first step to grounding the care that follows in the patient's needs. The goal is for the doctor to assess the patient's needs along two axes: disease versus illness, and control versus guidance. First, is the patient asking for a cure for a disease, or seeking relief from the symptoms or functional impairment that constitute the illness? Second, does the patient want to remain in full control of the medical decisions, or is he asking for guidance from the doctor? In our discussion of patient-centered care in Chap. 1, we presented the evidence showing that not all patients are interested in making their own healthcare decisions. Some prefer the paternalistic approach.

Once the doctor knows where the patient's preferences fall along these two dimensions, the discussion that follows, the treatments considered, and the ultimate decision are grounded within the patient's needs regardless of whether or not evidence-based treatments are offered, considered, or chosen. Interactions that honor the patient's preferences, both for how the ailment is conceptualized and for the level of control the patient wishes to retain, are rooted in patient-centeredness. This holds even in cases where the patient follows a biomedical approach and prefers that the doctor retain control over the clinical decision-making. This outcome may mimic the paternalistic view of medicine, but if it occurs as a result of patient preference, it remains patient-centered.


Initiation of Discussion of Research Findings


In the conversations about care, either party in the doctor–patient interaction can initiate discussion regarding research findings and clinical implications. Physicians may initiate a discussion of research findings because they have considered the statistical and clinical significance, the new intervention’s anticipated effects compared to treatment-as-usual, possible side effects, and cost (both direct and indirect) [8]. The doctor’s comfort level with trying a new treatment will moderate the likelihood of raising the discussion, and patient preferences will ultimately decide if the treatment is chosen [8].

While this rational approach is perfectly sensible, many clinicians will not necessarily make their decision to initiate discussion in such a structured fashion. When the available information is incomplete, as is often the case when a conversation begins before the patient's preferences are known, clinical judgment typically takes precedence. Clinical judgment has been interpreted as the intuition of experts [9]. Much about the mechanisms of clinical intuition is still unknown [9]. However, reports of clinical intuition suggest that it works via the rapid combination of (1) subconsciously accessing knowledge of past experiences and (2) using this knowledge to fill in gaps in the available information. While some clinicians may first analyze a research study before presenting an EBT to their patients, others may mention a recent finding based on an intuitive sense that their patients would be open to hearing more.
