
2. Funding and Bias



Christina A. Di Bartolo (1) and Maureen K. Braun (2)

(1) The Child Study Center, NYU Langone Medical Center, New York, New York, USA
(2) Department of Pediatrics, The Mount Sinai Hospital, New York, New York, USA

Keywords
Funding source · Pharmaceutical industry · Clinical trials · Publication bias · Scientific journalism · Implicit biases



Information Dissemination


The patient-centered movement in the medical profession reinforces patient autonomy as patients make their health care decisions. Truly autonomous decision-making relies crucially on informed consent, and informed consent, in turn, requires information [1]. All of this raises the question: where does this information come from? Put another way, how do the results of researchers’ studies reach patients? The information chain from researcher to patient comprises multiple players: the researcher, the funder of the research, the medical journal editor, the journalist whose interpretation of the study appears in popular media, the doctor reading the study, and the patient reading the journalist’s article. Together, these players fund, research, disseminate, and implement new medical advances. How effective is this process in transporting a clear message from start (researcher) to finish (patient)? Consider the playground game of “Telephone,” in which children sit in a row and whisper a message from one end of the line to the other. As in the game, even when no one intentionally distorts the message, the version the patient ultimately hears is often radically different from the one the researcher meant to deliver. Distortion can occur without malicious intent because each player brings his or her own biases into the process.


Bias


Now is the time to define the word bias, for both physician and patient. Bias carries a negative connotation in the popular lexicon; in everyday language, only judgmental, close-minded people are biased. This chapter will review at length how explicit biases affect research studies. From a psychological standpoint, however, a bias can also refer to a neutral process. Biases are our brains’ automatic and unconscious processes, operating without our intent [2]. In this psychological sense, everyone is biased. Biases affect our thinking and subsequent behavior without conscious awareness; this category of biases is called “implicit” [2]. Cognitive psychologists use the term bias to describe any systematic “lean” of our brains. Psychologists consider these biases systematic because they function in a relatively predictable fashion; that is, they are not random.


To Explain to a Patient

Biases can be thought of as sunglasses for our brains. Sunglasses are not inherently bad. They might even serve some goals well: to look attractive, to filter out harmful UV rays, or to reduce the discomfort of bright light. Sunglasses accomplish all these goals by way of distortion. Biases in our brains are the same. They create slight distortions to serve a goal (e.g., to react quickly, to reduce cognitive burden, to simplify disparate details into a cohesive story). When people wear sunglasses for a long time, they eventually “forget” they are wearing them. Their brains stop consciously noting that the environment looks darker, and they begin to operate as if this is the way the world always looks. Anyone who has forgotten to remove their sunglasses after walking into a building has experienced how easy it is to lose track of a distortion. This is what biases do: they provide distortions for so long that the brain stops noticing them. Biases are also systematic, in that they are not random; they work in one way. Similarly, a given pair of sunglasses can only make things look darker. It does not make things darker sometimes, lighter at other times, and tinted green or yellow at still others. However your sunglasses distort, they distort that way every time. Each bias is like that, too. Even though we often do not notice them, biases behave in a predictable fashion.

Biases exist in everyone’s brains and affect our behavior. Because the chain of information from researcher to patient involves many people, all of their biases gradually distort the message as it winds its way through the chain. We will examine the biases that affect each party’s behavior within the research process.

A bias affecting people who are involved in research projects spanning years is the sunk cost fallacy. This bias exists because people do not make each decision in their lives independently of the decisions they have already made. Instead, people perform something called “mental accounting,” in which they take their previous decisions into account when making a new one. This bias is designed to keep people on track with their goals. For example, when someone is deciding whether or not to eat a piece of cake, that individual will factor into the decision that they already indulged in ice cream and cookies earlier in the day. The true decision is not whether to eat cake or not; it is whether to eat cake in addition to the other sweets consumed that day. In this fashion, mental accounting can be helpful.

However helpful mental accounting may be, the sunk cost fallacy is a bias that distorts thinking and prompts people to put more energy into an endeavor simply because they have already put energy into it previously [2]. It takes a great deal of effort for people to realize that their project is not reaping benefits and that stopping is therefore the most cost-effective choice. In deciding whether or not to stop, people use mental accounting and factor in everything they have already poured into the project. They want the work to pay off to justify all of their previous efforts. As much as this makes sense on the surface, the logic is a result of faulty mental accounting. In truth, once something is spent, it becomes a “sunk cost.” It cannot be recouped at any point, regardless of the next move. Take, as an example of a sunk cost, the startup costs for a company. The money spent to start the company is spent before the company can generate a return. It is gone, whether the company ultimately makes money or not.


To Explain to a Patient

Ask your patients if they have ever spent more time on something than they originally intended because, by the time they realized it was not going well, it felt too late to stop. If they found themselves putting more time and energy into something that was not going well than they normally would, ask them whether it was because they had already spent time on it. This is the sunk cost fallacy.

Researchers are not immune to the sunk cost fallacy. Initial interest draws researchers into their fields of study, and this interest represents an emotional investment in their work. They complete many years of advanced schooling to reach positions in which they can conduct their research. These years—of at least forgoing income while studying, if not also paying outright for tuition—represent time and financial costs. By the time they can finally conduct their own studies, researchers have already invested considerable cost in their work. At this stage, the sunk cost fallacy is primed to distort their behavior. No matter how objective researchers consciously strive to remain, the sunk cost fallacy urges them to unconsciously hope for one outcome over another.

Funders with a vested interest (i.e., a financial incentive) in one outcome over another are also prone to the sunk cost fallacy. Pharmaceutical companies consider the money they stand to make should a study go well and the money they will lose if study results are delayed or disappointing. In some cases, the desire for a return on investment is more than an implicit bias—it is a conscious anxiety that affects pharmaceutical companies’ choices, as we will see later in detail.

Another bias affects anyone in research who believes that one outcome is more likely than another: the confirmation bias [2]. All people with ideas experience confirmation bias. Whenever people have a preconceived opinion about something, the confirmation bias leads them to selectively look for evidence in favor of their opinion and discount information that does not fit it. Just as with other implicit biases, confirmation bias is not intentional.


To Explain to a Patient

Ask your patient how they perform searches on the Internet. For example, imagine they have been worried about how much juice is safe to give their child. Do they enter, “Recommended daily juice intake for children” or do they enter, “How much juice is too much for children?” Many patients will enter the latter. That is because we search for information based on what we already expect to find. But confirmation bias is not finished yet. After performing the search, most people would skim over results that indicate any possible health benefits of some juice intake and click on the links that highlight overconsumption and the effects thereof. This selective searching and acquisition of new information is confirmation bias.

Researchers, certain funders, academic journal editors, pediatricians, and patients alike experience confirmation bias. Researchers want to find a positive outcome, whether that outcome is a cure for a disease or a new neuronal explanation for a disorder. The modern scientific process depends on researchers theorizing and choosing a hypothesis before starting their study, and requiring researchers to form a hypothesis first is a direct path to confirmation bias. Pharmaceutical companies have a somewhat more explicit confirmation bias at play, and we will review the behavioral outcomes of that bias in this group. Academic journal editors decide which papers to accept based on how a study will be received by the medical community; editors can only make this determination if they have their own ideas about hypotheses and trends in science, and they then accept papers that reinforce those ideas. When physicians and patients read about new studies (whether in the medical literature or in the media), confirmation bias prompts them to spend more time on studies that reinforce what they already believe or hope to be true. When individuals read studies refuting their hypotheses, their skepticism increases, prompting them to search for flaws in the design or other information that will help them discount the findings.

The last bias, affecting essentially everyone in the research chain, is the novelty preference. This bias operates in humans because we are primed to attend to stimuli that are new and different for the purpose of learning [3]. (Of course, at other times people evince a familiarity bias; the two seem to serve different purposes.) New events or knowledge represent a possible source of benefit or harm beyond people’s typical experiences, and the novelty preference helps individuals pay attention long enough to learn whether a new stimulus is helpful or harmful. Psychologists describe things that command an outsize share of our attention as salient. Newness is highly salient.


To Explain to a Patient

Ask your patients to imagine their houses in their minds. Most pieces of furniture and decorations are in the same place every day. Has the patient ever, one day, moved something? What happened when they came back home later that day or woke up the next day? Did they suddenly “notice” that piece of furniture or decoration in a way they hadn’t before they moved it? That is novelty preference. There is no reason for them to notice the item other than the novelty of its location. The novelty preference means we pay more attention to something simply because it is novel, not because the novelty is necessarily good or bad.

The field of research seeks to uncover new information. Even historians, who study past events, search for new developments in their field. Other than replication studies—a necessary part of the scientific process—all studies are rooted in the idea that the results will uncover new, as yet unknown information. The novelty preference leads researchers to believe their findings are inherently important and worthy of attention because they are new. Pharmaceutical companies use patients’ novelty preference to sell “me too” drugs: medications essentially the same as preexisting ones. Marketers easily sell these kinds of medications to consumers based solely on their newness [4]. Medical journal editors are tasked with publishing innovative findings. The general public reads newspapers or online media to find out what has recently happened; readers are not interested in yesterday’s news. Journalists prefer writing about new treatments, aware that these articles will garner more reader interest than articles about established treatments.

The implicit biases discussed here are, with a few exceptions, largely blameless. Implicit cognitive biases influence how all people operate their lives. These barely perceptible distortions naturally influence the chain of communication from researcher to patient. Because implicit biases operate below our consciousness, patients are likely unaware how such biases influence what they seek out and read about research. Discussing these implicit biases can help patients remove their metaphorical sunglasses, if only temporarily.

In addition to implicit biases, explicit biases influence the research process and are not morally neutral. Explicit biases function in conscious awareness and can result in everything from neglect and carelessness to outright fraud. The remainder of this chapter focuses on one of the greatest sources of conscious bias in research: funding bias. While the medical profession is designed to help people, the pharmaceutical industry is designed to earn a profit for shareholders and CEOs. This divergence of goals has not escaped many patients’ notice. Yet self-interest is not the all-powerful motivator some believe it to be [5]. Patients can benefit from an increased understanding as to how funding is more or less likely to affect study outcomes. Armed with this knowledge, they can more accurately calibrate their opinions on the research results they encounter.

Before discussing how funding can influence outcomes, we will preview how outcomes are typically reached in research studies. (The next chapter provides a complete review of how studies are run and conclusions drawn.) Many studies seek to determine whether a new treatment provides better health outcomes than the preexisting treatment (if one exists). As such, the researchers directing these studies look for evidence of a difference between the treatments. Differences are detected through the use of inferential statistics. These statistics are built around the null hypothesis, which presupposes that there is no difference between the groups. Studies showing sufficiently strong evidence of a difference between the groups are said to be “significant.” Notably, statistical significance and clinical significance are separate issues, which we will discuss in depth later in this book. Much as the American legal system is based on a presumption of innocence (placing the burden on the plaintiff or prosecution to supply enough evidence of wrongdoing), research studies presume no difference between two groups, and the results of the study shoulder the responsibility of rejecting the null hypothesis. The null hypothesis is rejected when the difference observed between the two groups is extremely unlikely to have arisen by chance. The null hypothesis itself can never be proven, because it is not possible to prove a negative (a point worth raising when discussing the limitations of research studies with parents).
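To make the logic of null hypothesis testing concrete, the short sketch below (not from the original text) compares two hypothetical treatment groups with a standard independent two-sample t-test in Python; the group names, the outcome scores, and the conventional 0.05 threshold are illustrative assumptions only.

# A minimal sketch of null hypothesis significance testing: compare a
# hypothetical "new treatment" group with a "standard treatment" group
# using an independent two-sample t-test (illustrative data only).
from scipy import stats

# Hypothetical outcome scores (e.g., symptom improvement) for each group.
new_treatment = [12.1, 14.3, 11.8, 15.0, 13.2, 14.8, 12.9, 13.7]
standard_treatment = [10.2, 11.1, 9.8, 12.0, 10.7, 11.4, 10.1, 11.9]

# Null hypothesis: no difference in mean outcome between the two groups.
t_statistic, p_value = stats.ttest_ind(new_treatment, standard_treatment)

alpha = 0.05  # conventional significance threshold (an assumption here)
if p_value < alpha:
    # A difference this large would be very unlikely if the null were true,
    # so the null hypothesis is rejected and the result is called "significant."
    print(f"p = {p_value:.4f}: reject the null hypothesis")
else:
    # Failing to reject the null does NOT prove the treatments are
    # equivalent; the null hypothesis can never be proven.
    print(f"p = {p_value:.4f}: fail to reject the null hypothesis")

Whether such a statistically significant difference also matters clinically is a separate question, taken up later in the book.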


Funding Sources


Funding in medical research can be divided into two large categories: publicly funded and privately funded. Public funds come from sources such as governments and charities, which disburse money (typically from taxes or donations) with the aim of funding the activities that constitute a civil society. Public funds are designed to promote the public good and are not intended to serve a specific agenda. People who give their dollars to charities do not do so with the aim of getting more money in return (although some may hope their charitable donations curry favor or win them influence).

Private funds come from privately held companies, in which individuals invest their money with the stated aim of seeing a return on their investment. The goal for dollars from private funding is to earn more dollars. For example, a company that invests its own money in research and development is anticipating eventually selling the resulting product at a profit.

The main source of public funding for medical research is the National Institutes of Health (NIH) [6]. The United States founded the NIH in the late nineteenth century, and it now disburses approximately 30.1 billion dollars annually [6]. Funded with taxpayer dollars, the NIH is government-run and nonprofit. The NIH does not take in money based on its research efforts, although a small percentage of its research dollars fund grants and contracts through the Small Business Innovation Research and Small Business Technology Transfer initiatives [7]. Therefore, NIH-funded research trials are fairly unlikely to be influenced by financial motives. The dedicated cynic will point out that it is impossible to be truly disinterested in money. Nevertheless, influence due to money is observed less often in publicly funded trials than in privately funded ones, as discussed below.

Private funding for medical research comes overwhelmingly from pharmaceutical companies [8]. While the NIH continues to be the primary funder of basic research science, in the mid-1980s pharmaceutical companies surpassed the NIH as the primary funder of biomedical research [8, 9]. In 2013, the top pharmaceutical company spent over 8 billion dollars on research and development [10]. Even a decade ago, estimates found that for-profit entities sponsored 75% of clinical research [8].

As corporate entities, pharmaceutical manufacturers aim to make money, ideally as quickly as possible. If shares of the company are traded on the stock market, earnings are reported quarterly. This produces near-constant pressure to perform well (i.e., to make money), and that pressure encourages a myopia of goals: prioritizing short-term monetary outcomes over long-term health gains.

Conducting research is a costly and time-consuming effort. Given their profit motives, it seems paradoxical that pharmaceutical companies would fund research at all. Yet they do not have a choice. By law, prior to selling a new medicine or treatment, companies must prove to the Food and Drug Administration (FDA) that the product passed efficacy and safety standards [11]. This proof is available only through research. Hence, pharmaceutical companies find themselves involved simultaneously in two activities—marketing and research—with divergent goals. The goal of marketing is to make money, and making money requires that the information be in the product’s favor. The goal of research is to expand knowledge in the field (whatever that knowledge may show), and in doing so, it expends vast sums of money. These goals are not quite diametrically opposed, but there is significant tension between them. This tension creates an inherent conflict of interest that serves as a common thread running through all pharmaceutical research.

For a multitude of practical reasons, pharmaceutical companies typically do not conduct research in-house [11]. Instead, these companies previously relied heavily on academic researchers to conduct their trials [11]. Including academic researchers was thought to counterbalance the pharmaceutical company’s desire for money with the researcher’s desire to be well regarded in the field for conducting objective, bias-free research. Academic researchers who view their careers through a long-term lens are incentivized to keep their priorities from shifting to the short-term focus of the pharmaceutical companies. By assigning each entity in the process its own goal, this arrangement functioned as a kind of checks-and-balances system. High-profile academic names tied to pharmaceutical studies benefited the companies because of the implicit assumption that academic researchers’ quest for knowledge placed them above the desire for money, however unrealistic this perception may be [11, 12]. Of course, researchers are not immune to the influence of money. Pharmaceutical companies provide researchers with equity ownership, consultancy positions, and funding. All of these activities cost the companies money, and since the companies are not charities, it can be assumed that money continually spent in this fashion leads to a direct benefit for them [12].

While involving academic researchers lends companies great prestige, the partnership comes at a cost. As stated, academic research is costly and notoriously slow to conduct. Various approval processes at academic centers, such as the Institutional Review Board (established to protect the rights of human participants) and Sponsored Programs Administration (which oversees the distribution and use of funds awarded for research purposes), must be completed before study activities can begin. In some cases, companies found that it took too long to recruit enough patients to reach the numbers needed for the study [11]. These delays directly affect the pharmaceutical companies’ bottom line: delays in research mean delays in obtaining FDA approval, and each day a drug cannot be sold costs the company approximately 1.3 million dollars [11].

These costly delays prompted pharmaceutical companies to partner elsewhere for their research needs [11]. Contract research organizations (CROs) and site management organizations (SMOs) cropped up to meet this need [11]. CROs are centers specifically designed to conduct research studies [11]; when a commercial advertises a product as “clinically proven,” it is likely referring to testing performed at a clinic like those run by a CRO. The purpose of a CRO is to make money, which it does by obtaining contracts from pharmaceutical companies that need their products tested. SMOs are similarly involved in testing, but they often contract with CROs and thereby become subcontractors of the pharmaceutical companies. Because the pharmaceutical company pays the CRO, it is the customer in the arrangement. The phrase “the customer is always right” is often bandied about in modern customer service, and the sentiment is remarkably apt when the customer (the pharmaceutical company) has orders of magnitude more money and influence than the entity receiving its business. CROs competing with one another for pharmaceutical companies’ business have every financial incentive to keep those companies satisfied with their tests’ findings.

One can see how this arrangement between the large pharmaceutical companies and the relatively weaker CROs could lead to subpar research quality. From the start, the pharmaceutical company typically creates a study design and gives it to the CRO to follow, like a chef handing a recipe off to a line cook. There is no independent oversight of these study designs to ensure that they are properly powered, ethical, and valid [11].

Just as in academic research studies, pharmaceutical companies typically establish protocols in which two groups of people are compared—those who get the new treatment and those who get something else (nothing, a placebo, or a preexisting treatment for the same ailment). Despite this key similarity, many meaningful differences have been consistently observed between privately funded and publicly funded studies. Privately funded studies often use surrogate outcome measures rather than actual clinical outcomes [4]. For example, a study of executive functioning in children might examine whether children become better at a study measure such as playing computer games (theorized by the treatment developer to represent underlying executive functioning abilities) rather than whether the child is actually turning in more homework on time (the functional outcome most parents and teachers care about). Such a study would conclude that the developer’s program helps children’s executive functioning, when in reality it only helps them get better at playing a game.

Many privately funded studies exist for the purpose of FDA approval, a one-time goal. Therefore, their sponsors do not spend the copious amounts needed to fund long-term trials examining what happens to participants after years have passed. Because such longitudinal studies are not conducted, the long-term health effects of the treatment or medication, including adverse events, are not captured in the results [4]. As a result, some extreme adverse events, such as toxicity, have occurred in the general population taking a drug because it was never tested for long-term safety before coming to market [4]. For this reason alone, statistically speaking, an old drug still used by the medical profession is more likely to be safe than a newer one [13]. If a drug has been used clinically for a generation, the range of likely adverse events is already known.
