19 Evaluation of community-based obesity interventions

Summary and recommendations for research



  • Evaluation of complex social interventions needs to go beyond examining whether an intervention works overall in order to address the larger question, “what works, for whom, and in what circumstances?” and even better, to help us to understand “why?”
  • Interventions need to have been thoroughly developed through the prior stages, which may involve theoretical development, qualitative testing, modeling, feasibility testing and an exploratory trial, prior to large-scale summative evaluation.
  • Once the prototype intervention has been developed, it is important to pilot test the intervention, or at least the key components of the intervention, with a particular focus on its feasibility, acceptability and delivery, and evidence of the hypothesized causal processes being triggered as anticipated.
  • It is perfectly feasible to allow variation in intervention form and composition, provided that the basic function and process of the intervention are standardized. This reproduces realistically within the trial the kind of variation that will naturally occur in real world practice.
  • The RCT design ensures that systematic differences in external influences between groups do not occur and thereby ensures that an unbiased estimate of the average effect of the intervention is obtained.
  • It is crucially important in an effectiveness trial of a complex community intervention to conduct a comprehensive qualitative investigation within the trial, so that variations in context, delivery and response can be monitored and understood.

Introduction


This chapter attempts to provide pragmatic guidance on the key issues which need to be considered when evaluating community-based action intended to prevent obesity. In seeking to provide useful guidance for evaluators, the chapter does not dwell on the many methodological and epistemological debates that have dominated the public health and health promotion literature over the vexed question of the optimal approach to evaluating complex community interventions (see Box 19.1). However, the reader is encouraged to learn from the substantial work on evaluation of complex social and public health improvement interventions that lies outside the specialized literature on obesity prevention. It is important that the developing community of multi-disciplinary obesity prevention research teams does not waste resources repeating the mistakes and debates that have hampered progress in these other areas of significant related research and evaluation activity.



Box 19.1 Methodological debate: polarization or pragmatism?



  • Historically, there has been debate over the relative merits of quantitative and qualitative research methods in the evaluation of social interventions.
  • The period from the late 1960s to the early 1980s was a “golden age of evaluation” with 245 “randomized field experiments” conducted in areas such as criminal justice, social welfare, education and legal policy.5
  • Pragmatic mixed-method approaches, in which methods or combinations of methods are chosen to address the specific research question,6 have been lacking but are now developing.
  • Public health is necessarily cross-disciplinary, requiring the combination and integration of research methods from a diversity of contributing disciplines.7
  • More recently, there has been a call for a trans-disciplinary science approach using a shared conceptual framework to draw together the most rigorous and appropriate discipline-specific theories, models, methods and measures for the question being posed.8

A further lesson to be learned from experience elsewhere is that the term “evaluation” covers a wide range of activities, which vary greatly across a number of dimensions. While this chapter focuses on the evaluation of community interventions, within that focus, it is important to recognize that evaluation projects will vary according to the purpose of the evaluation, the resources available to conduct the evaluation, and the complexity of the intervention to be evaluated. We consider each of these three dimensions, with a primary focus on the evaluation of complex community interventions, and the key stages in the evaluation of such interventions.


Evaluation: purpose and resources


In planning any evaluation, it is important to consider why that evaluation is taking place. Many evaluations, particularly those carried out by practitioners rather than researchers, are undertaken primarily as an exercise in accountability, with an emphasis on documenting or measuring what happened in a particular funded activity, and possibly some attempt to identify its impact. Such evaluations are of limited scope and are not really the concern of this chapter, as they are more appropriately conducted within a project management or performance assessment framework than considered as evaluative activities. Any true evaluation should aim to produce learning and/or improvement. A good professional ethic requires that lessons are learned regarding the process and impact of an intervention and that there is continuous assessment of whether the intervention is working as anticipated and having the desired outcomes. It is critical that the possibility that interventions can do harm is not dismissed. Many well-intentioned interventions have been found to do more harm than good in terms of their main purpose,1 while others have unanticipated impacts or are detrimental to subgroups of the target population. It is also important that professionals strive to improve the quality of interventions, whether by improving their reach, effectiveness, efficiency or equity.


What is evaluation?


There has been much debate as to the definition of “evaluation” and how it is distinct from “research”. Shaw2 proposes a three-level taxonomy, in which “evaluation” (which we refer to as practitioner evaluation) is characterized by a focus on practical problems with the objective of informing practice immediately and locally. It is usually undertaken by practitioners, with little emphasis on scientific rigor, and amounts to an enhanced form of reflexive professional practice. “Evaluation research” uses stronger methods and seeks to have an impact on practice by improving effectiveness and efficiency, with dissemination through professional and policy networks and in the grey literature. Shaw’s third level is “applied research”, which is led by researchers using strong methods and is disseminated through peer-reviewed scientific papers, with the aim of producing generalizable knowledge that has an impact on theory and practice over the long term.


This chapter adopts a definition of evaluation in line with Pawson and Tilley,3 who see the purpose of evaluation “as informing the development of policy and practice”4 rather than focusing simply on measurement or increased understanding.


Complexity: moving beyond “what works?”


Primarily quantitative summative evaluation research uses randomized controlled trials (RCTs) and other experimental and quasi-experimental research designs to identify whether an intervention works better than the counterfactual, which may be no intervention, normal care or an alternative intervention. It is widely accepted that such research designs are the optimal designs for addressing the “what works?” question, as they eliminate or reduce potential biases in estimating intervention effects. However, in the context of complex social phenomena, the value of evaluations that focus only on “what works?” is limited,4 since the effectiveness of interventions will vary significantly with variations in factors such as context, delivery and acceptability.
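
To make the counterfactual logic concrete, the sketch below is a minimal, hypothetical Python simulation; the sample size, effect size and variable names are illustrative assumptions rather than values from any real trial. It shows how random allocation breaks the link between assignment and other influences on the outcome, so that a simple difference in means between arms estimates the average intervention effect.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

n = 2000                               # hypothetical number of participants
confounder = rng.normal(0, 1, n)       # e.g., baseline risk that also drives the outcome

# Random allocation: assignment is independent of the confounder by construction
treated = rng.integers(0, 2, n).astype(bool)

# Simulated outcome: baseline risk plus an assumed true average effect of -0.5
true_effect = -0.5
outcome = 1.0 * confounder + true_effect * treated + rng.normal(0, 1, n)

# The unadjusted difference in means between arms estimates the average effect
estimate = outcome[treated].mean() - outcome[~treated].mean()
print(f"estimated average effect: {estimate:.2f} (true effect {true_effect})")
```

The point is not the simulation itself but the design principle it embodies: because allocation is random, baseline influences on the outcome are, on average, balanced across arms, so the comparison against the counterfactual arm is unbiased in expectation.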


Whilst experimental research may show us what changes occurred following the intervention, without other forms of concurrent research activity we are left in the dark as to what the intervention actually was in its manifest form, and can only base our conclusions on the naïve assumption that the intervention was unproblematically delivered as conceived, and reproduced equally unproblematically in each context. However, it is almost certain that delivery of any complex intervention will vary. Artificial attempts at fully standardizing delivery across contexts, failing to allow any tailoring of the intervention, may not only prove unworkable in real world settings, but might also inhibit effectiveness.9 At the same time, acknowledgement that trials must allow an intervention to be, to some extent, “out of control”9 brings challenges in terms of understanding what it is about the intervention that does or does not “work”.


Furthermore, social interventions do not act in an undifferentiated manner upon passive recipients. Outcomes arise through a dynamic interaction between agent and intervention, with an intervention that facilitates change for one individual or subgroup often failing for others. A key example is the tendency for health education interventions to be more effective among educated groups, thereby widening rather than narrowing inequalities. For these reasons, evaluation of complex social interventions needs to go beyond examining whether an intervention works at the aggregate level, in order to address the larger question, “what works, for whom, and in what circumstances?” and, even better, to also help us to understand “why?”
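
The “for whom” question can be illustrated by extending the same kind of hypothetical simulation. The sketch below, again with purely illustrative numbers and an assumed education split, shows how a respectable average effect can conceal a large effect in one subgroup and almost none in another.

```python
import numpy as np

rng = np.random.default_rng(seed=2)

n = 4000
higher_education = rng.random(n) < 0.5         # hypothetical subgroup indicator
treated = rng.integers(0, 2, n).astype(bool)

# Assumed subgroup-specific effects: the intervention helps the more educated
# group substantially but does little for the less educated group.
effect = np.where(higher_education, -0.8, -0.1)
outcome = effect * treated + rng.normal(0, 1, n)

def arm_difference(mask):
    """Difference in mean outcome between trial arms within a subgroup."""
    return outcome[mask & treated].mean() - outcome[mask & ~treated].mean()

print(f"overall effect:           {arm_difference(np.ones(n, bool)):.2f}")
print(f"effect, higher education: {arm_difference(higher_education):.2f}")
print(f"effect, lower education:  {arm_difference(~higher_education):.2f}")
```

If the outcome of interest is something like unhealthy weight gain, a pattern of this kind means the intervention is widening the very inequalities it was intended to reduce, even though the headline trial result looks positive.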


Weiss distinguishes between implementation theory and program theory.10



  • Implementation theory focuses on the components of the intervention, how it was carried out and what results it produced. Implementation theory largely treats each intervention component as a black box, and does not seek to understand the mechanisms through which the intervention brings about change.
  • Program theory additionally focuses on the causal processes and mediators11 through which the intervention brings about its effects, and which may vary across populations, time and contexts (see the sketch following this list).

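As a hypothetical illustration of the program-theory perspective, the sketch below simulates an intervention whose effect operates largely through a measured mediator (standing in for a hypothesized causal process, such as a change in knowledge or in the school food environment) and decomposes the total effect into an indirect component transmitted through the mediator and a residual direct component, using simple linear regressions. All variable names and coefficients are assumptions chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(seed=3)
n = 5000

treated = rng.integers(0, 2, n).astype(float)

# Hypothesized causal process: the intervention shifts a mediator,
# and the mediator in turn shifts the outcome (plus a small direct effect).
mediator = 0.6 * treated + rng.normal(0, 1, n)
outcome = -0.5 * mediator - 0.1 * treated + rng.normal(0, 1, n)

def ols(y, predictors):
    """Ordinary least squares with an intercept; returns the slope coefficients."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

a = ols(mediator, [treated])[0]                 # intervention -> mediator
b, direct = ols(outcome, [mediator, treated])   # mediator -> outcome, direct path
total = ols(outcome, [treated])[0]

print(f"total effect:   {total:.2f}")
print(f"indirect (a*b): {a * b:.2f}")           # effect transmitted through the mediator
print(f"direct effect:  {direct:.2f}")
```

In a real trial such a decomposition would be pre-specified within the program theory and estimated with appropriate methods and measures of uncertainty; the sketch simply shows why measuring hypothesized mediators, and not only outcomes, is what allows an evaluation to speak to “why”.
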
Gabriel et al12 and Pawson and Tilley3 suggest that experimental and quasi-experimental methods can only play a limited role in addressing the “what works, for whom and in what circumstances?” question. Pawson and Tilley propose realist evaluation as an approach that can answer this question through the development of a more comprehensive program theory with particular emphasis on contexts and mechanisms. Cook13 argues that, rather than developing alternative methods to conduct theory based evaluations, it should be possible to use theory-based methods within an experimental framework. In the main body of this chapter, we identify ways in which this may be done, as a critical part of a mixed methods approach to developing and evaluating complex community-based obesity prevention interventions.


Evaluating complex interventions—research stages and research questions


One stereotype, which is still alarmingly prevalent, is that biomedical research funders publish research calls and maintain peer-review systems that favor proposals for randomized trials of obesity prevention interventions. There is an imperative to identify effective interventions and a need to collect strong evidence of effect, so the obvious solution is to fund trials. However, Sanson-Fisher et al14 caution that a sole focus on randomized trials as the method for evaluation may prevent the posing of complex questions that the method simply cannot answer, stifling innovation in intervention development. All too often, the research proposals submitted and funded are strong in terms of trial methodology but very weak in terms of the proposed intervention, which has been drawn up by a group of “experts” with no public involvement and rests on a weak, unidisciplinary theoretical and empirical basis. There is an inevitable bias towards relatively simple, individually targeted interventions rather than more complex, multi-level interventions and programs. On the other hand, complex multi-level or settings-based interventions developed in conjunction with the target audience and based on strong theory are typically evaluated using weak research designs, with either no estimate of effect, due to the absence of a summative evaluation, or potentially biased effect estimates at best.


How to reorient towards helpful, rigorous evaluation designs


A helpful way to move away from this stereotype, and to prevent the perpetuation of an inadequate evidence base, is to recognize the different stages of research needed in the development, evaluation and implementation of complex interventions. In health promotion, there are a number of models of the stages of intervention research and evaluation, including Nutbeam’s six-stage development model15 and Green’s PRECEDE/PROCEED framework,16 which identify different research questions and the various research methods that are appropriate at each stage. The United Kingdom Medical Research Council (MRC) also published a framework for the evaluation of complex interventions,17 which identified five research stages, mirroring the stages of drug development research, of which the fourth stage was the definitive randomized controlled trial. This model has been particularly helpful in highlighting the need for interventions to have been thoroughly developed through the prior stages, which may involve theoretical development, qualitative testing, modeling, feasibility testing and an exploratory trial, prior to large-scale summative evaluation, thus limiting the reproduction of the stereotype described in the previous paragraph. Box 19.2 gives an example of an intervention that has passed through a number of research stages prior to the final trial phase.


Notwithstanding necessary simplification, we provide recommendations for selected research designs for each of the two main stages of evaluation research, formative and summative. These stages can be mapped onto the three frameworks referenced above, and provide a useful classification to aid the presentation of key issues in evaluation research design. However, it is not intended to suggest that the identification of specific research questions can only be done with reference to this sequence. Indeed, as we describe below, it is likely that key questions regarding the acceptability, implementation and causal mechanisms of an intervention will need to be addressed at each of these stages.



Box 19.2 Case study: fun ’n healthy in Moreland!


A series of obesity prevention studies conducted in Victoria, Australia demonstrate how the different research stages can be developed. A review of the child obesity literature was conducted in 2004 and highlighted the increasing prevalence of child overweight and obesity and the complexity of environmental and socio-cultural determinants. A clear gap in the evidence base in relation to effective interventions led to the development of a pilot study conducted in three diverse primary schools, in inner urban, suburban and rural areas. This formative evaluation was conducted to test the feasibility and acceptability of a trial methodology and a range of school, parent and child measures. As a result of this pilot study, the methodology and measures were adjusted to improve acceptability and comprehensibility, and a study design was developed using a socio-environmental theoretical framework actioned by the Health Promoting Schools Framework and culturally competent community development strategies. A five-year cluster randomized controlled trial, fun ’n healthy in Moreland!, was subsequently implemented in 2004. This child health promotion and obesity prevention intervention and research study involves 24 primary schools in an inner-urban, culturally diverse area of Melbourne, Australia. It is being conducted in partnership with the local community health service. A comprehensive mixed method summative evaluation will allow an assessment of what worked, for whom, how, why and at what cost.27
