It is difficult to convey complex findings in a manner that is useful and timely for decision makers without producing either an overly reductionist account or a confusingly “complex” set of findings. This is a particular concern for qualitative research, in which large volumes of data are collected. We suggest that one way to present the findings from a complex systems process evaluation is to create a “system story,” wherein the evaluator describes and analyses how the intervention embeds and co-evolves with the system and its elements over time [3].
Complexity has been part of the vocabulary of public health evaluators for decades [16,21]. However, public health evaluations have tended to focus on the complexity of interventions rather than of the systems within which interventions are implemented [22]. A “complex intervention” is one that has a number of interacting parts, targets different organizational levels or groups of people, and aims to affect a number of outcomes [16,17]. In contrast, a complex systems perspective considers complexity as an attribute of the system.
In cases in which a decision was not clear-cut, or the reviewers disagreed, a discussion was held with a third reviewer. In brief, studies were included in the review if they (1) self-identified as taking a systems- or complexity-informed approach; (2) were relevant to public health; (3) were process evaluations of interventions with empirical findings; and (4) utilized qualitative methods. An important role for process evaluations is to examine the quantity and quality of what was actually implemented in practice, and why.
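As a minimal sketch of how such a screening step could be operationalized (the record fields and the `screen` helper are hypothetical, introduced only for illustration):

```python
from dataclasses import dataclass

@dataclass
class CandidateStudy:
    """Hypothetical record for one study identified by the search."""
    title: str
    systems_informed: bool        # criterion 1: self-identifies as systems-/complexity-informed
    public_health: bool           # criterion 2: relevant to public health
    empirical_process_eval: bool  # criterion 3: process evaluation with empirical findings
    qualitative_methods: bool     # criterion 4: uses qualitative methods

def screen(study: CandidateStudy) -> bool:
    """Include a study only if it meets all four criteria; borderline cases go to a third reviewer."""
    return all([
        study.systems_informed,
        study.public_health,
        study.empirical_process_eval,
        study.qualitative_methods,
    ])

print(screen(CandidateStudy("Example study", True, True, True, False)))  # -> False
```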
While early process evaluation frameworks emphasised the role of context in shaping implementation [6], contextual factors may also moderate outcomes. The causal pathways of problems targeted by public health interventions will differ from one time and place to another. Hence, the same intervention may have different consequences if implemented in a different setting, or among different subgroups. Even where an intervention itself is relatively simple, its causal processes and relationship with its context may still be considered complex. However, when process evaluation is used to complement an impact evaluation, it is often conducted under time and budget constraints, which can limit the kinds of qualitative data collection methods that can be used.
Outcome and process evaluation
Ninety-four percent of participants indicated that they “very much” thought the information would help them make the right decisions.
Implications for a future trial were developed from these findings by the research team and discussed at Steering Group meetings. Each site was rated by the trial manager according to key parameters with a theoretical link to participant outcomes, with the aim of checking the validity of the NPT analysis. These parameters included the extent of engagement with the programme, the level of research nurse involvement, and the extent to which the ward manager provided leadership and direction in programme implementation. As in the case study phase, NPT [55–58] provided the theoretical framework for implementing and evaluating the SVP. The aim was to facilitate understanding of the practical issues involved in embedding the intervention into routine practice. Taken a step further, process evaluation can also look at the processes of the program, its management, and its infrastructure together to judge the capacity of an organization to deliver on its promised outcomes.
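As a rough sketch of how such site ratings could be recorded and summarised (the 1–5 scale, the dictionary layout, and the averaging are assumptions made for illustration; only the parameter names come from the text):

```python
from statistics import mean

# Hypothetical 1 (low) to 5 (high) ratings assigned by the trial manager for each site.
site_ratings = {
    "Site A": {"programme_engagement": 4, "research_nurse_involvement": 5, "ward_manager_leadership": 3},
    "Site B": {"programme_engagement": 2, "research_nurse_involvement": 3, "ward_manager_leadership": 2},
}

# A simple per-site summary that can be set against the NPT analysis as an informal validity check.
for site, ratings in site_ratings.items():
    print(site, round(mean(ratings.values()), 1))
```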
Here, it is also necessary to consider the context of the event being evaluated, for example, the age of the audience and the time and resources available.
- It does not make sense to burden program staff with evaluation plans that are so time-consuming that they don’t have time to run the program.
- Most small and mid-sized nonprofits conduct formal evaluations because their funders require them.
- Yet, it may be important to collect such data during the intervention, rather than at the end when recall will be less accurate.
- It is not our suggestion that evaluators attempt to apply all complexity concepts to any one evaluation but rather focus on those that can generate useful evidence for decision-making [71].
We also believe that a framework seeking to address some of the problems identified in this review would be a valuable contribution to the field. Several authors have noted that although there are growing calls to utilize a complex systems approach, there have been fewer attempts to describe specific approaches or frameworks for doing so [35,71]. In particular, we advocate integrating a complex systems approach from the beginning of an evaluation design, so that the perspective informs the evaluators’ theoretical position, the evaluation focus, sampling strategy, data collection methods, analysis, and interpretation of findings. Research into complex systems takes place across academic disciplines and has roots in both systems thinking and complexity science.
If there is an expectation that the intervention is likely to have a greater or lesser impact on some children and young people than others, it is useful to include measures that will capture this differential impact (e.g., age, gender, ethnicity, locality). However, organisations in the UK will not be able to collect data that could lead to individuals being identified, and any data collected needs to be General Data Protection Regulation (GDPR) compliant; this issue must be addressed when tracking children in order to assess change systematically. The following publication argues that applied research on the implementation of interventions is the focus of the field of ‘Implementation Science’ and should be grounded in theory that provides a foundation for understanding, designing, predicting, and evaluating dynamic implementation processes. The purpose of this paper is to contribute to a theoretical framework that characterizes and explains implementation processes in terms of the social processes that lead from inception to practice.
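One possible way to hold such subgroup measures without storing directly identifying details is sketched below; the salted-hash pseudonymisation and the specific fields are illustrative assumptions, not a statement of what GDPR compliance requires:

```python
import hashlib

def pseudonymise(participant_id: str, salt: str) -> str:
    """Replace a raw identifier with a salted hash so records cannot be linked back directly."""
    return hashlib.sha256((salt + participant_id).encode()).hexdigest()[:12]

# Hypothetical record capturing the subgroup fields needed for differential-impact analysis.
record = {
    "pid": pseudonymise("child-0042", salt="project-secret"),
    "age_band": "11-13",   # banded rather than exact age to reduce identifiability
    "gender": "female",
    "ethnicity": "prefer not to say",
    "locality": "urban",
    "outcome_score": 7,
}
print(record)
```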
Process evaluation is interested in the processes of the program, such as how smoothly registration went, how engaged participants were, how satisfied the participants were during the program, and more. This type of evaluation is beneficial because it can be performed during a program cycle and allows staff to adjust the program “on the fly”. Fidelity of form refers to delivering an intervention in exactly the same way each time, whereas fidelity of function means there can be flexibility in how an intervention is delivered so long as it is achieving the same delivery goal each time. For example, information could be delivered to a client group in exactly the same way each time through a leaflet (fidelity of form), or information could be delivered flexibly to achieve the same aim (fidelity of function).
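To make the distinction concrete, a toy sketch (the function names and delivery channels are hypothetical): both approaches below meet the same delivery goal, but only the first fixes the exact form.

```python
def deliver_by_leaflet(client: dict) -> dict:
    """Fidelity of form: the same leaflet, delivered in the same way every time."""
    client["received_info_via"] = "standard leaflet"
    return client

def deliver_flexibly(client: dict, preferred_channel: str) -> dict:
    """Fidelity of function: the channel varies, so long as the same delivery goal is achieved."""
    client["received_info_via"] = preferred_channel
    return client

# A process evaluation would record which form of delivery was used and whether the goal was met.
print(deliver_by_leaflet({"name": "A"}))
print(deliver_flexibly({"name": "B"}, preferred_channel="short video"))
```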