
Evaluating Health and Social Care Programs

Ruth Mackenzie-Stewart and Hanan Khalil

Introduction

Evaluating health and social care programs is important for multiple reasons. These include, but are not limited to:

  • ensuring the organisation is delivering the best programs and services while addressing community needs;
  • making programs, services, and systems more efficient and effective;
  • informing future directions and strategic service planning;
  • producing context-specific and robust evidence to support funding applications;
  • strengthening and informing communication and marketing efforts;
  • providing feedback to the community about the value of services and programs.

Program evaluation is a systematic process for examining the value of a program or project, including its effectiveness, efficiency, and appropriateness. Patton (2012) defined evaluation as “The systematic collection of information about the activities, characteristics, and outcomes of programs, services, policies, or processes, in order to make judgments about the program/process, improve effectiveness, and/or inform decisions about future development”.

Significant resources are invested in the development of clinical pathways, guidelines, interventions, and innovations aimed at improving health and social care outcomes; however, systematic and appropriately scaled evaluations of such innovations receive less attention. Evaluation capabilities are required across all levels of health service management and leadership roles to continue to drive innovation, ensure consumer voices are heard, and build evidence-based, responsive, appropriate, and effective health services. Several approaches (frameworks, models, and concepts) can be used for evaluation, which presents a challenge to those undertaking evaluations. The approach and methods used can depend on factors such as practitioner experience, the discipline, funding body requirements, consumer and community needs, and the setting. It is crucial to ensure the methods used are fit for the problem at hand and generate evidence that can inform future decision-making.

Background

Evaluation is an applied inquiry process for collecting and synthesising evidence before, during, and after the delivery of real-world policies, programs, and interventions, resulting in a judgment about the value, merit, worth, significance, or quality of a program, product, policy, proposal, or plan against pre-determined goals, aims, objectives, or intentions (Scarth, 2005; Bauman et al., 2014). Evaluation is most effective when the needs of the users of the system are placed at the forefront (Patton, 2012); however, this needs to be balanced with rigorous design within constrained resources. Investment in designing robust evaluations for existing and new health service policies, programs, and interventions that seek to enhance community health and well-being or health service performance (which contributes to better health outcomes for populations) is fundamental to ensuring we are delivering the intended outcomes. Where we are not, evaluation allows us to answer the fundamental questions of why not and how we can do better (Bauman et al., 2014). The primary focus of an evaluation is to determine what is working, what is not working, and what can be done to make improvements.

When evaluating change for complex health initiatives, services, and programs, it is important for health service managers to keep in mind that small incremental changes over time can make a huge impact. Evaluating these incremental changes contributes to identifying the program components driving the change, thus contributing to sustaining longer-term public health and health service impacts (Steckler and Linnan, 2002). When evaluations of complex health service programs are designed at the outset, pathways to change can be better established through multiple evaluation approaches and methodologies (Bauman et al., 2014). Early evaluation can prevent larger-scale implementation barriers and failures, and allows the lived experience of participants to be appreciated through participatory evaluation methodologies (Scarth, 2005; Bauman et al., 2014; Patton, 2012).

Evaluation theories and approaches

When planning an evaluation, consideration must be given to the paradigm (and theory) along with the evaluation approach that will be adopted. A paradigm refers to the beliefs about the nature of reality and the types of knowledge required to facilitate an understanding of reality (Fox, Grimm, and Caldeira, 2016). The paradigm being used for an evaluation will inform methodological and data collection choices.

The literature concerning paradigms, theories, and approaches in evaluation is extensive, and continues to be an arena for considerable debate. Lucas and Longhurst (2010) provided a useful introduction to different perspectives that can be brought to evaluation and the approaches associated with these. In a widely cited monograph, Stufflebeam (2001) provided a description of 22 program evaluation approaches. These approaches are distinguished by factors such as their purpose, scope, engagement with stakeholders, methods, timing, and applications. Most can be seen to have foundations in the evaluation paradigms described in this section. This section introduces two commonly used paradigms, and their associated evaluation approaches that assist in operationalising these paradigms when planning a health and social care evaluation.

Theories of change and the realist evaluation approach

The theories of change perspective is described as an expression of the critical realist paradigm; that is, it sets out to understand the complexity and contextual dependence of programs and services. Poland, Frohlich and Cargo (2009, p. 307) described critical realism as: ‘…a logic of inquiry that privileges neither “objective” facts nor subjective lived experience or narrative accounts, but rather seeks to situate both in relation to a theoretical understanding of the generative mechanisms that link them together, as a basis for interpreting the empirical or observable world’.

Based on the critical realist paradigm, the realist evaluation approach shares many elements of the theory of change perspective (Pawson and Tilley, 1997). Those adopting this approach operate from a position whereby the interventions delivered are considered working (or real-world) theories about how a set of activities will function in given contexts to create change and achieve the desired program and service objectives.

A key role for evaluators is to work with program managers and other stakeholders to make the theory of change inherent within a program explicit, often represented as logic models, and to use this to guide the evaluation. This approach is characterised by the purposive sampling of a wide variety of quantitative and qualitative data to shed light on the multiple mechanisms of change that take place in the program, and the contextual factors that act as enablers or inhibitors of change, both intended and unintended (Fox, Grimm, and Caldeira, 2016). Harris et al. (2020) provided a practice example of realist evaluation stages and how this paradigm can be applied to the evaluation of a recovery-oriented program in Australia for those with complex, severe, and persistent mental illness.

Within a theories of change paradigm and a critical realist approach, the primary focus is on learning about how change is achieved, with attention to context and to both intended and unintended outcomes. This makes it an appealing approach for planning an evaluation of complex health and social services.

Pragmatic perspective and the utilisation-focused evaluation approach

The pragmatic perspective (or paradigm) is guided by collecting information useful for stakeholders and communities rather than being concerned with objective measurement or proving causal pathways of change within programs and services (Patton, 2012). This approach to evaluation focuses on the practical use of evaluation findings to make decisions and improve programs. For pragmatists, questions of truth and validity are of far less importance than obtaining information that ‘works’ for funding bodies, policy makers, service managers, and program beneficiaries.

The utilisation-focused evaluation approach is firmly grounded in the pragmatic perspective. Patton (2012) argued that the utilisation-focused evaluation approach allows evaluations to be undertaken with specific users in mind. In this approach, the role of the evaluator is to undertake an analysis of program stakeholders, identify the primary users of the evaluation findings, and determine the needs and associated questions they have concerning the program. The utilisation-focused approach is suited to quantitative, qualitative, or mixed methods; these decisions are guided by the interests of stakeholders and their views about credible and useful evidence and data. The evaluator can facilitate this decision making by presenting stakeholders with a menu of context-appropriate and rigorous methods, along with expert advice about the utility, validity, and cost-effectiveness of different options.

In practice, most evaluations will draw on several paradigms to ensure each evaluation question can be addressed and to recognise the interests of various stakeholders, known as a pluralist perspective (Lucas and Longhurst, 2010).

Planning for evaluation

When planning for an evaluation, the questions you want your evaluation to answer need to be developed and clarified. This will help you develop a statement of purpose or purposes.

The statement of purpose for an evaluation should address the following:

  • What is the overarching rationale for evaluating this program?
  • Who stands to benefit from the evaluation findings?
  • How will this evaluation meet the needs of stakeholders and partners?
  • What are the key aspects of the service or program that are to be evaluated and why? Note: this is linked to all previous questions.

When considering the purpose of the evaluation, it is worth reflecting on how the evaluation will meet the following needs:

  • the funding requirements;
  • the need for evidence in the field;
  • your organisation’s need to improve practice, process, and outcomes;
  • the community’s needs and interests;
  • the interests and needs of decision makers that might be the focus for future advocacy efforts or funding applications.

Numerous frameworks and models can be used for program and evaluation planning. Some are generic in nature and can be used regardless of the paradigm and evaluation approach adopted. Whichever framework you adopt to plan your evaluation, it should include the following elements to ensure your evaluation is well considered (a simple structured sketch follows the list):

  • a description of the program;
  • an evaluation preview or overview (this will include your evaluation purpose and key questions);
  • the evaluation design (based on the paradigm, approach, purpose, and questions);
  • data collection (the data collection plan, including data sources, indicators, and timelines);
  • data analysis and interpretation (a plan for how data will be analysed and interpreted);
  • dissemination plan (a plan for how you will communicate, report, and share the findings).
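These elements can be captured in whatever template your organisation prefers. As a minimal, hypothetical sketch (assuming a Python-based planning workflow; all field names are illustrative and the example values echo the residential aged care program used later in Table 2, not a prescribed framework), an evaluation plan could be recorded as a simple structured object:

```python
# Minimal sketch of an evaluation plan as a structured object (Python 3.9+).
# All field names and example values are illustrative only.
from dataclasses import dataclass


@dataclass
class EvaluationPlan:
    program_description: str         # what the program is and who it serves
    purpose: str                     # overarching rationale for the evaluation
    key_questions: list[str]         # the questions the evaluation must answer
    design: str                      # paradigm, approach, and study design(s)
    data_collection: dict[str, str]  # indicator -> data source and timeline
    analysis_plan: str               # how data will be analysed and interpreted
    dissemination_plan: str          # how findings will be reported and shared


plan = EvaluationPlan(
    program_description="Physical activity program for residents of care home A",
    purpose="Determine whether physical activity participation increased by 20% over 3 years",
    key_questions=["To what extent has low-moderate physical activity increased?"],
    design="Utilisation-focused; time-series outcome evaluation",
    data_collection={"Minutes of low-moderate activity": "Pedometer data, collected yearly"},
    analysis_plan="Compare yearly measures against the pre-program trend line",
    dissemination_plan="Plain-language summary for residents, staff, and funders",
)
print(plan.purpose)
```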

Levels of evaluation and the program planning cycle

Following on from decisions about the nature of reality and knowledge (paradigms and approaches), evaluators then turn their attention to further refining the evaluation purpose. That is, what do we want to know from the evaluation, and what evidence do we want it to produce? Being clear on the purpose of the evaluation will help in deciding which level or levels of evaluation, or which level of change, your organisation is most interested in; this, in turn, will assist in selecting the most appropriate frameworks to draw on for your evaluation.

Four levels of evaluation are commonly used when assessing the value, merit, or worth of health and social care programs; however, it should be noted that across the literature other terms are often used to describe the levels of evaluation presented here. Each level of evaluation contains several key elements that can be investigated throughout an evaluation, depending on the program or service under consideration.

Table 1: Levels of evaluation and program relationship

Outcome evaluation
  • Purpose: determines whether the long-term goals/aims of the health or social service program or project have been reached.
  • Program/project or service delivery component: goals/long-term aims, that is, the long-term measurable changes in a health, social, or health service issue.

Impact evaluation (also known as summative evaluation)
  • Purpose: investigates whether the objectives/aims of the health or social service program, project, or intervention have been achieved.
  • Program/project or service delivery component: objectives/short-term aims, that is, the short-term changes required to achieve the goals/long-term aims.

Process evaluation (also known as auditing and monitoring, although these terms do not capture the full range of process evaluation activities)
  • Purpose: monitors the implementation of strategies. Strategy components assessed in process evaluation include delivery (recruitment, fidelity, dose delivered), exposure, reach, dose received, and context.
  • Program/project or service delivery component: strategies/activities, that is, the actions and activities implemented to achieve the stated objectives/short-term aims.

Formative evaluation
  • Purpose: pre-tests strategies/activities before they are fully implemented. This is often undertaken with participants and partners.

Choosing evaluation indicators

Once decisions about the level/s of evaluation and the evaluation questions aligned with the level/s have been determined, indicators need to be identified. The United States Centers for Disease Control and Prevention (CDC) described evaluation indicators as:

Measurable information used to determine if a program is implementing their program as expected and achieving their outcomes. Not only can indicators help understand what happened or changed but can also help you to ask further questions about how these changes happened (CDC, n.d.).

Indicators provide a framework for data collection within an evaluation plan. The data collected against each indicator allow evaluators to answer the posed evaluation question. For example, if a program has introduced a new model of care to reduce hospital readmissions, a logical indicator might be pre- and post-implementation data on the number and nature of readmissions to the hospital. Indicators can be quantitative or qualitative, and both are often drawn on in an evaluation. Table 2 provides a detailed example evaluation plan for a program delivered within a residential aged care setting to increase physical activity participation among residents.
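To make this concrete, the short sketch below (in Python, with invented quarters and counts, purely for illustration) shows how data collected against a readmissions indicator might be summarised to answer the evaluation question:

```python
# Illustrative summary of a quantitative indicator: hospital readmissions
# before and after a hypothetical new model of care. All figures are invented.
pre_readmissions = {"2022-Q1": 42, "2022-Q2": 45}    # quarters before the change
post_readmissions = {"2023-Q1": 31, "2023-Q2": 29}   # matching quarters after the change

pre_total = sum(pre_readmissions.values())
post_total = sum(post_readmissions.values())
percent_change = (post_total - pre_total) / pre_total * 100

print(f"Readmissions before: {pre_total}, after: {post_total}")
print(f"Change against the indicator: {percent_change:+.1f}%")
```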

Study designs and data collection options

Following identification of indicators, study design/s and data collection methods for measuring the outcomes, impacts, and processes of an evaluation must be determined.

Study designs provide a framework and set of methods and procedures used to determine whether a change in an impact or outcome of interest occurred, and to what extent (Greenhalgh, 2019). Broadly, study designs can be categorised as experimental, quasi-experimental, or observational. Experimental designs have clear strengths for determining the causal effects of interventions, yet are less frequently used in the evaluation of health and social care policies, services, programs, and projects. They are most common in large, well-funded evaluations where the intervention is novel and/or has the potential for future use at the population-wide level (for example, randomised controlled trials (RCTs) and cluster RCTs).

More commonplace in health and social care evaluations are observational and quasi-experimental designs. These include the pre- and post-test, post-test only, controlled pre- and post-test (without random allocation to groups), and time-series designs. To understand the alignment between evaluation level, indicators, study design, and data collection methods, refer to Table 2.
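As an illustration of the pre- and post-test design, the sketch below compares hypothetical staff knowledge scores collected before and one week after a training session using a paired t-test. It assumes the scipy library is available; the scores are invented and the paired t-test is one of several defensible analysis choices, not a prescribed method:

```python
# Sketch of a pre- and post-test comparison for an impact evaluation,
# e.g. staff knowledge scores before and one week after training.
# Scores are invented; each pair belongs to the same (consenting) staff member.
from scipy import stats

pre_scores = [4, 5, 6, 5, 3, 7, 6, 5, 4, 6]    # knowledge score out of 10, before training
post_scores = [7, 8, 8, 6, 6, 9, 8, 7, 6, 8]   # same staff, one week after training

result = stats.ttest_rel(post_scores, pre_scores)  # paired t-test on matched scores
mean_gain = sum(b - a for a, b in zip(pre_scores, post_scores)) / len(pre_scores)

print(f"Mean knowledge gain: {mean_gain:.1f} points")
print(f"Paired t-test: t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```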

The health research methods literature contains extensive descriptions, applications, and discussions of study designs and methodologies, including the strengths and limitations of each option. It is beyond the scope of this chapter to address every study design and methodology available for data collection within an evaluation. Better Evaluation, a global interdisciplinary network, provides practitioners and researchers with accessible, evidence-based, and robust evaluation resources, including examples of the application of paradigms, evaluation approaches, methodologies, and frameworks. The Evaluation Journal of Australasia publishes a wealth of interdisciplinary peer-reviewed papers on evaluation theory, research, and practice.

Table 2: Example evaluation plan preview/overview

Outcome evaluation
  • Program goal: to increase physical activity participation among independently mobile residents living in residential care home A by 20% within 3 years.
  • Evaluation question: to what extent has low-moderate physical activity increased among independently mobile residents living in residential care home A at the end of years 1, 2, and 3?
  • Indicators: minutes of low-moderate physical activity among independently mobile residents living in residential care home A at the end of years 1, 2, and 3; proportion of independently mobile residents living in residential care home A meeting age-appropriate Australian physical activity guidelines at the end of years 1, 2, and 3.
  • Design: time-series design, taking several measures before the implementation of strategies and at yearly intervals to decide whether the strategies influenced physical activity participation when compared with its background trend line (a worked sketch of this trend-line comparison follows the table). This design can be used effectively when there is an existing data collection system in place to obtain the multiple measurements required, such as health service user statistics, telephone helpline databases, or ongoing population health surveys.
  • Data collection methods: pedometer worn during waking hours for 7 days by consenting independently mobile residents living in residential care home A.

Impact evaluation
  • Program objective 1: increase knowledge among residential aged care staff employed at residential care home A about the benefits of physical activity for independently mobile residents by 70% by the end of year 2.
  • Evaluation question: to what extent has knowledge of the benefits of physical activity for independently mobile aged care residents increased among staff at residential care home A?
  • Indicator: knowledge of the benefits of physical activity participation for independently mobile residents among staff at residential care home A.
  • Design: pre- and post-test design. Data collected from residential aged care staff employed at residential care home A prior to the delivery of workforce and professional development training and 1 week post training.
  • Data collection methods: knowledge survey on physical activity benefits with residential aged care staff employed at residential care home A, administered prior to and 1 week after participation in the training sessions.

Process evaluation
  • Program strategy 1: deliver workforce and professional development training for staff employed at residential care home A about the benefits of physical activity for independently mobile residents.
  • Evaluation questions: how many staff employed at residential care home A completed the training by the end of year 2? To what extent were all training components delivered to those in attendance in each training session?
  • Indicators: proportion of staff employed at residential care home A who have completed professional development training (reach); proportion of intended training session components delivered in each session (delivery).
  • Design: process evaluation does not require study design selection, as the purpose is not to determine effects or change, but to monitor and understand factors around strategy implementation and participation.
  • Data collection methods: administrative attendance data (number of staff who attended and completed training); audit tool of session components, completed by facilitators at the end of each session delivered.

Formative evaluation
  • Program strategy 1: deliver workforce and professional development training for staff employed at residential care home A about the benefits of physical activity for independently mobile residents.
  • Evaluation questions: are the workforce and professional development training sessions engaging and acceptable to staff at residential care home A? Are the workforce and professional development training materials clearly delivered?
  • Indicators: experiences of the design, perceived meaning and relevance, attractiveness, and acceptability of the professional development training from the perspective of staff at residential care home A; facilitators’ experiences of the workforce and professional development materials following the initial delivery.
  • Design: no design is required; formative evaluation questions do not seek to determine effects or change resulting from the professional development session, but rather to understand the acceptability and relevance factors influencing strategy implementation and participation. Data are collected following the delivery of one professional development session to allow adaptations to be made ahead of the full roll-out of professional development sessions.
  • Data collection methods: focus groups with staff from residential care home A following participation in the first delivery of the professional development training to explore their perceptions of its relevance, attractiveness, and acceptability; semi-structured interview/s with facilitators to explore their experiences of the workforce and professional development materials following the initial delivery.

Impact evaluation
  • Program objective 2: provide one additional walkable, secure, supervised, and safe outdoor green space for independently mobile residents at residential care home A by the end of year 3.
  • Evaluation question: has one additional walkable, secure, supervised, and safe outdoor green space for independently mobile residents at residential care home A become available for use by residents?
  • Indicator: availability of an additional walkable, secure, supervised, and safe outdoor green space for independently mobile residents at residential care home A.
  • Design: post-test only design. Post-tests are useful when the baseline value is known (such as the number of walkable, secure, supervised, and safe outdoor green spaces at a residential care home).
  • Data collection methods: environmental audit of residential care home A to determine the number of available walkable, secure, supervised, and safe outdoor green spaces for independently mobile residents.

Process evaluation
  • Program strategy 2: co-design a walkable, secure, supervised, and safe outdoor green space with independently mobile residents at residential care home A.
  • Evaluation question: what barriers and enablers were experienced by independently mobile residents when participating in the co-design of a walkable, secure, supervised, and safe outdoor green space?
  • Indicators: barriers and enablers to involvement as experienced by independently mobile residents in residential care home A; number of independently mobile residents in residential care home A who experienced barriers to initial and ongoing involvement in the co-design process.
  • Design: process evaluation does not require study design selection, as the purpose is not to determine effects or change, but to monitor and understand factors around strategy implementation and participation.
  • Data collection methods: focus groups with independently mobile residents in residential care home A, held every 6 months throughout the co-design phase to explore their experiences of barriers and enablers to participation.
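The outcome evaluation row in Table 2 relies on a time-series design that compares post-implementation measures against the background trend line. The following sketch illustrates that comparison with hypothetical yearly figures, using numpy; a real analysis would typically use a more formal interrupted time-series model.

```python
# Sketch of the time-series logic in the outcome evaluation row of Table 2:
# fit a simple linear trend to pre-program yearly measures and compare
# post-program measures against the projected trend line. Figures are invented.
import numpy as np

pre_years = np.array([2019, 2020, 2021])
pre_minutes = np.array([22.0, 21.5, 21.0])    # avg daily minutes of low-moderate activity
post_years = np.array([2022, 2023, 2024])
post_minutes = np.array([24.0, 26.5, 28.0])   # observed after strategies were implemented

slope, intercept = np.polyfit(pre_years, pre_minutes, 1)  # background trend line
projected = slope * post_years + intercept                # trend extended forward

for year, observed, expected in zip(post_years, post_minutes, projected):
    print(f"{year}: observed {observed:.1f} min vs projected {expected:.1f} min "
          f"(difference {observed - expected:+.1f})")
```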

Evaluation management

With any project or program, having the right mix of individuals is critical across the planning and evaluation life cycle. Begin by identifying the stakeholders, that is, people who may have an interest in the project evaluation, such as consumers, community representatives, and sponsors. It will probably not be feasible to include all stakeholders, so you will need to decide how to prioritise who should be involved. When planning for an evaluation and bringing together a team, it is worth paying particular attention to the mix of skills, time available, resources, and credibility. These factors may result in the scale, size, or focus of the evaluation shifting to ensure it is practical to implement within the bounds of the team’s skills and resources.

Stakeholder engagement is not only an important element of successful implementation, it is essential for a successful evaluation. Stakeholder engagement can range from consultation through to active participation of stakeholders in the planning, delivery, and reporting of an evaluation (Wensing and Grol, 2019). Large evaluations will often involve the establishment of an evaluation working group or committee comprising representatives from stakeholder organisations and groups. Stakeholder involvement in evaluation and other research strategies has been encouraged, as their engagement can increase the relevance of evaluation findings, thereby promoting adaptations in practice and helping to close the knowledge-to-practice gap. Strategies likely to promote stakeholder engagement include the use of plain language summaries and ongoing consultation; barriers include the limited time and resources available to stakeholders. To date, evidence on the benefit of involving stakeholders is scarce, and future research should address this gap through standardised approaches and defined outcome measures to determine the benefit of stakeholder engagement.

The evaluation timeline should clearly indicate the period over which the evaluation is expected to occur and the specific tasks. The timeline can serve as a communication tool to keep stakeholders and staff up-to-date and track the progress of the evaluation. When preparing a timeline for an evaluation, it is important to ensure adequate time has been built in to allow for development and testing of evaluation data collection instruments, recruitment of participants, data collection and analysis, and dissemination of findings.

The resourcing of an evaluation is often overlooked at the planning stage or underestimated in terms of work hours and cost. Evaluation budgets should be scaled to the size and complexity of the health or social care program under consideration; realistically, 15%–25% of the program’s budget should be quarantined to fund an evaluation (Zarinpoush, 2006). Horn (2001) developed a robust checklist to support evaluators in planning their evaluation budgets, which is freely available online.
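As a simple worked example of the 15%–25% guide (the program budget figure below is hypothetical):

```python
# Worked example of scaling an evaluation budget to 15-25% of the program budget.
# The program budget below is a hypothetical figure for illustration only.
program_budget = 200_000  # total program budget (dollars)

low_estimate = program_budget * 0.15   # lower bound of the recommended range
high_estimate = program_budget * 0.25  # upper bound of the recommended range

print(f"Quarantine ${low_estimate:,.0f} to ${high_estimate:,.0f} for the evaluation.")
```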

There is an expectation that, as evaluators, we will address all ethical requirements when leading an evaluation. For example, ethics approvals may be required if we are working with another agency or organisation to plan an evaluation of a program or service, such as a hepatitis C treatment clinic within several public hospitals, and collecting data through a survey, interviews, or focus groups. The evaluation team is responsible for including this in the plan and obtaining all required ethics approvals. Each jurisdiction’s ethics processes and protocols may differ, and these must be complied with accordingly. In Australia, the National Health and Medical Research Council is responsible for ensuring that research is conducted ethically and with informed consent. The conduct of an evaluation often involves the participation of individuals from vulnerable groups; at the planning stage, it is necessary to check the specific ethics requirements for engaging these sub-populations in the evaluation.

The ability of evaluators to anticipate risks to evaluation rigour and minimise them is an essential evaluation management skill. Threats to rigour can arise from factors such as recruitment and follow-up issues, the quality of data collection instruments, the timing of data collection (seasonal effects), and problems with data management. The most commonly adopted strategy for managing these risks is putting in place a detailed work plan and communication plan for data collectors, supported by adequate training in the protocols for data collection, management, and analysis. This can be further supported by ensuring supervision is readily available for data collection staff and that immediate troubleshooting, with updates to the team, is undertaken. The best safeguard against risks to rigour is the employment of appropriately trained staff with the right mix of personal skills, knowledge, and qualities. Data collection staff need to be able to work systematically, be sensitive to ethical and community considerations, be well versed in the potential risks to data quality, be personable, and be excellent communicators.

Key Implications for Practice

In Australia, government funded programs have been initiated in health services to improve the healthcare system, with significant resources being dedicated to their implementation.

Evidence-based evaluation of these initiatives is essential to ensure that resources are used appropriately and are delivering their intended outcomes. It is important to recognise that there are multiple concepts, frameworks, and theories for evaluation, suited to different contexts.

The main activities of any evaluation program include: identifying the purpose of the evaluation, identifying stakeholders, assessing evaluation expertise, gathering the relevant evidence using various methods, and building consensus, which is usually an iterative process.

Health services managers and leaders need to be engaged in all steps of any evaluation program and ensure a collaborative approach is used to achieve the required results. Other important responsibilities for leaders and managers include being aware of the ethics of the evaluation, managing expectations, sharing both positive and negative findings with the team, and ensuring that the results provided are both useful and usable.

References

Bauman, A. E., King, L. and Nutbeam, D. 2014. Rethinking the evaluation and measurement of health in all policies. Health Promotion International, 29(suppl_1), pp.i143-i151.

Centers for Disease Control and Prevention. n.d. Indicators: CDC approach to evaluation. https://www.cdc.gov/evaluation/indicators/index.htm

Greenhalgh, T. M. 2019. Understanding Research Methods for Evidence-Based Practice in Health. Melbourne: Wiley.

Harris, P., Barry, M., Sleep, L., Griffiths, J. and Briggs, L. 2020. Integrating recovery-oriented and realistic evaluation principles into an evaluation of a Partners in Recovery programme. Evaluation Journal of Australasia, 20(3), pp. 140-156. https://doi.org/10.1177/1035719X20944010

Horn, J. 2001. A checklist for developing and evaluating evaluation budgets. Western Michigan University. Available at: http://www.wmich.edu/evalctr/checklists/evaluation-checklists (Accessed 15 March 2023).

Lucas, H. and Longhurst, R. 2010. Evaluation: Why, for Whom and How? IDS Bulletin, 41, pp. 28-35.

Patton, M. Q. 2012. Essentials of Utilization-Focused Evaluation. Los Angeles: SAGE Publications.

Pawson, R. and Tilley, N. 1997. Realistic Evaluation. London: Sage Publications.

Poland, B., Frohlich, K. L. and Cargo, M. 2009. Context as a fundamental dimension of health promotion program evaluation. In: Potvin, L. and McQueen, D. V. (eds.) Health Promotion Evaluation Practices in the Americas: Values and Research. New York, NY: Springer, pp. 299-317.

Scarth, L. 2005. Encyclopedia of Evaluation. The Booklist, 101, 1602.

Steckler, A. E. and Linnan, L.E. 2002. Process evaluation for public health interventions and research. Jossey-Bass/Wiley.

Stufflebeam, D. 2001. Evaluation Models. New Directions for Evaluation, 2001. https://doi.org/10.1002/ev.3

Wensing, M. and Grol, R. 2019. Knowledge translation in health: how implementation science could contribute more. BMC Medicine, 17(1), pp.1-6.

Zarinpoush, F. (2006).  Project Evaluation Guide for Non-profit Organizations. Imagine Canada. https://sectorsource.ca/sites/default/files/resources/files/projectguide_final.pdf

 

License


Leading in Health and Social Care Copyright © 2023 by Ruth Mackenzie-Stewart and Hanan Khalil is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.
