This policy briefing focuses on evaluability assessment, a systematic and collaborative approach to deciding whether and how an evaluation should be done. 

Published: October 2018

Introduction

Evaluation is essential for measuring the effectiveness of public policies and services. A well-designed evaluation can go beyond the monitoring of inputs, outputs and outcomes, to identify whether observed changes in outcomes are attributable to a change in policy or a service redesign. It can also identify the mechanisms through which change takes place, generating evidence that is applicable to other policies and services.

Of course, not every policy or service innovation can be evaluated. Resources are scarce and sometimes the effects are so small or take so long to emerge that the cost of evaluation would be disproportionate to the investment in the policy change itself. Evaluation planning is complicated by the wide range of evaluation approaches available; identifying which methods, or combination of methods, is likely to work best is a skilled task.

Evaluability Assessment is a systematic and collaborative approach to deciding whether and how an evaluation should be done. It involves researchers, policymakers, practitioners and other stakeholders working together to reach a consensus view of what the policy or service change is expected to achieve, what data sources are available to measure change processes and outcomes, and what, if any, is the best approach to evaluation.

This briefing paper draws on the experience of What Works Scotland and our partners (NHS Health Scotland and the Scottish Collaboration for Public Health Research and Policy at the University of Edinburgh) in developing and applying Evaluability Assessment methods in Scotland. We did not invent the method, but we have built up one of the most extensive portfolios of completed Evaluability Assessments in the UK, and learnt important lessons about how and when the approach can be applied to best effect.

The paper is organised around three questions:

  1. What is Evaluability Assessment?
  2. What have we learnt from conducting Evaluability Assessments in Scotland?
  3. Where next for Evaluability Assessment?

What is evaluability assessment?

Evaluability Assessment was developed in the US as a way of improving evaluation planning and reducing the waste associated with weak evaluations of interventions that were so poorly designed or implemented that it was unrealistic to expect any measurable change. In the UK it has been most extensively used in planning the evaluation of overseas development aid projects (Davies, 2013; Davies and Payne, 2015). Its potential to improve decision-making around the evaluation of policy and practice innovation in the UK was first identified by public health researchers (Ogilvie et al., 2011), but What Works Scotland and its partners have applied the approach to evaluation planning in a range of policy sectors.

The approach we have adopted varies from case to case, but has a number of common core elements.

Engaging stakeholders

An important function of Evaluability Assessment is to ensure that evaluation findings are useful for decision-makers. Involving stakeholders throughout the process means that key decisions about what form a subsequent evaluation should take are jointly owned and reflect stakeholders’ priorities, as well as the practical and methodological constraints on evaluation study design. Who to involve will depend on the nature of the intervention, but typically will include both policy-makers and practitioners who are responsible for delivering the intervention. It is often useful to include people responsible for routine data gathering or monitoring. Involving stakeholders directly, rather than relying on documentary information, should provide a more accurate, detailed and up-to-date characterisation of the goals and design of the intervention. It should also help to ensure a shared understanding and realistic expectations about what an evaluation can and cannot deliver.

Developing a theory of change

A key motivation for Evaluability Assessment is to achieve a common understanding of what an intervention is intended to achieve. Setting out the goals and components of the intervention, and linking these to the intended outcomes in the form of a visual model is a good way of achieving such a shared understanding. A draft model can be sketched out by the researchers, based on documentary information, and then refined and elaborated in interviews or workshops with stakeholders.

Reviewing existing evidence and data sources

The focus of an evaluation will depend on what is already known about the intervention in question and what the most important remaining uncertainties are. Data sources may include published literature, including previous evaluations of similar interventions, policy or programme-specific documents, and routinely collected monitoring or outcome data. Access to administrative data, especially if information on exposure can be linked to information on outcomes, is often the key to an efficient, affordable evaluation design.

Appraising options and making recommendations

An Evaluability Assessment is a decision-making tool, so it is important to provide a clear appraisal of the evaluation options given the questions that stakeholders want to answer, what is already known from previous research, and what data sources are available for future evaluation. Even if these considerations support one particular approach, it is useful to present an appraisal of a range of options, including the option of not proceeding with an evaluation, so that the grounds for any recommendations are explicit and persuasive. Stakeholders should be involved in reviewing and agreeing a draft set of options before a final report is presented.

We have found it useful to approach these tasks through a series of workshops with stakeholders. Although there may be some overlap between the workshops, having one focused on refining and agreeing the theory of change, one on existing evidence and potential data sources, and a third on appraising evaluation options, is a good way of sustaining momentum and engagement.

What have we learnt from conducting Evaluability Assessments in Scotland?

Evaluability Assessment has proven to be a useful and flexible approach to evaluation planning, and is increasingly being incorporated into implementation plans for significant national policy initiatives in Scotland. It offers value by sharpening the focus of interventions that are put forward as candidates for evaluation, and establishing the likelihood of measurable impact, before resources are committed to a full-scale evaluation.

At least fifteen Evaluability Assessments have now been undertaken in Scotland (see below), covering a wide variety of interventions, on behalf of the Scottish Government, local government, NHS Scotland and other public organisations such as Scottish Natural Heritage. Evaluability Assessments have led to the commissioning of several evaluations, for example of the Family Nurse Partnership, Distress Brief Interventions Programme, and the Enhanced Health Visiting Pathways.

  1. Free school meals for all P1-3 children
  2. Family Nurse Partnership
  3. Pregnancy and parenthood in young people
  4. Enhanced Health Visiting Pathway
  5. Glasgow’s Thriving Places Initiative
  6. Preventing unintentional injuries in children
  7. Community Hub/GP fellows pilot
  8. Primary Care transformation
  9. Expansion of early learning and childcare
  10. Distress Brief Interventions Programme
  11. Hospital Healthy Retailer Standard
  12. Community Empowerment Act, parts 3 and 5
  13. Fair Start Programme
  14. Scotland’s Baby Box
  15. Local Green Health Partnerships

We have also found that Evaluability Assessment can forestall commitments to evaluate programmes where further development is required, or where there is little realistic expectation of benefit, and can make the evaluations that are undertaken more useful. Evaluability Assessments can provide a basis for constructive engagement with stakeholders, whether or not a full-scale evaluation is undertaken.

Other key lessons from our experience so far are that:

Evaluability Assessment is most useful when there is a clear commitment to evaluation on the part of stakeholders. This does not mean that resources must be committed in advance, because an Evaluability Assessment could help to make the case for allocating a budget for evaluation, or demonstrate that an evaluation would not be worthwhile. But a commitment to acting on the recommendations lessens the risk that Evaluability Assessment becomes a form of window dressing, or a way of deferring the key decisions.

Evaluability Assessment is a way of developing evaluation plans, not a way of developing the intervention itself. Without a clear model of the intervention that can be captured in an agreed theory of change, it will only be possible to make very general recommendations about evaluation.

Clarifying expectations and assumptions is therefore an important first step to undertaking a useful Evaluability Assessment, so that all those involved are clear at the outset what the process can and cannot deliver. The key output is an appraisal of the options for evaluation, rather than a detailed research specification. Developing such a specification is a substantial task in its own right, which can only be carried out once a preferred option has been identified.

It is difficult to reach and sustain consensus if the stakeholders involved change substantially after the process has begun – and especially difficult if senior stakeholders join towards the end. Ideally, all the key stakeholders should be engaged at the outset, on the basis of a clear understanding of the level of commitment required.

Where next for Evaluability Assessment?

Evaluability Assessment has been well-characterised as a “low-cost pre-evaluation activity to prepare better for conventional evaluations of programmes, practices and some policies” (Leviton et al., 2010). It is low cost relative to a full evaluation, but not free. As interest in Evaluability Assessment grows, questions of how to meet the demand become more pressing. As experience accumulates, researchers and stakeholders will develop a better understanding of the circumstances in which an Evaluability Assessment is likely to be worthwhile. Development of further guidance for conducting Evaluability Assessments based on this experience (forthcoming from What Works Scotland) will also help researchers and stakeholders to distinguish between cases where a full Evaluability Assessment would be useful and those where the evaluation options are so constrained that decisions could be reached more quickly. Finally, if stakeholders accept that Evaluability Assessment is an efficient use of resources, they should be prepared to make the relatively modest outlay of time and money required to maximise the value of the potentially much larger investment in a full-scale evaluation.

References and further reading

  • Craig, P. and Campbell, M. (2015) Evaluability Assessment: A Systematic Approach to Deciding Whether and How to Evaluate Programmes and Policies. Working Paper. What Works Scotland.
  • Davies, R. (2013) Planning evaluability assessments: A synthesis of the literature with recommendations. London: Department for International Development.
  • Davies, R. and Payne, L. (2015) Evaluability assessments: Reflections on a review of the literature. Evaluation, 21(2): 216-231.
  • Leviton, L. et al. (2010) Evaluability Assessment to Improve Public Health Policies, Programs, and Practices. Annual Review of Public Health, 31: 213-233.
  • Ogilvie, D. et al. (2011) Assessing the evaluability of complex public health interventions: five questions for researchers, funders, and policymakers. The Milbank Quarterly, 89(2): 206-225.

Contact

This policy briefing was written by Dr Peter Craig, Co-director of What Works Scotland.

More research

See all of What Works Scotland’s evaluability assessment resources
