Frequently Asked Questions (FAQs)
- What is an Impact Evaluation?
- Why are Impact Evaluations important?
- When should I conduct an impact evaluation?
- When should I start an impact evaluation?
- How long does an impact evaluation take?
- What is the cost of an impact evaluation?
- How should I fund an impact evaluation?
- What are the steps to implement an impact evaluation?
- How do I choose an impact evaluation methodology?
- What data should I collect for an impact evaluation?
- Are impact evaluations ethical?
- How is the impact evaluation portal organized?
- Where can I find impact evaluation experts?
- Where can I find training for IE?
An impact evaluation is a type of evaluation that uses statistics and appropriate research designs, such as randomized controlled trials, to establish the causal link between an intervention and a set of outcomes. In the context of development, interventions include programs or policies such as paving roads, paying cash transfers, or equipping health clinics, and outcomes might include increasing productivity, reducing poverty, or improving health. An impact evaluation would determine, for example, whether an intervention that equips health clinics resulted in better health outcomes for patients. Impact evaluations establish not only the direction of the program effect, but also its magnitude, confidence intervals, and the heterogeneity of impacts. The estimated impacts are a key input to cost-benefit and cost-effectiveness analyses, which guide policy decisions by identifying the least costly way to achieve a given policy objective.
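The two core quantities mentioned above, the magnitude of the effect and its confidence interval, can be illustrated with a minimal sketch on simulated data. Everything here is hypothetical: the "health score" outcome, the sample sizes, and the assumed true effect of +2 points are invented for illustration only.

```python
import random
import statistics

def impact_estimate(treated, control, z=1.96):
    """Difference-in-means impact estimate with a normal-approximation 95% CI."""
    diff = statistics.mean(treated) - statistics.mean(control)
    se = (statistics.variance(treated) / len(treated)
          + statistics.variance(control) / len(control)) ** 0.5
    return diff, (diff - z * se, diff + z * se)

# Hypothetical simulated data: a clinic-equipment program whose true
# effect on a patient health score is +2 points.
random.seed(0)
control = [random.gauss(10, 3) for _ in range(500)]
treated = [random.gauss(12, 3) for _ in range(500)]

effect, (lo, hi) = impact_estimate(treated, control)
print(f"Estimated impact: {effect:.2f} (95% CI: {lo:.2f} to {hi:.2f})")
```

Because the data are simulated with a known effect, the estimate should land near +2, with the confidence interval conveying the statistical uncertainty around it.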
Impact evaluations provide rigorous evidence on the effectiveness of programs and policies. Development programs are typically designed to improve outcomes. Whether a program actually achieves its intended objectives, or has unintended consequences, is a crucial public policy question that impact evaluations seek to answer. Impact evaluations help to:
- Guide policy decisions. Given finite resources, the evidence from impact evaluations helps guide policy decisions and improves the use of resources, for example by expanding cost-effective interventions and eliminating or reformulating interventions with negative impacts.
- Improve program design. Impact evaluations can help us understand why a given intervention works and how to make programs even more effective.
- Increase accountability. By generating and reporting evidence about the effectiveness of public policies, impact evaluations inform donors, media, civil society, academia and other audiences about the returns to social investments.
Impact evaluations should be conducted when there is a need for rigorous evidence on the effectiveness of an intervention, or when you want to test innovations to improve an existing intervention. Large or unproven programs are good candidates for an impact evaluation when the intended purpose of the evaluation is to guide policy decisions and provide accountability. Large projects are those that will occupy a sizeable proportion of a country or institution’s budget and/or affect a large number of people. Untested interventions are those for which little rigorous empirical evidence exists. Impact evaluations can also test adjustments to an existing intervention to assess whether the program can be further improved.
The planning of an impact evaluation should ideally start at the earliest stages of project preparation. Prospective evaluations are those that are designed and planned together with the intervention. Early planning will allow the project team and the evaluators to identify the most rigorous methodology that is compatible with program implementation requirements. On the other hand, retrospective evaluations are those designed and conducted at the conclusion of a program, and as such, are typically more limited in terms of methodologies and data.
The duration of an impact evaluation will depend on various factors, including whether the evaluation is prospective or retrospective, the sources of data, and the time of exposure to treatment. As such, the duration of an evaluation can vary widely, from a few months for small efficacy trials to multiple years for large effectiveness trials. For a detailed planning tool, see the Impact Evaluation Scheduling Tool.
The primary costs of an impact evaluation are data, which may represent upwards of 70 to 80 percent of an evaluation budget, and technical assistance for activities such as evaluation design and data analysis. The cost of an impact evaluation can be quite modest when using data from existing sources such as administrative information systems or existing household surveys. On the other hand, evaluations that require extensive primary data collection will tend to be more expensive. Furthermore, in the context of large effectiveness trials the cost of an evaluation may be large in absolute terms, but small relative to the overall cost of the program. In the context of a small efficacy trial the cost of the evaluation may represent a larger share of total costs. To estimate the costs of your evaluation, see the IDB’s Budgeting Template Tool.
The sources of funding for an impact evaluation depend fundamentally on whether the evaluation is intended primarily to produce public or private knowledge. Impact evaluations conducted for the purposes of informing the improvement or scale-up of a particular country program are more likely to be funded from a program’s own monitoring and evaluation budget. On the other hand, impact evaluations that are primarily intended to expand the global evidence base may be more heavily subsidized by external sources such as grants. In many cases, since impact evaluations serve both a public and a private purpose, the program budget is supplemented with external funding sources to cover the full cost of the evaluation.
An impact evaluation should be considered an “operation within an operation” and requires adequate funding, staffing and technical support. For a detailed checklist of activities, see the Comprehensive Checklist.
The central methodological challenge of any impact evaluation is to separate the change in an outcome that is attributable to the intervention of interest from the influence of other variables, or “confounders,” that affect the same outcomes. To control for potential confounders, evaluations compare the outcomes of treatment and control (or comparison) groups. An impact evaluation methodology is determined by the process through which treatment is assigned. Experimental evaluations, also known as randomized evaluations or randomized controlled trials (RCTs), compare the outcomes of groups to which treatment was assigned at random. Under certain conditions, random assignment is the most robust way of guaranteeing that estimated impacts are free of bias and thus represent the “true” impact of the program. A second type of impact evaluation uses quasi-experimental (or non-experimental) methods. Quasi-experimental impact evaluations construct comparison groups from populations that were left untreated through a non-random process, and usually rely more heavily on statistical and econometric models to estimate impacts. Evaluations with randomized assignment are typically superior to quasi-experimental evaluations in that they minimize the risk of bias and allow for the estimation of population average treatment effects; quasi-experimental evaluations may require stronger assumptions and estimate impacts that are valid only for certain population sub-groups. More information about impact evaluation methods can be found HERE.
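One common quasi-experimental method, difference-in-differences, can be sketched in a few lines: it compares the change over time in the treated group with the change in the comparison group, under the identifying assumption that the two groups would have followed parallel trends absent the intervention. The outcome data below are made up for illustration.

```python
from statistics import mean

def diff_in_diff(treated_pre, treated_post, comparison_pre, comparison_post):
    """Difference-in-differences: the treated group's change over time,
    net of the change experienced by the comparison group."""
    return ((mean(treated_post) - mean(treated_pre))
            - (mean(comparison_post) - mean(comparison_pre)))

# Hypothetical outcome data (e.g., household income) before and after a program.
effect = diff_in_diff(
    treated_pre=[100, 110, 90],       # treated group, baseline
    treated_post=[130, 140, 120],     # treated group, follow-up
    comparison_pre=[105, 95, 100],    # comparison group, baseline
    comparison_post=[115, 105, 110],  # comparison group, follow-up
)
print(effect)  # 30 - 10 = 20.0: the program's estimated impact
```

In this toy example the treated group improved by 30 and the comparison group by 10, so the estimated impact is 20: the comparison group's change absorbs the confounding trend that both groups share.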
Most importantly, impact evaluations require high-quality final outcome indicators, measured in the same way for treatment and control groups. It is also useful to measure intermediate outcomes along the results chain to gain a more complete understanding of how final outcomes are affected. Impact evaluations also require detailed information about program implementation, such as which units received treatment and when. Finally, many evaluations collect a rich set of descriptive variables about the study population. Evaluation data include:
a) Outcome indicators. These should preferably be “SMART”: specific, measurable, attributable, realistic, and targeted. You must also determine when to measure the outcome indicators and establish a hierarchy of indicators, ranging from short-term to longer-term ones.
b) Administrative data on the delivery of the intervention. Monitoring data are needed to know when a program starts and who receives benefits, as well as to provide a measure of the “intensity” of the intervention in cases when it may not be delivered to all beneficiaries with the same content, quality, or duration.
c) Data on exogenous factors that may affect the outcome of interest. These make it possible to control for outside influences.
d) Data on other characteristics. Including additional controls or analyzing the heterogeneity of the program’s effects along certain characteristics makes possible a finer and more nuanced estimation of treatment effects.
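The data types listed above come together in a single analysis dataset. The sketch below, with invented units and outcomes, shows how outcome indicators merged with administrative treatment records and a baseline characteristic support both an overall impact estimate and a subgroup (heterogeneity) comparison, as in item d).

```python
# Hypothetical evaluation dataset: outcome indicators merged with
# administrative records of who received treatment, plus a baseline
# characteristic (rural/urban) used to examine heterogeneity of impacts.
records = [
    # (unit_id, treated, outcome, rural)
    (1, True, 14.0, True),
    (2, True, 12.0, False),
    (3, False, 10.0, True),
    (4, False, 11.0, False),
    (5, True, 15.0, True),
    (6, False, 9.0, True),
]

def mean_outcome(rows, treated, subgroup=None):
    """Mean outcome for a treatment status, optionally within the rural subgroup."""
    vals = [y for (_, t, y, rural) in rows
            if t == treated and (subgroup is None or rural == subgroup)]
    return sum(vals) / len(vals)

overall = mean_outcome(records, True) - mean_outcome(records, False)
rural = (mean_outcome(records, True, subgroup=True)
         - mean_outcome(records, False, subgroup=True))
print(f"Overall impact: {overall:.2f}; impact among rural units: {rural:.2f}")
```

In practice such comparisons come from a regression with controls rather than raw subgroup means, but the data structure, outcomes keyed to treatment records and baseline characteristics, is the same.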
Questions of ethics are sometimes raised when planning and conducting an impact evaluation. In fact, not conducting an impact evaluation may itself be unethical: untested programs and policies can waste resources and have negative unintended consequences, so implementing interventions NOT supported by evidence may be seen as unethical. The ethics of impact evaluation is sometimes questioned when a control group is perceived to be “denied” benefits for the sole purpose of evaluation. This concern assumes both that the intervention is known to produce positive benefits and that there are sufficient resources to reach every eligible unit in a population (person, community, firm, etc.). Impact evaluations can and should be conducted in an ethical manner, including adhering to program assignment rules and constraints so that benefits are not denied for the sole purpose of evaluation, and following best practices for ethical research and the protection of human subjects.
How is the impact evaluation portal organized?
The Impact Evaluation (IE) Portal is a one-stop shop for information and tools used for impact evaluation. The portal’s structure follows the sequential steps of an impact evaluation. In particular, it is organized in five sections: (1) Design, (2) Implementation, (3) Data collection, (4) Analysis and dissemination, and (5) Learning. The first four sections are sequential, while the last cuts across all of the others.
a) Design will help you to define a solid methodological framework for your impact evaluation, with tools to determine what to evaluate and how to evaluate it.
b) Implementation provides tools to assist in practical tasks such as contracting of technical support and identifying funding sources.
c) Data collection provides resources for surveys and data management, including examples of questionnaires, manuals and data-entry programs that can be tailored for specific applications.
d) Analysis and dissemination shows examples of dissemination strategies.
e) Learning includes training materials and links to impact evaluation courses.
The Impact Evaluation Portal provides two lists of impact evaluation experts, along with their areas of expertise: the 3ie roster of international impact evaluation specialists and J-PAL affiliates.
The Learning page within the Impact Evaluation Portal provides several learning resources, both from the IDB and from external sources. The first link (Workshops and courses) presents the impact evaluation trainings organized by the IDB, followed by initiatives sponsored by other partners. In the second part of the Learning page (Training materials) you can download learning materials organized by topic, ranging from impact evaluation methodologies to fieldwork and sampling. The third link hosts the IDB IE Virtual Lessons, where some of the most common impact evaluation topics are explained in an intuitive way.