{{subpages}}
An '''impact evaluation''' is a study designed to estimate the effects on a group of people that can be attributed to a policy program or intervention. Impact evaluation is a useful tool for measuring a program’s effectiveness because it does not merely examine whether the program’s goals were met; it also determines whether those goals would have been met in the absence of the program by establishing a cause-and-effect relationship between program activities and the outcomes of interest.


Because impact evaluation reliably quantifies program efficacy, it is used in a variety of program studies and has especially found application in development [[economics]] to increase effectiveness of aid delivery in [[developing nation]]s.  
==How is a policy’s impact measured?==
There are multiple methods for conducting rigorous impact evaluations, yet all necessarily rely on simulating the counterfactual—in other words, estimating what would have happened to the scrutinized group in the absence of the intervention.<ref name=nonexperimental>Merely comparing what the situation was before and after program implementation requires an enormous assumption—that all the difference between the "before" state and the "after" state was due to the program. (This assumption is, in fact, precisely what an impact evaluation is designed to test.) Thus, non-experimental evaluations only say what happened over a period of time, not what ''would have'' happened.</ref> [[counterfactual history|Counterfactual analysis]] thus requires a ‘control’ group—people unaffected by the policy intervention—to compare to the program’s beneficiaries, who comprise the ‘treatment’ group of a [[Sample_(statistics)/Definition|population sample]]. The ability to draw causal inferences from the impact evaluation crucially depends on the two groups being statistically identical, meaning there are no systematic differences between them.  
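
The logic of the counterfactual can be made concrete with a small simulation. The sketch below is purely illustrative (the numbers, variable names, and the use of random assignment are invented for clarity, not drawn from any actual evaluation): a naive before-and-after comparison absorbs a background trend that would have occurred anyway, while the treatment-versus-control comparison isolates the program's effect.

<pre>
# Illustrative simulation (Python): every participant's outcome improves by a
# background trend of 5 units, and the program adds a true effect of 2 units.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
true_effect = 2.0
background_trend = 5.0

baseline = rng.normal(50, 10, size=n)   # outcome measured before the program
treated = rng.random(n) < 0.5           # treatment randomly assigned for simplicity

# Follow-up outcome: the trend affects everyone, the effect only the treated.
followup = baseline + background_trend + true_effect * treated + rng.normal(0, 1, n)

# Naive before/after comparison: picks up trend + effect (about 7).
naive = followup[treated].mean() - baseline[treated].mean()

# Treatment-versus-control comparison: the control group supplies the
# counterfactual, so only the program's effect remains (about 2).
with_control = followup[treated].mean() - followup[~treated].mean()

print(f"before/after estimate:          {naive:.2f}")
print(f"treatment vs. control estimate: {with_control:.2f}")
</pre>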


To minimize systematic differences, researchers design impact evaluations to be at least quasi-experimental. Like non-experimental approaches, quasi-experimental approaches suffer from bias because beneficiaries are not randomly selected, but the advantage of quasi-experiments is that [[selection bias]] arising from observable differences between the treatment group and the control group can often be removed; where panel data are available, time-invariant unobservable characteristics can be controlled for as well. Quasi-experimental methods vary but are usually carried out through multivariate [[regression analysis]].
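
As a rough sketch of how selection on observables can be handled, the hypothetical regression below uses an invented covariate (household income) that drives both program take-up and the outcome; the variable names, data, and the choice of the statsmodels library are assumptions made for illustration only.

<pre>
# Illustrative quasi-experimental adjustment (Python): better-off households
# are more likely to enrol, and income also raises the outcome, so the raw
# comparison overstates the program's effect.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5_000
income = rng.normal(0, 1, n)                                  # observable covariate
treated = (income + rng.normal(0, 1, n) > 0).astype(int)      # selection on income
outcome = 1.0 * treated + 2.0 * income + rng.normal(0, 1, n)  # true effect = 1

df = pd.DataFrame({"outcome": outcome, "treated": treated, "income": income})

# Raw difference between participants and non-participants: biased upward.
print(smf.ols("outcome ~ treated", data=df).fit().params["treated"])

# Controlling for the observable characteristic removes this selection bias.
print(smf.ols("outcome ~ treated + income", data=df).fit().params["treated"])
</pre>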


Although quasi-experimental methods have their advantages, systematic differences are best eliminated through a fully experimental approach, which involves random assignment: a basic result of [[statistics]], the law of large numbers, guarantees that a large enough sample of people randomly assigned will produce statistically identical comparison groups.<ref name="randomassignment">In more technical terms, "random" means that every person in a population has the same probability of being selected for the treatment group; "statistically identical" means that the distributions of the observable and unobservable characteristics in the treatment and control groups are indistinguishable in shape, center, and spread.</ref> Thus, the control group mimics the counterfactual, and any differences that arise between the two groups after the program is implemented may be reliably attributed to the program, provided that threats to the study's validity are controlled for. These threats include:
*The [[Hawthorne effect]], which occurs when members of the treatment group change their behavior in response to the knowledge that they are being studied, not in response to any particular experimental manipulation. The [[John Henry effect]] occurs when members of the control group do so.
*No-shows, members of the treatment group who fail to attend some function where their attendance is necessary to the study's design.
*Spillover, which occurs when members of the control group are affected by the intervention.
*Contamination, which occurs when members of treatment and/or comparison groups have access to another intervention which also affects the outcome(s) of interest.
*Crossovers, members of the control group who "cross over" into the treatment group.  
These threats can affect the validity of all types of impact evaluation, randomized or otherwise. Non-experimental and quasi-experimental evaluations face additional methodological issues such as confounding factors and selection bias. While random assignment addresses these issues to maximize an impact evaluation’s internal validity (the ability to infer a causal relationship between the program's activities and outcomes), there remain inherent limitations to an impact evaluation’s [[external validity]] (the ability to generalize the study’s results to other populations). Testing a program in multiple disparate settings is an effective way to determine whether the program's results are generally replicable and thus worth “scaling up.” Knowledge of a particular setting can help determine whether a program's results can indeed be replicated in that context. For example, consider a policy intervention designed to increase school enrollment by informing parents about the positive correlation between additional schooling and increased wages: if, in a given school system, parents are inclined to underestimate the effects of additional schooling on wages, they are more likely to be influenced by the information than parents in another school system who generally overestimate those effects.
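
The claim that random assignment yields statistically identical groups can be illustrated with another hypothetical sketch: as the randomly assigned sample grows, the difference between the treatment and control groups on a background characteristic (here an invented "age" variable) tends toward zero.

<pre>
# Illustrative balance check (Python): under pure random assignment, the
# treatment and control groups look more and more alike as the sample grows.
import numpy as np

rng = np.random.default_rng(2)

for n in (100, 1_000, 10_000, 100_000):
    age = rng.normal(35, 10, n)        # a background characteristic
    treated = rng.random(n) < 0.5      # pure random assignment
    gap = abs(age[treated].mean() - age[~treated].mean())
    print(f"n = {n:>7,}: gap in mean age between groups = {gap:.3f}")
</pre>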
 
==Other examples of impact evaluations==
The most renowned large-scale development experiment/impact evaluation is a [[conditional cash transfer]] program named [[Oportunidades]] (formerly known as Progresa). The program, launched by the Mexican government in 1997, targets poverty by providing cash payments to families whose children meet certain conditions such as regular school attendance. Inspired by the success of Oportunidades, similar conditional cash transfer programs have since been implemented by a number of governments in developing countries.


Although Oportunidades has proven effective in improving a number of development outcomes among beneficiaries, it is very expensive, so a number of non-profit organizations have conducted various cost-effectiveness studies to compare alternative solutions. In comparing the cost effectiveness of various programs designed to improve school participation, for example, the [[Abdul Latif Jameel Poverty Action Lab]] (J-PAL) at the [[Massachusetts Institute of Technology]] found that distributing de-worming tablets to children was substantially more cost-effective than conditional cash transfer programs.<ref name=citation1>{{cite video
  | people = Levy, Dan
  | title = Poverty Action Lab Executive Training: Evaluating Social Programs
  | medium = Web video
  | publisher = Massachusetts Institute of Technology
  | location = Cambridge, MA
  | date = 2009 }}</ref> J-PAL maintains a [http://www.povertyactionlab.org/evaluations?filters=type:evaluation&filters=type:evaluation searchable database] of hundreds of randomized impact evaluations conducted either by J-PAL or by its affiliates.


==Conclusion==
Provided that the individual studies are properly designed, an accumulation of impact evaluations offers a wealth of knowledge, because each study becomes a building block for more general lessons. Properly designed impact evaluations test hypotheses to answer not only whether a program is effective, but also how and why it is effective—that is, the reasons for the program’s effectiveness and the circumstances under which its results would likely be replicated. This type of “theory-based” impact evaluation allows policymakers to understand the reasons for differing levels of program participation and the processes determining changes in behavior—and, after all, one of the main goals of a good impact evaluation is to guide policymakers in future decisions.
 
==Notes==
<references />

[[Category:Suggestion Bot Tag]]
