How do I know if an intervention is evidence based? Why is it important to consider both study quality and results?

Resource Type
Videos
Developed By
National Center on Intensive Intervention

In this video, Ralph P. Ferretti, Professor of Education and Psychological & Brain Sciences at the University of Delaware, explains why it is important to consider both the study quality and the study results when determining the evidence base of an intervention.

Question: How do I know if an intervention is evidence based? Why is it important to consider both study quality and results?

Answer: So let’s start quickly by asking, “What is an evidence-based intervention?” It’s a treatment or an intervention that we know is effective. And we know it is effective because we have implemented that procedure faithfully. Technically, when we speak about the faithfulness of an intervention, we are speaking about the fidelity of the intervention. And that means, really simply, that the procedure you put in place was actually done the way it should be done. That’s critical because if you don’t do it the right way, you can have one of two outcomes. One outcome is that an otherwise effective intervention is shown to be ineffective, and then people draw the wrong conclusions about the effectiveness of the intervention. Or it could appear as if the intervention is effective, when in fact the reason some change took place (an improvement or a reduction in the behavior) was that some other factor associated with the treatment was responsible for the outcome. That’s called a confounding variable, and we want to do everything we possibly can to ensure that those confounds are removed, so we can be sure that when we say a treatment or an intervention is effective, it actually is.

We also have to be sure that we can measure an outcome that is relevant to what we are interested in, and that the measures we choose to reflect changes in that outcome are both reliable and valid. Reliability simply means that the measure will yield consistent results each time we administer it; we don’t get changes caused by the instrument we are using to measure. Validity means the measure is relevant, that it actually reflects the idea or the outcome we are concerned about.

Now, many of you have been in a situation in which you implemented an intervention, you saw a change in the outcome, and you would like to be able to attribute that change to the intervention you used. Maybe you did this in a classroom or with a child. It would make sense that you could measure the outcome of interest, implement the intervention, and then measure the outcome again to see if there was a change, and it would make sense to attribute that change to the intervention you used. Technically speaking, in our procedures associated with NCII, the National Center on Intensive Intervention, that is not enough. It is not enough to show that an outcome changed after you implemented the intervention, because the change could be due to a confounding variable, some other variable. So, for example, one factor that could make a difference is that some participants who had a certain kind of behavioral profile at the beginning were dropped from the study; as a result, the outcome changes when you measure the group, but the behavior of the group would not have changed if you had included those participants. Or kids change, right, they develop, so things can improve or get worse because of some maturational process.

The way we take care of that is through random assignment of participants to an intervention condition, and then comparing their performance with that of another group, a control group, that doesn’t get the intervention or gets a comparison intervention. We randomly assign participants to those conditions. We measure them prior to the intervention. We then implement the intervention as well as the comparison condition, and then we measure the outcome. And if a difference exists under those conditions, we can be reasonably confident that the result we would like to take as evidence of the intervention’s effectiveness is in fact attributable to the intervention.

So, to sum this up: we need results, but having a result without certain conditions in place will not allow us to be certain that the intervention is responsible for the outcome. Ideally, we need random assignment of participants to conditions, comparison conditions as well as the intervention condition, so we can see if the intervention is truly responsible for the change. Ideally, we would also like to have relevant measures both before and after the intervention, to be sure that the groups are comparable before the intervention but different at the end as a result of the intervention. And we want to be sure that the measures we choose are reliable and valid. If we put those conditions in place, we can be more confident that the outcome we are interested in really is attributable to the intervention.
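
To make the summary above concrete, here is a minimal, hypothetical sketch in Python (not part of the NCII resource) that simulates the design the video describes: participants are randomly assigned to an intervention or a comparison condition, measured before and after, and the gains of the two groups are compared. The scores, group sizes, and the assumed 8-point effect are all made up for illustration.

```python
import random
import statistics

random.seed(42)  # reproducible example

# Hypothetical participants, each with a pre-intervention (baseline) score.
participants = [{"id": i, "pre": random.gauss(50, 10)} for i in range(40)]

# Random assignment: shuffle, then split into two comparable groups.
random.shuffle(participants)
intervention_group = participants[:20]
comparison_group = participants[20:]

# Simulated post-intervention outcomes: the intervention group is assumed
# (for this example only) to gain 8 points on average; both groups also
# show ordinary variability from measurement and maturation.
for p in intervention_group:
    p["post"] = p["pre"] + random.gauss(8, 5)
for p in comparison_group:
    p["post"] = p["pre"] + random.gauss(0, 5)

def summarize(group, label):
    pre = statistics.mean(p["pre"] for p in group)
    post = statistics.mean(p["post"] for p in group)
    print(f"{label}: mean pre = {pre:.1f}, mean post = {post:.1f}, "
          f"gain = {post - pre:.1f}")

summarize(intervention_group, "Intervention group")
summarize(comparison_group, "Comparison group")

# Because assignment was random, the two groups should be comparable at
# pretest; a difference in gains can then be attributed to the intervention
# rather than to attrition, maturation, or other confounding variables.
```

Because the groups start out comparable only when assignment is random and the pre/post measures are reliable and valid, this sketch mirrors the conditions the video lists; it is not a substitute for the statistical analysis a real study would require.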

Supplemental Resources/Documents

View related resources

Academic Intervention Tools Chart

Behavioral Intervention Tools Chart

DBI Process
Validated Intervention Program
Audience
Trainers and Coaches
State and Local Leaders
Educators