What should educators avoid doing when collecting progress monitoring data?

Resource Type
Videos
Developed By
National Center on Intensive Intervention

In this video, Dr. Devin Kearns, an Assistant Professor of Special Education in the Department of Educational Psychology at the Neag School of Education at the University of Connecticut and an NCII Trainer & Coach, discusses the importance of consistency when selecting, administering, and scoring progress monitoring tools.

Question: What should educators avoid doing when collecting progress monitoring data?

Answer: We are going to talk about some important things to avoid doing when you collect progress monitoring data. The first thing I want to talk about is using the same measure every time you do progress monitoring. It’s really important to administer the same progress monitoring measure every time that you collect progress monitoring data. What I sometimes see happen, and what you do not want to do, is to collect progress monitoring data using different progress monitoring systems and assume that the data you get from them are all the same. For example, schools often use oral reading fluency as a way to measure students’ progress, and for many grade levels it’s a great measure to use. Schools will select something like DIBELS or AIMSweb or another system. It doesn’t matter which one you use, provided it has reliability data; you can select a measure from the NCII tools chart, for example. So I will see a school use something like DIBELS or AIMSweb, and in addition they will have data from a running record, maybe a Teachers College formal running record or even an informal one done on a grade-level text, and they will mix that in with the progress monitoring data from something like DIBELS.

I might also see them include data from different grade-level passages. So they have a student who is reading below grade level, let’s say a fourth grader. They will collect fourth-grade progress monitoring data and then second-grade progress monitoring data, and they will put the data from the fourth-grade progress monitoring, from the running records, and from the below-grade-level progress monitoring all in the same graph. When you do that, you get data that often zigzag, with spikes and valleys. The reason is that those data are not equivalent, because they come from different systems. Even if you use DIBELS one week and AIMSweb a different week, you are not going to get consistent data, because those are different systems with different norms. So you have to use just one system.

One related point is that schools will sometimes ask, “Well, we have a student who is ready for a new grade level. We have made a lot of progress with our intensive intervention plan. We would like to move the student, let’s say, from second-grade to third-grade progress monitoring.” If schools want to do that, it’s a great thing; congratulations to that student for making that improvement. It’s a good idea, though, to include a couple of weeks of overlap where you administer the second-grade and third-grade progress monitoring measures at the same time, so that when you switch to the third-grade measure only, you have some data to show where the third-grade scores started. You will often find that the student’s scores at the next grade level are a bit lower because the words are more difficult. Again, that is not a problem; it is just important to collect data from both measures during that overlap period. And once you switch to third grade, you can’t go back. You are stuck with the measure you have chosen, because you have to use the same measure every time.

The second important point about progress monitoring is to administer the progress monitoring the same way every time. I know it is tempting to change the way you administer progress monitoring, especially if students aren’t doing well, and to change the context or something else so that they will do better. For example, you might find that a student seems to do poorly because you collect the data in a room full of kids in the middle of intervention time and you feel like the student is distracted. If the student is distracted, it does make sense to change the location, but you have to do it the same way every time. In other words, you can’t collect the data in a classroom with a bunch of kids one week, then in a different room another week, and then back in the main classroom the week after that. If you keep changing it, you are again going to get spikes and valleys in the data, and that’s not going to allow you to see how the student is doing.

Sometimes people won’t use the same instructions every time; they will change them. For example, with some math progress monitoring measures, students forget to skip around. It’s important on some of these measures for students to skip the problems they don’t understand, and schools will explicitly teach students how to do that so they can complete more problems. It’s fine to do that the first time you administer the measure, but if you are going to do it, you need to do it consistently, because once you change the instructions it’s hard to know whether the data are comparable. If you notice an improvement in the data from the week before you gave the extra instruction to the week after, you don’t know whether the student is improving because of your intensive intervention or because you gave them more instruction about how to do the progress monitoring. So it is critical that you use the same instructions every time.

Another issue with administering the assessment is providing the same amount of encouragement for the student. Sometimes, if students are struggling, people will add extra incentives and pump them up a lot more before they do the progress monitoring. It’s great to give them some incentive every time, and it’s great to encourage them every time, but you have to do that every time, not just when you feel like the data are flattening out. Otherwise, if you see an increase, you don’t know whether the student is improving because of the intervention or simply because you gave them more encouragement.

One really problematic thing that I don’t see often, but have seen in some schools, is that people actually change the amount of time they give students to complete the assessment based on their perception of whether the student can handle additional time. For example, the MAZE, or the DAZE in DIBELS, is designed to be administered for two and a half or three minutes, and sometimes teachers will give students additional time so they have an opportunity to get more items correct. If you change the amount of time students get, your data are not useful at all, because with more time they will be able to get more right. We are trying to see whether instruction is improving the number they get right, not whether extra time is. So it is critical that you collect data in the exact same way, in the exact same amount of time, every time you do it. Otherwise, a student’s improvement could be due to the fact that you changed the amount of time the student was allowed to take the assessment.

It is also critical to use the same scoring procedures. Sometimes you have a student with a speech concern who tends to say a sound in a non-standard way. A student might say the S sound or the R sound more like a W; sometimes students will say “twee” for “tree” or something similar. One of the things that happens is that schools will decide it is appropriate to count TWEE as TREE, knowing that the student has a speech difficulty. That is completely appropriate to do, but if you do it, you need to make it clear that you have done it, because you are changing the scoring. Previously you may have counted TWEE as incorrect, and now you are counting it as correct, and as soon as you do that you change the data: the student gets more correct because of the new scoring procedure. So I want to say that it is important to make that change, because you want to give the student credit for what they have completed, but you also need to note that you have done it. Every time you change the scoring, you might see an increase in the data that looks like it is due to the intervention but is really due to the change in the way you did the progress monitoring.

Another question about scoring concerns regional or dialect differences in students’ pronunciation. As with students with speech difficulties, I think it is completely appropriate to adjust the scoring for a student’s dialect. If you know that they say a particular word the same way every time, and it’s not a question of not knowing how to say it, just a question of dialect, it’s fine to use a scoring procedure that gives them credit for saying that word. But again, you need to note that you have done it and you need to score it consistently, because if you don’t, you won’t know whether the student’s improvement is due to the scoring rules or the intervention. So you need to make that clear every time.

DBI Process
Progress Monitoring
Implementation Guidance and Considerations
Fidelity
Policy & Guidance
Audience
Trainers and Coaches
Educators