Hattie, Ritter offer help interpreting Carnegie Learning’s Cognitive Tutor research

Last week I was racking my brain trying to understand the nuances of such research terms as “statistically significant” and “educationally meaningful” as they relate to Carnegie Learning’s Algebra 1 Cognitive Tutor.

Steve Ritter, one of the founders of Carnegie Learning, and John Hattie, professor and author of Visible Learning, provided worthwhile commentary and helped me navigate the world of research.

Below are their comments.

Ritter

I’m one of the founders of Carnegie Learning, and I hope I can answer your questions about the study.

Characterizing the effects as “educationally meaningful” or “nearly double the expected gain” follows from a study by Lipsey and others. That study reports that year-over-year gains on standardized math tests are about .22-.25 standard deviations for 8th and 9th grade students (Table 5). Cognitive Tutor students gained about .2 standard deviations relative to the control group (i.e., on top of the “normal” gain of .22-.25), which is where the “nearly double” comes from.

Hattie does talk about the difference between a pre- to post-test effect size and an effect relative to a control group, but it isn’t clear that he accounts for this in his meta-analyses. If you assume that the control group gained .25 standard deviations over the school year, then the Cognitive Tutor group gained .45 standard deviations, relative to pretest (the study used a prealgebra test as the pretest and an algebra test as posttest, so you can’t directly measure gain). So maybe Hattie would agree that this is educationally meaningful.

Part of the point of the Lipsey work is to challenge the notion that effect size is a “common ruler” that can be consistently used across contexts. Hattie might not agree.

Hattie

This is an intriguing argument. Of course I would want to be careful about using the d > .40 as if it applied willy-nilly to everything. It is tough out there making changes, and even changes of d > .20 can be worth striving for. Yes, doubling the gain of a control group is worth looking at. I have split many of the effects in VL into those compared to a control group and those that are more pre-post. Even for the former (control vs. comparison), a contrast of .40 is average. So I would not completely agree with the comments above, but would note that they (and indeed all in the VL book) are probabilities. So the probability of this program having a worthwhile impact is pretty good, but the true test is your implementation, and this is what I would be focusing on. For example, you may be implementing this program with greater than the .20 effects found in the report (or not), so I would make this my focus.

The take-aways?

1.) d > .40 is a guide.

2.) An effect of d > .20 can be worth striving for.

3.) Focus on the implementation. Know whether you are getting effects greater or smaller than those found in the research.
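
If it helps to see the arithmetic laid out, here is a minimal sketch in Python. The .22-.25 “normal” gain and the .2 effect come from Ritter’s comment above; treating the control group’s gain as exactly .25 SD, and the pooled SD as 1, is an assumption for illustration, not something the study reports.

```python
# Back-of-the-envelope sketch of the effect-size arithmetic above.
# The .25 and .20 figures are from Ritter's comment; the exact .25
# control-group gain and the pooled SD of 1 are assumptions.

def cohens_d(mean_treatment: float, mean_control: float, pooled_sd: float) -> float:
    """Standardized mean difference (Cohen's d) between two groups."""
    return (mean_treatment - mean_control) / pooled_sd

normal_gain = 0.25        # Lipsey et al.: typical annual gain, 8th/9th grade math
effect_vs_control = 0.20  # Cognitive Tutor gain on top of the normal gain

# If the control group gained the "normal" .25 SD, the treatment
# group's implied pre-to-post gain is .45 SD...
treatment_gain = normal_gain + effect_vs_control

# ...but the effect *relative to control* is still .2 SD (scores here
# are already in SD units, so the pooled SD is 1).
d = cohens_d(treatment_gain, normal_gain, pooled_sd=1.0)

print(f"Pre-to-post gain, treatment group: {treatment_gain:.2f} SD")
print(f"Effect size vs. control:           d = {d:.2f}")
print(f"Gain vs. expected gain:            {treatment_gain / normal_gain:.2f}x")
```

The same program shows up as a .45 pre-post gain but only a .2 effect against control, which is exactly the pre-post versus control-group distinction Ritter and Hattie are weighing.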

Thanks, gentlemen.


One thought on “Hattie, Ritter offer help interpreting Carnegie Learning’s Cognitive Tutor research”

It’s very nice to see John Hattie’s response. I totally agree with the idea of focusing on the implementation. Some believe that technology somehow reduces the role of the teacher, but it doesn’t work that way. The study found no effect in the first year of implementation. This should not be surprising. We’re asking teachers to change the way they teach. That doesn’t happen overnight, and there needs to be much more focus on understanding what needs to change to ensure that year 2 does better than year 1 (and year 3 even better).
