SMART goal—raising students’ self-efficacy

I’m placing a bet on a pony named Self-efficacy. I got the tip from Boaler, Dweck, Hattie, and Marzano. It may come out of the starting gate slowly, and it’s a long-distance race, but it has incredibly good odds. Take a look at the graphic below. As a rule of thumb, any strategy that has an effect size of d > 0.40 is worth considering.

[Graphic: self-efficacy effect size]

There are so many components to increasing self-efficacy: providing timely, constructive feedback, fostering a growth mindset, creating a classroom culture where mistakes are encouraged—all of this should sound familiar to those who are taking Boaler’s course How to Learn Math. But the puzzle piece I want to focus on is building self-efficacy through challenging individual and group problem-solving tasks.

I wanted to see what this goal looks like as a SMART goal, so I downloaded a template. Here’s what I have so far. I have never written a SMART goal, so I would appreciate your feedback. I am NOT crazy about measuring success using MAP scores, and I’m not even sure I am using them accurately, but here’s the draft:

Mary will raise students’ self-efficacy in math by providing rich problem-solving activities. She will build into her plan book a minimum of 4 challenging individual and/or group tasks per quarter. At least one of the tasks will be non-routine, meaning the problem is not directly linked to the current unit of study. Results will be measured using spring-to-spring MAP RIT scores. The quantifiable goal is for 80% of students to exceed the average growth by one or more points.
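For my own sanity, here is a minimal sketch of how I might actually check the measurable part of that goal against a MAP export. The file name, column names, and the growth norm are placeholders I made up, not real NWEA fields or norms:

```python
# A minimal sketch of checking the measurable part of the goal against MAP data.
# The file name, column names, and growth norm below are invented placeholders,
# not actual NWEA fields or norms.
import csv

with open("map_scores.csv", newline="") as f:   # hypothetical spring-to-spring RIT export
    students = list(csv.DictReader(f))

expected_growth = 5   # placeholder for the published average RIT growth at this grade

exceeds = [
    s for s in students
    if int(s["spring_rit_end"]) - int(s["spring_rit_start"]) >= expected_growth + 1
]

pct = 100 * len(exceeds) / len(students)
print(f"{pct:.0f}% of students exceeded the average growth by one or more RIT points")
print("Goal met!" if pct >= 80 else "Goal not met.")
```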

What do I need to do to take this draft to final form?  I appreciate your feedback.

BTW: Thanks, Julie, for selecting this MS Sunday Funday topic. I probably spent too much time on it; then again, it was well worth it 🙂

Hattie, Ritter offer help interpreting Carnegie Learning’s Cognitive Tutor research

Last week I was racking my brain trying to understand the nuances of such research terms as “statistically significant” and “educationally meaningful” as they relate to Carnegie Learning’s Algebra 1 Cognitive Tutor.

Steve Ritter, one of the founders of Carnegie Learning, and John Hattie, professor and author of Visible Learning, provided worthwhile commentary and helped me navigate the world of research.

Below are their comments.

Ritter

I’m one of the founders of Carnegie Learning, and I hope I can answer your questions about the study.

Characterizing the effects as “educationally meaningful” or “nearly double the expected gain” follows from a study by Lipsey and others. That study reports that year-over-year gains on standardized math tests are about .22-.25 standard deviations for 8th and 9th grade students (Table 5). Cognitive Tutor students gained about .2 standard deviations relative to the control group (i.e. on top of the “normal” gain of .22-.25), which is where the “nearly double” comes from.

Hattie does talk about the difference between a pre- to post-test effect size and an effect relative to a control group, but it isn’t clear that he accounts for this in his meta-analyses. If you assume that the control group gained .25 standard deviations over the school year, then the Cognitive Tutor group gained .45 standard deviations, relative to pretest (the study used a prealgebra test as the pretest and an algebra test as posttest, so you can’t directly measure gain). So maybe Hattie would agree that this is educationally meaningful.

Part of the point of the Lipsey work is to challenge the notion that effect size is a “common ruler” that can be consistently used across contexts. Hattie might not agree.

Hattie

This is an intriguing argument — of course I would want to be careful about using the d > .40 as if it applied willy nilly to everything. It is tough out there making changes and even changes of d > .20 can be worth striving for. Yes, double a control group is worth looking at. I have split many of the effects in VL into those compared to a control group and those that are more pre-post. Even for the former (control – comparison) a contrast of .40 is average. So I would not completely agree with the comments above but would note that they (and indeed all in the VL book) are probabilities – so the probability of this program having a worthwhile impact is pretty good – but the true test is your implementation and this is what I would be focusing on – for example you may be implementing this program with greater than .20 effects found in the report (or not) – so I would make this my focus.

The take-aways?

1.) d > .40 is a guide.

2.) An effect of d > .20 can be worth striving for.

3.) Focus on the implementation. Know whether you are getting greater or lesser effects than those found in the research.
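For anyone who, like me, wants Ritter’s “nearly double” arithmetic spelled out, here is a minimal sketch using the approximate figures from his comment. The numbers are his approximations, not new data:

```python
# A back-of-the-envelope sketch of the effect-size arithmetic in Ritter's comment.
# The figures are his approximations (from the Lipsey et al. norms), not new data.

normal_gain = 0.25        # typical year-over-year gain in SDs for 8th/9th graders (~.22-.25)
gain_vs_control = 0.20    # Cognitive Tutor gain relative to the control group

# What the RAND study reports: the effect relative to the control group.
print(f"Control-relative effect: {gain_vs_control:.2f} SD")

# If the control group gained the "normal" amount over the year, the treatment
# group's approximate pre-to-post gain is the sum of the two.
pre_post_gain = normal_gain + gain_vs_control
print(f"Approximate pre-post gain: {pre_post_gain:.2f} SD")

# "Nearly double" the expected gain:
print(f"Ratio to the normal gain: {pre_post_gain / normal_gain:.1f}x")  # about 1.8x
```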

Thanks, gentlemen.

Carnegie Learning’s Algebra I Cognitive Tutor–a game changer?

I really want to understand this and I hope you can help me. Matt Townsley’s blog post and a recent direct marketing email from Carnegie Learning have got me thinking.

I’ve had an opportunity to read and share John Hattie’s research on Visible Learning in middle level endorsement classes and I do believe, overall, he’s onto something. There are hundreds of strategies or programs teachers can implement to raise student achievement. Some have greater impact than others.

Hattie’s research has taught me to look for the game changers–any strategy that has an effect size of 0.40 or greater is worth implementing. I’m not a statistician but I’m using that effect size to help me navigate through the maze of educational products being touted.

There’s been a lot of PR of late regarding the results from a study on Carnegie Learning’s Algebra 1 Cognitive Tutor. According to the research, students grew from the 50th to the 58th percentile.

This is where I become confused. The research abstract states, “The estimated effect is statistically significant for high schools…” And the conclusions state, “The effect size of approximately 0.20 is educationally meaningful” (page 27).
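If it helps to see where the percentile figure comes from, here is a minimal sketch, assuming the standard conversion of an effect size to a percentile shift under a normal distribution. This is my interpretation, not a calculation taken from the RAND report itself:

```python
# A minimal sketch: converting a 0.20 effect size into a percentile shift,
# assuming normally distributed scores (a standard interpretation, not a
# calculation taken from the RAND report).
from scipy.stats import norm

effect_size = 0.20  # reported effect size for the Cognitive Tutor study

# A student at the control-group median (50th percentile) who moves up by
# 0.20 standard deviations lands at roughly this percentile:
new_percentile = norm.cdf(effect_size) * 100
print(f"About the {new_percentile:.0f}th percentile")  # ~58th
```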

I’m perplexed. The RAND research says this is statistically significant and educationally meaningful. Hattie’s yardstick would say it has low impact.

What are your thoughts? Can you clarify my befuddlement?

Is tracking a form of segregation and oppression?

There’s talk in our building of eliminating the academic math classes. My initial reaction was: This is a terrible idea. Students need these classes. We need to meet them where they are. Maybe I should go on the record and send an email to admin. I shared my frustration with a trusted colleague who held a very different opinion. She basically said, “You’re going to the wrong person if you think I agree with you.” She believes the academic-level students need positive role models. Eliminating that track and placing them in an at-grade-level math class is a good thing. I respected her opinion, but I still disagreed. I felt we would be doing the students a disservice.

I couldn’t let it go, so this morning I spent some time re-reading portions of Linda Darling-Hammond’s book The Flat World and Education: How America’s Commitment to Equity Will Determine Our Future. In part, her position on tracking is that it denies equity and access. It caused me to begin reassessing my position. I was still uncertain because I teach in an upper-middle-class community. I’m thinking Darling-Hammond is talking primarily about poverty, and that doesn’t apply to my district.
Then one of Diane Ravitch’s posts caught my attention: Rothstein: Cannot Close Achievement Gap without Ending Segregation. The article struck me and I began to think…

Could tracking be considered a form of segregation in that a tracked curriculum denies students access or equitable access? Could tracking be considered a form of oppression?

I had never before thought of tracking in those terms. I learn so much from reading Ravitch’s blog, so I posed the two questions in that thread.

Here’s part of the discussion.

Ponderosa says:

February 23, 2013 at 12:54 pm

Mary – I was thinking the same thing. Tracking is segregation within the school. And I think tracking is GOOD because it allows me as a history teacher to tailor my instruction to meet kids in their “zone of proximal development”; that is, to give them knowledge that makes them stretch but is not beyond their reach. Tracking is differentiated instruction, but without the pretense that all kids of the same age are getting the same education. Can’t we live without this pretense? Or will we continue to lie to ourselves about the real state of the content of our kids’ minds? Will we continue to preserve the pretty-looking heterogeneous classrooms that frequently bore the top kids and bewilder the low kids? (And will we divert this argument from one about substance to one about semantics [“He said, ‘top’ and ‘low’ kids!”]?)

teachingeconomist says:

February 23, 2013 at 6:19 pm

It is widely held here that poverty is the greatest disadvantage that poor and disadvantaged students face. It is the explanation for poor assessment scores and cannot be overcome by what happens in the school. It seems perfectly reasonable that poverty would also have a significant impact on learning in the classroom.

This is an important and very interesting thread, tying together discussions on tracking within schools, “skimming” between schools, the relationship between poverty, test scores, and student performance, and the role of traditional zoned schools in SES segregation in the country. I look forward to seeing more comments.

I wonder how the grades taught influence perceptions about the issues here. Would an elementary teacher have the same intuition and experience as a high school teacher?

DNAmartin says:

February 23, 2013 at 6:13 pm

Rothstein argues, “When disadvantaged students are grouped together in schools, their challenges are compounded and build upon each other.” The knowledge deficits that you speak about in this post are compounded when students are placed in segregated or tracked school environments. Disadvantaged students need to interact in classrooms with many speakers and readers who have this powerful general knowledge that you describe. Research shows that they make greater progress in achievement when they have access to peers with that general knowledge. When they interact and talk with students with the same deficits as their own, these deficits are compounded. Disadvantaged students can bring great assets of creativity, problem solving, compassion, unique and valuable cultural experiences, perseverance, empathy, morality, patience, story-telling, and funds of knowledge in areas of nature or music to their more advantaged peers. Their assets are many and diverse when we look.

I do agree with you that we need to give these kids what upper class parents have been giving their kids. However, I don’t see upper class parents assigning their kids the KIPP boot-camp style school discipline that you mention. Kids do need the daily one-on-one annotation and thinking about daily life events that you describe.

I’m beginning to think tracking is a form of segregation and oppression. Have I been a party to academic apartheid?

SLOP-py research from the Flipped Learning Network via ClassroomWindow.com

The Flipped Learning Network wants you to flip. An infographic touted on their homepage presents data collected from the preliminary results of their survey. The survey is ongoing, so go ahead and take it here.

Soft, anecdotal data or hard research?

Something didn’t sit right with me when I took it, so I stopped. The questionnaire seemed legitimate, but my cynical side said, “Statistics can conceal as well as reveal.” So let’s examine why this survey is faulty.

These days I’m guided by practices whose impact is grounded in statistically sound research. As a result, I’m looking for hard data on student achievement as it relates to the flipped classroom. I want to know if this strategy makes a significant impact on learning. The survey didn’t deliver because it is what it is–a survey, not a study. But there’s a bigger problem with this survey, conducted in partnership with ClassroomWindow.com: it’s a self-selected survey, which raises many red flags. I wanted to learn more, so I asked my friend Joan, a market research and statistics expert, for her insight.

“Self-selected opinion polls go by the acronym SLOP, and it’s probably not a coincidence. Self-selection creates a biased sample, so the results of surveys that rely on the participation of self-selected respondents cannot be used to draw conclusions about the overall population.”

That makes sense. The sample consists of flipped-classroom teachers who have discovered the survey on a site that promotes flipped learning. Frequent visitors to the Flipped Learning Network may have had a favorable flipped experience, choose to opt into the survey, and create a biased sample. Teachers having a negative experience would be less likely to visit the site and take the survey.
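To convince myself that this mechanism alone can skew the numbers, here is a toy simulation. Every number in it is invented purely to illustrate Joan’s point, not drawn from the actual survey:

```python
# Toy simulation of self-selection bias (all numbers invented for illustration).
import random

random.seed(1)

# Imagine 10,000 teachers who tried flipping; satisfaction runs from 0 (hated it) to 1 (loved it).
teachers = [random.random() for _ in range(10_000)]
true_average = sum(teachers) / len(teachers)

# Suppose the chance a teacher visits the site and opts into the survey rises with
# satisfaction (the satisfied flippers are the ones hanging around the network).
respondents = [s for s in teachers if random.random() < s]
survey_average = sum(respondents) / len(respondents)

print(f"True average satisfaction:   {true_average:.2f}")    # ~0.50
print(f"Survey average satisfaction: {survey_average:.2f}")  # noticeably higher, ~0.67
```

The survey respondents look much happier than the population they supposedly represent, even though nothing about flipping itself changed.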

Let’s look further and examine not the Flipped Learning survey specifically, but ClassroomWindow.com’s crowd-sourced, Yelp-style approach to collecting data. I like the idea of teachers providing input. What’s wrong with that?

“I visited the web site and found that I could sign up to take a survey, and the only control they have over determining whether or not I am a teacher is a box that I am supposed to check off attesting to that fact.”

I not only can choose to participate, I can pretend I’m a flipper.

“Let’s say this site was going to survey the effectiveness of a new textbook I have written. I could have asked every one of my Facebook friends (and only 4 or 5 are teachers) to fill that survey out positively for me, and to repost the request on their pages. This site doesn’t appear to have any controls to prevent that. So not only are they getting a biased sample, they may not even really be sampling actual teachers.”

OK. But let’s say only teachers respond. What’s wrong with that?

“Since these surveys are taken by self-selected respondents, statistically you can’t project ANY conclusions from the surveys to the general population of teachers–you need a random sample of teachers to be able to do that. If they aren’t using a randomized sample of teachers of sufficient size for their surveys, none of their survey results have statistical significance.”

Give me an example.

“Maybe the site draws the attention of teachers that are very enthusiastic about teaching, and they want to provide lots of feedback about everything in their classroom. They may report significant gains using a particular curriculum, and they may indeed have gains – but it might be attributable to their enthusiasm rather than the curriculum.”

Sounds like what I described earlier. If nearly all respondents are enthusiastic about flipping the classroom, we never hear from the dissatisfied flippers because they don’t complete the survey, and any reported gains may be attributable to the teachers’ enthusiasm rather than to flipping. What else?

“Say 1% of teachers visit this site to take a survey on a particular curriculum with which they are dissatisfied and want to see changed. The other 99% of teachers are happy with the program, and don’t even think about providing feedback because they don’t see a need for any corrections. That survey would give very negative results for a program that is essentially quite well accepted.”

What’s your opinion of ClassroomWindow.com’s idea of selling its research to school districts and education vendors? If they are looking for teacher input, this is a way to provide it.

“If the surveys have room for lots of comments, they could work a little like a focus group. The comments may be of interest to people who are looking for feedback on a product, although, again, the comments would not be statistically significant, either.”

You’ve never been one to mince words, so give it to me straight.

“If I were an author/company trying to get a better understanding for how the education market was responding to my book/product, and this site were to approach me about conducting one of these surveys, I would pay them zero dollars for their research. If I were a school district considering a variety of new curricula, and this site offered to sell me their survey results to use as an evaluation tool, I’d pay them zero dollars for their results. If I were a teacher, I’d perhaps be interested in reading some of the teachers’ comments, but since the comments could possibly be written by the marketing team for the product I am considering, I think I’d rather solicit opinions in the faculty lounge.”

Flipping may have a statistically significant impact on student achievement, but that conclusion cannot be drawn from SLOP-py research. I cannot advise you to flip or not to flip.

Neither can the survey.