Students learn to give feedback with a growth mindset

Yesterday Algebra’s Friend introduced me to a new professional development blog, Read…Chat…Reflect…Learn! While I enjoy reading blogs for lesson ideas, I also love reading journal and magazine articles, books, research studies, etc. as another way to stay current. I’m glad I found this blog, and I’m thrilled to see that it’s in its infancy, only because it makes me feel that I haven’t missed out on too much!

The current topic is feedback. The article for discussion was How Am I Doing? It offered a good overview, but what was missing was how to provide feedback using a growth mindset. This excerpt, Types of Feedback and Their Purposes, gives clear examples. One type of feedback I tend to focus on is descriptive, as opposed to judgmental, feedback.

When examining student work for feedback, I’ve collected their work in progress and provided descriptive feedback. I won’t kid you: it’s time-consuming. But an idea has been floating around in my head. What if I taught students to give each other feedback with a growth mindset? I don’t want you to think I’m shirking my responsibilities, but could math students offer feedback the way students do when they peer edit compositions?

After reading Algebra’s Friend’s post, I thought I would give it a try. Yesterday and today I showed my students this Would You Rather problem. They were to work in groups of four to solve it.

After the groups worked for about 10-15 minutes, I collected their work, redistributed it to different groups, and asked each new group to provide feedback. Here’s an example:

[Image: feedback1]

Kids being kids, their feedback was somewhat judgmental and not of a growth mindset. Rephrasing their wording into questions such as “How could labeling help explain your thinking?” or statements such as “The final answer is not clear” would definitely put them on the track toward feedback with a growth mindset.

Doing this activity in groups made it quite manageable, as only six responses were being critiqued. While I monitored each group, I asked them what questions they had about the other group’s work. I also asked them to keep their feedback positive. Each problem was circulated twice, which also gave groups the opportunity to see how others approached the problem and perhaps revamp their own thinking.

Here’s how the above group implemented the feedback:

[Image: feedback2]


The next time I see them we’ll continue the conversation about how to offer feedback with a growth mindset. With more practice they’ll get better at it.

I should have thought of this at the start of the year.


SMART goal—raising students’ self-efficacy

I’m placing a bet on a pony named Self-efficacy. I got the tip from Boaler, Dweck, Hattie, and Marzano. It may come out of the starting gate slowly, and it’s a long-distance race, but it has incredibly good odds. Take a look at the graphic below. As a rule of thumb, any strategy that has an effect size of d > 0.40 is worth considering.

[Image: self efficacy]

There are so many components to increasing self-efficacy: providing timely, constructive feedback, fostering a growth mindset, creating a classroom culture where mistakes are encouraged. All of this should sound familiar to those who are taking Boaler’s course How to Learn Math. But the puzzle piece I want to focus on is building self-efficacy through challenging individual and group problem-solving tasks.

I wanted to see what this goal looks like as a SMART goal, so I downloaded a template. Here’s what I have so far. I have never written a SMART goal, so I would appreciate your feedback. I am NOT crazy about measuring success using MAP scores, and I’m not even sure I am using them accurately, but here’s the draft:

Mary will raise students’ self-efficacy in math by providing rich problem-solving activities. She will build into her plan book a minimum of 4 challenging individual and/or group tasks per quarter. At least one of the tasks will be non-routine, meaning the problem is not directly linked to the current unit of study. Results will be measured using spring-to-spring MAP RIT scores. The quantifiable goal is for 80% of students to exceed the average growth by one or more points.
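To sanity-check the measurable part for myself, here’s a rough sketch of how that 80% figure could be tallied from spring-to-spring RIT scores. The student numbers and the average-growth value below are made up for illustration; the real norm growth would come from the published MAP tables.

```python
# Hypothetical spring-to-spring MAP RIT scores: (last spring, this spring) per student.
# All numbers below are invented for illustration.
scores = [(215, 224), (220, 226), (208, 219), (230, 234), (212, 221)]

average_growth = 6  # assumed norm growth in RIT points; substitute the published MAP value

# A student counts toward the goal if their growth beats the average by one or more points.
meets_goal = [post - pre >= average_growth + 1 for pre, post in scores]
percent = 100 * sum(meets_goal) / len(scores)

print(f"{percent:.0f}% of students exceeded average growth by at least one point")
print("Goal met" if percent >= 80 else "Goal not met")
```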

What do I need to do to take this draft to final form?  I appreciate your feedback.

BTW: Thanks, Julie, for selecting this MS Sunday Funday topic. I probably spent too much time on it; then again, it was well worth it 🙂

Getting kids and parents to focus on the learning–the grade will come. Trust me.

Open House idea. I’m trying to think of a subtle way to impress upon parents that the focus is on the learning, not on the grade. Maybe for some the focus is on the grade. However, I think parents and students would agree that if we focus on the learning, the grade will come.

Since I’m bored, I created two GoAnimate videos that I might use at Open House. The first is a 30-second discussion about grades.

This one is less than a minute long and focuses on the learning.

This will segue nicely into talking about the success students had last year when they set goals and monitored their progress. I constantly tweak this document, so you may notice that it’s not exactly like what’s represented in the image below. Maybe I’ll show the parents what progress monitoring looks like.

[Image: progress monitoring]
This student needed to be assessed 3 times before she achieved mastery. As you can see, the student was honest with respect to the amount of effort she put forth.

My assessments are designed in a standards-based grading format (that’s why you see scores of 1-4). Students set a goal of either 3.0, 3.5, or 4.0; three is the target. My formative assessments are ongoing, and students can reassess as long as they complete and follow through on a study plan, which is their evidence of study.

I cringe when I hear, “What can I do to raise my grade?” or, “Is there extra credit?”

I want to work with parents and students to reframe those questions to be more like, “What can I do to improve?”

Maybe Open House is the place to start.

Hattie, Ritter offer help interpreting Carnegie Learning’s Cognitive Tutor research

Last week I was racking my brain trying to understand the nuances of research terms such as “statistically significant” and “educationally meaningful” as they relate to Carnegie Learning’s Algebra 1 Cognitive Tutor.

Steve Ritter, one of the founders of Carnegie Learning, and John Hattie, professor and author of Visible Learning, provided worthwhile commentary and helped me navigate the world of research.

Below are their comments.

Ritter

I’m one of the founders of Carnegie Learning, and I hope I can answer your questions about the study.

Characterizing the effects as “educationally meaningful” or “nearly double the expected gain” follows from a study by Lipsey and others. That study reports that year-over-year gains on standardized math tests are about .22-.25 standard deviations for 8th and 9th grade students (Table 5). Cognitive Tutor students gained about .2 standard deviations relative to the control group (i.e. on top of the “normal” gain of .22-.25), which is where the “nearly double” comes from.

Hattie does talk about the difference between a pre- to post-test effect size and an effect relative to a control group, but it isn’t clear that he accounts for this in his meta-analyses. If you assume that the control group gained .25 standard deviations over the school year, then the Cognitive Tutor group gained .45 standard deviations, relative to pretest (the study used a prealgebra test as the pretest and an algebra test as posttest, so you can’t directly measure gain). So maybe Hattie would agree that this is educationally meaningful.

Part of the point of the Lipsey work is to challenge the notion that effect size is a “common ruler” that can be consistently used across contexts. Hattie might not agree.

Hattie

This is an intriguing argument  — of course I would want to be careful about using the d> .40 as if it applied willy nilly to everything.  It is tough out there making changes and even changes of d> .20 can be worth striving for.  Yes, double a control group is worth looking at.  I have split many of the effects in VL into those compared to a control group and those that are more pre-post.  Even for the former (control – comparison) a contrast of .40 is average.  So I would not completely agree with the comments above but would note that they (and indeed all in the VL book) are probabilities – so the probability of this program having a worthwhile impact is pretty good –but the true test is your implementation and this is what I would be focusing on – for example you may be implementing this program with greater than .20 effects found in the report (or not) – so I would make this my focus.

The take-aways?

1.) d > .40 is a guide.

2.) An effect of d > .20 can be worth striving for.

3.) Focus on the implementation. Know whether you are getting effects greater or smaller than those found in the research.
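For my own notes, here’s a rough sketch of what checking number 3 might look like with my own classes. The scores are invented, and it uses a simple standardized mean difference (Cohen’s d) between a class using the program and a comparison class, which is cruder than the study’s analysis, but it gives a number to hold up against the reported ~0.20.

```python
from statistics import mean, stdev

# Hypothetical end-of-year scores for two classes (numbers invented for illustration).
tutor_class = [72, 68, 81, 77, 85, 74, 79, 70, 83, 76]       # class using the program
comparison_class = [70, 65, 74, 72, 80, 69, 75, 68, 78, 71]  # comparison class

# Pooled standard deviation of the two groups.
n1, n2 = len(tutor_class), len(comparison_class)
s1, s2 = stdev(tutor_class), stdev(comparison_class)
pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5

# Cohen's d: the difference in means expressed in standard-deviation units.
d = (mean(tutor_class) - mean(comparison_class)) / pooled_sd
print(f"Effect size d = {d:.2f}")  # compare against the ~0.20 reported in the study
```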

Thanks, gentlemen.

Carnegie Learning’s Algebra I Cognitive Tutor–a game changer?

I really want to understand this and I hope you can help me. Matt Townsley’s blog post and a recent direct marketing email from Carnegie Learning have got me thinking.

I’ve had an opportunity to read and share John Hattie’s research on Visible Learning in middle-level endorsement classes, and I do believe, overall, he’s onto something. There are hundreds of strategies or programs teachers can implement to raise student achievement. Some have greater impact than others.

Hattie’s research has taught me to look for the game changers: any strategy that has an effect size of 0.40 or greater is worth implementing. I’m not a statistician, but I’m using that effect size to help me navigate the maze of educational products being touted.

There’s been a lot of PR of late regarding the results from a study of Carnegie Learning’s Algebra 1 Cognitive Tutor. According to the research, students grew from the 50th to the 58th percentile.

This is where I become confused. The research abstract states, “The estimated effect is statistically significant for high schools…” And the conclusions state, “The effect size of approximately 0.20 is educationally meaningful” (page 27).

I’m perplexed. The RAND research says this is statistically significant and educationally meaningful, yet Hattie’s yardstick would say it has low impact.
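One piece of the arithmetic I can at least check myself is how the percentile claim lines up with the effect size: a d of 0.20 means the average treated student lands 0.20 standard deviations above the control mean, and assuming roughly normal scores that is about the 58th percentile. A quick back-of-the-envelope check:

```python
from statistics import NormalDist

# An effect size of d = 0.20 puts the average treated student 0.20 standard
# deviations above the control-group mean. Assuming roughly normal scores,
# that corresponds to this percentile of the control distribution:
percentile = NormalDist().cdf(0.20) * 100
print(f"about the {percentile:.0f}th percentile")  # ~58th, matching the 50th-to-58th claim
```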

What are your thoughts? Can you clarify my befuddlement?

Visible Learning: surprising discoveries for new teachers

One of the blogger initiative questions this week is: All new teachers should learn ___ before entering the classroom. In my opinion, every teacher, new or not, needs to become familiar with John Hattie’s work on Visible Learning. Of all the strategies, programs, etc. in our teacher toolbox, we should be selecting the ones that make the most impact on learning and achievement. Hattie synthesized more than 800 meta-analyses, and some of the results are surprising.

The single most effective strategy a teacher can implement is for students to set goals and monitor their own progress.

My mentor is a university professor in middle level education. She has often asked me to present my latest adventures to her MLE curriculum students. I presented Hattie’s work on Visible Learning for Teachers, and many students were enlightened.

“Knowing a student’s learning style has been driven into our heads and now you are telling us it’s not as important as we think,” one MLE student commented.

Certainly learning styles are considered best practice; however, nearly everything works to some degree. Why not first implement the strategies that maximize our leverage?

I think Hattie’s work directly connects to the strategies involved in implementing standards-based grading. Jump to slide 19 of the presentation for proof.

[Embedded Slide Rocket presentation: click above to view]