Last week during parent-teacher conferences, an 8th grade parent mentioned that her student saw the pattern between the study guide problems and the assessments. Mom didn't say whether that was good or bad; it was just a statement of fact. Obviously we don't want students to be surprised on an assessment, but are our study guides overly transparent?

This has been on my mind since Bryan Meyer tweeted images from @ilana_horn's book Motivated. Most of our grade 6-8 math study guides mirror problems that are on the assessment. Are we distorting grades by doing so? Could we include a few study guide problems as opportunities to deepen learning while "protecting the integrity" of the assessment? What would that look like?

In 6th grade ratios, the overarching goal of the standard is for students to use ratio reasoning to solve problems. For example, if students use a double number line to solve for a missing quantity, our typical study guide would replicate that problem with different numbers and ingredients. Instead of mirroring the problem, what if the study guide showed a double number line and asked students to pose a question and then answer it? Would that develop deeper learning?
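To make the double number line strategy concrete, here is a hypothetical example; the ingredients and numbers are my own invention, not taken from our assessment. A student might pose and then answer a question like "How much milk goes with 6 cups of flour?":

```latex
% Hypothetical double number line: 2 cups of flour for every 3 cups of milk.
%
%   flour:  0 ---- 2 ---- 4 ---- 6 ---- 8
%   milk:   0 ---- 3 ---- 6 ---- ? ---- 12
%
% Each step adds 2 on the flour line and 3 on the milk line,
% so the missing value is 9 cups of milk:
\[
  \frac{2}{3} = \frac{6}{x}
  \quad\Longrightarrow\quad
  x = \frac{3 \cdot 6}{2} = 9
\]
```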

I tried this idea with two students during math support. At first they were stumped; it had never occurred to them to think this way. They were used to "problem solving," not "problem posing." Once they understood what I was asking, they were able to pose a question, but their questions lacked clarity. After some revision, as you can see from the photo, their questions were much clearer. As an aside, every question they posed, while accurate, had an answer that could be found on the second part of the double number line.

A problem-posing question is not on our assessment; however, I could see this type of problem being valuable on a study guide, as long as students have had some practice posing questions.

In seventh grade, the end-of-unit assessment is on proportionality. One problem asks students to examine a line graph with no grid and, given a point on the line, determine all the true statements about the proportional relationship. Another problem asks students to examine a different gridless line graph with a point whose coordinates represent the unit rate; they are then asked to write the equation for the proportional relationship.

My idea for the study guide is to pose this problem, something I modified from Khan Academy. It's still a work in progress.

Writing the equation for this relationship may be challenging since the constant of proportionality isn't identified by the ordered pair (1, 40). But there are enough clues for students to create a table. Doing so might indirectly address identifying "true" statements about the relationship.
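As a sketch of the table-building reasoning, suppose the labeled point on the graph is (5, 200); that point is my own placeholder, not necessarily the one on the actual study guide problem. Scaling the table down to x = 1 recovers the unit rate and the equation:

```latex
% Hypothetical labeled point (5, 200) on a proportional graph.
%
%   x :   5    1
%   y :  200   40
%
% Dividing y by x gives the constant of proportionality:
\[
  k = \frac{200}{5} = 40,
  \qquad
  y = 40x,
\]
% which is consistent with the unit-rate point (1, 40).
```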

I would love to hear others' thoughts on study guides. The conversation I am having in my head is also spurred by our piloting of the Illustrative Math resource, which currently does not include study guides. If IM is in the process of creating them, I wonder whether the problems will mirror the assessments or be different.

Our math team has also been discussing study guides and how much they should mimic the actual unit assessment. We ended up creating new study guides last year because the district adopted an updated resource that didn't align with past tests. Most of the study guides now follow the format of the tests. Each study guide is intentionally limited, though, and doesn't include the open-response or "challenge" section that students will see on the actual assessment. In addition to the unit tests, students also complete a cumulative assessment every two units; study guides aren't provided for those assessments.

I think there is value in asking students to answer some assessment questions that apply the skills in an unfamiliar context. For that reason, I feel study guides can't just spoon-feed all the test questions with different numbers. That approach doesn't teach our students to think critically about how to solve a new problem, and it doesn't accurately assess their problem-solving and critical-thinking abilities.

Sorry it took me so long to reply. I agree. I also think we need to consider student readiness when making these decisions. For example, should middle schoolers generate 50% of the questions while the teacher provides the rest? Is that a reasonable expectation?