SLOP-py research from the Flipped Learning Network via ClassroomWindow

The Flipped Learning Network wants you to flip. An infographic touted on its homepage presents data collected from the preliminary results of its survey. The survey is ongoing, so go ahead and take it here.

Soft, anecdotal data or hard research?

Something didn’t sit right with me when I took it, so I stopped. The questionnaire seemed legitimate, but my cynical side said, “Statistics can conceal as well as reveal.” So let’s examine why this survey is faulty.

These days I’m guided by best practices grounded in statistically significant research. As a result, I’m looking for hard data on student achievement as it relates to the flipped classroom. I want to know if this strategy makes a significant impact on learning. The survey didn’t deliver because it is what it is: a survey, not a study. But there’s a bigger problem with this survey, conducted in partnership with ClassroomWindow: it’s a self-selected survey, which raises many red flags. I wanted to learn more, so I asked my friend Joan, a market research and statistics expert, for her insight.

“Self-selected opinion polls go by the acronym SLOP, and it’s probably not a coincidence. Self-selection creates a biased sample, so the results of surveys that rely on the participation of self-selected respondents cannot be used to draw conclusions about the overall population.”

That makes sense. The sample consists of flipped-classroom teachers who discovered the survey on a site that promotes flipped learning. Frequent visitors to the Flipped Learning Network are likely to have had a favorable flipped experience, opt into the survey, and thereby create a biased sample. Teachers having a negative experience would be less likely to visit the site and take the survey.
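For the statistically curious, the effect Joan describes is easy to see in a small simulation. All the numbers below are hypothetical; the point is only that when positive experiences make a teacher more likely to respond, the survey overstates the true rate:

```python
import random

random.seed(0)

# Hypothetical numbers for illustration only.
# Population: 100,000 teachers who tried flipping; suppose only 40%
# actually had a positive experience.
population = [1] * 40_000 + [0] * 60_000  # 1 = positive, 0 = negative

# A random sample: every teacher is equally likely to respond.
random_sample = random.sample(population, 1_000)

# A self-selected sample: teachers with a positive experience are,
# say, five times more likely to find the site and opt in.
self_selected = [t for t in population
                 if random.random() < (0.05 if t else 0.01)]

print(f"True positive rate:  {sum(population) / len(population):.0%}")
print(f"Random sample says:  {sum(random_sample) / len(random_sample):.0%}")
print(f"Self-selected says:  {sum(self_selected) / len(self_selected):.0%}")
```

The random sample lands near the true 40%, while the self-selected sample reports something closer to three-quarters positive, even though nothing about the teachers’ actual experience changed.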

Let’s look further and examine not the Flipped Learning survey specifically, but ClassroomWindow’s crowd-sourced, Yelp-style approach to collecting data. I like the idea of teachers providing input. What’s wrong with that?

“I visited the website and found that I could sign up to take a survey, and the only control they have for determining whether or not I am a teacher is a box I am supposed to check attesting to that fact.

Not only can I choose to participate, I can pretend I’m a flipper.

Let’s say this site was going to survey the effectiveness of a new textbook I have written. I could ask every one of my Facebook friends (and only 4 or 5 are teachers) to fill that survey out positively for me, and to repost the request on their pages. This site doesn’t appear to have any controls to prevent that. So not only are they getting a biased sample, they may not even really be sampling actual teachers.”

OK. But let’s say only teachers respond. What’s wrong with that?

“Since these surveys are taken by self-selected respondents, statistically you can’t project ANY conclusions from the surveys to the general population of teachers–you need a random sample of teachers to be able to do that. If they aren’t using a randomized sample of teachers of sufficient size for their surveys, none of their survey results have statistical significance.”
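Joan’s point about a “random sample of sufficient size” can be made concrete with the standard margin-of-error formula for a simple random sample. The figures below are illustrative, not drawn from the survey itself:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion estimated from a simple
    random sample of size n (worst case at p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

def sample_size(moe, p=0.5, z=1.96):
    """Smallest simple random sample achieving the desired margin."""
    return math.ceil((z / moe) ** 2 * p * (1 - p))

print(f"n = 1,000 teachers -> margin of about ±{margin_of_error(1_000):.1%}")
print(f"A ±3% margin needs n = {sample_size(0.03):,} randomly chosen teachers")

# No analogous statement is possible for a self-selected sample: the
# formula assumes every member of the population had an equal chance
# of being included.
```

A random sample of 1,000 teachers carries roughly a ±3% margin of error; a self-selected sample of any size carries no quantifiable margin at all, which is exactly Joan’s objection.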

Give me an example.

“Maybe the site draws the attention of teachers that are very enthusiastic about teaching, and they want to provide lots of feedback about everything in their classroom. They may report significant gains using a particular curriculum, and they may indeed have gains – but it might be attributable to their enthusiasm rather than the curriculum.”

Sounds like what I described earlier. If nearly all respondents are enthusiastic about flipping the classroom, we never hear from the dissatisfied flippers because they don’t complete the survey, plus any reported gains may be attributable to the teachers’ enthusiasm. What else?

“Say 1% of teachers visit this site to take a survey on a particular curriculum with which they are dissatisfied and want to see changed. The other 99% of teachers are happy with the program, and don’t even think about providing feedback because they don’t see a need for any corrections. That survey would give very negative results for a program that is essentially quite well accepted.”
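Joan’s 1%/99% scenario is simple arithmetic. A sketch, again with made-up response rates, shows how thoroughly the reported number can invert the truth:

```python
# Hypothetical numbers for Joan's 1% / 99% scenario.
teachers = 10_000
happy = 9_900     # 99% satisfied with the curriculum; they never visit
unhappy = 100     # 1% dissatisfied and motivated to speak up

# Suppose dissatisfaction drives response: half the unhappy teachers
# seek out the survey, but only one in a thousand happy ones do.
responses_unhappy = unhappy // 2        # 50 negative reviews
responses_happy = round(happy * 0.001)  # 10 positive ones

total = responses_unhappy + responses_happy
print(f"True satisfaction:     {happy / teachers:.0%}")
print(f"Survey 'satisfaction': {responses_happy / total:.0%}")
```

A program that 99% of teachers are happy with comes out of the survey looking widely disliked, purely because of who bothered to respond.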

What’s your opinion of ClassroomWindow’s idea of selling its research to school districts and education vendors? If they are looking for teacher input, this is a way to provide it.

“If the surveys have room for lots of comments, they could work a little like a focus group. The comments may be of interest to people that are looking for feedback on a product, although, again, the comments would not be statistically significant, either.”

You’ve never been one to mince words, so give it to me straight.

“If I were an author/company trying to get a better understanding of how the education market was responding to my book/product, and this site were to approach me about conducting one of these surveys, I would pay them zero dollars for their research. If I were a school district considering a variety of new curricula, and this site offered to sell me their survey results to use as an evaluation tool, I’d pay them zero dollars for their results. If I were a teacher, I’d perhaps be interested in reading some of the teachers’ comments, but since the comments could possibly be written by the marketing team for the product I am considering, I think I’d rather solicit opinions in the faculty lounge.”

Flipping may have a statistically significant impact on student achievement, but that conclusion cannot be drawn from SLOP-py research. I cannot advise you to flip or not to flip.

Neither can the survey.

2 thoughts on “SLOP-py research from the Flipped Learning Network via ClassroomWindow”

  1. Hello! Thanks for your very thoughtful analysis of the infographic we put out regarding flipped classrooms. As ClassroomWindow is a new organization, I thought it might be helpful to share a bit about who we are and what motivates us.

    First, ClassroomWindow is NOT a research firm – our mission is to give teachers a voice about what’s working — or not — in their classrooms. Our belief is that our education system will be better off if their voices are heard. Just like Yelp or Amazon, there is of course self-selection error. However, if we did nothing more than ask teachers for a 5 star rating about a given product or instructional practice, and enough teachers weighed in, we believe we could make an enormous impact on the quality of materials produced for classrooms.

    With respect to the Flipped Classroom survey, we do not claim any statistical validity. We asked teachers what they think about flipped classrooms, a group of them responded, and we’re reporting what they told us. That said, we do find it interesting that so many believe flipping their classroom has made a large impact on their professional satisfaction and their students’ learning. That’s a story, albeit incomplete and imperfect, that we’re happy to tell.

    We received no compensation from FLN for this work, nor did they influence in any way the questions we asked or the responses we received from teachers.

    Again, thank you for your analysis. This kind of thoughtful and professional dialogue about how best to report what’s working in classrooms can only help us all to find meaningful ways to make an impact.

    Kirby Salerno
    Co-Founder, ClassroomWindow
