The Flipped Learning Network wants you to flip. An infographic touted on their homepage presents data collected from the preliminary results of their survey. The survey is ongoing, so go ahead and take it here.

Something didn’t sit right with me when I took it, so I stopped. The questionnaire seemed legitimate, but my cynical side said, “Statistics can conceal as well as reveal.” So let’s examine why this survey is faulty.
These days I’m guided by best practices grounded in research with statistical significance. As a result, I’m looking for hard data on student achievement as it relates to the flipped classroom. I want to know if this strategy makes a significant impact on learning. The survey didn’t deliver because it is what it is: a survey, not a study. But there’s a bigger problem with this survey, conducted in partnership with ClassroomWindow.com. It’s a self-selected survey, which raises many red flags. I wanted to learn more, so I asked my friend Joan, a market research and statistics expert, for her insight.
“Self-selected opinion polls go by the acronym SLOP, and it’s probably not a coincidence. Self-selection creates a biased sample, so the results of surveys that rely on the participation of self-selected respondents cannot be used to draw conclusions about the overall population.”
That makes sense. The sample consists of flipped classroom teachers who discovered the survey on a site that promotes flipped learning. Frequent visitors to the Flipped Learning Network may have had a favorable flipped experience, opt into the survey, and thereby create a biased sample. Teachers having a negative experience would be less likely to visit the site and take the survey.
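To make that mechanism concrete, here is a minimal sketch in Python. The numbers are invented purely for illustration and have nothing to do with the actual survey; the point is only that when happy teachers opt in at a much higher rate than unhappy ones, the survey average drifts well above the population average.

```python
import random

random.seed(42)

# Hypothetical population of 10,000 flipped-classroom teachers, rating the
# impact of flipping from 1 (negative) to 5 (very positive), evenly spread.
population = [random.choice([1, 2, 3, 4, 5]) for _ in range(10_000)]

# Self-selection: the more favorable the experience, the more likely a
# teacher is to visit the site and opt into the survey.
response_prob = {1: 0.02, 2: 0.05, 3: 0.10, 4: 0.30, 5: 0.50}
respondents = [rating for rating in population if random.random() < response_prob[rating]]

population_mean = sum(population) / len(population)
survey_mean = sum(respondents) / len(respondents)

print(f"True population mean rating: {population_mean:.2f}")  # about 3.0
print(f"Self-selected survey mean:   {survey_mean:.2f}")      # roughly 4.2
```

Nothing about the survey instrument itself is dishonest in this sketch; the distortion comes entirely from who chooses to answer.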
Let’s look further and examine not the Flipped Learning survey specifically, but ClassroomWindow.com’s crowd-sourced, Yelp-style approach to collecting data. I like the idea of teachers providing input. What’s wrong with that?
“I visited the website and found that I could sign up to take a survey, and the only control they have for determining whether or not I am a teacher is a box I am supposed to check off attesting to that fact.

“Let’s say this site was going to survey the effectiveness of a new textbook I had written. I could ask every one of my Facebook friends (only 4 or 5 of whom are teachers) to fill that survey out positively for me, and to repost the request on their pages. This site doesn’t appear to have any controls to prevent that. So not only are they getting a biased sample, they may not even really be sampling actual teachers.”
OK. But let’s say only teachers respond. What’s wrong with that?
“Since these surveys are taken by self-selected respondents, statistically you can’t project ANY conclusions from the surveys to the general population of teachers; you need a random sample of teachers to do that. If they aren’t using a randomized sample of teachers of sufficient size for their surveys, none of their survey results have statistical significance.”
Give me an example.
“Maybe the site draws the attention of teachers who are very enthusiastic about teaching and want to provide lots of feedback about everything in their classroom. They may report significant gains using a particular curriculum, and they may indeed have gains, but those gains might be attributable to their enthusiasm rather than the curriculum.”
Sounds like what I described earlier. If nearly all respondents are enthusiastic about flipping the classroom, we never hear from the dissatisfied flippers because they don’t complete the survey, plus any reported gains may be attributable to the teachers’ enthusiasm. What else?
“Say 1% of teachers visit this site to take a survey on a particular curriculum with which they are dissatisfied and want to see changed. The other 99% of teachers are happy with the program, and don’t even think about providing feedback because they don’t see a need for any corrections. That survey would give very negative results for a program that is essentially quite well accepted.”
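Joan’s hypothetical is easy to put rough numbers to. In the sketch below (again, figures invented solely for illustration), 99% of teachers are satisfied, but nearly all of the responses come from the unhappy 1%, so the survey paints the opposite picture of the population.

```python
# Hypothetical figures matching Joan's scenario: 100,000 teachers use the
# curriculum, 99% are satisfied, 1% are dissatisfied and want it changed.
teachers = 100_000
dissatisfied = int(teachers * 0.01)
satisfied = teachers - dissatisfied

# Only the dissatisfied actively seek out the survey; the satisfied rarely bother.
responses_dissatisfied = int(dissatisfied * 0.80)   # 80% of the unhappy respond
responses_satisfied = int(satisfied * 0.002)        # 0.2% of the happy respond

negative_share = responses_dissatisfied / (responses_dissatisfied + responses_satisfied)
print(f"Teachers who actually dislike the program: {dissatisfied / teachers:.0%}")  # 1%
print(f"Survey responses that are negative:        {negative_share:.0%}")           # about 80%
```

The same arithmetic runs in the other direction for the Flipped Learning survey: if the happy flippers are the ones who opt in, a neutral or mixed population can look uniformly enthusiastic.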
What’s your opinion of ClassroomWindow.com’s idea of selling its research to school districts and education vendors? If districts and vendors are looking for teacher input, this is one way to provide it.
“If the surveys have room for lots of comments, they could work a little like a focus group. The comments may be of interest to people who are looking for feedback on a product, although, again, the comments would not be statistically significant either.”
You’ve never been one to mince words, so give it to me straight.
“If I were an author/company trying to get a better understanding of how the education market was responding to my book/product, and this site were to approach me about conducting one of these surveys, I would pay them zero dollars for their research. If I were a school district considering a variety of new curricula, and this site offered to sell me their survey results to use as an evaluation tool, I’d pay them zero dollars for their results. If I were a teacher, I’d perhaps be interested in reading some of the teachers’ comments, but since the comments could possibly have been written by the marketing team for the product I am considering, I think I’d rather solicit opinions in the faculty lounge.”
Flipping may have a statistically significant impact on student achievement, but that conclusion cannot be drawn from SLOP-py research. I cannot advise you to flip or not to flip.
Neither can the survey.