Evaluating education research

In an earlier blog post about a Harvard study of FLVS, I mentioned that the results of education research are often oversimplified, and linked in passing to an Atlantic article, How to Read Education Data Without Jumping to Conclusions. Several points in that article are worth reviewing in more detail. Researchers are sometimes accused of operating in an ivory tower, disconnected from real, on-the-ground conditions. That is sometimes the case, but it's also true that advocates and policymakers sometimes misinterpret or misuse research results in ways the studies don't support. As we've discussed in previous blog posts, anyone citing research should know what the study actually says, and what its limitations are.

Some of the article’s key points, and some of my own thoughts related to educational studies, include:

"Absence of evidence does not equal evidence of absence." The oft-cited example is that studies have yet to find evidence of life on other planets, but that doesn't prove such life doesn't exist. Within education, if someone says "we have no evidence supporting this," the right response is "how much effort has been made to find such evidence?"

Sample size should always be examined. Small samples can lead to misleading results; all other things being equal, a larger sample makes for a more robust study. But media reports rarely include sample sizes, and often give a study of 30 students and a study of 30,000 students equal weight.
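To make the sample-size point concrete, here is a minimal simulation sketch in Python. The average gain and its variability are entirely hypothetical numbers chosen for illustration; the point is only how much more the estimates from 30-student samples bounce around than those from 30,000-student samples drawn from the very same population.

```python
import random

random.seed(1)

TRUE_MEAN_GAIN = 5.0  # hypothetical average score gain, in points
STD_DEV = 15.0        # hypothetical student-to-student variability

def mean_gain(sample_size):
    """Estimate the mean gain from one random sample of students."""
    gains = [random.gauss(TRUE_MEAN_GAIN, STD_DEV) for _ in range(sample_size)]
    return sum(gains) / sample_size

# "Run the study" 200 times at each sample size and compare the spread
# of the resulting estimates around the true mean.
for n in (30, 30_000):
    estimates = [mean_gain(n) for _ in range(200)]
    print(f"n={n:>6}: estimates ranged from {min(estimates):5.1f} "
          f"to {max(estimates):5.1f} (true mean is {TRUE_MEAN_GAIN})")
```

With samples of 30, some runs of this sketch will even show a negative "gain"; with samples of 30,000, every run lands close to the true value. A headline reporting only one of those runs tells you very little.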

And finally, correlation does not imply causation. If statistics appear to show a correlation between a treatment and an outcome, a theory of action for why that treatment would cause that outcome is necessary. This is particularly important for blended learning implementations that show success. These programs often involve digital content, tablets, or laptops, and, perhaps most importantly, extensive professional development tied to a significant change in the instructional model. The successful outcome is likely the result of all aspects of the implementation together, but the story is often simplified to "the school started using tablets and its scores went up the next year." Although that statement may be technically true, it leaves out the pedagogical changes tied to the tablet use that are critical to success. It may be that tablet use and the other changes together account for the improvement, or it may be that pedagogical changes only partly related to the technology are responsible.
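Here is a minimal sketch of how that confounding plays out, again with hypothetical effect sizes and adoption rates chosen only for illustration: if the pedagogical overhaul nearly always arrives bundled with the tablets, tablets will correlate strongly with score gains even when the pedagogy is doing all of the work.

```python
import random

random.seed(2)

def simulate_school():
    """One hypothetical school: did it adopt tablets, and what was its gain?"""
    adopted_tablets = random.random() < 0.5
    # Assumption: the pedagogical overhaul almost always comes bundled
    # with the tablets (90% of adopters), and it alone drives the gains.
    new_pedagogy = adopted_tablets and random.random() < 0.9
    gain = (8.0 if new_pedagogy else 0.0) + random.gauss(0.0, 3.0)
    return adopted_tablets, gain

schools = [simulate_school() for _ in range(10_000)]
with_tablets = [gain for tablets, gain in schools if tablets]
without_tablets = [gain for tablets, gain in schools if not tablets]

# Tablets "predict" higher gains even though, in this model, they cause nothing.
print(f"mean gain, tablet schools:    {sum(with_tablets) / len(with_tablets):.1f}")
print(f"mean gain, no-tablet schools: {sum(without_tablets) / len(without_tablets):.1f}")
```

In this toy model the tablet schools show a large average gain and the others show none, yet by construction the tablets themselves have zero effect. A reader looking only at the correlation would draw exactly the wrong lesson.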

 

[Cartoon on correlation and causation, from http://imgs.xkcd.com/comics/correlation.png]

These and other issues with education research don't make such research less important or less valuable. But findings should be used appropriately, and in particular, those who publicize study results should include the necessary conditions and caveats. In my experience researchers usually do this, but when results are reported by the general media and by advocates, the details often get lost in translation.