Elements of success in digital learning: using data
Recently we released Proof Points: Blended Learning Success in School Districts. This is the fourth in a series of blog posts reviewing findings across the districts we researched, as well as other schools and districts we have not yet profiled. The most successful programs share a commitment to creating, collecting, and analyzing outcomes data, and to adjusting their programs based on the findings. Many blended programs, however, are in early stages and do not yet have student outcomes data.
The extent to which districts focus on creating and sharing student outcomes data appears to vary widely. Some larger districts have a coordinated research effort, such as the Washington DC Public Schools Office of Data and Strategy. The Mooresville (NC) School District's blended learning program (which it calls digital conversion) is another with very rich data. In other districts we have researched, including Horry County, Randolph, Spokane, and Spring City, we found that one or more people involved in the blended learning program are closely connected to evaluations, whether via state assessments, NWEA MAP, end-of-course exams, graduation rates, or other data.
In our experience with this project overall, however, districts that could easily provide outcomes data have been outnumbered by those whose data were not readily accessible.
When we and the Christensen Institute released the initial survey seeking examples of blended learning success in the fall of 2014, we received about 65 responses. Fully half of the respondents told us they did not yet have student outcomes data. Of those that did report outcomes, quite a few had data that were still very new, often only a semester's or a year's worth. In these cases the apparent improvements may be temporary, or may simply reflect random statistical fluctuation.
Further, as we looked beyond the survey responses, we found that quite a few blended programs highlighted in media and NGO reports do not yet have outcomes data. Many of these appear to be well-planned, well-run, and very promising. Some have data suggesting that their blended learning is correlated with improved attendance, enhanced student engagement, and decreased disciplinary issues. Some have internal data tracking usage rates of various technology products and examining outcomes as a function of that usage. But in some of these fairly prominent cases, student outcomes data do not yet exist.
Although they are not yet common, successful blended programs, as judged by student academic outcomes, do exist. In addition to the six profiles already published, our research has identified quite a few others. In the next few weeks we will publish another six profiles; at that point we will resume this blog series and highlight a few more overarching findings.