The Learning Accelerator helps districts think about how to measure blended learning
The Learning Accelerator has just released its District Guide to Blended Learning Measurement, which provides a useful framework for districts thinking about how to determine whether their digital learning efforts are yielding results. Educators and blended learning advocates increasingly stress that technology should be implemented only with clear educational goals in mind, and that these goals should be well defined and measurable. In most cases, technology should not even be considered until educational goals are established; schools that put technology first all too often find themselves with tablets in search of a problem to solve.
But identifying the educational goal is just a start. From there, the district has to figure out what to measure and how to measure it. This is where the guide provides useful direction under several categories, including:
When to measure
TLA points out that inputs and activities, and not just outcomes, should be measured. This is a worthwhile insight: measuring inputs and activities helps a district determine whether the blended learning program was implemented with fidelity to the plan. If results fall short of expectations, knowing whether the shortfall stems from a flawed plan or from a failure to implement it is necessary to correct the problem.
The guide shows the elements to be measured over time:

Inputs -> Activities -> Outputs -> Outcomes -> Impacts

In a blended learning initiative, for example, the inputs might be devices and teacher training; the activities, daily use of the software; the outputs, lessons completed; the outcomes, growth on interim assessments; and the impacts, longer-term results such as graduation rates.
What to measure
The guide stresses that “you may not even need to add many measures if you notice that you’re already measuring some of the outcomes or impacts that you’re interested in. For example, your district is probably already tracking graduation rates, and may even be measuring student engagement or school climate, for all schools across your district.” This approach is similar to the one Evergreen and the Christensen Institute have followed with the Proof Points project, which has focused on measures that districts are already implementing.
Whom to measure
TLA makes the straightforward point that “you measure those who are participating in your blended learning initiative,” but stresses that while “often these are students…measuring educators (teachers and others), administrators, families, and community members may also align with your original objectives for implementation.” The guide also explores the advantages and challenges of including a comparison group or data set in the measurement.
How to measure
Finally, the last section of the guide explores reliability and validity in measurement tools. Reliability refers to consistency in measurement, while validity refers to whether a tool measures what it purports to measure. A bathroom scale that consistently reads five pounds heavy, for instance, is reliable but not valid.
These are constructive insights, which help frame the actual measurement tools and practices that districts are using. As we have seen in the Proof Points project, districts gauge blended learning success either with existing measures (e.g., graduation rates, state or district assessments) or with newly implemented tools (e.g., NWEA MAP). Our discussions with district leaders suggest that tools such as MAP yield a far richer picture of what is happening with students and schools, but that they require a significant level of investment.
In addition to these findings for educators, the guide has a very useful table exploring the differences and similarities between research and evaluation, a distinction that is not well understood by many funders, policymakers, and advocates. That section of the guide alone is worth another blog post.