Guest post by Ray Schleck:
I have been talking to dozens of people who are starting “blended learning schools.” Everyone seems excited about the potential for great student-level data.
These schools fall into two camps:
1. Some use a single technology product for all learning.
Examples: Compass Learning, e2020, Apex Learning.
a. Advantages: integrated, so simpler to manage.
b. Disadvantages: nobody I’ve spoken to loves their product; some merely “like” it.
2. Some schools use multiple ed-tech products.
For example, a school might use Khan Academy and ST Math for math; and Achieve 3000 and iReady, plus Accelerated Reader to track pleasure-reading, for English.
a. Advantages: ability to cherry-pick the best pieces from multiple programs, more control and flexibility over what students see.
b. Disadvantages: more complexity, since kids, staff, and administrators all have to learn multiple programs; can get expensive; multiple data streams to reconcile.
* * *
Okay, MG again.
Now let’s back up and look at the notion of “data-driven instruction.”
The K-12 world has many data-philes, data-phobes, and some wise folks in between, like Larry Cuban.
Numbers may be facts. Numbers may be objective. Numbers may smell scientific. But we give meaning to these numbers. Data-driven instruction may be a worthwhile reform but as an evidence-based educational practice linked to student achievement, rhetoric notwithstanding, it is not there yet.
Cuban also points out:
In 2009, the federal government published a report ( IES Expert Panel) that examined 490 studies where data was used by school staffs to make instructional decisions. Of these studies, the expert panel found 64 that used experimental or quasi-experimental designs and only six–yes, six–met the Institute of Education Sciences standard for making causal claims about data-driven decisions improving student achievement. When reviewing these six studies, however, the panel found “low evidence” (rather than “moderate” or “strong” evidence) to support data-driven instruction. In short, the assumption that data-driven instructional decisions improve student test scores is, well, still an assumption not a fact.
Got that? The irony. When we use data to see if data-driven instruction “works,” we find…not really.
Now. Is that the end of the story? Of course not. Many studies of tutoring show “it doesn’t work.” Does that mean tutoring doesn’t work? Yes, when implemented by typical schools. No, when done the “right way.”
Data-driven instruction is probably the same. Often done wrong when implemented by typical schools. “Works” when done “the right way.”
* * *
Now let’s dig in a bit. Does data overload apply to blended learning schools? Ray writes: yes.
Many school leaders described the data stream as overwhelming. Khan Academy alone gives the following information: time spent per day on each skill, total time spent each day, time spent on each video, time spent on each practice module, level of mastery of each skill, which ‘badges’ have been earned, a graph of skills completed over number of days working on the site, and a graphic showing the total percentage of time spent by video and by skill.
That’s one program. Multiply by the number of students and the number of software products.
Furthermore, there were many “dashboard” issues. Even schools that hired consultants found that the consultants couldn’t integrate all the products into a single, easy-to-use screen.
So if an English teacher wants to know how a kid is doing, it takes a few minutes just to assemble the story. Those few minutes add up if the whole purpose of the school is to address each individual kid.
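The aggregation problem can be sketched in a few lines of Python. A minimal sketch, assuming invented export formats: the product names are real, but every field name and number below is made up for illustration — real products each expose different reports, which is exactly why the consultants struggled.

```python
# Hypothetical sketch: flattening one student's records from several
# products into a single row. All field names and values are invented;
# real exports differ per product, which is the hard part.

khan = {"maria": {"minutes": 42, "skills_mastered": 11}}
st_math = {"maria": {"puzzles_done": 30, "pct_complete": 0.55}}
achieve3000 = {"maria": {"lexile": 820, "articles_read": 7}}

def student_snapshot(student_id, *sources):
    """Merge one student's records from every (name, data) source into one dict."""
    row = {"student": student_id}
    for name, data in sources:
        for field, value in data.get(student_id, {}).items():
            row[f"{name}_{field}"] = value  # prefix avoids field-name collisions
    return row

print(student_snapshot("maria",
                       ("khan", khan),
                       ("stmath", st_math),
                       ("achieve", achieve3000)))
```

In this toy version the merge is trivial because every source is already keyed by the same student ID; in practice, reconciling IDs and formats across products is most of the work.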
Furthermore, the alignment of products makes the data hard to use. In math, only about 60% of the Khan ‘skills’ match up well with the ST Math skills. When one of your kids is struggling with how to divide fractions, there isn’t a perfectly correlated ST Math exercise to help him.
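The alignment problem amounts to a crosswalk table with holes in it. A minimal sketch, assuming made-up skill names and mappings (only the rough 60% figure comes from the text): any lookup has to handle the “no equivalent exercise” case.

```python
# Hypothetical crosswalk from Khan Academy skills to ST Math exercises.
# The skill names and the mapping itself are invented; per the text,
# only about 60% of Khan skills have a well-aligned ST Math counterpart.

KHAN_TO_STMATH = {
    "adding_fractions": "fraction_sums",
    "equivalent_fractions": "fraction_match",
    "dividing_fractions": None,  # no well-correlated ST Math exercise
}

def remediation_for(khan_skill):
    """Return the matching ST Math exercise, or None if there isn't one."""
    return KHAN_TO_STMATH.get(khan_skill)

assert remediation_for("adding_fractions") == "fraction_sums"  # mapped
assert remediation_for("dividing_fractions") is None           # the gap teachers hit
```

When the lookup returns `None`, the software can’t hand off remediation, and the teacher has to improvise — which is the point of the paragraph above.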
Bottom line: there’s a big difference between how doctors use data and how teachers use data.

Doctors:
a. Other people gather the data (techs, nurses)
b. Doctors use the data to make decisions
c. Other professionals implement the decisions (nurses, techs, pharmacists, etc.)

Teachers:
a. Gather the data themselves
b. Make decisions
c. Do all the follow-up remediation themselves
Ed-tech is supposed to help:
a. Software gathers data
b. Teacher makes decisions
c. Software does remediation
But often it doesn’t work that way; there are barriers to steps (a) and (c).
[This post was originally featured on Oct 23, 2012 on Starting an Ed School blog]