America’s Test Kitchen
Let me begin by saying: it’s quite possible that the thesis I’m about to advance is wrong. I’m new to this work. I’m hoping to provoke and be corrected.
Until June 30th I was teaching 7th grade history at Boston Collegiate Charter School. I joined Match to work on a new school design called Match Next. Broadly construed, it’s part of a universe of schools called “blended learning.” Tomorrow morning [Oct 20] I’ll go to New Orleans for a conference with folks from other blended learning schools, either just launched or soon-to-be. I’m excited to learn from their stories.
I got hired at Match in part because I’ve been an ed-tech skeptic. The results of my former school, along with several other Boston charters, have been pretty good... “the old-fashioned way.”
I’m no Luddite. It just seems like, so far, tech in K-12 has been more hype than game-changer. Few clear “wins” — whether that means kids making great strides, or teachers saving lots of valuable labor. I’m optimistic about ed tech medium term, but more cautious short term.
Let’s say you want to include some ed-tech in your school. How do you find out “what works”?
It’s not easy. Last weekend I read an NYT Magazine article about Christopher Kimball. He’s the guy who started America’s Test Kitchen (ATK) and Cook’s Illustrated.
ATK is “a hangar-size expanse of gleaming culinaria where throngs of aproned test cooks and interns” test recipes. There are 8 ranges and 32 ovens. Why does it exist?
“Most cookbook authors don’t care what happens to their recipe when it enters your home,” (Christopher) Kimball insists at Le Bernardin. In bighearted moods, he describes the Cook’s Illustrated approach as “why bad things happen to good recipes.” The corollary is his belief that empirically rigorous testing always leads to the best preparation, just as blind tastings — another staple of Kimball’s products — will always winnow out the best brand of crunchy peanut butter or microwave popcorn. To the relativists — those Pollyannas who insist that cooking is as much an art as a science and that a recipe’s effectiveness depends mostly on what a particular cook enjoys eating — Kimball has this to offer: “Cooking isn’t creative, and it isn’t easy. It’s serious, and it’s hard to do well, just as everything worth doing is damn hard.”
Hmm. This sounds familiar. Most ed tech product makers don’t want to “own” a school’s “incompetence” at implementing their pristine product. Implementing ed-tech with real life kids is not creative, and it’s not easy.
More from the NYTM:
I was at the meeting for the unveiling of the Perfect Soft-Boiled Egg. It’s one of those recipes that isolate the weird, wayward essence of the Cook’s Illustrated project, a seemingly boner-proof preparation that, when fixed with Kimball’s unsparing eye, reveals itself to be fundamentally broken. And therein lies the narrative arc of the C.I. recipe — invariably it begins with the insuperable flaw, that through toil and experimentation is resolved in a sudden, improbable revelation that, in-house, is known as the aha moment….
If you’re wondering what could be especially difficult about boiling an egg, you should have heard her. The Flaw — the unappetizing probability of either a chalky yolk or a runny white — occurs because the yolk gets cooked before the white, and the desired temperature window turns out to be harrowingly small, so the ideal preparation must set the white while leaving the yolk custardy, and not do it too rapidly. Oh, and tossing a fridge-temperature egg into boiling water will cause the air inside to expand and sometimes crack it, and apparently no two cooks can agree on exactly what simmering means, and third, the number of eggs must be compensated for by adjusting the amount of boiling water to keep cooking time constant. Geary recited further facts imperiling the P.S.B.E., and after a while the difficulty of boiling an egg at home with anything like success sounded to be on the order of a bone-marrow transplant.
Sometimes the Test Kitchen folks conclude there is no solution.
For a week or two, Andrea Geary’s attempt to bulletproof the egg looked as if it would veer into the Fudge Zone. Old-Fashioned Chocolate Fudge is the recipe everyone at C.I. mentions as the ultimate kitchen calamity — a project that, despite sound intentions, a proven methodology and rivers of brow sweat, wound up on the scrap heap. For whatever reason, it became indexed in my mind as the Bataan Death Fudge. David Pazmiño, a test cook, spent four months stir-and-lifting the New England boardwalk confection, trying to solve the problem of the ultraprecise temperatures required in candy making. The stiffening fudge required real arm muscle to agitate, and soon the 200-plus-pound Pazmiño reaggravated an old injury, inflamed an excruciating case of tendinitis and took to wearing a thumb brace around the office. More than 1,000 pounds of fudge later, the recipe wouldn’t work without a candy thermometer, a tool Kimball judged too exotic for the home kitchen, and so Old-Fashioned Chocolate Fudge became a cautionary tale.
(If you want to know what became of the quest for the perfect soft-boiled egg, go ahead and read the article.)
For now, I’d like to take the ATK experience, and apply it to Ed Tech. ATK doesn’t test a raw ingredient. It tests what happens with:
- a plan of what to do,
- real ingredients that will interact with each other, and
- with adults acting on it with their own tendencies and foibles.
An EdTech Test Kitchen doesn’t exist, at least not to my knowledge. If it did, it would test:
- a product,
- being used by real kids (typically fairly unmotivated) in a real school context,
- with a particular type of adult support and supervision.
It’s these 3 things together that describe what teachers and school leaders want to know.
Is Khan Academy good? That’s not what I want to know.
I already know if the raw egg is spoiled, any recipe will fail. I know if Khan were a bad product with a motivated, reasonably skilled single kid, then any larger use of it would fail too. And I already believe that Khan clears that bar, just by my own tinkering — with a motivated, reasonably skilled person, it is indeed helpful. It’s a fresh egg.
I want to know the optimal recipe to deploy Khan. I want ETK to test “Khan Implementation.”
How well does it work in a room of 20 kids, who are not particularly self-motivated and arrived at that school with fairly low test scores, while being supervised by an “aide” with XYZ set of rules? What happens if you upgrade the “aide” to a full teacher, or 2 aides? What happens if you change XYZ rules to ABC rules, perhaps rewarding kids for earning Khan badges? How well does it work with 4th graders versus 7th graders? What’s the optimal “cook time” — 30 minutes, an hour, 90 minutes? Does it work if you just let every kid work at her own pace for however long she wants, or in real life does that become chaotic? What are we comparing Khan to — a regular old-fashioned math class? A study hall? Another technology product? I want to know how much “math gain” to expect from a year’s worth of 1-hour sessions, just like a recipe tells me how many servings I’ll yield. Just as ATK might test 500 variations on the soft-boiled egg, I want that for ed-tech too. I need to know the aha moment.
In the absence of an ETK, how do we learn about what works in ed tech?
My very un-scientific polling of 14 educators over the past few weeks, all of whom are involved in blended learning schools, revealed these methods.
a. Expert Reviews…not.
There are some existing journals and websites, like Edsurge, Edudemic, EdTech Magazine, and THE Journal. But nobody I talked to described using them. I’m not sure why; I haven’t delved in much myself yet. My sense from a brief scan is that they review only the product itself, not the implementation of the product in a real-life school with complex kid and adult issues.
For example, this is on a list of “40 Quick Ways To Use Mobile Phones in Class.”
Backchanneling: Turn the classroom into an educational MST3K equivalent by equipping smartphones with Twitter and allow students to offer up their own comments and ask questions via a real-time feed that does not disrupt the flow of a lecture.
I mean: really?
b. Review Aggregators
This works fantastically well elsewhere: recipes on Epicurious, book reviews on Amazon, movie reviews on Rotten Tomatoes, restaurants on Yelp, etc. Lots of amateurs give ratings, and the average score tells you a lot.
But nothing like that exists for ed tech. It’s very hard to reach critical mass: 25+ unique raters.
c. Consultants

A number of school leaders I spoke with pay a consultant. Reviews were mixed. Some felt the consultant saved them a lot of time they would otherwise have spent running around trying to figure out the marketplace by themselves; consultants also offered practical guidance they’d learned from watching other clients in action. Other school leaders I interviewed were frustrated, felt they hadn’t gotten their money’s worth, and were offered a pretty narrow menu of options, i.e., pick among A, B, or C.
d. Ask colleagues from other schools
Teachers/school leaders often call up others they know, just as we all might ask a friend for a recipe. Again, when I asked if this yielded rich information, there were some success stories, but more frustration.
Unreliable narrators, they said.
d1. It was sometimes hard to learn about “discarded products” — ed tech another school had tried that didn’t seem to work well with real live kids and was ultimately jettisoned.
d2. There was reluctance to potentially offend an ed tech provider, or to suggest that their school or CMO had failures, for fear of antagonizing funders. “I’ll tell you what failed in our school, but don’t broadcast it to a wider audience.”
d3. It was sometimes hard to know if the colleague was by nature a tech-skeptic or a tech-believer.
The believers felt some colleagues were simply too close-minded, so if a new initiative failed, it was because of adult resistance.
The skeptics felt that the believers minimized some all-too-obvious failures, so the external narrative “Things are great!” didn’t match what was being said privately in the staff room.
Overall, they felt, confidence in the advice went way up when they were talking to a colleague who was a friend (or one degree of separation from a friend) rather than to a polite, helpful stranger.
All in all, I was 0 for 14 in finding educators who felt confident that they could get reliable information about what works in ed tech.
Most of the perceived limits on blended learning are funding, regulation, staffing. Surely those exist. But it seems pretty hard to simply identify products and the connective tissue that allows ed tech to help real life kids. A marketplace needs good, transparent information to flourish, and I don’t see that yet. Perhaps we need a Christopher Kimball.
[This blog post was originally featured on October 19, 2012 on Starting an Ed School blog.]