So Much Data, So Little Time: How Faculty Uses Data & What Problems Arise

At KIPP Chicago, we have a few people working on student data from different angles. In this post, you will first hear from Anirban Bhattacharyya, the Director of Instructional Technology. Anirban is responsible for setting up and implementing our digital content programs. The second set of thoughts comes from Chris Haid, the Director of Research and Analysis. Chris takes all of the data coming from our students and helps aggregate it for analysis by different stakeholders. By showing these two viewpoints, we hope to provide a clear picture of how our faculty uses data and what problems arise.

How faculty uses data – Anirban

User Type 1: The “Diagnostic / Benchmark / Interim Assessment” User

Some of our users really like programs that give periodic assessments that can be compared over time. At a very high level, they want to see whether growth in a particular program correlates with growth on accepted metrics like state tests or NWEA MAP. This high-level data also indicates which students should be given more attention. It may not pinpoint the specific topics where a student needs help, but it does identify which students to look at more closely.

Some of these programs also include norm-referenced data that gives us an indication of how our students are doing nationally. Although a program may not have a perfectly representative sample of all students in the country, it still provides insight into what otherwise arbitrary scores and scales mean. This norm-referenced data can also be compared to NWEA MAP scores, since MAP is norm-referenced as well. Alignment would show that the program is assessing in a similar way to MAP.
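
To make that correlation check concrete, here is a minimal sketch in Python with pandas. The file names and column names are hypothetical placeholders, not the export format of any particular program:

    # Does growth on a program's internal scale track growth on NWEA MAP?
    # All file and column names below are hypothetical placeholders.
    import pandas as pd

    program = pd.read_csv("program_benchmarks.csv")  # student_id, fall_score, spring_score
    nwea = pd.read_csv("nwea_map.csv")               # student_id, fall_rit, spring_rit

    program["program_growth"] = program["spring_score"] - program["fall_score"]
    nwea["map_growth"] = nwea["spring_rit"] - nwea["fall_rit"]

    merged = program.merge(nwea, on="student_id")

    # A strong correlation suggests the program assesses something similar to MAP.
    print(merged["program_growth"].corr(merged["map_growth"]))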

User Type 2: The “Student Feedback” User

Some of our users want to use data from digital programs just as they do with traditional assignments and assessments. They want to see the activities students are working on and their performance on those activities. Ideally, the data can be exported and mail-merged into a report for students and parents. This increases student investment in the program because it holds students accountable for online work. Some programs make this easier to pull off than others.
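
As a rough sketch of that export-and-merge workflow, assuming a program can export a CSV of activity results (the file layout here is invented for illustration):

    # Generate a short report per student from a hypothetical CSV export
    # with columns: student, activity, pct_correct.
    import pandas as pd

    scores = pd.read_csv("program_export.csv")

    template = (
        "Dear family of {student},\n"
        "This week {student} completed {n} online activities "
        "with an average score of {avg:.0f}%.\n"
    )

    for student, rows in scores.groupby("student"):
        with open(f"report_{student}.txt", "w") as f:
            f.write(template.format(student=student, n=len(rows),
                                    avg=rows["pct_correct"].mean()))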

User Type 3: The “Intervention” User

Some of our users want to go to a dashboard, find students who need help, learn what they need help with, and provide an intervention lesson for those students. These users are usually learning-support staff such as special education teachers, paraprofessionals, and AmeriCorps members. Some programs group students by areas of weakness. Other programs actually provide intervention lesson plans that these stakeholders can use. Still others allow users to record which interventions have been implemented, so the program knows to re-test the student.

User Type 4: The “Engagement” User

Some users want to see whether students are actually using the programs, how they are spending their time, how long they spend in the programs, and how much they are progressing. Many programs display this data, but in very different ways. Some show how long students have been logged in, including specific times, so that users can track home use. Others show a visual progression so users can see how each student is progressing compared to other students. Still others report how long each individual activity takes, so that users know when to encourage students to keep going.
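
A small sketch of how those session logs might be summarized once exported, again assuming a hypothetical CSV of login and logout timestamps:

    # Total minutes per student, split into school-hours vs. home use,
    # from a hypothetical export with columns: student_id, login, logout.
    import pandas as pd

    logs = pd.read_csv("usage_log.csv", parse_dates=["login", "logout"])
    logs["minutes"] = (logs["logout"] - logs["login"]).dt.total_seconds() / 60

    # Treat sessions that start outside school hours (roughly 8am-4pm) as home use.
    logs["at_home"] = ~logs["login"].dt.hour.between(8, 15)
    print(logs.groupby(["student_id", "at_home"])["minutes"].sum())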

What problems arise – Chris

The interlaced needs and demands of these four types of users bind together in a tangle of information. Untangling this knot is not impossible, but if we don’t want to cut it entirely, we do well to identify the threads we need to loosen and separate. There are three that we focus on here in Chicago: systems, data, and interpretation.

Disparate systems: To crib Coleridge, in the last half-decade myriad myriads of systems have teemed forth. Consider the panoply of student data sources we use: student information systems, blended learning programs, interim and benchmark assessments, exit slips/do nows, norm-referenced exams, standards-based report trackers, student paycheck software. Each of these systems holds critical information about our students, and each creates a new process that our faculty must commit to memory just to get data in and out. These systems rarely share data and certainly don’t talk to each other. Some meet the needs of Engagement users, others the needs of the Assessment user. Some have better user experiences and information architecture than others. Nevertheless, it is time-consuming to log into each of these programs, learn idiosyncratic processes, and get data out. One or two programs seem manageable, but with more, adoption and use plummet precipitously. And with only one or two programs, you are unlikely to meet the needs of every type of faculty member.

Incommensurable data: It follows, almost without argument, that a disparate collection of software products provides data that are not easily compared. First, different programs provide data in different formats: PDFs, HTML tables, CSV files, ODBC access, open APIs. Combining these different file formats, which are themselves structured inconsistently, is no easy task for a data nerd, let alone a classroom teacher covering the intricacies of ratios and proportions or rhetorical tricks. Even Common Core standards are not consistently aligned between programs. And how programs track engagement varies considerably.
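
A small illustration of the problem, using two invented exports: before anything can be compared, each source needs its own renaming, rescaling, and standards cleanup.

    # Two hypothetical programs report the "same" result under different
    # labels, scales, and standard codes; each needs its own cleanup.
    import pandas as pd

    # Program A: CSV export, percent correct, standard codes like "6.RP.1"
    a = pd.read_csv("program_a.csv").rename(columns={"pct": "score", "std": "standard"})
    a["score"] = a["score"] / 100.0                  # 0-100 -> 0-1

    # Program B: JSON from an API, 1-4 mastery scale, codes like "CCSS.Math.6.RP.1"
    b = pd.read_json("program_b.json").rename(columns={"mastery": "score", "ccss_code": "standard"})
    b["score"] = (b["score"] - 1) / 3.0              # 1-4 -> 0-1
    b["standard"] = b["standard"].str.replace("CCSS.Math.", "", regex=False)

    combined = pd.concat([a, b], ignore_index=True)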

Interpretation and analysis: Disparate systems with incommensurable data make any full portrait of our students inscrutable. While some systems provide meaningful analysis and actionable recommendations, none allows us to combine data in a way that illuminates the lay of the land. Indeed, the cloud of data and systems surrounding our students occludes it. Or, to borrow the earlier metaphor, this tangle more often trips us up as we help our students climb the path to college.

Our role at KIPP Chicago is largely to pick apart these strands. Ideally, we worry about the last point first, pulling on the threads of interpretation and analysis to understand what our teachers need to effectively educate our charges. Then we can look at what data we have and weave together analyses that support classroom instruction. And knowing what data we need guides our selection of data-generating systems.

Written by Anirban Bhattacharyya

Anirban Bhattacharyya is the Director of Instructional Technology on the KIPP Chicago Shared Services Team, where his responsibilities include managing digital content and assessment, school- and district-level data systems, and traditional information technology.

One comment

  1. Kiera Chase

    I really like the description of what is going on in Chicago; it very much mirrors the sentiments of Michael Horn’s post “Is the Technology Ready for Blended Learning?” I am curious about the experience for teachers. How are they navigating all of these different systems and making sense of this data? Does your school system have protocols and/or training for teachers about how to read the data, adjust instruction, provide intervention, and then read the data again? At Envision Schools we are working to develop these capacities in our teachers and our Data Teams, and we have found this work to be very productive and fruitful, as well as challenging at times. Our goal is to ensure that teachers are working smartly to provide for the differentiated needs of our students, and data is an essential part of this work.
