Saturday, April 23, 2011

The Trouble with Data

Personal Note: I've been away from the blog for about a month due to the birth of my second son, Simon. He's a beautiful, easygoing, healthy baby boy who has been a wonderful addition to our family. Now, back to the blog...

As I've been taking care of my new baby, I've been thinking about how the worship of data has gotten us into a real mess in education today. The whole drive for education reform now revolves around how to produce and interpret data generated by students in the classroom. Race to the Top and its predecessor, No Child Left Behind, require the generation and use of data to make crucial decisions about how to educate children in public schools. That data drives school improvement efforts, school reform efforts, and, if reformers have their way, teacher effectiveness ratings. I'm afraid that educating children has taken a back seat to the production and interpretation of data in our schools. Instead of data reflecting what goes on in schools, data is now what is done in schools.

In science, a study is only as good as its data. Scientists spend more time planning how they are going to collect their data than they spend actually collecting it. This planning is crucial because variables need to be controlled. The goal of a scientific study is to determine cause and effect - does Variable A cause Variable B? Scientists have to consider all potential variables that could cause Variable B to occur in order to isolate Variable A's causative power. Scientists have to plan the type of data to collect. They have to plan how they are going to analyze the data before they even begin to collect it. They consult with others to figure out whether their plan for data collection and analysis is sound. The actual data collection moment may take less than an hour, while the planning takes months or even years. Once they collect the data, they analyze it and share it with more colleagues who provide advice on the analysis and the potential interpretation of the data. They work with others to generate questions about what the data means. The scientific process even involves seeking out critics who will look for weak spots in the data collection, analysis, and interpretations so that only the strongest research will be published.

We in education really only have ourselves to blame for this mess we're in. We produce mounds and mounds of data each day - attendance data, assessment data, discipline data, and on and on. Yet, we have done relatively little with this data over the years. We may have looked at the data. We probably even analyzed some of the data. But, in general, we have committed two major errors with our data:

1. We do not collect our data with any plan for how we are going to analyze and interpret it.

2. We do not share our data with others.

We commit the first error because we typically do not collect data with research in mind. We simply collect the data and only after the fact realize that we should probably analyze it. By collecting data haphazardly, we cannot know whether Variable A caused Variable B, because we did not control for any of the other variables that may also have caused Variable B. Did the standardized test scores our students earned last March occur because the teacher was awesome? How can we know if we haven't controlled for the multitude of variables that influence a test score - past experience, parental involvement, poverty, the number of snow days that year, and so on? Generating data last March does not automatically mean we can draw definitive conclusions about any variable that wasn't controlled from the start. And in schools, very few variables are ever controlled.
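To see why an uncontrolled variable poisons the conclusion, consider a minimal simulation (entirely hypothetical numbers, not real school data). Here a confound - call it parental involvement - influences both which teacher a student gets and the student's score, while the teacher has no real effect at all. A naive comparison still makes Teacher A look several points "better"; comparing only within each involvement group makes the gap vanish:

```python
import random

random.seed(0)

# Hypothetical setup: parental involvement (the confound) drives both
# class assignment and the score. The teacher has NO real effect.
students = []
for _ in range(10000):
    involved = random.random() < 0.5
    # involved families are likelier to land in Teacher A's class
    teacher = "A" if random.random() < (0.8 if involved else 0.2) else "B"
    # the score depends only on the confound, never on the teacher
    score = 70 + (10 if involved else 0) + random.gauss(0, 5)
    students.append((teacher, involved, score))

def mean(xs):
    return sum(xs) / len(xs)

# Naive comparison: Teacher A appears several points "better"
naive_gap = (mean([s for t, i, s in students if t == "A"])
             - mean([s for t, i, s in students if t == "B"]))

# Controlled comparison: within each involvement group, the gap disappears
within_gaps = []
for flag in (True, False):
    a = [s for t, i, s in students if t == "A" and i == flag]
    b = [s for t, i, s in students if t == "B" and i == flag]
    within_gaps.append(mean(a) - mean(b))

print(f"naive gap: {naive_gap:.1f}")                               # several points
print(f"within-group gaps: {[round(g, 1) for g in within_gaps]}")  # near zero
```

The point is not that schools should run Python simulations, but that the "gap" a raw comparison produces can be manufactured entirely by a variable nobody measured.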

The second error is perhaps our most grievous. Even when we do collect data purposefully and analyze it to make meaningful decisions, we do a poor job of communicating our data and conclusions to a wider audience. We should be our own best advocates for what goes on in the classroom, because we are in the best position to know whether a teaching strategy or a curricular program is effective. We should be collecting data with the analysis in mind, and once we analyze it, we should share our findings with the world. If we did those things, we would not be in the current climate of hostility toward education. We would be able to discuss our practices and results authoritatively instead of defensively. But we do not teach our teachers and administrators how to conduct sound research in the classroom or school, so instead of educational research that is meaningful and important, we get eduspeak and excuses.

Can you use standardized tests or course grades or graduation rates or discipline records to determine effective teachers and schools? Yes, if that data is collected with these ends in mind. Yes, if that data is collected in a way that controls for the extraneous variables that can also cause those test scores, course grades, graduation rates, or discipline incidents. Until the data we collect in schools is collected purposefully, the conclusions we reach from it are speculative at best.

We need to take charge of our data so we can speak with our own voices in the education debate. How much more powerful would our arguments for reform be if our conclusions stood the test of scientific criticism? To me, the best charter school is not one that eliminates tenure or has shiny new lab equipment. The best charter school is one that collects data purposefully and is willing to share the results of its experimental teaching practices openly - good, bad, or ugly. And I don't see any reason why a regular public school couldn't do the same. Good data collection is not rocket science - it's just science.
