In a classic episode of South Park, one of the main characters discovers that tiny gnomes are coming into his bedroom at night and stealing his underpants. When the gnomes are confronted and asked to explain their actions, they present a three-phase plan:
First, collect underpants.
Second, ?
Third, profit!
The joke, of course, is that the Underpants Gnomes haven’t spelled out the crucial missing phase—how to turn stolen underpants into profits—but they continue their pilfering practices nonetheless.
So what does this have to do with PLCs? Well, I sometimes worry that, like the Underpants Gnomes, we are leaving out crucial details in our assumptions about how work in professional learning teams will lead to improved student learning. In a South Park version of a PLC, it might sound something like this:
First, collect and analyze student data.
Second, ?
Third, improved results!
In my experience, much of the literature on PLCs speaks in detail about process (collecting and analyzing student data) and outputs (student learning), but provides much less information on the middle details. And what are those crucial middle details? Well, the research is pretty clear—if you want to improve the outputs, you have to make appreciable changes to the most important school-level inputs: the curricular, assessment, and instructional practices of the classroom teacher.
Now, I don’t want to suggest that the PLC model is ineffective, or that professional learning teams are just spinning their wheels when they collect and analyze student assessment results. One solid practice discussed at length in the PLC literature concerns grouping students based on data to provide either remediation or enrichment opportunities, and I believe that this is clearly an improvement over the more traditional practice of teaching, testing, and then moving on.
But if we keep on teaching the same material the same way, even if we’re re-teaching it a second time around to a targeted group of students, are we really likely to get dramatically different results?
One strategy that I have heard mentioned in numerous workshops is to “analyze the data from different classes, figure out which teacher did the best job on a particular concept, and then try to replicate what that teacher did in her class.” I have two issues with this. First, when student data differ across classes, how do we figure out what it was in the higher-achieving classes that led to higher scores? Was it the examples used? Was it the one-on-one time the teachers built in? Was it exemplary class management skills? Was it the fact that two teachers taught the lesson in the morning, when students were alert, and everyone else taught it right after lunch?
Second, while the data shown on overheads in the training workshops always clearly document one class of students doing better than the others (one class has an average 15 points higher than the others, with no students scoring below 80, while a third of the students in the other classes are hovering around 60), it has been my experience that student data across most classrooms are considerably more ambiguous. The differences between most classes in most schools tend to be pretty marginal, without clear patterns that jump off the Excel spreadsheet, making it difficult to draw quick and valid interpretations.
So, if simply analyzing data is not enough, then what is? In order to fill in the missing second phase in the Underpants Gnomes’ plan, we have to ask, and attempt to answer, the kinds of questions that explicitly connect the dots between teaching and learning. What does effective instruction look like? What specific instructional practices led to specific student learning outcomes? Was whole-class instruction superior to small group work, or vice versa? What were the right topics to focus on during one-on-one conferencing? Was it better to start the lesson with an open-ended question or a step-by-step analysis of a solved problem?
Data collection and data analysis are important first steps in the process of pedagogical improvement. The meat of the work, however, lies in identifying and replicating those practices that are most effective at raising student achievement.
So how do we do this?
Building trust and a sense of community within a group is the beginning. Making collaboration—and data collection and data analysis—a regular way of doing business is critical, along with developing a collective sense of ownership of student results. But beyond that, teachers must collectively become students of the craft, identifying ways to investigate the connection between teaching and learning. Visiting each other’s classrooms on a regular basis is an excellent strategy, along with action research projects or lesson study initiatives. Teachers can videotape themselves teaching, and then meet as a group to discuss the ways in which students respond to specific instructional practices. Book study groups can explore the strategies in Marzano’s What Works. Administrators can engage their faculties in building-wide discussions of instructional effectiveness by collecting and disseminating instructional data using walkthrough tools.
My point is that the work of professional learning teams must go beyond basic data collection and analysis practices if we expect to use the PLC model as a vehicle for substantive student learning improvements. Collaboration is the way in which we should be doing business, make no mistake. But the details of that business—the specific activities of professional learning teams, the topics on which they choose to focus their time and conversations—must progress to an inquiry-oriented focus on the relationship between instructional practices and student achievement. To do otherwise is to fall prey to the same thinking as those silly gnomes, expecting underpants to magically turn into profits.