Plain Talk with Jane Hannaway: An introduction to CALDER

Wednesday, March 14, 2007

Jane Hannaway

Jane Hannaway is Principal Research Associate and Director of the Education Policy Center at the Urban Institute, and the overall Principal Investigator of the CALDER project. She has primary responsibility for running the center.


What can you learn from CALDER?

With data never before available, we can ask questions never asked before. We can follow individual students' performance over time, measure how much that performance improves, and identify some of the factors affecting that improvement.

CALDER data allow us to link students with teachers. This is important because we can see how effective teachers are and identify the characteristics and conditions under which they perform best. In short, we'll be able to look into the "black box" of schooling with credible data.

We've found that effective teachers make a huge difference. For instance, teachers who fall near the bottom of the effectiveness distribution get only about a half-year of learning gain from their students in a year. Teachers near the top get about a year and a half of gain.

You can just imagine how this compounds over time. So if, by the luck of the draw, you happen to get a low-productivity teacher a couple of years running, you're in trouble.
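To make that compounding concrete, here is a hypothetical back-of-the-envelope illustration. The only figures taken from the interview are the per-year learning gains (roughly half a year for a bottom-of-distribution teacher, a year and a half for a top teacher); the three-year scenario itself is invented.

```python
# Hypothetical illustration of how teacher effectiveness compounds.
# Per-year gains (~0.5 and ~1.5 grade-level years) come from the
# interview; the student scenarios are invented for illustration.

def cumulative_gain(yearly_gains):
    """Total grade-level years of learning after a sequence of teachers."""
    return sum(yearly_gains)

# Two students over three years of schooling:
unlucky = cumulative_gain([0.5, 0.5, 1.0])  # two weak teachers, then average
lucky = cumulative_gain([1.5, 1.5, 1.0])    # two strong teachers, then average

print(unlucky)          # 2.0 grade-level years learned
print(lucky)            # 4.0 grade-level years learned
print(lucky - unlucky)  # a 2.0-year gap after just three grades
```

Under these assumed numbers, two unlucky draws in a row leave a student a full two grade levels behind an otherwise identical peer.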

What promotes student achievement? That's always our bottom-line question. How do teacher policies, accountability policies, or governance policies affect student achievement? Are the effects the same for all student sub-groups?

By working with state and local data, we can see where state and local policies make a difference. What policies might improve the flow of good teachers to the more challenging schools? What do teachers look like in charter schools and how do these schools affect students' performance?

Because we have data over time, we can look at the performance of students and the performance of teachers before and after a policy takes effect to see if it makes a difference. Because we have data on all students and all teachers in a state, we can see how effects might vary for different students, teachers, or types of schools. Because we're working in multiple states, we can also test the robustness and interaction of policies across different political and jurisdictional contexts.
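The before/after logic described above can be formalized, in its simplest form, as a difference-in-differences comparison: the change in outcomes for students affected by a policy, minus the change for comparable students who were not. This is a minimal sketch only; all names and numbers below are invented, and CALDER's actual estimation methods are more sophisticated.

```python
# Minimal difference-in-differences sketch of the before/after comparison.
# All data here are invented for illustration.

def diff_in_diff(treated_before, treated_after, control_before, control_after):
    """Change in the treated group minus change in the control group."""
    return (treated_after - treated_before) - (control_after - control_before)

# Hypothetical mean test scores before and after a policy takes effect:
effect = diff_in_diff(
    treated_before=50.0, treated_after=56.0,  # schools under the new policy
    control_before=50.0, control_after=53.0,  # comparable schools without it
)
print(effect)  # 3.0 points attributable to the policy, under DiD assumptions
```

Subtracting the control group's change nets out trends that would have occurred anyway, which is why having data both before and after a policy, and on students the policy did not touch, matters so much.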

Do you expect surprises?

Yes, I think we'll be surprised.

One thing we're interested in is teacher mobility patterns. Why do teachers leave some schools and go to others? Many would claim that money is the answer—that teachers will go where they can get paid more. And while financial rewards may be part of the explanation, we see in the New York data that there's also a homing instinct. Teachers tend to move back to their hometowns.

The amount of professional support teachers get is also important to whether they stay or go. By disentangling these complexities in teacher mobility patterns, we can inform the development of better policies to encourage strong teachers to work in schools where they can make the most difference.

Our research also shows greater variation in teacher effectiveness within schools than between schools, a finding that further complicates policy. Should incentive policies be targeted at the school level or at individuals? How would individual incentives affect the teamwork that schools also need in order to work well?

What do the six state partners contribute to CALDER?

The No Child Left Behind law encourages all states to develop data systems to undergird the law's accountability requirements. Those systems are still emerging. The states we're working with—New York, Florida, Texas, Missouri, North Carolina, and Washington—are the pioneers in using these data not just for accountability purposes, but for research purposes.

Some states are just starting to put their data infrastructure together. They're facing big decisions on how to set up their longitudinal systems because anything done now could be very hard to change later.

Our first public conference next fall will showcase a study or two from each of the states. Other states will want to know things like, what kind of information should they be collecting? What's important to know?

We'll be able to explain how we generated these groundbreaking findings: this is the information we had to have on teachers, this is the information we had to have on students, and this is the way the measures were constructed.

Another advantage of having state partners is the ability to leapfrog the usual research timeline. We don't have to wait for research papers to go through the review process and get published before we know the findings. An exchange goes on continuously. When someone comes up with a finding in North Carolina, we can look to see if the same thing is happening in Texas or in any other state already up and running with its data.

What are the emerging trends?

There is huge variation in teacher effectiveness, and the standard teacher quality indicators—experience (beyond the first few years), level of education, and certification—do not explain it. A serious effort is underway to crack the puzzle of what does make a difference.

Our data allow us to be Johnny-on-the-spot. We can be strategic and opportunistic as new policies are introduced. We've already got the baseline data collected, and the collection of successive waves of data is underway. By comparing the data collected before a policy is introduced with the data that emerge afterward, we are positioned to see which policies make a difference.