
Indianapolis Public Schools picks the Danielson Framework to evaluate teachers

After months of planning and discussion, Indianapolis Public Schools has chosen a new approach to evaluating teachers: the Danielson Framework for Teaching.

Starting next year, IPS will use the Danielson model — a system that gives administrators a list of qualities to look for while observing a classroom — as part of assigning effectiveness ratings to the district’s teachers. The ratings can influence everything from whether educators get raises to whether they can even keep their jobs.

The district will be relying heavily on the framework, which is widely used in school districts across the country including New York City and Chicago, but experts say that despite years of study, it’s still not clear whether teachers who score well on the method are better teachers than those who score poorly.

“Observation scores are not as reliable or as closely associated with student achievement as we would ideally like,” said Rachel Garrett, a researcher with the American Institutes for Research.

The Danielson model, created by former economist and teacher Charlotte Danielson, calls for administrators observing classrooms to rate teachers based on how well they meet goals such as “organizing physical space,” “engaging students in learning” or “using assessment in instruction.”

The method is similar to the district’s current scoring guide, a modified version of the state model known as RISE. But educators believe it’s less subjective and more specific in its expectations for teachers, said union leader Rhondalyn Cornett.

“It’s more clear to teachers,” she said.

In addition to using the Danielson Framework to rate its teachers, IPS plans to use the method to help teachers hone their craft, said Mindy Schlegel, the district’s chief talent officer.

“What’s most important to all of us is that teachers really buy back in to something that’s meaningful and really about their professional growth and less of a compliance exercise,” she said. “None of the rubrics are all that different from each other.”

Experienced IPS teachers are already familiar with Danielson. It’s the model the district used for years before switching to the RISE model several years ago. The RISE evaluation has faced resistance and inconsistent application in IPS, and some educators are eager to return to Danielson.

Research on the benefits of the framework suggests that it is an imperfect tool for assessing teacher quality, Garrett said, noting that teacher observation scores can vary more from year to year than might be expected.

Most teacher scores also cluster around a three on the four-point scale, she said. A small number of teachers are at the highest level, and the smallest number are at the lower levels.

“Do we really think that teachers are that uniform in their quality or do we think actually there is more differentiation in teaching quality that we’re missing?” Garrett asked.

In fact, that’s an ongoing issue with evaluation in IPS. Last year, 91 percent of teachers in the district were rated effective or better. Critics argue that in a district where nearly half the schools are rated D or F by the state, that’s a clear indicator that the evaluation process is not accurately measuring teacher quality.


While the framework is not considered a reliable measure of teacher quality, some research suggests that it does help teachers improve, Garrett said.

“There may really be benefits to teachers receiving that feedback in a way that’s actionable,” she said.

When Chicago began rolling out a modified version of the Danielson Framework in 2008, researchers tracked how it influenced student test scores. They found that in schools that received significant assistance from the district, reading scores improved. But students at schools that began using the model in later years with less central office support did not show the same gains.

Jennie Jiang, a researcher at the University of Chicago Consortium on School Research, said that implementation is an important part of getting the most out of the framework. Principals and other evaluators need to be trained to ensure that they score teachers consistently, and they need to buy in to the system and believe that it’s valuable.

The system, which replaced an outdated checklist Chicago had used for decades, is relatively popular among educators, Jiang said.

“In schools where there’s a strong relationship with principals and strong instructional leaders, teachers tend to like the new evaluation system more,” she said.

IPS plans to train administrators and some teachers on the new evaluation process this summer, with the aim of ensuring that evaluators are on the same page so results are consistent, Schlegel said.

As the new evaluation method is rolled out, Schlegel said she hopes that principals will use what they learn to choose professional development for teachers and to help them learn from each other.

“If we’re accurately rating teaching and growing them over the course of time,” she said, “that’s a good evaluation system.”