The following is an excerpt from Chapter 13, “Findings in Practice,” of Dr. Dianna Whitlock’s published book, “Teacher Evaluation as a Growth Process.”
Part 1, Chapter 13: Findings in Practice
Conducting this study gave us the data reported in Chapter Twelve and an academic analysis of the importance of a system of frequent, consistent feedback. We found that teachers using the platform improved over a three-year period; that those who had used the system for the full three years received higher marks than those who had used it for a shorter time; and that the three-year users were also less transient than their peers.
But other items stood out to us as well. For example, the fact that so many teachers struggled with similar indicators led us to conclude that these indicators may not be stressed enough in teacher preparation programs or professional development. This brought our research team to pose deeper questions about how we, as educational leaders, can continue to improve evaluation for teacher growth, development, and retention.
What if we could report and analyze the actual ratings behind the final markings?
It is key to point out that most states require only that a school district report a final evaluation score for teachers. The teachers in this study had been rated Highly Effective, Effective, Improvement Necessary, or Ineffective (4, 3, 2, 1); ultimately, their state department of education saw only the final number. Our study drilled deeper into the markings behind these final ratings, looking at the specific indicators marked for participating teachers. We found that 98% of the teachers in the study, while struggling with specific indicators, were given a final rating of Effective or Highly Effective, and 10% were not given a final rating at all (Indiana Department of Education, 2017). At the same time, our more in-depth analysis revealed that 14% of these same teachers were marked less than effective in maximizing instructional time, 13% in student engagement, and 11% in developing student understanding. This brought us to question the markings behind the final rating, and what they really mean.
With this type of in-depth analysis, we began to question whether final marks are being inflated. This question is not intended to imply that educators are engaging in unethical practices, but rather that there may be a lack of training in conducting and managing the evaluation process. Additionally, administrative teams that do not hold frequent conversations on inter-rater reliability and do not analyze their evaluation data may struggle to assign final ratings that reflect teachers’ actual performance.
Like so many other administrative tasks, much of this comes back to time management. We know that administrators are beyond busy and wear multiple hats daily. If an administrator does not have a system for collecting and reporting evaluation data, the tasks associated with teacher evaluation can quickly become overwhelming. Not only may an administrator struggle to make time to analyze the data and assign a final rating that reflects the teacher’s performance, but without a management platform that generates data reports, breaking the data down may be impossible.