On April 15, graduate student instructors (GSIs) asked their students an important question: “How did we do as instructors this term?” It’s exciting for teachers when students are eager to express their opinions, as they often are at the end of the semester. But this time, most students didn’t get the chance.

That’s a shame, because students aren’t the only ones learning in University classrooms. Every day, teachers experiment with new teaching methods and activities. End-of-term teaching evaluations help us both to refine our classroom manner and to choose the most successful of our teaching techniques. When the CTools evaluation system was abruptly taken offline on April 20, this cycle of experimentation and feedback came to a halt.

This early closure infuriated learners on both sides of the podium, but it shouldn’t have come as a surprise to teachers or administrators. After the fall 2008 semester, the University delayed the release of evaluations until well after the next semester had begun. Feedback came far too late to influence the next round of syllabi, textbooks and course requirements.

Similarly, an Office of Public Affairs website linked in an e-mail from Provost Teresa Sullivan cites “reasons not yet understood” for this semester’s failure. But we can take a guess at the reason: The University unwisely embedded the evaluations process within another computer system of “highest priority”: CTools. We conjecture that when that system crashed, for reasons that remain murky, the University chose to sacrifice the part for the sake of the whole.

CTools has become invaluable for instruction, especially as finals approach. But with the details of the crash so obscure, we worry that neither the network administrators nor the University administrators in charge of the evaluations will be held accountable. Even worse, we worry that spring and summer semester students may not have access to any evaluation system at all.

Students deserve a voice in shaping their education. And although evaluation methods abound, student questionnaires directly link students and instructors and play a crucial role in graduate students’ professional development. In response to this concern, the Graduate Employees’ Organization has formed the Teaching Evaluations Working Group to ensure the success of evaluations at the University.

TEWG seeks full disclosure on the errors of the fall and winter evaluations, progress reports on implementation for the spring and summer semesters and the assurance — perhaps through an online or paper backup system — that the University won’t settle for “65 percent of the expected responses,” as it does now, according to the Office of Public Affairs’ website. This term, the University collected only 62 percent of the responses it expected to collect. As mathematicians, we can tell you that 65 percent of 62 percent is only about 40 percent: the University received feedback from just 40 percent of its students.

When GSIs walk to the chalkboard, we take responsibility for the quality of education at the University. Our students also take responsibility by offering feedback in office hours and through anonymous evaluations. As another semester begins, we hope that the Office of Evaluations, the CTools Implementation Group and the provosts and deans will join us in this responsibility and privilege of bettering the education of students at the University.

Harlan Kadish and Kyle Ormsby are members of the Teaching Evaluations Working Group.
