Over the past few months, the University’s course evaluations — and whether their results will be released to students — have become increasingly contentious.
In September, the administration announced plans to release the evaluations after years of campus discussion surrounding the initiative. But the announcement drew significant pushback from faculty, and in October the faculty’s Senate Assembly voted to support delaying the release, citing concerns with the evaluations’ format, among other factors. University Provost Martha Pollack had said she would be willing to adjust the timeline for their release based on the assembly’s vote.
Two committees have since been created to examine both the release of the evaluations and the instrument employed for evaluating courses — re-opening conversations about issues like question design and response rate that faculty have had for years.
The current system
Many of the concerns raised by faculty and students on both sides of the issue aren’t new; they stem from how evaluations have long been used at the University.
For many parties involved, course evaluations have important ramifications. For GSIs and lecturers, student evaluations are factored into decisions about their reappointment. For tenure-track faculty, they are used in conjunction with other evaluation mechanisms to determine tenure and promotion. Across the board, course evaluations are reviewed when considering nominees for teaching awards.
Kinesiology Prof. Stefan Szymanski, a member of the Senate Advisory Committee on University Affairs, said faculty members’ portfolios also include their teaching evaluation scores. These portfolios are reviewed by executive committees that make recommendations to the dean for performance-based salary increases.
What the surveys themselves look like can differ, sometimes significantly, but each starts with four University-wide questions, commonly known as Q1 through Q4. These questions have been the focus of current debate because they are constant in surveys across campus.
Q1 asks students to rate the statement “Overall, this was an excellent course.” Q2 reads, “Overall, the instructor was an excellent teacher.” Q3 asks students to evaluate the statement, “I learned a great deal from this course.” Q4 states, “I had a strong desire to take this course.”
Typically, departments form the rest of the evaluation questionnaire, but professors have the ability to add questions as well. After including Q1-Q4, faculty have the option to choose from a catalogue of about 1,300 other questions — some are broad while others are dedicated to specific courses.
Student governments and various departments and schools have sent in requests to include questions in the catalogue over the years. The questions date back to 1996, and are sorted into different categories, such as “student development” or “instructor effectiveness,” according to the Office of the Registrar.
Political Science Prof. Mika LaVaque-Manty, who has conducted research on course evaluations for a University task force on learning analytics, said he’s found that many professors don’t deviate from the standard survey their department creates.
Szymanski said many professors pay most attention to the first four questions because they’re administered across the University, making it easy to draw comparisons.
“Using the other questions may help you design your course better, but it’s not going to tell you very much about student satisfaction overall with your course, relative to the other courses that they’re taking,” Szymanski said.
In a November interview with The Michigan Daily, University Provost Martha Pollack said she believes the current course evaluation questions are satisfactory, but acknowledged that they haven’t been revised for some time.
“It’s a good instrument,” she said. “We’ve used it for many years, but it hasn’t been changed in a lot of years. It’s really important that we get that right, and so I appointed people with expertise in educational assessment and so on to look at the questions.”
Seeking a purpose
For some involved in the current redesign process, the focus isn’t only on the questions themselves, but on the climate the evaluations foster.
Central Student Government President Cooper Charlton, an LSA senior, said he thought issues with the evaluation system stem from flaws in University culture.
“It’s the climate we have on campus,” he said. “Students need to come to the table willing to give constructive criticism … and faculty should look at the course evaluations as a way for them to grow. I think systemically we need to work together to build a culture where not only course evaluations, but higher education in general, has a more high-impact and collaborative atmosphere.”
SACUA Chair Silke-Maria Weineck, a professor of comparative literature, said it could be beneficial to include more questions that emphasize the two-way dynamic between teacher and student.
“I would like the questions to also have collaborative aspects to bring out the fact that students are such an important part of each class … (and) make sure that it’s a shared enterprise to teach a class,” Weineck said.
In particular, she cited questions about a student’s responsibility in the classroom. Other universities, such as the University of Washington, use student-centric questions in their evaluations to assess how much effort a student put into the class and his or her interest in the course material, for example.
Of the 1,300 total questions in the University’s question catalogue, six, added in 1996, ask students to reflect on their participation and effort. None of these “student responsibility” questions is administered across the entire University.
English Prof. David Porter, chair of the English Department, wrote in an e-mail interview that he questions whether the release of course evaluations would succeed in fulfilling what he described as their intended purpose — improving course and teaching quality.
“Speaking from personal experience, growing into one’s full potential as a classroom teacher is an ongoing process spanning years and even decades, and requiring patience, perseverance, and a great deal of trial and error,” Porter wrote. “To release course evaluation data to a broader audience than that for which it is intended would not, in my view, be helpful in our long-term efforts to provide the highest level of instruction for students in our English courses that we possibly can.”
LaVaque-Manty, who supports the release of the evaluations, said he thought it would be best if it didn’t happen in a bubble — if faculty provided resources beyond evaluations to help students make informed decisions about which classes they take.
“I wish faculty were more diligent in filling out better descriptions of their courses for the course guide,” he said. “I wish in LSA they participated in the syllabi project, which is making their syllabi available for courses … there are other data tools in development that tell us who’s taking this class, what have they taken before, what do they go on to take, this would be really valuable information. I think all of this stuff should be available to students.”
Finding a structure
Even if evaluations are released or redesigned, faculty and students over the past months have identified several structural barriers to using the data — namely, low response rates and bias — that may need to be addressed.
In Winter 2015, LSA’s course evaluation response rate was approximately 48 percent, the lowest in the period from 2008 to 2015, according to the Office of the Registrar.
Response rates have been on the decline for several years, especially since the University’s switch to electronic evaluations in 2008; since then, response rates have averaged 15 to 20 percentage points lower than with paper evaluations.
Acknowledging the importance of considering response rates, Pollack said she thought a committee of faculty and students should look at the issue after an evaluation instrument is finalized. The committee’s work is expected to conclude in April.
“I think it’s absolutely right to be concerned about response rates,” she said. “When we use it internally, we’re always very cautious to look at response rates and what the response rates are and what they signify.”
Charlton said he thought the low response rates might stem from the lack of accessibility of the evaluations, and the problem might self-correct if they are released.
“Students don’t think course evaluations will help them because faculty don’t feel comfortable releasing them, so students don’t fill them out,” he said. “And when students do fill them out, they feel obligated to treat them as a joke. So we acknowledge that they’re low, but the reason they’re low is because in reality, they don’t provide any value to students.”
Several universities offer incentives for students to submit evaluations, many of which are tied to access to evaluation data. Northwestern University, which does release evaluation data, has a policy under which students who don’t fill out evaluations cannot gain access to evaluations for the upcoming quarter.
In an e-mail interview, Alison Phillips, Northwestern’s senior assistant registrar, said the policy was implemented as an incentive to keep response rates high.
Northwestern’s course evaluation response rates average between 65 and 70 percent, noticeably higher than LSA’s 48 percent.
CSG Communications Director Alexandra George, a Public Policy junior, said there is a need not just for course evaluations to be released, but also for a better understanding of how and where the results will appear, which is where an approach like Northwestern’s might be helpful.
“I feel that if we were to actually release this course evaluation data, people would see that it can be used,” she said. “If you used it to help you pick your classes, why would you not pay it forward?”
Other schools, like Michigan State University, withhold students’ grades for one week if they fail to fill out evaluation surveys at the end of the grading period.
MSU’s course evaluations are not openly published, but MSU sophomore Meghan Shelton said she believes the policy has had a negative impact on the quality of responses, though it may help with quantity.
“I feel like students fill them out as fast as they can just to get it over with so they can get their grades,” Shelton said.
LaVaque-Manty said if the University were to adopt such a policy, it would be important to think of the negative impacts.
“It changes the nature of the instrument,” LaVaque-Manty said. “You would fill it out possibly angrily as another task that might color your judgment.”
Beyond low response rates, faculty at an Oct. 12 SACUA meeting that preceded the Senate Assembly vote to delay the release of evaluations also questioned whether sexism or racism leads to bias in student responses.
LaVaque-Manty said based on his research, individual bias due to gender and race is evident in classrooms, both in open-ended comments and quantitative measurements.
However, he also stressed that those biases tend to disappear from the overall quantitative data, except for some instances of gender bias appearing when data is analyzed at the departmental level.
George said that, overall, there is no way to anticipate the impact of releasing course evaluation data, whether good or bad. However, she said she thinks the outcomes would disprove faculty expectations of student behavior.
“Right now when I fill (evaluations) out, I think, ‘Where is this going? If I never see it, then who’s really looking at this?’ ” George said. “I know it’s easy to think cynically and to think that students would just take advantage of it, but I truly believe that if this were to be utilized, then everyone on campus would use this as a tool.”