Op-Ed: STEM should think beyond the bubble

Monday, April 17, 2017 - 11:57am

If you had told me a year ago that I’d now be writing about how I love chemistry, I probably would have laughed in your face. Around that time, I was miserably glued to Problem Roulette, an online tool filled with thousands of multiple-choice questions from past exams in Chemistry 130, the University of Michigan’s primary introductory general chemistry course. I had done well enough on the course’s midterm exams by reviewing past homework and doing plenty of textbook problems. But the finale of my studying always required a trip to the deep pits of Problem Roulette, a tool that tested my understanding of the material while also giving me a flavor of the 30 multiple-choice questions I would find come exam day.

A year later, I’m not approaching my second-semester organic chemistry final with the same dread. A lot has changed since then: I’ve fully settled into the rhythm of college studying, I’ve had a number of inspiring chemistry professors and graduate student instructors and I’ve developed a good support system among my peers. But the biggest change is in how I’ve been asked to approach exams in organic chemistry.

Instead of bubbling in letters on a Scantron, organic chemistry exams feature questions that require students to draw structures and mechanistic arrows to show how reactions proceed and how concepts fit together. As a result, my studying focuses less on collapsing the many parts of a problem into an A, B, C, D or E answer and more on making sure I can explain and draw out the concepts in a coursepack of old exams. The difference in my attitude toward the two classes could not be starker: Instead of worrying whether all my work on a multiple-choice problem boils down to a single correct or incorrect answer, I’m studying by drawing reactions to diagnose exactly what I know and don’t know.

Multiple-choice exams have long drawn criticism from standardized-test opponents and from those who argue that bubbling in responses on a separate Scantron form makes it too easy to slip up. I’m not sure I oppose them outright, but I do take issue with how they’re used in introductory science, technology, engineering and mathematics courses such as general chemistry. Introductory courses exist to teach important concepts to a wide audience, but they should also inspire and excite students about the possibilities in a field. Using multiple-choice exams as the primary method of assessment demotes that goal from primary to secondary.

Take this example of how a core concept could be assessed through multiple methods: Intermolecular interactions underpin everyday life, from setting the boiling points of liquids to letting your stomach enzymes recognize and digest specific proteins into their component amino acids. (1-butanol, for instance, boils far above butane because its molecules can hydrogen-bond with one another.) A multiple-choice question in general chemistry might assess this concept by listing a number of molecules and asking which one lacks a particular intermolecular interaction, such as hydrogen bonding, that raises boiling point, while a non-multiple-choice problem might ask a student to draw the specific interactions that form between two molecules.

Both questions assess practically the same thing, but the feedback the second provides makes it more valuable. A student who gets the multiple-choice question wrong knows only that some piece of their understanding of intermolecular interactions is missing. In contrast, a non-multiple-choice question shows students immediately what they understood and what they missed, and it can award a range of credit depending on the completeness of the answer. Needless to say, it is far harder to guess correctly without understanding on a non-multiple-choice question.

Multiple-choice exams simply tell students whether they answered correctly or incorrectly, regardless of whether they understood 0 percent or 90 percent of the problem. An instructor can fill in that missing feedback, but students who aren’t inclined to immediately seek out help will feel demoralized when they thought they understood most of a concept yet lost full credit over a small error. I feel grateful that I came to college having had terrific mentors in high school who encouraged me to pursue a path in STEM. But I worry that when just a few dozen multiple-choice problems determine most of a grade, those who are just testing the waters of STEM can become demoralized by how the exam format of courses like Chemistry 130 treats their progress, and may leave the field altogether, adding to STEM’s already high attrition rates.

Some may deride this call for assessments that recognize partial understanding as a call for “participation trophies” for answers that aren’t 100 percent correct. And yes, multiple-choice tests are a terrifically simple, low-effort way to separate an “A” student from a “B” student from a “C” student. But should that be the only goal of an assessment in an introductory class? Introductory classes, especially in STEM, are supposed to teach and assess, but they should also paint a picture for students of the road ahead on a STEM path.

While using non-multiple-choice exams in large introductory classes demands more grading time and money, the benefits, in my mind, make them worth it. Non-multiple-choice exams are by no means a panacea for all of STEM education’s difficulties, but they give students honest feedback about their performance instead of just a score.

Jeremy Kaplan is a senior opinion editor.