New analysis offers more evidence against the reliability of student evaluations of teaching, at least for their use in personnel decisions (Inside Higher Ed)

Bias Against Female Instructors

Colleen Flaherty, January 11, 2016

There’s mounting evidence suggesting that student evaluations of teaching are unreliable. But are these evaluations, commonly referred to as SET, so bad that they’re actually better at gauging students’ gender bias and grade expectations than they are at measuring teaching effectiveness? A new paper argues that’s the case, and that evaluations are biased against female instructors in particular in so many ways that adjusting them for that bias is impossible.

Moreover, the paper says, gender biases about instructors — which vary by discipline, student gender and other factors — affect how students rate even supposedly objective practices, such as how quickly assignments are graded. And these biases can be large enough to cause more effective instructors to get lower teaching ratings than instructors who prove less effective by other measures, according to the study based on analyses of data sets from one French and one U.S. institution.

“In two very different universities and in a broad range of course topics, SET measure students’ gender biases better than they measure the instructor’s teaching effectiveness,” the paper says. “Overall, SET disadvantage female instructors. There is no evidence that this is the exception rather than the rule.”

Accordingly, the “onus should be on universities that rely on SET for employment decisions to provide convincing affirmative evidence that such reliance does not have disparate impact on women, underrepresented minorities, or other protected groups,” the paper says. Absent such specific evidence, “SET should not be used for personnel decisions.”

“Student Evaluations of Teaching (Mostly) Do Not Measure Teaching Effectiveness” was published last week in ScienceOpen Research. Philip B. Stark, associate dean of the Division of Mathematical and Physical Sciences and a professor of statistics at the University of California at Berkeley and co-author of a widely read 2014 paper questioning the reliability of evaluations, co-wrote the paper with Anne Boring, a postdoctoral researcher in economics at the Paris Institute of Political Studies, and Kellie Ottoboni, a Ph.D. candidate in statistics at Berkeley.

Read the entire article: https://www.insidehighered.com/news/2016/01/11/new-analysis-offers-more-evidence-against-student-evaluations-teaching
