There’s mounting evidence suggesting that student evaluations of teaching are unreliable. But are these evaluations, commonly referred to as SET, so bad that they’re actually better at gauging students’ gender bias and grade expectations than they are at measuring teaching effectiveness? A new paper argues that’s the case, and that evaluations are biased against female instructors in particular — in so many ways that adjusting them for that bias is impossible.
Moreover, the paper says, gender biases about instructors — which vary by discipline, student gender and other factors — affect how students rate even supposedly objective practices, such as how quickly assignments are graded. And these biases can be large enough to cause more effective instructors to get lower teaching ratings than instructors who prove less effective by other measures, according to the study based on analyses of data sets from one French and one U.S. institution.
“In two very different universities and in a broad range of course topics, SET measure students’ gender biases better than they measure the instructor’s teaching effectiveness,” the paper says. “Overall, SET disadvantage female instructors. There is no evidence that this is the exception rather than the rule.”
Accordingly, the “onus should be on universities that rely on SET for employment decisions to provide convincing affirmative evidence that such reliance does not have disparate impact on women, underrepresented minorities, or other protected groups,” the paper says. Absent such specific evidence, “SET should not be used for personnel decisions.”
“Student Evaluations of Teaching (Mostly) Do Not Measure Teaching Effectiveness” was published last week in ScienceOpen Research. Philip B. Stark, associate dean of the Division of Mathematical and Physical Sciences and a professor of statistics at the University of California at Berkeley and co-author of a widely read 2014 paper questioning the reliability of evaluations, co-wrote the paper with Anne Boring, a postdoctoral researcher in economics at the Paris Institute of Political Studies, and Kellie Ottoboni, a Ph.D. candidate in statistics at Berkeley.
Read the entire article: https://www.insidehighered.com/news/2016/01/11/new-analysis-offers-more-evidence-against-student-evaluations-teaching
A few weeks ago the Times of London asked a group of British, Australian, and American academics to imagine what education might look like in 2030. The results are engaging and diverse, illustrating neatly the wide range of education futuring. They also hit many high notes for current issues.
For example, these themes appear: automation of the economy, automating learning, flipping the classroom, shortened attention spans, lectures in decline or triumph, mobile devices as enabler or enemy to learning, data analytics, interdisciplinarity, development of new competencies, and assessment.
There’s a wide range of anticipation about the scope of change, from massive revolution to slow, incremental change.
I was struck by one vision of a health-centered class, from Dan Schwartz and Candace Thille:
Th[e] de-Balkanisation of university departments will also result in Health 101 becoming the most popular course. Advances in biology…