That time of the year. Student evaluations are being gathered by the data crunchers. Participation rates are being noted. Attitudes and responses are mapped. The vulnerable, insecure instructor, fearing an execution squad via email, looks apprehensively at comments in the attached folder that will, in all likelihood, devastate rather than reward. “Too much teaching matter”; “Too heavy in content”; “Too many books.” Then come the other comments from those who seem challenged rather than worn down; excited rather than dulled. These are few and far between: the modern student is estranged from instructor and teaching. Not a brave new world, this, but an ignorant, cowardly one.
The student evaluation, ostensibly designed to gather the opinions of students about a taught course, is a surprisingly old device. Some specialists in the field of education, rather bravely, identify precedents in Antioch during the time of Socrates, and others during the medieval period. But it took modern mass education to transform the exercise into a feast of administrative joy.
As Beatrice Tucker explains in Higher Education (September 2014), “the establishment of external quality assurance bodies (particularly in the UK and in Australia), and the ever-increasing requirement for quality assurance and public accountability, has seen a shift in the use of evaluation systems including their use for performance funding, evidencing promotions and teaching awards.”
Student evaluations, the non-teaching bureaucrat’s response to teaching and learning, create a mutually complicit distortion. A false economy of expectations is generated even as they degrade the institution of learning, which should not be confused with the learning institution. (Institutions, as such, have no interest in teaching, merely in happy customers.) They turn students into commodities and paying consumers, units of measurement rather than sentient beings interested in learning. The instructor, given the impression that these measurements matter, adjusts method, approach and content accordingly. Decline is assured.
Both instructor and pupil are left with the impression, cultivated by the vast, bloated bureaucracies of universities, that such evaluation forms are indispensable for tailoring courses to student needs. Yet universities remain backward in this regard, having only limited tools in educational analytics and text mining. Student comments, in other words, are hard to synthesise in any meaningful way.
This leads to something of a paradox. In this illusory world, corruption proves inevitable. Impressions are everything, and in the evaluation process instructor and student have an uncomfortable face-off. The student must be satisfied that the product delivered is up to snuff. The instructor, desperate to stay in the good books of brute management and to brown-nose the appropriate promotion committees, puts on a good show of pampering and coddling. Appropriate behaviour, not talent, is the order of the day.
The most pernicious element of this outcome is, by far, grade inflation. “Students,” asserts Nancy Bunge in the Chronicle of Higher Education, “give better evaluations to people who grade them more generously.” Absurd spectacles are thereby generated, including top-heavy distributions of academic results that eschew anything to do with failure (students as consumers cannot, as such, fail); everybody finds themselves in the distinction or high distinction band, a statistical improbability. Be wary, go the ingratiating types on course evaluation committees, of “bell curves” – these, apparently, are not an accurate reflection of a student’s skill set.
The result is a mutually enforcing process of mediocrity and decline. The instructor tries to please, and in so doing, insists that the student does less. Students feel more estranged and engage less. Participation rates fall.
The untaxed mind is a dangerous thing, and students, unaware of this process, insist on possessing a level of prowess and learning equal to that of the instructor. This is not discouraged by the administrative apparatchiks of various committees who make it their business to soil decent syllabi with dumbed-down efforts such as “workshops” and “group work”. (The modern student supposedly has a limited, social-media-shortened concentration span.) To them, the individual thinker – student or instructor – is a sworn enemy who must be stomped into an oblivion of faecal drudgery.
There is ample evidence, diligently ignored by university management, suggesting that the introduction of such surveys has been not merely corrupting but disastrous for the groves of academe. Take, for instance, gender bias, which has a marked way of intruding into the exercise. Clayton N. Tatro found in a 1995 analysis of 537 male and female student questionnaires that both the gender of the instructor and the relevant grade “were significant predictors of evaluations.” Broadly speaking, female students gave higher evaluation ratings than their male counterparts. Female instructors did better in the evaluation scores than their male peers, and did better still with female respondents.
Learning is a process of perennial discomfort, not constant reassurance. The pinprick of awareness is far better than the smothering pillow. Genuine learning is meant to shatter models and presumptions, propelling the mind into enlightened, new domains. The student evaluation form is the enemy of that process, a stifling instrument that disempowers all even as it claims to enhance quality.
Where to, then, with evaluating teaching? There is something to be said for the element of risk: there will always be good and bad teachers, and the very experience of being taught by individuals as varied as the pedestrian reader of lecture notes or the charming raconteur of learned anecdotes should be part of the pedagogical quest. From such variety grows resilience, something that customer satisfaction cannot tolerate.
Education specialists, administrators and those who staff that fairly meaningless body known as Learning and Teaching cannot leave the instructing process alone. For them, some form of evaluation exercise must exist to placate the gods of funding and the quality-assurance pen-pushers.
What, then, is to be done? Geoff Schneider, in a study considering the links between student evaluations, grade inflation and teaching, puts it this way, though with a kind of blinkered optimism: “In order to improve the quality of teaching, it is important for universities to develop a system for evaluating teaching that emphasises (and rewards) the degree of challenge and learning that occurs in courses.” Snowballs suffering an unenviable fate in hell come to mind.
The whole “evaluation” process is a time-consuming scam that gives otherwise unemployable bureaucrats an ego-boost and a chance to dominate academics, purging any they view with disfavour.
How can a student who has attended only one university have any idea about “comparative quality” of courses?
Certainly from personal experience, I learned that another university did things differently (and better) than my alma mater.
The major problem is the exponential growth of Pro Vice Whats-its, the overpaid refugees from specialist learning and teaching who “retire” into administration for the shorter hours, fewer work assessments and much less work-induced stress, with much better pay and conditions.
These overpaid academic refugees have boosted administration costs to over 50% of government-sourced income, for no apparent academic advantage except removing them from the lecture theatres where they were boring students to death.
Desk jockeys all, they suck the financial lifeblood out of universities properly suited to R&D pursuits that create future common wealth rather than present academic welfare.
Gotten to get they’re teachers of off learning.
3 r’s
Mark Needham
As a very mature uni student between 2009 and 2012, I always dreaded the student evaluation. Much of the standard form offered by my university consisted of rating scales, often asking things like “The course is relevant to my future career” – how can I know, when my future career is uncertain? Often handed out to be completed during the final class of the semester, it saw many students scribble their way through without much consideration. It was all too late to have any meaning.
I think general feedback throughout the year would be more effective. That way students could signal that they are struggling with content or not accessing the study materials, or indeed, that they are feeling truly inspired … One thing I noticed was that the student evaluation was never asked for where the course was flat and inadequately taught; there were a few over the years that I would have liked to criticise severely.