ANALYSIS - Leopold Galicki, lecturer in sociology at the University of Copenhagen, unpacks what anonymous student evaluations say, and don't say, about the quality of a university course
In the last 20 years, evaluation culture has become part of many institutional areas of life, especially in the education sector. Evaluation committees were well-known long before the 1990s. In the private sector, consumer surveys on products and services have existed for decades.
What is new is that, within the institutional world, there has been a greater involvement of consumers of welfare state services as evaluators. Of course, universities have also been influenced by this evaluation trend.
It would be interesting to look at this evaluation trend in the welfare state based on a larger study of many institutions. But in this context I confine myself to sharing my experience from one limited area: the student course evaluation.
The student course evaluation is a formalized element in a course at the Department of Sociology at the University of Copenhagen, where I teach.
In general, I find it beneficial to students that, in the final stage of the course, they are handed questionnaires and given 15-20 minutes to reflect on, and address, a number of concerns regarding the quality of their education.
The teacher gets an opportunity to confirm, or rule out, the suitability of the course as seen from the students' perspective. As a teacher, you get feedback on a number of specific indicators of course quality, such as the use of learning tools like the whiteboard and PowerPoint, guest lectures, and student involvement.
At the same time, students can relate to the academic level and to the teaching and learning. The evaluation questionnaire contains open questions on learning methods and on other negative and positive aspects of the course. As for courses taught in English, the teacher's and students' English proficiency may be evaluated too.
The questionnaire's anonymity ensures that students can expand on their criticism without fear that it will negatively influence their grades, or brand them as a student who complains.
However, it is precisely this anonymity that concerns me.
Alongside its positive aspects, anonymity has a disturbing side.
There is always a small segment of course participants, maybe around 5-10 per cent, or one or two out of every 20 students, whose evaluation is what I would call 'totally negative'. In the questionnaire this can be seen when course participants respond with a 'no' to the question of whether the aim of the course has been fulfilled.
There can be several reasons for such a strong critique. Psychological and socio-psychological factors can influence the evaluation. I am thinking here of the mental state of the individual at the time of the evaluation, and of the student's well-being within the social group, including the teacher, which are all constitutive factors of the course.
The anonymity of the questionnaire can serve as an inviting platform to unfold the student’s frustration.
Students always bring more or less different backgrounds to the evaluation of a particular course, and this can lead to wildly varying evaluations. A folk saying has it that, no matter what, there is always someone who is dissatisfied. So what is the point of writing about it?
The reason I write this article is that the one or two strongly dissatisfied students in a group of 20 will usually reveal an inconsistency in their evaluation.
This is not so much due to the respondents' mental state or their unfortunate standing in the group. Rather, these persons demonstrate an inability to relate to, and logically evaluate, the study and work situation in which they participate.
Judging from the answers, it is as if there are black holes in the student's ability to observe, consider, and absorb the course content and the teaching methods used. The result is that the student's answers directly contradict the facts.
Let us take an example from one of my own courses, concerning the involvement of the text readings in the teaching situation. The question was: has it been too much, appropriate, or too little?
The respondent answers: too little. But in this specific course that response is impossible. The whole lecture series was based on PowerPoint slides containing representations and interpretations of passages from the very texts the question refers to.
There are other examples, such as a student who had read only 20-30 percent of the syllabus, but evaluated most indicators of course quality negatively.
If a strong negative evaluation is only expressed by ticking off a closed question, it is difficult to see the lack of consistency. But the complete lack of sense in the evaluation is revealed when a student responds to open questions.
Often, the answers to the open questions directly contradict the answers given as ticks in the boxes in the closed questions.
But if it is all about one or two participants among 20, is it not just much ado about nothing?
It may only be about 5-10 percent of the participants, but it is a pattern, and it is worthwhile to articulate what these marginal groups do.
This is not just to avoid possible irrational elements, which might influence the evaluation of a given situation or a process. It is also about recognising that irrationality and emotion can be found in a university context.
With this realisation, we may be able to prevent this irrationality from affecting learning and teaching processes. Remember: these 5 to 10 percent, who lack the skills to evaluate consistently and who display emotionally irrational behaviour, aspire to be the academic power in our society.
The pattern they demonstrate in their assessments should therefore not be made light of.
Anonymity is, for better or for worse, an important prerequisite in many evaluation contexts, and also when it comes to the assessment of the quality of a university course. But to relate constructively to the learning process means to practice criticism, which is not only important in scientific contexts, but in a democratic society in general. The autonomy of future academics depends precisely on the ability to demonstrate a critical approach in public contexts.
Evaluation culture provides an opportunity where students can demonstrate independent critical thinking about specific, but fundamental, things.
My experience is that there is a small but ever-present group using the shield of anonymity to exercise an inconsistent criticism that falls apart under closer analysis.
These relatively few, but persistent, cases of criticism lacking rationality should, I believe, be highlighted and discussed after the evaluation has taken place.
Anonymity should not be violated. But both the vast majority of constructive evaluators and the small minority of inconsistent evaluators can benefit from having irrational assessments exposed, and can take away something thought-provoking: an appreciation of the importance of consistency and integrity in a critical approach to all the various institutional contexts.