Over the last fortnight, I’ve been distributing Module Evaluation Questionnaires to my students, this year printed on fetching blue paper (hence this post’s title riff on MS’s Blue Screen Of Death). Most of them went out on Friday the week before last, and I gave any stragglers the opportunity to fill out one in the last lecture of term, most of which I gave last Friday.
Those of you who follow me on Twitter may have noticed that on the Friday when I first distributed the forms, I was not a happy bunny – I had, like an idealistic novice, looked straight through the evaluations to try and see any patterns or information that would be of use. Which meant I was looking at the tick-box scales – you know the ones, the ones which ask students to score on a scale of strongly agree to strongly disagree with statements like “the module was intellectually stimulating” and “the module was well structured”. I’m afraid I flicked through the tables and the written feedback, and all of the really negative material jumped out at me, in the way it always does, and my confidence in my teaching abilities plummeted to the floor.
However, last week I started dealing with the forms in a rather more systematic way; someone else will collate the numerical data, but I like having a record of written feedback, both to remind me of what students have said was positive, and as pointers for what I might work on improving next time. With some feedback, of course, you just can’t win – for my religion lectures, I had responses praising my use of Powerpoint juxtaposed with requests for the slides to contain more detail and more information. For those lectures, too, I had students expressing appreciation for the detail and depth of the lectures next to responses requesting that we do more analysis and suggesting the course should be more challenging (particularly difficult when I know I have students with greatly varying prior knowledge in the room, which makes it a challenge to teach at a level where everyone is going to get something out of the lecture).
What really struck me, however, was the disjunct between the written feedback and those blessed ticky-boxes, which are considered so important as a numerical metric of our teaching ability and effectiveness. They just didn’t seem to add up with the written feedback. I’d have an enthusiastic comment about the course content, with only a 4, or even a 3, ticked for ‘the module was intellectually stimulating’. There seemed to be a lack of understanding of what these forms were for, or how they were going to be used once students had filled them out.
As soon as I spotted this trend, I had a handstapleforehead moment – because I know that students need to be educated about how these forms work. When I was at Rutgers, I even went to a workshop about how to raise your student evaluation scores, because it was recognised that there were certain things you could do to play the game – ways to make you look more prepared, making sure students understand ‘the plan’ behind the course, encouraging participation, getting feedback throughout the course and so on. One of the things I did in my teaching as a result of attending that workshop was to preface the handing out of the evaluation forms with a five to ten minute spiel about why they were important, how long it had taken to get the questions just right (about ten years of fiddling, in the case of RU), and some general encouragement for students to write feedback rather than just fill in the Scantron bubbles. I got fairly good scores out of it, and felt that the feedback gave me some useful suggestions about how to improve my teaching.
This time, did I preface my distribution of the forms with any such explanatory and hortatory talk? No, of course I didn’t. I walked into the lecture room and seminar room expecting that my students would know precisely how much importance is placed on these forms, what they’ll be used for, what the administration will be looking for in the numerical data, all that sort of thing. I expected this to have automatically entered their heads at some stage – even for the first year students in my novel seminar, who I know have no experience of higher education! Well, you can see where the handstapleforehead moment came in. I know that if I want honest evaluations that don’t worry about being seen as ‘too enthusiastic’ or ‘too keen’ (as I was told some students are afraid of appearing, although why this matters when the forms are anonymous is beyond me), I have to explain what’s at stake for us as lecturers if we don’t get an accurate correlation between the ticky boxes and the written feedback, especially when the written feedback is positive. Never mind the administrative side of things – without accurate correlations we don’t know how to get better. And that, I think, is the most frustrating thing for me: I’ve come out of this process with remarkably little guidance on how I might improve next term.
So. A lesson learned, or rather reinforced. But I can’t help wondering how many other instructors’ ratings suffer from this disjunct between written comment and ticky box scores, and what damage that does on a wider institutional scale – especially in this new era where the student consumer is allegedly king.