This week I’ve finished writing up my responses to this year’s load of module evaluation forms – I wrote a little bit about them last year, although under different circumstances. This year, we have a new shiny system – although the forms are still completed manually, they are processed by computer, which means that all the clever number-crunching stuff is now delivered to one’s inbox in a shiny PDF. That comes with a duplicate e-mail containing the same shiny PDF plus three other PDFs of data which do not appear to be particularly distinct from one another, but never mind – it’s the main one that’s interesting. Particularly clever is the fact that the scanning machine can capture written responses, so as well as the prettified data the PDFs also contain scans of what students actually wrote – meaning the time I put aside to carefully type them all up was wasted, but that’s a small price to pay for progress.
When I last wrote about these module evaluations, I expressed quite a bit of frustration about the conflicting feedback, and the problems with actually identifying anything concrete to do about the sort of comments that completely contradict each other. For that reason, I’m usually a big advocate of using things like the CIQs and one minute papers to engage with students on a micro-level rather than wait for the final assessment when it’s too late to solve problems that have affected students throughout the course. But this time around, a couple of things stood out, and I do have a few things that I want to do differently next time.
The first thing comes back to my big bugbear about getting students to read secondary literature critically, and giving them an education in doing that rather than assuming that they’ll know it’s what we expect them to do. This year, in my Roman Life Course module I tried using a blog-based discussion format for secondary literature, which worked well in last year’s epic seminar but rather less well this time around. It still did what I wanted it to – got students to discuss secondary literature and realise that they could question it – but the lecture format and the amount I was trying to cover meant that the discussion around the article was less effective than it had been in the seminar format. So I think that next time I run a lecture course, I’m going to try the Purposeful Reading Assignment as described by Faculty Focus – it’s a much more structured individual exercise, which all students can do with the reading, and which should get all of them using their critical thinking skills. I think I’ll stick with the blog-based approach for smaller groups (so seminars of up to a dozen), but for larger groups, you really do need a different tool.
The other big thing I noticed came from those new-style PDFs we’ve suddenly been issued with. The results split the questions asked into three groups – institution-wide questions, student engagement questions and lecturer-focused questions. Most of the scores in the first and third sections were quite high, with some slightly lower results on the effectiveness of teaching techniques, but nothing that signalled a need for anything beyond the annual overhaul of the engine, as it were. However, what shocked me was the comparison between the answers in those sections and the student engagement questions – made rather more obvious by the snazzy visual effects in the new reports – where the answers were universally lower.
The three statements students are asked to rate in that section are ‘I did all the required work (e.g. reading, other preparation) for all teaching activities’; ‘I contributed constructively to class discussion or other activities’; and ‘I undertook wider independent study relating to this module’. Obviously these answers are self-reported, so there’s going to be some inaccuracy here, both from people underestimating how much work they’ve done and from people over-reporting so as not to feel they’re letting the side down, but this area still jumps out as the category most in need of attention and improvement. And this is where it gets tricky – if students are self-reporting low levels of preparation and of completing the required work for lectures, how on earth are lecturers supposed to deal with that, let alone encourage students to do work beyond the set reading? What possible techniques are there for dealing with students opting out of preparation, in-class activity or independent study?
I don’t have an answer – I need to think about this one a bit. But this new way of presenting the evaluation data has flagged up a problem that I didn’t know existed, and that I think needs some serious reflection.