ICT & Computing in Education


Job-seeking as a metaphor for assessment in computing

Crowds and queues, by Terry Freedman

When I saw several hundred people lining up for some sort of job registration recently, I immediately thought of the challenges of assessing pupils’ educational technology capability. A bit of a stretch? Not necessarily.

Assessment – any kind of assessment – is hard. A huge challenge is making sure that what you think you’re assessing is what you’re actually assessing. For example, you may think you’re assessing pupils’ understanding of the subject, when you’re really assessing, in effect, their ability to read and comprehend the questions.

This is known as the validity problem, and with computing there is another dimension: that of practical skills. For instance, when I first had a go at the then Teacher Training Agency’s ICT test for trainee teachers, I failed abysmally. But that reflected the facts that (a) it was an unfamiliar environment and (b) I hadn’t bothered to read the instructions. (Well, that’s my story anyway, and I’m sticking to it.)

So the photo above seems to me a good visual metaphor for this validity issue. Here we have a line of people seeking a job, or to register for one, who were prepared to stand there for at least an hour, I should imagine. (The group of people shown in the photograph was a very small subsection of the whole line.) In a sense, these jobs look likely to be allocated at least partly according to whether you have the time and the stamina to queue (pretty Darwinian, eh?), and how good you are at selling yourself in a face-to-face situation.

Of course, applying for a job in the more traditional way also comes up against the validity problem, because some people who are eminently suited to the job don’t get called for interview, simply because they’re not good at selling themselves in writing.

If we apply this thinking to the assessment of computing capability, tests aren’t the full answer, partly for the reason already given, and partly because the nature of the test itself is important. That is, it will (or should) differ according to whether you’re assessing a practical skill or theoretical understanding. But group project work is no panacea either, because then you have the problem of working out who has done what, and whether you’re (inadvertently) assessing collaboration skills instead of computing skills.

I don’t think we can ever get away from the validity issue entirely. All you can do, I think, is use as many different approaches as possible, in the hope that the advantages of some will outweigh the deficiencies of others. Unfortunately, that also means that unbridled confidence in the efficacy of high-stakes testing is, to some extent, likely to be misplaced.


If you found this article interesting or useful (or both), why not subscribe to my free newsletter, Digital Education? It’s been going since the year 2000, and has slow news, informed views and honest reviews for Computing and ed tech teachers — and useful experience-based tips.
