Assessing “soft” skills

Do you have anything you’d like to add to the discussion, Terry?

The scene was a meeting at the Edusummit conference at UNESCO in Paris in 2011. The question came from the Chair.

Thank you, but no: everything I was going to say has already been said.

That was my response, because I didn’t see any purpose in repeating points that had not only been made, but also generally agreed upon. In fact, my contributions to many meetings are based on Salvator Rosa’s dictum:

Be silent, unless what you have to say is better than silence.

The question is: does that make me a good collaborator, or not so good? How do we measure such things? And does any of it matter anyway?

Such navel-gazing arises from the fact that I’ve been asked to chair a seminar at BETT about measuring what we might call “soft” skills, such as collaboration. Called “Measuring what matters: Soft skills made visible”, the seminar questions the usefulness of skills tests for measuring skills like collaboration, and asks how we can make use of the digital tools at our disposal to make such skills more visible.

As Chair, I am always scrupulously unbiased, so I thought I’d get my revenge in first through an article on the ICT in Education website!

I think it is extraordinarily difficult to measure skills like collaboration, but perhaps that is because we tend to think about it in the wrong way. Every rubric or approach I’ve seen relies on one of two things. Either vague descriptors like “Collaborates a lot (4-5 points)”, “Collaborates some of the time (2-3 points)”, “Rarely collaborates (0-1 points)”; or a numerical record of how many times a student contributed. For example, a teacher might look at the transcript of a forum discussion and count up how many times each member made a comment, or have the software do it for her.
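To make that quantitative approach concrete, here is a minimal sketch (in Python, using an invented transcript format; the field names and authors are illustrative assumptions, not any real forum’s data) of the sort of counting the software might do for a teacher automatically:

```python
from collections import Counter

# Hypothetical forum transcript: a list of posts, each recording who wrote it.
# The structure and names are assumptions for illustration only.
transcript = [
    {"author": "Aisha", "text": "How about we split the research between us?"},
    {"author": "Ben",   "text": "Good idea."},
    {"author": "Aisha", "text": "I'll take the first two questions."},
    {"author": "Chloe", "text": "Well, how about trying X?"},
]

# Count how many times each member commented -- the purely quantitative
# measure that the rubrics described above rely on.
comment_counts = Counter(post["author"] for post in transcript)

for member, count in comment_counts.most_common():
    print(f"{member}: {count} comment(s)")
```

Notice that on this measure the member whose single comment might have been the one that made the project succeed scores lowest of all.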

Well, on either of those sorts of measures I would do pretty badly. Yet I am often invited to be a part of projects because people seem to value my contributions. In other words, what matters is quality rather than quantity.

I have seen this before as a teacher. One student of mine spent much of his groupwork time talking to his friend in another group, but every so often would lean back towards his own group and say “Well, how about trying X”. The group would invariably try X, and it would invariably lead to success. But the rubric against which I was to mark his collaborative skills would allow me to award him a maximum of only two marks.

Measuring quality is obviously much more difficult. In principle, a good approach would be an economics one, whereby you evaluate the marginal contribution of each member of the group. The issue then becomes: how much more, or less, successful was the work of the group as a result of the addition (or omission) of this particular student?
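Expressed as a calculation, the idea is simply the difference a member makes to the group’s outcome. The sketch below is purely illustrative: it assumes we somehow had a numerical “group success” score both with and without a given student, which is precisely the data we do not have.

```python
def marginal_contribution(score_with: float, score_without: float) -> float:
    """Difference a member makes to the group's success score.

    Both scores are assumed to be available; in practice we would need to
    observe (or credibly estimate) the same group's outcome with and
    without each member, which is the hard part.
    """
    return score_with - score_without

# Invented numbers, for illustration only.
print(marginal_contribution(score_with=85, score_without=70))   # 15: the group did better with them
print(marginal_contribution(score_with=60, score_without=72))   # -12: the group did worse with them
```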

Carrying out such an evaluative process with any degree of accuracy would be nigh on impossible given the current state of our data collection abilities.

I have to ask, though, whether these skills need to be measured at all. After all, who benefits? It may be useful for a prospective employer to know whether Fred is a collaborative sort of person, but Fred already knows, and there’s not much his teacher could do with the information anyway. Besides, is a collaborative approach always a good thing? Quite often, collaboration leads to groupthink and consensus, when the best possible thing would be for a maverick to insist on doing it their way.

For the purposes of the seminar, I will retain an open mind: I am as interested as anyone else in tools that will allow teachers to record the contributions of individual students to group projects. I do worry, though, that one unintended consequence of finding and using such tools may be that the more accurately we can record the extent of students’ collaboration in quantitative terms, the less incentive or time we will have to measure the quality of their contributions. In other words, I think we are far too concerned with data input, when we should be concerned with output.