On 19 November 2009 I wrote the following article about the rights of robots.
You can just imagine the family rows of the future, should technology ever reach the point where it isn't possible to distinguish between humans and non-humans merely by looking at them.
And what of the ethnic monitoring forms of the future? Will employers have to ensure that a certain percentage of their workforce is non-human?
An article in the Daily Telegraph reports that people have already started to think about such matters:
I recall reading a short story some years ago in which a person discovers that he is not human but a robot, and has to leave his job because of antagonism which I suppose would be classified as 'robotism'. It adds grist to my mill: as I argued recently, science fiction can be a great starting point for discussion in a whole range of areas.
Furthermore, as this story in the Telegraph shows, the pace of technological change is such that we cannot assume that just because something is still confined to the fiction section of the bookshop, its real-world implications are not worth thinking about.
What Anna Russel, the legal expert referred to, has done is to extrapolate from current technological developments to potential problems of the future. This kind of exercise can be quite useful in getting students to think about the (possible) effects of technology on society, which is part of the National Curriculum in England and Wales, and of the curricula of other countries.