Tuesday, 21 February 2012

"He thinks he's people!"

So says Mrs Krabappel of Santa’s Little Helper. In the context of The Simpsons it is arguable that he indeed does.
There is a dolphin named Kelly in Mississippi whose star is rapidly rising. She was trained to clear up litter from her pool and rewarded with food every time she removed a bit of paper. So far so cute but Kelly had a thought. She realised that by hiding a piece of paper and tearing off a smaller bit to give her trainer each time, she could reap multiple rewards. An entrepreneur! How Dave must rue Britain’s chronic dolphin shortage!
But Kelly’s ingenuity doesn’t end there. When she one day caught a gull and presented it to her trainer she was given an extra large fish. This set the cogs turning and before long she was teaching ‘gulling’ to the little ones, using a small fish as bait. A mercenary! How I long to insert a Navy SEALs joke here!
Kelly’s story is not new (the Guardian reported it in 2003), but she is back in the papers today as a group of scientists and campaigners have called for a formal declaration of rights for dolphins and other cetaceans. With the aid of her story, among many others, the argument goes that we are now sure that dolphins are sufficiently intelligent and self-aware to justify rights not dissimilar to those recognised for humans. In other words, dolphins are (non-human) people.
I certainly don’t wish to argue here that dolphins are not deserving of rights. In fact I think it likely that, sooner rather than later, mankind’s attitude to other animals will and should go through a paradigm shift. However, the assertion of ‘peoplehood’ is being made with scant attention to the vast literature on this question. What is at issue is this: how do we decide on the objects of our moral system?
Kelly is clearly intelligent to an interesting degree, but intelligence isn’t a good indicator of what we deem worthy of ethical treatment. My friend’s six-month-old son Shane couldn’t come up with a gulling scheme for extra snacks, but in the event of a fire at SeaWorld (you get the point) I’m afraid that it’s him I’m rescuing first.
Perhaps the fact that Shane will become intelligent matters? Not when you consider the range of afflictions that people suffer which limit their brain power. Valuing the potentially intelligent above the less so in a fire would seem a repulsive way to approach a rescue.
Self-awareness may seem like a better indicator of where we should focus our moral attention, but there are other problems here. It’s the suffering itself that we generally see as something to reduce in society, not someone’s awareness that they are a person and are suffering. My awareness that I am someone suffering from excruciating toothache is nothing compared to the damned excruciating toothache itself.
One of the rights for which campaigners are pushing is the right not to be held in captivity. Would Kelly’s awareness that she is in captivity matter? Undoubtedly it would in some way, but awareness isn’t necessary for the moral judgement. If a child is raised by a brutal father in a situation of extreme psychological control, the child may not be aware that her freedom is restricted yet we still judge that she is mistreated. If I don’t realise that a housemate is putting rat poison in my coffee every morning and making me ill, I’m still being wronged.
These questions and many like them mean that there is no clear set of conditions for ‘peoplehood’. A common view is that there should be some kind of sliding scale in how we treat the animal kingdom depending on, say, intelligence. But this runs up against the same problems we have just seen with valuing intelligence in this way.
One answer may be simply to draw a line under humans; we’re all equally worthy of moral attention but the animal world we’ll work out as we go along. This however is clearly speciesism: I’m more important than you for no better reason than that you’re a different species.
I, of course, offer no answer here as to how we should view Kelly. These are ongoing debates in the academic world. And I am very sympathetic to the suggestion that we should modify how we treat other creatures. But to skip over difficult questions is likely to damage such a cause in the long run, and is at least intellectually dishonest. Furthermore, as computer technology and bioengineering develop, the issues will arise again and again.
The arguments in this article are of course well-trodden, and I have no bibliography to hand.
The following are some of the news links.


  1. The concept of complexity is useful here. The complexity of an organism's central nervous system (which includes, but isn't limited to, the brain) correlates well with intelligence, potential for intelligence, and likely levels of self-awareness, and isn't affected by things like physical injury, Down syndrome or Alzheimer's.

    This lets you skirt round a lot of these thorny philosophical quagmires and build a useful sliding scale. If you come into conflict with another creature over some sort of resource, say your garden vegetable patch, then according to this measure a spider is more worthy of mercy than a slug, a sparrow even more so, and if you shoot a dog over a couple of cabbages, then you're a c**t. This lets you rescue your neighbour's infant from their burning house before you go back for their cat, without having to resort to speciesism to justify yourself.

    Then there's the issue of self-awareness. When scientists and philosophers talk about this, they're usually talking about the even more slippery issue of consciousness, rather than awareness of a specific event or situation. These are extremely complex things to try to understand or define. But they seem to correlate well (I'd argue) with complexity. The more complex a creature's CNS is, the more self-aware it's likely to be, and the more capacity for suffering it's likely to have. A slug has limited capacity even for physical pain, a spider (or sparrow) is unlikely to worry about whether its owner cares about it, and a dog is unlikely to suffer from bouts of existential angst.

    Of course, having said all that, comparing slugs and spiders with birds and dogs is one thing. It's when you get to the bright end of the scale (dogs, pigs, elephants, dolphins, and even some species of birds: http://www.guardian.co.uk/education/2011/feb/01/new-caledonian-crows-tool-use) that you get into tricky territory. But then that's what debate and research are for.

    1. There is certainly a correlation between complexity and intelligence, and consciousness is partly slippery because the term has multiple uses and partly slippery because each of those uses is itself slippery! If we take it here as defined in terms of self-awareness rather than under a phenomenological description, then the same problems remain.

      If we can come up with a sliding scale of, say, intelligence, it avoids the quagmire because it avoids part of the question. If we come across someone with severe Alzheimer's, it makes no sense to say:

      "Well, they place well down the scale of intelligence but it's okay, they should still be treated ethically because the reason they're low down the scale is just a malfunction of a perfectly good complex system"

      That would mean that when we say creatures are intelligent enough to deserve rights, we actually mean that creatures are complex enough to deserve rights.

      This would be a strange claim because in everyday ethical decisions we don't consider complexity, and until relatively recently we couldn't even have begun to consider complexity. And even if we did, it would be baffling to consider why we should.

      I've certainly not argued that intelligence or self-awareness shouldn't have a place in our ethical system, merely that the assumption that they should be tracked when choosing moral actions is at best debatable.

    2. At the risk of getting into a lengthy philosophical debate, I'd just say that I think a creature's 'capacity for suffering' is what most deserves consideration in moral and ethical questions, and defend the notion of complexity as a way of getting at that because it is at least (or at least potentially) scientifically measurable.

      A person suffering from severe Alzheimer's is just that: suffering. They are aware enough of their situation to be highly distressed by it. I guess where the usefulness of complexity as an indicator of things like self-awareness and suffering breaks down is in the extremes: cases of complete vegetative states and the like. Then my instinct is that the individual is gone, and resources should only be expended on sustaining them to the extent that there is some chance, however small, that they might eventually recover. Then I'm not basing my opinion on the complexity of the lump of flesh hooked up to the life-support machine, but on future potential.

      I'm not disagreeing with anything in your post, just joining in the debate. It's a thorny issue. Can we agree that shooting a dolphin (man/dog/chimpanzee) and salting a slug are not morally equivalent?

  2. I'd say a tentative yes, but that's a whole other discussion.

    The problem will be defining suffering and capacity in an interesting way, and explaining why capacity matters in this context.