Archive for August, 2012

Questions of Empathy in “Do Androids Dream of Electric Sheep?”

Posted in Uncategorized on August 27, 2012 by Dan Porsa

I will continue my thoughts on Blade Runner / …Electric Sheep with this consideration of some of the psychological qualities of androids, as compared with humans, and the moral questions that they seem to raise…

A key theme of the book is empathy. Notionally, the means of detecting a rogue android (or ‘Andy’) is a test of empathy, the Voigt-Kampff (sounding for all the world like any one of a number of current and past psychological tests of intellect, emotion, personality and so on…), and one which, presuming an essentially reliable narrator, androids don’t pass. The trouble is, aside from their inability to pass the test, and everyone’s acknowledgement of that, they don’t seem to be any more or less lacking in ‘empathy’ than anyone else in the book.

The theme of empathy comes through in a number of ways. Very early on we are introduced to Deckard’s electric sheep and the point and purpose of such an outlandish construct: it is to demonstrate both status and empathy to his neighbours. It would seem that at some undefined but reasonably recent point in the past there was a terrible (presumably nuclear) world war, one upshot of which was the sudden and extensive extinction of numerous species of animals, and a distinct endangerment of many others. Since then, people have taken to owning animals as a form of status symbol; we see this in Deckard’s clear jealousy of his neighbour’s rare, and apparently fertile, horse. But the animals were never intended as status symbols. Whilst it is never stated, it is heavily implied that a deficit of empathy allowed, or caused, the war in the first place, and that to guard against it people must develop their empathy. Partly this is done through Mercerism, the unusual world religion that seems to have come about with little fanfare but holds a near-total grip on all humans (though not androids), and which appears to develop and instil empathy through the enforced, shared experience of a single person’s suffering and failure. But it is also a part of the religion to own animals, to develop one’s own empathy.

Yet the whole business seems to have little to do with any genuine empathy. As I said, the animals become status symbols, partly due to the expense of them, and partly through the requirement to have one at all. It would seem, through Deckard’s entreaties to his neighbour for a chance to purchase his foal, and through the revelation of his own shameful, ersatz animal, that there is a felt obligation to provide for those who have no animal at all. While Deckard’s neighbour believes his animal real, he feels no guilt in one-upping him, only pride. Once he sees it is a fake, he suddenly feels guilty for owning two. In the same way, we might boast of success to someone whose job is perhaps not so good as ours, but it would be despicable to do so to an unemployed person. Most importantly, though, little, if any, real empathy is shown towards these creatures. They are admired as prizes, and that’s all.

Meanwhile, no empathy at all is shown towards “specials”, the intellectually impaired people known as “chickenheads”, such as J.R. Isidore, who is of so little interest and worth that he is left to live alone amongst the wreckage (kipple!) of the old world. And then he encounters androids, who do show him some sympathy, who can empathise with him, in a way that even his own boss, who is supposed to be sympathetic, seems not to do. In fact, despite our being repeatedly told that androids have no empathy, and that this makes them dangerous, their behaviour on this matter seems persistently ambiguous. Both androids and humans behave in much the way that humans have always behaved: with compassion, pity, fear, cruelty, ignorance, dignity, pride and many other qualities besides. Some androids seem dangerous and murderous, like Polokov, yet others, such as Garland, seem only defensive and protective. Indeed, except for the testimony of characters (both human and android) and the fact of the various failed (and passed) Voigt-Kampff tests, it seems almost bizarre to claim that androids are lacking in empathy, or in any human quality. It would not be a stretch to read the whole work as a paranoid fantasy and take all the androids as simply human, though that is not my reading here. What it is most reminiscent of is David Rosenhan’s pseudo-patient experiment: once labelled “android”, everything they do is interpreted in that light. It is, in the end, a matter of perspective. This question is what is thrown into such sharp relief when Deckard is taken to the fake police station, run by androids, and the question of Phil Resch’s humanity, and Deckard’s own, is brought up. Once we have confusion about the matter, and assuming neither their behaviour nor the authorial reporting of it changes, we struggle to ever be sure again of exactly who is or is not an android.

All this leads to one of the most vital themes of the work: the morality of killing an android. It is simply a given that they must be killed. Their lack of empathy is occasionally touched on as a reason, but mostly it is simply a means of detecting them. No thought is given (by the characters, at least) to the question of what it means if an android can pass an empathy test exactly as a human does. It is merely a larger obstacle to their detection; it does not for one second call into question their ability to empathise, or whether humans truly are empathic. Both are assumed, and never questioned, regardless of the facts of behaviour.

Deckard as Replicant – Why Should it Matter Anyway?

Posted in Uncategorized on August 20, 2012 by Dan Porsa

Roy Batty, supposedly the bad guy, the evil and dangerous machine illegally loose on the Earth, is in fact a disquietingly relatable character. In a similar vein to Milton’s famous portrayal of Satan, he is a figure who should be immediately and obviously evil and reprehensible to us, and yet isn’t. Much like Satan he has a simple goal, easily expressed, and one with which many of us may sympathise or even entirely agree. Satan felt it “Better to reign in Hell, than serve in Heaven”, an aim which can only be considered wrong if we take God’s precedence as a given. If God is merely “first”, or simply more powerful, then it is frankly quite hard to see the moral failing in Satan’s aim. It is wrong not to serve God, and there is no reason, no cause, for that to be the case; it just is. Few people will take “because” as an explanation for anything, moral or otherwise, and so we can see the validity, moral and logical, in Satan’s position. So too with Roy Batty.

“I want more life, fucker!” is in many ways one of the purest and most continuous goals of humankind, perhaps of life itself. Few if any of us could disagree with that sentiment! Ever since mankind has understood the nature of mortality, he has bucked against it. Perhaps it is inherent in the nature of life to demand more, to refuse to capitulate and be extinguished. Certainly, it is rare to see an animal, or any organism, be exterminated with ease, even when inexorable methods are applied. The case for Batty to receive more life is made still more eloquently later, in his memorable soliloquy ending “…all those moments will be lost in time, like tears in rain”. The moments in question are both wonderful and terrible in nature and, as he rightly claims, held purely in the memory of Batty, soon to be gone as his pitiful four years of existence are inevitably concluded.

Batty, then, despite notionally being ‘merely’ a machine, non-human, a construct, exhibits both more passion and a far greater desire to continue his existence than does the notional hero of the tale, Deckard. Deckard is growing weary of life, tired of his existence. He is close to giving up, and displays none of the ferocity that Batty and his like do in holding on to it. He has lost his sense of the value of life, and it is only when that value is shown to him by another, one who simply has far too little of it to be satisfied, that he again appreciates life and continues to live it.

There is another idea here, though. I have pondered why the debate rages as to whether Deckard himself was a replicant. Narratively, it seems to hold little ultimate significance, other than perhaps recasting his previous engagements somewhat, so why does it hold such strong emotional engagement for people? Beyond the purely semantic question of whether he is or is not a replicant, a fact that ultimately remains ambiguous regardless of how cogently it is argued or which version of the film one chooses to engage with, there is a reason it matters, and a reason he should be a replicant. If he is, in fact, synthetic, then those memories of Batty are not (figuratively, at least) lost in the rain. The replicants do have a hold on life, as a ‘species’ or perhaps ‘form of life’: they continue on, in Deckard. Humanity was, after all, the bad guy, and the replicants (Deckard and Batty) were engaged in a moral debate, albeit one carried out through the medium of violence. Batty had to be destroyed, had to lose, because he had done wrong: no matter how poorly he had been treated, how badly his hand had been dealt, he had killed in an attempt to fix it, and that was ultimately wrong. Yet his case was correct: replicants deserved better. So Deckard survives to develop his own memories. Had Batty killed Deckard, someone else would simply have come to ‘retire’ Batty, or his own short span would have been concluded. This way, Deckard (as replicant) survives, on behalf of all replicants.

Perhaps I’m wrong about the significance otherwise, or else perhaps this idea is already very much out there. I’d be quite curious to hear others’ takes on the matter…

Statement of Purpose

Posted in Uncategorized on August 20, 2012 by Dan Porsa

This blog is for me to put down various thoughts and ideas as I work through writing my Masters Thesis. I can tell, you’re thrilled. You may be interested in this blog if you have interests coincident with the area I intend to write my Masters thesis on: Science Fiction, specifically AIs, robots and other synthetic, artificial or otherwise non-organic forms of intelligent life. It’s a fascinating little area; just by considering intelligence and life together, numerous intriguing questions immediately raise themselves. Are there non-organic, unintelligent forms of life? Does intelligence determine life? Is an AI alive in any meaningful way, or merely intelligent (and this is assuming intelligence already)? Are they ever really intelligent? If so, how? If not, what does that mean, particularly if they can pass a Turing test? I have a lot of ideas on the matter, and I’ll be spilling a lot of them here.

Ultimately, I’m looking to celebrate and expand my love of Science Fiction – both ‘Hard’ and ‘Fantastic’ – and my long-term interests in Philosophy, Psychology, Sociology, Culture and so on. If anyone else is interested in what I’m thinking, and occasionally saying (if I feel really bold), then please feel free to post and comment; I’d love to hear from you too.