Kurzweil, R., The Age of Spiritual Machines, Allen & Unwin, St Leonards, 1999.
Norman, D., The Invisible Computer, MIT Press, Cambridge, Mass., 1999.

A series of illustrations intended for cigarette cards (but never used) was painted in Paris in 1899. The pictures attempted to show ‘what life would be like’ in the year 2000, and were later published in a fascinating book called Future Days. [1] They illustrated a world of personal flying machines, hi-jinks under the sea, various ingenious electric devices and many different applications of mechanical lever technology. It is evident that the artist had drawn deeply upon the resources he was familiar with in the late 19th century to speculate on the early 21st. In so doing, he projected forward what seemed obvious at the time. So we have the automated orchestra, the electric scrubber, the mechanical barber and so on. Few of his imaginative guesses can be seen anywhere in 1999. Things moved on. Our world is different.

It’s worth re-asking the same question: what aspects of our own here-and-now get projected willy-nilly into the futures we envision? The one answer that comes back loud and clear is: the computer! From an aberrant preoccupation of pubescent ‘nerds’ working up breadboard circuits in garages, we have seen the emergence of a globally transforming meta-technology that, within a few short decades, has changed everything. Look at kids’ media, TV, SF, computer magazines, Wired – you name it – and the computer holds centre stage in our individual and collective views of ‘things to come’. It has become a central, unquestioned motif of our age; a new orthodoxy. Those who raise awkward questions about ‘the computer’ tend to be dismissed as anti-progress Luddites.

But I wonder how long our present conception of ‘the computer’ will actually last. How long will it be before all those ‘futuristic’ ads look threadbare, or the skilfully laid-out control rooms of elaborate fictional spaceships (so familiar in SF movies, with their banks of computers and display screens) look dated? Not long, I suspect. The fact is that the days of computers as we know them are numbered. Within a few short years they will have all but disappeared! Why? Radical miniaturisation; shifts to new principles and substrates; the emergence of new ways of doing computing; the rise of revolutionary new technologies such as nanotech, DNA computing and, maybe one day, even quantum devices. Who knows? The sky is evidently the limit.

What I find most fascinating about this process is the widespread assumption that successive technological revolutions of this kind are permitted, as if by magic, to repeatedly ‘re-invent’ civilisation. Science and technology are, in the language of futures, key ‘drivers of change’. Clever people beaver away in labs around the world, established and funded by competing backers. A whole new economy has developed in which the main imperatives seem to be: making new devices (particularly finding the ‘killer app’), refining and marketing them (in large numbers) and outsmarting the competition (increasing the value of company stocks). A further fascinating characteristic of this dizzy and chaotic process is that there are no hints of anything so problematic as limits. The culture of high-tech innovation that would take us into some very weird futures admits of no limits, tolerates few controls and shows no sense at all that ‘ordinary people’, if given some clear choices, might not elect to go quite so willingly into the kind of high-tech future now being prepared for them. Has the notion of social responsibility been abandoned in a context where the main imperative seems to be an overwhelming desire to ‘be first with the new’? No matter that entire industries get overturned, or that people are thrown out of work en masse, or that new dependencies and dysfunctions are smuggled in by the back door. Something is going on here that should be of concern to everyone.

Two books that provide very different views of ‘the futures of computers’ are reviewed here. Both show how the starting assumptions of authors powerfully affect their conclusions. Both also demonstrate the limitations of working from domain-specific views; that is, taking the part of the map upon which one stands for the wider map in which it is located.

Ray Kurzweil is steeped in the language, culture and environment of computers. He believes that Moore’s Law (that the number of components on a chip, and with it computing power, doubles roughly every two years) will lead us to a point where artificial intelligence will become a reality. He writes with convincing detail and no little passion about the developments that will, he believes, in a very short time lead to computers of such power that they will not only out-rank human minds but come to contain them! In this view, human consciousness can be reduced to complex algorithms. Human brains will be scanned and the contents transferred either to new bodies or to banks of computer memories. This is either virtual reality par excellence or, more likely I would think, Hell on a chip!

It is astounding to think that anyone in their right mind would contemplate futures of this kind without considering some very basic questions about desirability, need or value. The apparent inevitability of this kind of scenario is an illusion. Some time ago John Searle wrote about the crucial distinction between syntax (a set of rules) and semantics (a structure of meanings). [2] It’s clear that when a computer program defeated Garry Kasparov at chess it was working on the basis of massive number crunching according to a set of pre-determined rules. So from this limited point of view computers far exceed human capacity. But attributing meaning to this type of capacity is clearly a category error.

Some years ago E.F. Schumacher wrote in his last book, A Guide for the Perplexed, about the need for what he called ‘adaequatio’. [3] That is, there must be some capacity in the knower that is adequate to that which he or she wishes to know. What is clear from reading Kurzweil’s book is that the world of reference from which it emerged is the compulsive, disconnected world of hi-tech innovation. The writer knows a great deal about that world. But, if the book is anything to go by, he knows much less about how human beings work, how societies function or, indeed, how ecologies underpin both. Strange, then, that he would permit the unnecessary provocation in the title. Why call this the age of ‘spiritual’ machines when the term and what it stands for is, for him, quite clearly an empty category with no meaning? Search though one might through this long book for any hint of ‘the spiritual’, you will not find it. Well, at least, I didn’t. The claim is so wide of the mark that I can only think of two explanations: one, that it is a conscious or unconscious over-statement intended to shore up a very shaky thesis; or, two, that it is intended to create the kind of furore that sells books. For, make no mistake about it, the work may be based on science and technology ‘know-how’ but the world it anticipates is pure fantasy.

Donald Norman, on the other hand, is much more down-to-earth. He is a critic of the way the computer industry has functioned to give us generations of computers that are, in his view, way too awkward and complex. Refreshingly, he attempts to counter the new generation of ‘digital prophets’, such as Nicholas Negroponte, who claim that ‘being digital’ promises to usher in a whole new way of life. According to Norman, people are comfortable ‘being analogue’, and, furthermore, there are benefits to this that computers, as they are currently conceptualised and made, do not support. What he wants to see is a ‘human-centred approach [that] would make the technology robust, compliant, and flexible’. (p. 158) Humans and computers should be seen as ‘complementary systems’. To this end he argues for a de-emphasis on computers as they are currently known and a new focus on what he calls ‘information appliances’.

He devotes considerable space to exploring the kinds of structures and strategies that companies need to adopt to create what he calls ‘human-centred’ technologies and products. This requires a complex process of team building that seeks to understand the customer, the human need, the milieu in which the intended product is to fit seamlessly. Further, he argues that information appliances are what he calls ‘disruptive technologies’. They are disruptive because they will challenge industry standards, change the way that businesses work and, maybe, change people’s lives too. The kind of appliances he has in mind are ‘consumer products, whereas computers are technology products’. He adds, ‘therein lies the fundamental difference in the market. Computers emphasise technology, appliances emphasise convenience, ease of use; they downplay or even hide the technology.’ (p. 250)

Norman then gives us his vision of a world of information appliances. It is a world in which nearly every device we use, large or small, complex or simple, long-lasting or disposable, will come with some kind of computing ability embedded within it. In this world nearly everything is interactive and flexible. There is a universal ‘standardised international protocol for sharing information so that any manufacturer’s device can share information with the devices of any other manufacturer. The result will be whole systems of powerful, interconnected, appliances offering possibilities not even contemplated today.’ (pp. 260-261) The applications appear to be limitless. There will be personal health monitors connected to local health networks, new types of weather and traffic displays, intelligent reference guides, a vast new range of internet appliances, and sensors built into walls, cars, even clothes. He comments, ‘as the world of embedded computers expands, many of our activities will receive automatic support from the infrastructure, often without our even being aware of the devices.’ (p. 270)

The differences between Kurzweil and Norman are therefore considerable. The former sees computers as human (or post-human) destiny, the latter as over-complex devices that need re-designing. The former ventures deeply into the dangerous territory of technological narcissism; the latter has a truly egalitarian desire to make computers useful, to render them into user-friendly tools. But they also have some things in common. For example, neither questions compulsive technological innovation per se or explores the notion that there might be social (or other) values that would de-emphasise the over-drawn scenarios that uncritically contemplate our ever deeper immersion in technology. The main people to have so far explored those worlds are SF writers, and they have unambiguously warned us against over-dependence and dehumanisation. [4] On the basis of the evidence presented in these provocative books, their warnings are right on target – but who is listening?

The way ahead may be strewn with miniaturised devices and information appliances of all kinds. But who will ensure that the world so created will be worth living in? We will need resources of a quite different kind, both human and cultural, to successfully deal with this increasingly urgent question.

Published in The ABN Report, 7, 3, 1999, pp. 14-16, and Futures, 32, 7, 2000, pp. 704-708.


[1] I. Asimov, Future Days, Virgin, London, 1986.

[2] J. Searle, The Mystery of Consciousness, Granta, London, 1997.

[3] E.F. Schumacher, A Guide for the Perplexed, Cape, London, 1977.

[4] E.M. Forster, ‘The Machine Stops’, in The Eternal Moment and Other Stories, HBJ, London, 1929.