Microman, and some definitions from cybernetics

I recently bought a copy of Gordon Pask’s 1982 book ‘Microman’ from Better World Books of Indiana for 41p (plus £2.80 postage). It is an extraordinary book, though I can understand why it was never a great success.

Firstly, it is a survey of the field: ICT as it was in 1982, with some very basic explanations that we would not find necessary today because we no longer find some questions strange. There are lots of pictures of microchips and screens, lots of diagrams. The book does a good job of explaining these things to someone for whom ICT was a strange and frightening world. It covers, concisely but intelligently, the effect of ICT on employment, information era culture, self-organising systems, expert systems and simulations, to name but a few. Pask wrote it with the help of a lady called Susan Curran.

It has a good explanation of Pask’s ‘Colloquy of Mobiles’ exhibit at Cybernetic Serendipity in 1968.

It also has some curiously old-fashioned statements about computer use: “Even in the present austere climate, grants are being given for microprocessors in schools. In Britain… most schools will soon have at least one microcomputer.”

Predictions for the year 2000 include “electric monorails are regular and seldom crowded: there is plenty of opportunity for travellers to exchange anecdotes, debate some common point of interest, or sit in comfortable solitude.” (Alas, if he were to commute from Richmond to London now, he would find that South West Trains are pretty much the same in 2016 as the London and South Eastern was in 1982.)

Oddly enough, attitudes to computers haven’t completely changed. In 1982 a survey found that 58% of people thought that computers would be used for surveillance. On the other hand, 63% thought that ‘large computerised information files’ would make the government more effective, and 57% thought that ‘the government will determine what computers can and cannot be used for’ (which was probably last true when MILNET split off from ARPANET in 1984).

There are several accounts of ‘precursor systems’ – by which I mean ideas before their time that are now working out, but not necessarily in the same way as originally envisaged. He describes the Data Space system of Nicholas Negroponte and Ted Nelson’s Xanadu system, both of which might be described as VR or AR systems using a huge databank to which users can make their own contributions to produce hyperlinked travel in cyberspace. Pask includes an artist’s impression of a ‘datanaut’, dressed like James Bond in a dinner jacket and bow tie, reclining on a couch smoking a cigarette through a holder as he views a giant screen.


Today, Wikipedia says Nelson’s project has been described as ‘the longest running vaporware story in the history of the computer industry’. But, by coincidence, last week’s Economist refers to growing interest in VR and AR – ‘big firms are cooking up consumer products’ – and speculates about what the ‘killer app of this sort of technology’ may be when it comes. In other words, bits of it have happened (the huge databank is the internet, of course) and I’ve blogged before about virtual reality: such things as virtual tourism, a plan to make a virtual simulation of Brunei to help with environmental management, virtual reality therapy for PTSD, and so on. The killer app isn’t here yet, though Google Earth may be getting quite close to it.

When Pask wrote, of course, VR had only been around for 14 years (assuming that the first VR system as we know it was Ivan Sutherland’s Sword of Damocles). So Negroponte and Nelson were clearly farsighted – probably more so than we are now.

What fascinates me about Pask are the similar far-sighted ideas he throws off, many of which now seem to have been forgotten, except by a few enthusiasts. Are some of these ‘precursor systems’ that have not yet found their ‘killer apps’? Or are they just dead ends?

Amongst the ‘review of ICT today’ chapters are several which are devoted to Pask’s own ideas. These are more specialised than a reader might expect: they cover the difference Pask sees between cybernetics and AI. AI “attempts to reproduce the results of human thought processes without necessarily using the brain’s methods of achieving them”. Cybernetics tries “to work forwards, from the thought processes to their result in the form of intelligent behaviour.”

The following is an attempt to summarise some of Pask’s ideas, and to provide some definitions from related sources. I want to understand whether cybernetics offers new (or old, forgotten!) insights, if it really differs or differed from AI, and if it is worth following up.

Firstly, a comparison (based on cybernetics texts, so probably unfair to AI):

Cybernetics: viewed the human as part of a resonance that looped from the human, through the environment or apparatus, back through the human and around again. For Pask, that is the interaction by which we understand each other when we speak or dance together. … intelligence resides in interaction.
AI: presumes that knowledge is a commodity to pluck from the environment and stick in a cubbyhole. … intelligence resides inside a head or box.
Source: Pangaro’s obituary

Cybernetics: The original researchers tried to work forwards from the thought processes to their result in the form of intelligent behaviour.
AI: … tried to work backwards from intelligent behaviour to the process which produces it… without using the brain’s methods of achieving them.
Source: Pask, Microman, pp 67-68

Cybernetics: Pask talks of ‘conflict resolution’: “Much of human thinking involves the resolution of conflict – knowing one thing and feeling another, deciding on priorities, choosing between alternatives. Concepts, in both humans and computers, are a fertile source of conflict; we assume we’re talking about the same thing, but in fact we are applying the same label to concepts with somewhat different contents. However, if we exchange information about the contents of our concepts, we have a tool for resolving the incompatibilities between us… Independent computers are capable of creating and to some extent resolving conflict when they are arranged so that they interact with each other…”
AI: … programmes are written to satisfy criteria of correctness; they terminate at a fixed point, and may then be repeated using fresh values of data. … regards computer applications that do not rely on true/false criteria as somehow suspect.
Source: Pask, Microman, pp 74, 80

Cybernetics: Prefers a looser linkage, based on word association; this leads to the construction of a variable network of names and symbols. Another alternative is ‘protologic’: a logical association of concepts whose linkages are less strictly defined than those of formal logic.
AI: Emphasises inference, the linking together of concepts through a descriptive logic.
Source: Pask, Microman, p 99

Cybernetics: When we talk about truth or proof in everyday language, we use a form of protologic… our everyday reasoning is contextual; we use forms of reasoning which prove to be coherent within a given context, even if they are not valid outside it.
Source: Pask, Microman, p 106
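Pask’s ‘variable network of names and symbols’ can be caricatured in a few lines of code. This is my own toy illustration, not anything from the book: concepts are linked by graded association strengths, and which links ‘count’ depends on a threshold that stands in for context – coherent within a given context, as Pask puts it, without being valid outside it. All the concept names and weights are invented.

```python
from collections import defaultdict

class AssociationNetwork:
    """A 'protologic'-flavoured toy: graded, symmetric word associations
    rather than the strict implications of formal logic."""

    def __init__(self):
        self.links = defaultdict(dict)  # concept -> {concept: strength}

    def associate(self, a, b, strength):
        # Associations are symmetric and graded, unlike formal implication.
        self.links[a][b] = strength
        self.links[b][a] = strength

    def related(self, concept, threshold=0.5):
        # Only links strong enough for the current 'context' count.
        return {c for c, s in self.links[concept].items() if s >= threshold}

net = AssociationNetwork()
net.associate("chair", "sit", 0.9)
net.associate("chair", "throne", 0.4)
net.associate("sit", "rest", 0.7)

print(sorted(net.related("chair")))        # ['sit']
print(sorted(net.related("chair", 0.3)))   # ['sit', 'throne']
```

Loosening the threshold admits weaker associations, so the same network yields different ‘reasoning’ in different contexts.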

Secondly, some definitions.


(Microman p 85)

In a public language, one used by many people, the ideals are the essences of the concepts we all share. Thus, there is a right way to use the word chair… if a language is to function at all it must operate in a consistent and coherent way… take the fictional world of ‘Lord of the Rings’. We do not mistake it for the real world but it is so coherent and complete that we can sustain belief in it through hundreds of pages…

Concepts/ Use of language

(Microman p 72-3)

A concept is an abstract or general idea that we use to organise our thoughts and experiences… First, a concept is possessed by an individual…Secondly, most of our concepts are in flux… Third, we build elaborate concepts out of simple concepts…

(From Pangaro summary)

For us to understand each other, there are minimum requirements. We may both utter the word “cup” or “happiness” or “cybernetics”, but, what is required for each of us to know we agree on the meaning? A conversation, surely. You explicate how a cup is used, and what it is for. I hear your views, re-compute your perspectives, and come as close as I can get to your meaning of “cup.” But is your meaning (or, to say it more carefully, my view of your meaning) consistent with my own, pre-existing view? Are there conflicts? And that is only the half of it. After I exteriorise _my_ view of why a cup is what it is and how it is used, does your view of my view of a cup resonate (and not conflict) with your original view? In summary, if we resonate together in our views of “cup”, then (as named by Conversation Theory) we have “agreement over an understanding” – in both metaphorical and formal terms.
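Pangaro’s ‘agreement over an understanding’ can be sketched as a toy program. This is my own illustration, not Conversation Theory’s formal apparatus: each participant’s concept of ‘cup’ is a set of features, ‘resonance’ is crudely approximated by set overlap, and the feature lists and threshold are invented for the example.

```python
def agreement(view_a, view_b):
    # Jaccard overlap as a crude stand-in for 'resonance' between two
    # participants' views of the same concept.
    return len(view_a & view_b) / len(view_a | view_b)

# Alice's concept of "cup", and her reconstruction of Bob's view after
# a round of conversation (both feature sets are invented).
alice_cup = {"holds liquid", "has a handle", "used for drinking"}
alice_view_of_bob = {"holds liquid", "used for drinking", "made of china"}

score = agreement(alice_cup, alice_view_of_bob)
print(f"agreement: {score:.2f}")                     # agreement: 0.50
print("agreed" if score >= 0.5 else "keep conversing")
```

In the real theory agreement is reached by iterating this exchange – each side re-computing its model of the other – rather than by a single overlap measurement.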


(Microman p 72)

Many theorists would argue that many of our thought processes can be described… [as Hegelian thesis/antithesis/synthesis]… Unfortunately such a model does not clarify the question of how we actually reach synthesis; is it always by logical deduction? … One could argue that juxtaposing an improbable thesis and antithesis sometimes leads to creative synthesis… And we might find that creativity is less far removed from logical thought than we sometimes suppose.



(Microman p 85, 86)

… can be defined as a structured body of concepts which can be expressed in a public or private language. The concepts must be common and shared and form a coherent system, but they need not be true. Knowledge to us includes not only verified concepts but also hypotheses, beliefs and out-and-out fictions…

..Since the computer operates in a logical way the language in which it is programmed must be logically consistent. That does not mean the concepts expressed by the language have to be true. To the computer that is irrelevant…. What the computer does is manipulate symbols, and these symbols may stand for facts or fictions. In this sense the computer is always making models of systems, not reproducing them….

… It is relatively easy to construct a coherent system, a formal logic like algebra, in which we manipulate empty symbols such as ‘a’… the problem arises when we try to manipulate ‘full’ symbols, or symbols with fixed meanings that are, in some specific context, true as well as consistent.
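That point – the computer manipulates symbols that may stand for facts or fictions, caring only about consistency – can be shown with a minimal forward-chaining rule engine. A sketch of my own, not from the book; the ‘facts’ here are deliberately fictional, yet the inference is just as mechanical:

```python
def forward_chain(facts, rules):
    """Repeatedly apply (premises, conclusion) rules until no new
    symbol can be derived. The engine never asks whether a symbol
    denotes anything real; it only checks set membership."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Fictional 'facts' and rules (invented for the example):
facts = {"Frodo is a hobbit"}
rules = [
    (["Frodo is a hobbit"], "Frodo is mortal"),
    (["Frodo is mortal"], "Frodo can be wounded"),
]
print(sorted(forward_chain(facts, rules)))
# ['Frodo can be wounded', 'Frodo is a hobbit', 'Frodo is mortal']
```

Swap in true premises and the machinery behaves identically: the computer models a system, fictional or not, rather than reproducing it.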

Conversation theory:

(From Pangaro summary)

As observing beings, we learn what we learn by interacting with our environment: the spaces, objects, processes and others-who-are-also-observing all around us. Construing these interactions as “conversations”, whether with our friends or our pet fish, is highly useful in both metaphorical and formal ways.
Metaphorically speaking, we “converse” with everything in our environment. We “offer our views” as we act, re-act and think. The environment “speaks to us” in the sense that we interpret it. We respond to what we hear and see and feel, in an exchange that has the structure of a dialogue in language.
More formally, the term “conversation” was used by Gordon Pask and others in the body of work called Conversation Theory, which formalizes concepts such as agreement, understanding, and consciousness. Each of these concepts (as well as the concept “concept”) exists in relation to conversation.

(From Anderson)

Conversation Theory as developed by Pask originated from this cybernetics framework and attempts to explain learning in both living organisms and machines.  The fundamental idea of the theory was that learning occurs through conversations about a subject matter which serves to make knowledge explicit….

Pask argued that subject matter should be represented in the form of  structures which show what is to be learned.  These structures exist in a variety of different levels depending upon the extent of the relationships displayed.  The critical method of learning according to Conversation Theory is “teachback” in which one person teaches another what they have learned.

Learning strategies

(From Anderson)

Pask identified two different types of learning strategies:

- Serialists – progress through a structure in a sequential fashion
- Holists – look for higher-order relations

For students to learn a subject matter, they must learn the relationships among the concepts. For teachers, the explicit explanation of the subject matter facilitates student understanding (e.g., use of the teachback technique). However, students differ in their preferred manner of learning relationships (serialists versus holists).
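The serialist/holist distinction can be loosely recast as two ways of walking the same concept structure: one steps through a single branch at a time, the other surveys each level of relations before descending. This is my own illustrative gloss on Anderson’s summary, with an invented subject-matter graph, not Pask’s formal structures.

```python
# A toy subject-matter graph: each concept lists the sub-concepts it
# leads to (all names invented for the example).
concept_graph = {
    "algebra":   ["equations", "functions"],
    "equations": ["solving"],
    "functions": ["graphing"],
    "solving":   [],
    "graphing":  [],
}

def serialist(graph, start):
    # Work through one branch at a time, in sequence (depth-first).
    order, stack = [], [start]
    while stack:
        node = stack.pop()
        order.append(node)
        stack.extend(reversed(graph[node]))
    return order

def holist(graph, start):
    # Survey the relations at each level before going deeper
    # (breadth-first, level by level).
    order, frontier = [], [start]
    while frontier:
        order.extend(frontier)
        frontier = [child for node in frontier for child in graph[node]]
    return order

print(serialist(concept_graph, "algebra"))
# ['algebra', 'equations', 'solving', 'functions', 'graphing']
print(holist(concept_graph, "algebra"))
# ['algebra', 'equations', 'functions', 'solving', 'graphing']
```

Both strategies cover every concept; they differ only in the order relationships are encountered, which is roughly Pask’s point about matching teaching to the learner’s preferred strategy.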


As Pangaro says in his obituary: “I often heard listeners say that 10% of his talk was understandable and, if the other 90% was as good, then this guy was really something.”




