As We May Think - Is there such a thing as an intelligence augmentation tool, and can it help us better understand how we think?

I am motivated to write this posting for two reasons. The first is that I did promise I would post about critical thoughts and ideas behind the computer-human interface in an attempt to understand the consciousness of machines and humans, looking at the continuity and disjunction between the two by way of phenomenological approaches; the other is to try to understand the materiality of machine-human intelligence. I am perhaps more interested in the latter for this posting, as I am preparing a presentation with a fellow student on the subject of thinking and the tools that are supposed to augment our native intelligence by allowing us access to all the 'accessible' knowledge and information of the world. I would then like to extend this thesis to gesture lightly, at the moment of this posting, toward the evolution of epistemology, and to ask whether any of the methodology we are starting to interrogate here will be a fruitful way to think about the history of human consciousness in terms of the history of ontological and epistemological developments.


1. J.C.R. Licklider is perhaps one of the earliest advocates of the symbiotic existence of man and machine, and thus Andy Clark, who discussed the extended mind in his "Supersizing the Mind," was certainly not advocating anything new when he talked about how a tool, from the blind man's cane (I think this was Nagel's earlier example, but correct me if I am wrong) to a robot, could take over all the menial, tedious work of intricate calculations and the storage of large amounts of data for the many steps to come. Licklider also talks about artificial intelligence in terms of the mechanically extended man (in today's parlance, perhaps the digitally extended man, or the cyborg). However, his vision of this system seems mostly confined to the solving of predetermined and preformulated problems; in other words, to analytical problem-solving. This would of course include 'creative' work that still operates within the rules and strictures of the logical linguistic-semantic system. Hence, as Licklider himself admits, he conceives of these extending tools as performing a slave-function, or what he calls clerical duties. As he argues under the subheading "Separable Functions of Men and Computers in the Anticipated Symbiotic Association" of "Man-Computer Symbiosis,"
...the computer will serve as a statistical-inference, decision-theory, or game-theory machine to make elementary evaluations of suggested courses of action whenever there is enough basis to support a formal statistical analysis. Finally, it will do as much diagnosis, pattern-matching, and relevance recognizing as it profitably can, but it will accept a clearly secondary status in those areas. (7)


Compare this to
"Men are flexible, capable, of  'programming themselves contingently' on the basis of newly received information. Computing machines are single-minded, constrained by their 'pre-programming.' Men naturally speak redundant languages organized around unitary objects and coherent actions and employing 20 to 60 elementary symbols. Computers 'naturally' speak nonredundant languages, usually with only two elementary symbols and no inherent appreciation either of unitary objects or of coherent actions" (6)


Ok, so we have a very clear distinction between the human and the machine in this whole symbiotic setup, rather like the superficial symbiosis between organisms in biology. I use the word 'superficial' only because we do not necessarily know the depths of the symbiotic processes that go on in biology and thus assume a certain shallow win-win situation, but I am looking to biologists to correct me on this! In my work on human-machine consciousness against the background of the Large Hadron Collider, I am thinking of the subjective condition whereby one may not always be able to neatly enact an agential cut, to borrow the ideas of intra-action and agential realism in Barad's "Meeting the Universe Halfway," between the computer-machine that controls the production of the Higgs boson in the LHC and the human-technician-scientist who tries to process the raw series of data through the mediation of another level of calculating/modelling machine.

Is the machine therefore extending the limitations of the human brain-eye by breaking the limits of the boundary between the conscious and the unconscious, since many activities considered conscious thought are often pushed into the unconscious by the human's inability to hold more than a certain number of thoughts at the same time? (Think of Deep Blue and Garry Kasparov, Kasparov representing an extraordinary human who could hold more than the usual number of thoughts, in the form of different permutations of chess moves, through a certain power of concentration.) The unconscious then becomes the container both of thoughts that have no 'room' in the conscious container and of suppressed thoughts (thoughts denied or withheld through various defensive mechanisms of the brain that we are still studying). Would a more synergistic symbiotic human-machine relationship therefore be able to access such thoughts? And how do we break through the language barrier (hence the communication barrier)? Is it possible to get the machine to access the unconscious language that we use?

The trie memory structure discussed by Licklider (sketched below) is already implemented in various forms in today's modelling software packages. But what if we could bring it down to the hardware level, which I think was also discussed by the likes of Manovich and Bolter and Grusin? I am especially attracted to the relation between mental models and how one can and should improve communication between machines and humans (increasing the symbiotic relation through better connectivity-connection) that he discusses in his "The Computer as a Communication Device" paper. I wonder if there is a psychological entailment here that can be extrapolated to human-human connection (also an important part of my LHC project), where humans, mediated by other humans, attempt to resolve problems; this should be important for helping us interrogate the notion of cognitive dissonance at the human level (science versus ethics, science versus traditional, dogmatic religious interpretations, science versus ideology). Also, would you agree with Licklider's contention that the critical mass in creative endeavour has to be modelled on the individualistic predilections of 'creative problem-solvers'? Still, he did bring up rather original arguments on the role of the computer, whether as the switch in communication or as the mediator.
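
Since the trie memory structure comes up above, here is a minimal sketch of the idea, in Python for concreteness: each node branches on one elementary symbol, so retrieval walks a key symbol by symbol instead of scanning a flat store. The class and method names are my own illustration, not Licklider's.

```python
class TrieNode:
    def __init__(self):
        self.children = {}    # one branch per elementary symbol
        self.value = None     # payload stored at the end of a full key

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, key, value):
        node = self.root
        for symbol in key:    # walk/extend the structure one symbol at a time
            node = node.children.setdefault(symbol, TrieNode())
        node.value = value

    def lookup(self, key):
        node = self.root
        for symbol in key:
            if symbol not in node.children:    # key was never stored
                return None
            node = node.children[symbol]
        return node.value

t = Trie()
t.insert("higgs", "candidate event record")
print(t.lookup("higgs"))    # -> candidate event record
print(t.lookup("hig"))      # -> None (a prefix alone stores no value)
```
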
Licklider did an important service in differentiating "informational housekeeping" from "deep-modelling" (maybe I should therefore get people to read that great piece of fiction on the subject, "Turing" by Christos Papadimitriou). "Deep-modelling" is something we aspire to with our fantasy of the android and the bot that approximate the human, but is this the kind of deep-modelling we want to aspire to, since deep-modelling is supposed to help us confront ourselves at a deeper decision-making level, beyond mathematical choice-theory (right?)? The message processors that Licklider points to may (or may not) be a key to this issue, if we are to deconstruct the meaning of "message processors." Would they function at a paradigmatic or a syntagmatic level? Would they be connotative or denotative? Should they also perform the analytic function of the therapist at the couch?


N.B. Incidentally, my friend brought to the table the question of hallucinogens as augmenters of conscious processing. Could such drugs, if they could be perfected, trigger particular receptors in the brain to help us access our infinite memory, through all its cobwebs and noise, so that we can begin to imprint information on our brains the way people with photographic memory do?


2. The "Augmenting Human Intellect: A Conteptual Framework" article prepared by D.C. Engelbart for the the director of information sciences of the Air Force Office of Scientific Research in Washington 25, D.C.  is an even larger manuscript to get through. He rides a lot on work done by Bush and also Licklider, and is perhaps more material in his invocation of the human intelligence than either Bush or Licklider. What he did is to expound more intensively on the structuring details which he presents as the 'inter-knit, hierarchical structure'. He of course gave a terminology to the matrix of method and practices H-LAM/T system (Human using Language, Artifacts, Methodology, in which he is Trained) that has the capability for "explicit-human process capabilities" and "explicit-artifact process capabilities" and of course the "composite process capabilities" which stems from the combination of both (and this will inform Engelbart explication of the Research Program. Maybe I should provide a summary of his long report here, stemming from his rhetorical (but also rigorous attempt to argue for intellect augmentation tools by detailing the various conceptual structures that one will need to work through; general, mental, concept (Engelbart since concept as a refinement of the mental), symbol, process, physical, interdependence and regeneration. Engelbart constructs the symbiotic process as the interface between the human and the artifact. Engelbart tries to connect the former with arguments on how the human mind manipulates concepts and symbols from the abstract to the tangible level, whereas manual, external and symbol manipulation that are constructed as externalized activities. I don't know a lot about Whorfian linguistics, nor do I wish to figure out at this point how fruitful is it as a frame of interrogation, but I'll just trust Engelbart's interpretation of it for now
The Whorfian hypothesis states that the world view of a culture is limited by the structure of the language which that culture uses. But there seems to be another factor to consider in the evolution of language and human reasoning ability. We offer the following hypothesis, which is related to the Whorfian hypothesis: Both the language used by a culture, and the capability for effective intellectual activity are directly affected during their evolution by the means by which individuals control the external manipulation of symbols. (For identification, we will refer to this as the Neo-Whorfian hypothesis.) If the Neo-Whorfian hypothesis could be proved readily, and if we could see how our means of externally manipulating symbols influence both our language and our way of thinking, then we would have a valuable instrument for studying human-augmentation possibilities. For the sake of discussion, let us assume the Neo-Whorfian hypothesis to be true, and see what relevant deductions can be made. If the means evolved for an individual's external manipulation of his thinking-aid symbols indeed directly affect the way in which he thinks, then the original Whorfian hypothesis would offer an added effect. The direct effect of the external-symbol-manipulation means upon language would produce an indirect effect upon the way of thinking via the Whorfian hypothesis linkage. There would then be two ways for the manner in which our external symbol manipulation was done to affect our thinking.
However, many of the processes discussed by Engelbart, as well as his narrative of Joe the techie and the demonstration of the uber-tools Joe uses, do not tell us much about conditions and processes outside the interior world of the human's attempt to articulate thoughts and desires, and the tools built to enhance or facilitate that process. So there is no real extrapolation to the external. As Engelbart himself admits, thought processes are modified to suit the realities of the tools used. So do the tools really augment the intellect, or does the brain merely adapt to the needs of the tools? Of course, it seems easier to edit a film when we can easily undo our mistakes or make a combination of different versions and save them under different filenames, and it is easier to rearrange our confused and nonlinear thoughts after having first written them in an image/word-association format that does not follow any logical argument sequence. But I am still unsure how that constitutes augmenting one's capacity to do more complicated intellectual work; it seems only to demonstrate an ability to free the human from tedious tasks unsuited to his/her temperament. Perhaps the biggest question is: how do these tools allow us to rethink the significance and use of mental models? I don't feel that this question is quite answered in Engelbart's paper. Neither does he go into the details of tool failure (since we now know that many hours are sometimes spent troubleshooting a tiny glitch in the system, mainly because we do not necessarily know at once what the source of the system's failure is, whether in the computer or in the LHC). Let me include a summary of the paper's program below:


This report has treated one over-all view of the augmentation of human intellect. In the report the following things have been done: (1) An hypothesis has been presented. (2) A conceptual framework has been constructed. (3) A "picture" of augmented man has been described. (4) A research approach has been outlined. These aspects will be reviewed here briefly:
  • An hypothesis has been stated that the intellectual effectiveness of a human can be significantly improved by an engineering-like approach toward redesigning changeable components of a system.
  •  A conceptual framework has been constructed that helps provide a way of looking at the implications and possibilities surrounding and stemming from this hypothesis. Briefly, this framework provides the realization that our intellects are already augmented by means which appear to have the following characteristics:
      • The principal elements are the language, artifacts, and methodology that a human has learned to use.
      • The elements are dynamically interdependent within an operating system.
      • The structure of the system seems to be hierarchical, and to be best considered as a hierarchy of process capabilities whose primitive components are the basic human capabilities and the functional capabilities of the artifacts - which are organized successively into ever-more-sophisticated capabilities.
      • The capabilities of prime interest are those associated with manipulating symbols and concepts in support of organizing and executing processes from which are ultimately derived human comprehension and problem solutions.
      • The automation of the symbol manipulation associated with the minute-by-minute mental processes seems to offer a logical next step in the evolution of our intellectual capability.
  • A picture of the implications and promise of this framework has been described, based upon direct human communication with a computer. Here the many ways in which the computer could be of service, at successive levels of augmented capability, have been brought out. This picture is fanciful, but we believe it to be conservative and representative of the sort of rich and significant gains that are there to be pursued.
  • An approach has been outlined for testing the hypothesis of Item (1) and for pursuing the "rich and significant gains" which we feel are promised. This approach is designed to treat the redesign of a capability hierarchy by reworking from the bottom up, and yet to make the research on augmentation means progress as fast as possible by deriving practically usable augmentation systems for real-world problem solvers at a maximum rate. This goal is fostered by the recommendation of incorporating positive feedback into the research development - i.e., concentrating a good share of the basic research attention upon augmenting those capabilities in a human that are needed in the augmentation research workers. The real-world applications would be pursued by designing a succession of systems for specialists, whose progression corresponds to the increasing generality of the capabilities for which coordinated augmentation means have been evolved. Consideration is given in this rather global approach to providing potential users in different domains of intellectual activity with the basic general-purpose augmentation system from which they themselves can construct the special features of a system to match their job, and their ways of working - or it could be used on the other hand by researchers who want to pursue the development of special augmentation systems for special fields.
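
As an aside, Engelbart's "hierarchy of process capabilities" can be made a little more concrete with a toy sketch: composite capabilities are compositions of explicit-human and explicit-artifact primitives, and composites themselves compose further up the hierarchy. The function names and composition scheme below are my own illustration, not Engelbart's notation.

```python
from typing import Callable

Capability = Callable[[str], str]

def human_judgement(task: str) -> str:
    # an explicit-human process capability: judging, formulating, deciding
    return f"human-judged({task})"

def artifact_processing(task: str) -> str:
    # an explicit-artifact process capability: storing, copying, computing
    return f"machine-processed({task})"

def composite(*stages: Capability) -> Capability:
    # a composite process capability is a structured composition of
    # lower-level capabilities; nesting composites yields the hierarchy
    def run(task: str) -> str:
        for stage in stages:
            task = stage(task)
        return task
    return run

draft = composite(human_judgement, artifact_processing)          # write, then typeset
revise = composite(draft, human_judgement, artifact_processing)  # composites compose further
print(revise("report outline"))
# -> machine-processed(human-judged(machine-processed(human-judged(report outline))))
```
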





3. Alan Kay's arguments, on the other hand, seem to be centered on design and how it can appeal to the human's mental models, which he designates as the doing mentality, the image mentality and the symbolic mentality. He cites Bruner's multiple-mentality model, which tries to combine different mental-model approaches to problem-solving. Kay seems to be looking for a way to get rid of all the interference among the mentalities and to concentrate attention on the mentalities that could actually do the learning, which thus focuses more strongly on the environment. The main jobs of the three mentalities are:


enactive - helps us identify our location.
iconic - enables us to recognize, compare, configure, concretize.
symbolic - puts into coherent form a long chain of usually abstract reasoning.


Kay suggests that the most intuitive way to use multiple windows in the interface is to make them modeless, which he equates with getting rid of "clumsy command syntax." Do you agree that a GUI-intensive interface is better than learning the ABCs of the command-syntax-intensive UNIX OS?
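
To make the modal/modeless contrast concrete, here is a toy sketch, assuming (as an illustration, not Kay's code) a keystroke-driven editor: in the moded version the same key means different things depending on hidden state, while in the modeless version a gesture always means the same thing.

```python
# Moded interaction: the meaning of a keystroke depends on the editor's
# current mode, so the user must track hidden state.
class ModedEditor:
    def __init__(self):
        self.mode = "command"
        self.text = []

    def key(self, k):
        if self.mode == "command":
            if k == "i":               # 'i' here means "enter insert mode";
                self.mode = "insert"   # other keys are silently ignored
        else:                          # insert mode
            if k == "\x1b":            # Escape returns to command mode
                self.mode = "command"
            else:                      # any other key is literal text
                self.text.append(k)

# Modeless interaction: a gesture always means the same thing in any
# window, so there is no command syntax or mode to remember.
class ModelessWindow:
    def __init__(self):
        self.text = []

    def type(self, k):                 # typing always inserts
        self.text.append(k)

m = ModedEditor()
for k in "hi":                         # 'h' is dropped, 'i' only switches modes
    m.key(k)
w = ModelessWindow()
for k in "hi":                         # every keystroke is recorded
    w.type(k)
print(m.text, w.text)                  # [] vs ['h', 'i']
```
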


If I wish to work with a gaming and virtual environment as a way of exploring the LHC, should I therefore be thinking of building a modeless system as a way of removing noise from the mental models I wish to build for my project?


These are my analyses, summaries, and questions based on my readings of these pioneering works in computer science and information systems, works whose visions have not all been completely realized, even if they are more familiar today than they once were.
