Directory:Jon Awbrey/Papers/Inquiry Driven Systems : Part 4






Part 4. Discussion of Inquiry

The subject matter under review is assigned the name “inquiry”, a name that is presently both general and vague. The generality is essential, marking the actual extension and the eventual coverage that the name is intended to have. The vagueness is incidental, hinging on the personal concept of the subject matter and the prevailing level of comprehension that relates an interpreter of the name to the object of its indication. As the investigation proceeds, it is hoped that the name can become as general as it is meant to be, but not forever remain so vague.

In regarding and presenting this subject, I need the freedom to adopt any one of several views. These are the perspectives that I call the “classical” or syllogistic, the “pragmatic” or sign theoretic, and the “dynamical” or system theoretic points of view. Each perspective or point of view is supported by a corresponding framework or a supply of resources, the intellectual tools that allow a person taking up such a view to develop and to share a picture of what can be seen from it.

The situation with respect to these different views can be described as follows. The part of the subject that can be seen in each view appears to make sense, if taken by itself, but it does not appear to be entirely consistent with all that is obvious from the other perspectives, and all of these pictures put together are almost certainly incomplete in regard to the subject they are meant to depict. Although the various pictures can be presented in the roughly historical order of their development, it is a mistaken view of their progression to think that the later views can simply replace the former pictures. In particular, if the classical view is taken as an initial approximation to its subject, then it can be regarded as relatively complete with respect to this intention, but both of the later frameworks, that build on and try to reform this basis, are very much works in progress and far from being completed projects. Taken in order of historical development, each succeeding view always promises to keep all that is good from the preceding points of view, but the current state of development is such that these claims have to be taken as promissory notes, not yet matured and not yet due.

In accord with this situation, I would like to be able to take up any one of these views at any point in the discussion and to move with relative ease among the different pictures of inquiry that they frequently show. Along with this ability, I need to have ways to put their divergent and varying views in comparison with each other, to reflect on the reasons for their distinctions and variations, and overall to have some room to imagine how these views might be corrected, extended, reconciled, and integrated into a coherent and a competent picture of the subject.

This part of the discussion cannot be formal or systematic. It is not intended to argue for any particular point of view, but only to introduce some of the language, ideas, and issues that surround the topic of inquiry. No amount of forethought or premeditation that I can muster would be sufficient to impose an alien organization on this array. The most that I can hope to bring to this forum is to seek clarification of some of the terms and to pursue the consequences of some of the axioms that are in the air. Nor do I expect the reader to be disarmed by this apology, at least, not yet, and never unilaterally, since it is obvious that my own point of view, however unformed and inarticulate it may be, cannot help but affect the selection of problems, subtopics, and suggested angles of approach.

Taking all of these factors into account, the best plan that I can arrange for presenting the subject matter of inquiry is as follows:

First, I describe the individual perspectives and their corresponding frameworks in very general terms, but only insofar as they bear on the subject of inquiry, so that each way of looking at the subject is made available as a resource for the discussion that follows.

Next, I discuss the context of inquiry, treating it as a general field of observation in which the task is to describe an interesting phenomenon or a problematic process. There are a number of dilemmas that arise in this context, especially when it comes to observing and describing the process of inquiry itself. Since these difficulties threaten to paralyze the basic abilities to observe, to describe, and to reflect, a reasonable way around their real obstacles or through their apparent obstructions has to be found before proceeding.

Afterwards, I consider concrete examples of inquiry activities, using any combination of views that appears to be of service at a given moment of discussion, and pointing out problems that call for further investigation.

4.1. Approaches to Inquiry

Try as I might, I do not see a way to develop a theory of inquiry from nothing: To begin from a point where there is nothing to question, to strike out in a given direction without putting anything of consequence at stake, and to trace an unbroken, forward course by following steps that are never unsure. Acquiring a theory of inquiry is not, in short, a purely deductive exercise.

At the risk of being wrong, I am ready to venture that a theory of inquiry is not to be gained for nothing. If I try to base this claim on the evidence of all previous attempts, in both my own and others' trials, having already failed, then it is certain only that no positive proof can arise from so negative a recommendation. Acquiring a theory of inquiry is not, in sum, a purely inductive exercise.

4.1.1. The Classical Framework : Syllogistic Approaches

4.1.2. The Pragmatic Framework : Sign-Theoretic Approaches

I would like to introduce a pair of ideas from pragmatism that can help to address the issues of knowledge and inquiry in an integrated way.

The first idea is that knowledge is a product of inquiry. The impact of this idea is that one's interest in knowledge shifts to an interest in the process of inquiry that is capable of yielding knowledge as a result. In the pragmatic perspective, the theory of knowledge, or epistemology, is incorporated within a generative theory of inquiry. The result is a theory of inquiry that treats it as a general form of conduct, that is, as a dynamic process with a deliberate purpose.

The second idea is that all thought takes place in signs. This means that all thinking occurs within a general representational setting that is called a "sign relation". As a first approximation, a sign relation can be thought of as a triadic relation or a three-place transaction that exists among the various domains of objects, signs, and ideas that are involved in a given situation. For example, suppose that there is a duck on the lake (this is an object); one refers to it by means of the word "duck" (this is a sign) and one has an image of the duck in one's mind (this is an idea).

Since an inquiry is a special case of a thought process, an activity that operates on signs and ideas in respect of certain objects, this means that the theory of inquiry and the theory of sign relations are very tightly integrated within this point of view, and are almost indistinguishable. Putting the idea that knowledge is a product of inquiry together with the idea that inquiry takes place within a sign relation, one can even say that the inquiry itself, or the production of knowledge, is just the transformation of a sign relation.

Generally speaking, a transformation of a sign relation allows any number of objects, signs, and ideas that are involved in a given situation to be engaged in a process of change. For example, adding a new word to one's vocabulary, such as the word "mallard" for that which one formerly called a "duck", is just one of many ways that a sign relation can be transformed.
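
By way of illustration only, a sign relation can be modeled as a finite set of (object, sign, interpretant) triples, and the "mallard" example then appears as a transformation that merely enlarges this set while the object stays fixed. The names in the following sketch are hypothetical and serve only to restate the duck example in concrete terms.

    # Illustrative sketch: a sign relation as a finite set of triples, plus one
    # very simple pragmatic transformation that adds a new sign for an object.
    sign_relation = {
        ("duck_on_lake", "duck", "idea_of_duck"),
    }

    def add_sign(relation, obj, sign, interpretant):
        """Return a transformed sign relation containing one more triple."""
        return relation | {(obj, sign, interpretant)}

    enlarged = add_sign(sign_relation, "duck_on_lake", "mallard", "idea_of_mallard")
    print(sorted(s for (o, s, i) in enlarged))   # ['duck', 'mallard']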

Constant references to "transformations of sign relations", or else to "sign relational transformations", can eventually become a bit unwieldy, and so I assign them the briefer name of "pragmatic transformations". Considered in their full generality, the potential array of pragmatic transformations that one might find it necessary to consider can be very general indeed, exhibiting an overwhelming degree of complexity. To deal with this level of complexity, one needs to find strategies for approaching it in stages. Two common tactics are: (1) to classify special types of pragmatic transformations in terms of which kinds of entities are changing the most, and (2) to focus on special cases of pragmatic transformations in which one class of entities is fixed.

If one intends to study processes of development that are every bit as general as "cultural transformations", in which all of the artifacts, symbols, and values are capable of being thrown into a state of flux, then I suggest that pragmatic transformations are a relatively generic but a reasonably well defined form of intermediate case, in other words, a suitable type of transitional object.

What I just gave was the popular version of the theory of signs. This much was already evident in Aristotle's work On Interpretation and was probably derived from Stoic sources. It is still the most natural and intuitive way to approach the idea of a sign relation. But within the frame of pragmatism proper, a number of changes need to be worked on the idea of a sign relation, in order to make it a more exact and more flexible instrument of thought.

From a pragmatic perspective, ideas are taken to be signs in the mind. In this role they come to serve as special cases of "interpretant signs", those that follow other signs in the ongoing process of interpretation. As far as their essential qualities go, signs and ideas can be classed together, though a sign and its interpretant can still be distinguished by their roles in relation to each other. At this point, the reader is probably itching to ask: Where is the interpreter in all of this? Ultimately, signs and ideas can be recognized as features that affect or indirectly characterize the state of the interpretive agent, and their specifications can even be sharpened up to the point that one can say it is the states of the interpreter that are the real signs and interpretants in the process. This observation, that Peirce summed up by saying that the person is a sign, has consequences for bringing about a synthesis between the theory of sign relations and the theory of dynamic systems.

4.1.3. The Dynamical Framework : System-Theoretic Approaches

“Inquiry” is a word in common use for a process that resolves doubt and creates knowledge. Computers are involved in inquiry today, and are likely to become more so as time goes on. The aim of my research is to improve the service that computers bring to inquiry. I plan to approach this task by analyzing the nature of inquiry processes, with an eye to those elements that can be given a computational basis.

I am interested in the kinds of inquiries which human beings carry on in all the varieties of learning and reasoning from everyday life to scientific practice. I would like to design software that people could use to carry their inquiries further, higher, faster. Needless to say, this could be an important component of all intelligent software systems in the future. In any application where a knowledge base is maintained, it will become more and more important to examine the processes that deliver the putative knowledge.

4.1.3.1. Inquiry and Computation

Three questions immediately arise in the connection between inquiry and computation. As they reflect on the concept of inquiry, these questions have to do with its integrity, its effectiveness, and its complexity.

  1. Integrity. Do all the activities and all the processes that are commonly dubbed "inquiry" have anything essential in common?
  2. Effectiveness. Can any useful parts of these so-called inquiries be automated in practice?
  3. Complexity. Just how deep is the analysis, the disassembly, or the "takedown" of inquiry that is required to reach the level of routine steps?

The issues of effectiveness and complexity are discussed throughout the remainder of this text, but the problem of integrity must be dealt with immediately, since doubts about it can interfere with the very ability to use the word "inquiry" in this discussion.

Thus, I must examine the integrity, or well-definedness, of the very idea of inquiry, in other words, "inquiry" as a general concept rather than a catch-all term. Is the faculty of inquiry a principled capacity, leading to a disciplined form of conduct, or is it only a disjointed collection of unrelated skills? As it is currently carried out on computers, inquiry includes everything from database searches, through dynamic simulation and statistical reasoning, to mathematical theorem proving. Insofar as these tasks constitute specialized efforts, each one needs software that is tailored to the individual purpose. To the extent that these different modes of investigation contribute to larger inquiries, present methods for coordinating their separate findings are mostly ad hoc and still a matter of human skill. Thus, one can question whether the very name "inquiry" succeeds in referring to a coherent and independent process.

Do all the varieties of inquiry have something in common, a structure or a function that defines the essence of inquiry itself? I will say "yes". One advantage of this answer is that it brings the topic of inquiry within human scope, and also within my capacity to research. Without this, the field of inquiry would be impossible for any one human being to survey, because a person would have to cover the union of all the areas that employ inquiry. By grasping what is shared by all inquiries, I can focus on the intersection of their generating principles. Another benefit of this alternative is that it promises a common medium for inquiry, one in which the many disparate pieces of our puzzling nature may be bound together in a unified whole.

When I look at other examples of instruments that people have used to extend their capacities, I see that two questions must be faced. First, what are the principles that enable human performance? Second, what are the principles that can be augmented by available technology? I will refer to these two issues as the question of original principles and the question of technical extensions, respectively. Following this model leads me to examine the human capacity for inquiry, asking which of its principles can be reflected in the computational medium, and which of its faculties can be sharpened in the process. It is not likely that everybody with the same interests and applications would answer these questions the same way, but I will describe how I approach them, what has resulted so far, and what directions I plan to explore next.

The focus of my work will narrow in three steps. First, I will concentrate on the design of intelligent software systems that support inquiry. Then, I will select mathematical systems theory as an indispensable tool, both for the analysis of inquiry itself and for the design of programs to support it. Finally, I will develop a theory of qualitative differential equations, implement methods for their computation and solution, and apply the resulting body of techniques to two kinds of recalcitrant problems, (1) those where an inquiry must begin with too little information to justify quantitative methods, and (2) those where a complete logical analysis is necessary to identify critical assumptions.

4.1.3.2. Inquiry Driven Systems

The stages of work just described lead me to introduce the concept of an "inquiry driven system". In rough terms, this type of system is designed to integrate the functions of a data driven adaptive system and a rule driven intelligent system. The idea is to have a system whose adaptive transformations are determined, not just by learning from observations alone, or else by reasoning from concepts alone, but by the interactions that occur between these two sources of knowledge. A system which combines different contributions to its knowledge base, to say nothing of the mixed modes of empirical and rational knowledge, will find that its next problem lies in reconciling the mismatches that arise between these sources. Thus, one arrives at the concept of an adaptive knowledge base whose changes over time are driven by the differences that it encounters between what is observed in the data it gathers and what is predicted by the laws it knows. This sounds, at the proper theoretical distance, like an echo of an error-controlled cybernetic system. Moreover, it falls in line with the general description of scientific inquiry. Finally, it raises the interesting possibility that good formulations of these "differences of opinion" might allow one to find differential laws for the time evolution of inquiry processes.
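
The cybernetic reading of this idea can be suggested with a minimal sketch, in which a single adjustable "law" is revised only when its predictions and the observed data disagree. The linear model and the update rule below are assumptions made purely for illustration, not features of any particular design.

    # Illustrative sketch: an adaptive law (here a single gain) is revised only
    # when prediction and observation differ; the difference drives the change.
    def run_inquiry_loop(observations, gain=1.0, rate=0.1, tolerance=1e-3):
        """Adjust the gain whenever the predicted and observed values disagree."""
        for x, observed in observations:
            predicted = gain * x                 # what the current law expects
            difference = observed - predicted    # the discrepancy driving change
            if abs(difference) > tolerance:
                gain += rate * difference * x    # revise the law, not the data
        return gain

    data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]   # (input, observed output) pairs
    print(run_inquiry_loop(data))                  # the gain drifts toward about 2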

There are several implications of my approach that I need to emphasize. Many distractions can be avoided if I continue to guide my approach by the two questions raised above, of principles and extensions, and if I guard against confusing what they ask and do not ask. The issues that surround these points, concerning the actual nature and the possible nurture of the capacity for inquiry, can be taken up shortly. But first I need to deal with a preliminary source of confusion. This has to do with the two vocabularies, the language of the application domain, that talks about the higher order functions and intentions of software users, and the language of the resource domain, that describes the primitive computational elements to which software designers must try to reduce the problems that confront them. I am forced to use, or at least to mention, both of these terminologies in my effort to bridge the gap between them, but each of them plays a different role in the work.

In studies of formal specifications the designations "reduced language" and "reducing language" are sometimes used to discuss the two roles of language that are being encountered here. It is a characteristic of some forms of reductionism to call a language "reduced" simply because it is intended to be reduced and long before it is actually reduced, but aside from that this language of "reduced" and "reducing" can still be useful. The reduced language, or the language that is targeted to be reduced, is the language of the application, practice, or target domain. The reducing language, or the language that is intended to provide the sources of explanation and the resources for reduction, is the language of the resource, method, or base domain. I will use all of these terms, with the following two qualifications.

First, I need to note a trivial caution. One's sense of "source" and "target" will often get switched depending on one's direction of work. Further, these terms are reserved in category theory to refer to the domain and the codomain of a function, mapping, or transformation. This will limit their use, in the above sense, to informal contexts.

Now, I must deal with a more substantive issue. In trying to automate even a fraction of such grand capacities as intelligence and inquiry, it is seldom that we totally succeed in reducing one domain to the other. The reduction attempt will usually result in our saying something like this: That we have reduced the capacity A in the application domain to the sum of the capacity B in our base domain plus some residue C of unanalyzed abilities that must be called in from outside the basic set. In effect, the residual abilities are assigned to the human side of the interface, that is, they are attributed to the conscious observation, the common sense, or the creative ingenuity of users and programmers.

In the theory of recursive functions, this situation is expressed by saying that A is a "relatively computable" function, more specifically, that A is computable relative to the assumption of an "oracle" for C. For this reason, I usually speak of "relating" a task A to a method B, rather than fully "reducing" it. A measure of initial success is often achieved when a form of analysis relates or connects an application task to a basic method, and this usually happens long before a set of tasks are completely reduced to a set of methods. The catch is whether the basic set of resources is already implemented, or is just being promised, and whether the residual ability has a lower complexity than the original task, or is actually more difficult.
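
The recursion-theoretic picture can be made a little more tangible with a sketch in which the basic method B answers what it can and defers the residue to an oracle C supplied from outside; all of the functions named here are hypothetical stand-ins.

    # Informal sketch of relative computability: task A is completed by a basic
    # method B together with an oracle C that is consulted for the residue.
    def task_A(problem, method_B, oracle_C):
        """Solve what the basic method can, deferring the rest to the oracle."""
        partial = method_B(problem)
        if partial is not None:          # the basic resources sufficed
            return partial
        return oracle_C(problem)         # residue assigned to the human side

    method_B = lambda p: p * 2 if isinstance(p, int) else None   # hypothetical
    oracle_C = lambda p: f"human judgment required for {p!r}"    # hypothetical
    print(task_A(21, method_B, oracle_C))        # 42, handled without the oracle
    print(task_A("ambiguous case", method_B, oracle_C))          # deferred to C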

At this point I can return to the task of analyzing and extending the capacity for inquiry. In order to enhance a human capacity it is first necessary to understand its process.

To extend a human capacity we need to know the critical functions which support that ability, and this involves us in a theory of the practice domain. This means that most of the language describing the target functions will come from sources outside the areas of systems theory and software engineering. The first thoughts that we take for our specs will come from the common parlance that everyone uses to talk about learning and reasoning, and the rest will come from the special fields which study these abilities, from psychology, education, logic and the philosophy of science. This particular hybrid of work easily fits under the broad banner of artificial intelligence, yet I need to repeat that my principal aim is not to build any kind of autonomous intelligence, but simply to amplify our own capacity for inquiry.

There are many well-reasoned and well-respected paradigms for the study of learning and reasoning, any one of which I might have chosen as a blueprint for the architecture of inquiry. The model of inquiry that works best for me is one with a solid standing in the philosophy of science and whose origins are entwined with the very beginnings of symbolic logic. Its practical applications to education and social issues have been studied in depth, and aspects of it have received attention in the AI literature (Refs 1-8). This is the pragmatic model of inquiry, formulated by C.S. Peirce from his lifelong investigations of classical logic and experimental reasoning. For my purposes, all this certification means is that the model has survived many years of hard knocks testing, and is therefore a good candidate for further trial. Since we are still near the beginning of efforts to computerize inquiry, it is not necessary to prove that this is the best of all possible models. At this early stage, any good ideas would help.

My purpose in looking to the practical arena of inquiry and to its associated literature is to extract a body of tasks that are in real demand and to start with a stock of plausible suggestions for ways to meet their requirements. Some of what one finds depicted in current pictures of learning and reasoning may turn out to be inconsistent or unrealizable projections, beyond the scope of any present methods or possible technology to implement. This is the very sort of thing that one ought to be interested in finding out! It is one of the benefits of submitting theories to trial by computer that we obtain this knowledge. Of course, the fact that no one can presently find a way to render a concept effectively computable does not prove that it is unworkable, but it does place the idea in a different empirical class.

This should be enough to say about why it is sometimes necessary to cite the language of other fields and to critically reflect on the associated concepts in the process of doing work within the disciplines of systems theory and software engineering. To sum it up, it is not a question of entering another field or absorbing its materials, but of finding a good standpoint on one's own grounds from which to tackle the problems that the outside world presents.

Sorting out which procedures are effective in inquiry and finding out which functions are feasible to implement is a job that can be done better in the hard light demanded by formalized programs. But there is nothing wrong in principle with a top down approach, so long as one does come down, that is, so long as one eventually descends from a level of purely topical reasoning. I will follow the analogy of a recursive program that progresses down discrete steps to its base, stepwise refining the topics of higher level specifications to arrive at their more concrete details. The best reinforcement for such a program is to maintain a parallel effort that builds up competencies from fundamentals.

Once I have addressed the question of what principles enable human inquiry, I am brought to the question of how I would set out to improve the human capacity for inquiry by computational means.

Within the field of AI there are many ways of simulating and supporting learning and reasoning that would not involve me in systems theory proper, that is, in reflecting on mathematically defined systems or in considering the dynamics that automata trace out through abstract state spaces. However, I have chosen to take the system-theoretic route for several reasons, which I will now discuss.

First, if we succeed in understanding intelligent inquiry in terms of system-theoretic properties and processes, it equips this knowledge with the greatest degree of transferability between comparable systems. In short, it makes our knowledge robust, and not narrowly limited to a particular instantiation of the target capacity.

Second, if we organize our thinking in terms of a coherent system or integrated agent which carries out inquiries, it helps to manage the complexity of the design problem by splitting it into discrete stages. This strategy is especially useful in dealing with the recursive or reflexive quality that bedevils all such inquiries into inquiry itself. This aspect of self-application in the problem is probably unavoidable, due to the following facts. Human beings are complex agents, and any system likely to support significant inquiry is bound to surpass the complexity of most systems we can fully analyze today. Research into complex systems is one of the jobs that will depend on intelligent software tools to advance in the future. For this we need programs that can follow the drift of inquiry and perhaps even scout out fruitful directions of exploration. Programs to do this will need to acquire a heuristic model of the inquiry process they are designed to assist. And so it goes. Programs for inquiry will pull themselves up by their own bootstraps.

Taking as given the system-theoretic approach from now on, I can focus and rephrase my question about the technical enhancement of inquiry. How can we put computational foundations under the theoretical models of inquiry, at least, the ones we discover to be accessible? In more detail, what is the depth and content of the task analysis that we need to relate the higher order functions of inquiry with the primitive elements given in systems theory and software engineering? Connecting the requirements of a formal theory of inquiry with the resources of mathematical systems theory has led me to the concept of inquiry driven systems.

The concept of an inquiry driven system is intended to capture the essential properties of a broad class of intelligent systems, and to highlight the crucial processes which support learning and reasoning in natural and cultural systems. The defining properties of inquiry driven systems are discussed in the next few paragraphs. I then consider what is needed to supply these abstractions with operational definitions, concentrating on the terms of mathematical systems theory as a suitable foundation. After this, I discuss my plans to implement a software system which is designed to help analyze the qualitative behavior of complex systems, inquiry driven systems in particular.

An inquiry driven system has components of state, accessible to the system itself, which characterize the norms of its experience. The idea of a norm has two meanings, both of which are useful here. In one sense, we have the descriptive regularities which are observed in summaries of past experience. These norms are assumed to govern the expectable sequences of future states, as determined by natural laws. In another sense, we have the prescriptive policies which are selected with an eye to future experience. These norms govern the intendable goals of processes, as controlled by deliberate choices. Collectively, these components constitute the knowledge base or intellectual component of the system.

An inquiry driven system, in the simplest cases worth talking about, requires at least three different modalities of knowledge component, referred to as the expectations, observations, and intentions of the system. Each of these components has the status of a theory, that is, a propositional code which the agent of the system carries along and maintains with itself through all its changes of state, possibly updating it as the need arises in experience. However, all of these theories have reference to a common world, indicating under their varying lights more or less overlapping regions in the state space of the system, or in some derivative or extension of the basic state space.
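
A rough rendering of this arrangement, offered only as an assumption for the sake of concreteness, carries each of the three theories as a set of claims over a shared vocabulary of state features.

    # Sketch: three propositional codes carried by the agent over a common world.
    from dataclasses import dataclass, field

    @dataclass
    class KnowledgeBase:
        expectations: set = field(default_factory=set)   # what the known laws predict
        observations: set = field(default_factory=set)   # what the gathered data show
        intentions:   set = field(default_factory=set)   # what the agent aims to bring about

    kb = KnowledgeBase(
        expectations={"door_closed"},
        observations={"door_open"},
        intentions={"door_closed"},
    )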

The inquiry process is driven by the nature and extent of the differences existing at any time among its principal theories, for example, its expectations, observations, and intentions. These discrepancies are evidenced by differences in the sets of models which satisfy the separate theories. Normally, human beings experience a high level of disparity among these theories as a dissatisfying situation, a state of cognitive discord. For people, the incongruity of cognitive elements is accompanied by an unsettled affective state, in Peirce's phrase, the "irritation of doubt". A person in this situation is usually motivated to reduce the annoying disturbance by some action, all of which activities we may classify under the heading of inquiry processes.

Without insisting on strict determinism, we can say that the inquiry process is lawful if there is any kind of informative relationship connecting the state of cognitive discord at each time with the ensuing state transitions of the system. Expressed in human terms, a difference between expectations and observations is experienced as a surprise to be explained, a difference between observations and intentions is experienced as a problem to be solved. We begin to understand a particular example of inquiry when we can describe the relation between the intellectual state of its agent and the subsequent action that the agent undertakes.
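
Under the same simple rendering, the two kinds of discrepancy can be read off directly from set differences between the theories; the helper below is only a sketch of that reading.

    # Sketch: expectation/observation differences register as surprises, and
    # observation/intention differences register as problems.
    def discrepancies(expectations, observations, intentions):
        """Classify the differences that drive further inquiry."""
        surprises = expectations ^ observations   # something calls for explanation
        problems  = observations ^ intentions     # something calls for action
        return surprises, problems

    print(discrepancies({"door_closed"}, {"door_open"}, {"door_closed"}))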

These simple facts, the features of inquiry outlined above, already raise a number of issues, some of which are open problems that my research will have to address. Given the goal of constructing supports for inquiry on the grounds of systems theory, each of these difficulties is an obstacle to progress in the chosen direction, to understanding the capacity for inquiry as a systems property. In the next few paragraphs I discuss a critical problem to be solved in this approach, indicating its character to the extent I can succeed at present, and I suggest a reasonable way of proceeding.

In human inquiry there is always a relation between cognitive and affective features of experience. We have a sense of how much harmony or discord is present in a situation, and we rely on the intensity of this sensation as one measure of how to proceed with inquiry. This works so automatically that we have trouble distinguishing the affective and cognitive aspects of the irritating doubt that drives the process. In the artificial systems we build to support inquiry, what measures can we take to supply this sense or arrange a substitute for it? If the proper measure of doubt cannot be formalized, then all responsibility for judging it will have to be assigned to the human side of the interface. This would greatly reduce the usefulness of the projected software.

The unsettled state which instigates inquiry is characterized by a high level of uncertainty. The settled state of knowledge at the end of inquiry is achieved by reducing this uncertainty to a minimum. Within the framework of information theory we have a concept of uncertainty, the entropy of a probability distribution, as being something we can measure. Certainly, how we feel about entropy does not enter the equation. Can we form a connection between the kind of doubt that drives human inquiry and the kind of uncertainty that is measured on scales of information content? If so, this would allow important dynamic properties of inquiry driven systems to be studied in abstraction from the affective qualities of the disagreements which drive them. With respect to the measurable aspects of uncertainty, inquiry driven systems could be taken as special types of control systems, where the variable to be controlled is the total amount of disparity or dispersion in the knowledge base of the system.
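
For concreteness, the information-theoretic measure in question is the Shannon entropy of a probability distribution. The short computation below shows the quantity itself, while whether it can stand in for the doubt that drives human inquiry remains exactly the open question posed above.

    # Shannon entropy (in bits): one measurable candidate for the uncertainty
    # that a settled state of knowledge is supposed to reduce to a minimum.
    from math import log2

    def entropy(distribution):
        """H(p) = -sum of p_i * log2(p_i), ignoring zero-probability outcomes."""
        return -sum(p * log2(p) for p in distribution if p > 0)

    print(entropy([0.5, 0.5]))     # 1.0 bit: maximal doubt between two answers
    print(entropy([0.99, 0.01]))   # about 0.08 bits: a nearly settled belief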

The assumption of modularity, that the affective and intellectual aspects of inquiry can be disentangled into separate components of the system, is a natural one to make. Whenever it holds, even approximately, it simplifies the task of understanding and permits the analyst or designer to assign responsibility for these factors to independent modules of the simulation or implementation.

However, this assumption appears to be false in general, or true only in approaching certain properties of inquiry. Other features of inquiry are not completely understandable on this basis. To tackle the more refractory properties, I will be forced to examine the concept of a measure which separates the affective and intellectual impacts of disorder. To the extent that this issue can be resolved by analysis, I believe that it hinges on the characters that make a measure objective, that is, an impression whose value is invariant over many different perspectives and interpretations, as opposed to being the measure of a purely subjective moment, that is, an impression whose value is limited to a special interpretation or perspective.

The preceding discussion has indicated a few of the properties that are attributed to inquiry and its agents and has initiated an analysis of their underlying principles. Now I engage the task of giving these processes operational definitions in the framework of mathematical systems theory.

Consider the inquiry driven system as described by a set of variables:

x1, ... , xn, a1, ... , ar.

The xi are regarded as ordinary state variables and the aj are regarded as variables codifying the state of knowledge with respect to a variety of different questions. Many of the parameters aj will simply anticipate or echo the transient features of state that are swept out in reality by the ordinary variables xi. This puts these information variables subject to fairly direct forms of interpretation, namely, as icons and indices of the ordinary state of the system. However, in order for the system to have a knowledge base which takes a propositional stance with respect to its own state space, other information variables among the aj will have to be used in less direct ways, in other words, made subject to more symbolic interpretations. In particular, some of them will be required to serve as the signs of logical operators.
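
A toy encoding of this split, assumed only for illustration, keeps the ordinary variables and the informational parameters in one structure, with some of the latter echoing features of state and others reserved as signs of logical operators.

    # Sketch: ordinary state variables x1..xn alongside informational
    # parameters a1..ar, some iconic or indexical, some symbolic.
    state = {
        "x1": 3.7, "x2": -0.2,               # ordinary state variables
        "a1": "x1_positive",                  # indexical: echoes a feature of state
        "a2": "x2_positive",                  # indexical: echoes a feature of state
        "a3": ("not", "a2"),                  # symbolic: the sign of a logical operator
    }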

The most general term that I can find to describe the informational parameters aj is to call them "signs". These are the syntactic building blocks that go into constructing the various knowledge bases of the inquiry driven system. Although these variables can be employed in a simple analogue fashion to represent information about remembered, observed, or intended states, ultimately it is necessary for the system to have a formal syntax of expressions in which propositions about states can be represented and manipulated. I have already implemented a fairly efficient way of doing this, using only three arbitrary symbols beyond the set that is used to echo the ordinary features of state.
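
The text does not identify the three extra symbols, so the following is purely a hypothetical illustration of how small a symbol budget can be: feature names together with "(", ")", and "," suffice here to express negation and conjunction, which are jointly adequate for propositional logic. This is an assumed reading for the sketch, not the encoding actually used.

    # Hypothetical syntax for illustration only: "(e)" negates e, and a
    # top-level comma list is read as a conjunction of its parts.
    def split_top(expr):
        """Split on commas that are not nested inside parentheses."""
        parts, depth, current = [], 0, ""
        for ch in expr:
            if ch == "(":
                depth += 1
            elif ch == ")":
                depth -= 1
            if ch == "," and depth == 0:
                parts.append(current)
                current = ""
            else:
                current += ch
        parts.append(current)
        return parts

    def evaluate(expr, features):
        """Evaluate an expression against the set of features that hold."""
        expr = expr.strip()
        parts = split_top(expr)
        if len(parts) > 1:
            return all(evaluate(p, features) for p in parts)
        if expr.startswith("(") and expr.endswith(")"):
            return not evaluate(expr[1:-1], features)
        return expr in features

    print(evaluate("x1_positive,(x2_positive)", {"x1_positive"}))   # True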

A task that remains for future work is to operationalize a suitable measure of difference between alternative propositions about the world, that is, to sort out competing statements about the state space of the system. A successful measure will gauge the differences in objective models and not be overly sensitive to unimportant variations in syntax. This means that its first priority is to recognize logical equivalence classes of expressions, in other words, to discriminate between members of different equivalence classes, but to respond in equal measure to every member of the same equivalence class. This requirement brings the project within the fold of logical inquiry. Along with finding an adequate measure of difference between propositions, it is necessary to specify how these differences can determine, in some measure, the state transitions of an inquiry driven system. At this juncture, a variety of suggestive analogies arise, connecting the logical or qualitative problem just stated with the kinds of questions that are commonly treated in differential geometry and in geometric representations of dynamics.
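
One candidate for such a measure, sketched under the assumption of a small finite set of features, compares the sets of satisfying assignments of two propositions: logically equivalent expressions then receive distance zero automatically, and the size of the symmetric difference of their model sets ignores merely syntactic variation.

    # Sketch: distance between propositions measured on their model sets, so
    # that members of the same logical equivalence class are distance zero.
    from itertools import product

    def models(proposition, features):
        """All truth assignments over the features that satisfy the proposition."""
        return {
            assignment
            for assignment in product([False, True], repeat=len(features))
            if proposition(dict(zip(features, assignment)))
        }

    def distance(p, q, features):
        """Size of the symmetric difference of the two model sets."""
        return len(models(p, features) ^ models(q, features))

    features = ["rain", "wet"]
    p = lambda v: (not v["rain"]) or v["wet"]     # "rain implies wet"
    q = lambda v: v["wet"] or (not v["rain"])     # a syntactic variant of p
    r = lambda v: v["rain"] and not v["wet"]      # a contrary claim
    print(distance(p, q, features))   # 0: equivalent despite different syntax
    print(distance(p, r, features))   # 4: the model sets are disjoint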

4.2. The Context of Inquiry

4.2.1. The Field of Observation

4.2.2. The Problem of Reflection

4.2.3. The Problem of Reconstruction

4.2.4. The Trivializing of Integration

4.2.5. Tensions in the Field of Observation

4.2.6. Problems of Representation and Communication

4.3. The Conduct of Inquiry

4.3.1. Introduction

4.3.2. The Types of Reasoning

4.3.2.1. Deduction
4.3.2.2. Induction
4.3.2.3. Abduction

4.3.3. Hybrid Types of Inference

4.3.3.1. Analogy
4.3.3.2. Inquiry

4.3.4. Details of Induction

4.3.4.1. Learning
4.3.4.2. Transfer
4.3.4.3. Testing

4.3.5. The Stages of Inquiry

