Introduction to Inquiry Driven Systems

MyWikiBiz, Author Your Legacy — Monday July 22, 2024

Author: Jon Awbrey

The following essay is intended to provide readers with background on the pragmatic theory of inquiry and its relationship to the pragmatic theory of signs.

Aspects of Inquiry

“Inquiry” is a word in common use for a process that resolves doubt and creates knowledge. Computers are involved in inquiry today, and are likely to become more so as time goes on. The aim of my research is to improve the service that computers bring to inquiry. I plan to approach this task by analyzing the nature of inquiry processes, with an eye to those elements that can be given a computational basis.

I am interested in the kinds of inquiries which human beings carry on in all the varieties of learning and reasoning from everyday life to scientific practice. I would like to design software that people could use to carry their inquiries further, higher, faster. Needless to say, this could be an important component of all intelligent software systems in the future. In any application where a knowledge base is maintained, it will become more and more important to examine the processes that deliver the putative knowledge.

Preliminary Questions

Three questions immediately arise in connecting inquiry with computation. As they reflect on the very idea of inquiry, they have to do with its integrity, its effectiveness, and its complexity. These questions ask, in their turn, whether all the processes that are dubbed "inquiry" have anything essential in common, whether any useful parts of these processes can be automated in practice, and just how deep an analysis is needed to reach the level of routine steps. The issues of effectiveness and complexity will be discussed throughout the rest of this work, but the problem of integrity must be dealt with immediately, since doubts about it may undermine my warrant to use the title "inquiry" at all.

Thus, we must examine the integrity, or well-definedness, of the very idea of inquiry, that is, "inquiry" as a general concept rather than a catch-all word. Is the faculty of inquiry a principled capacity, leading to a disciplined form of conduct, or is it only a disjointed collection of unrelated skills? As it is carried out on computers today, inquiry includes everything from database searches, through dynamic simulation and statistical reasoning, to mathematical theorem proving. Insofar as these tasks constitute specialized efforts, each of them demands software that is tailored to its individual purpose. Insofar as these different modes of investigation contribute to larger inquiries, our present methods for coordinating their separate findings are mostly ad hoc and still a matter of human skill. Thus, we might question whether the very name "inquiry" succeeds in referring to a coherent and independent process.

Do all the varieties of inquiry have something in common, a structure or a function that defines the essence of inquiry itself? I will say “yes”. One advantage of this answer is that it brings the topic of inquiry within human scope, and also within my capacity to research. Without this, the field of inquiry would be impossible for any one human being to survey, because a person would have to cover the union of all the areas that employ inquiry. By grasping what is shared by all inquiries, I can focus on the intersection of their generating principles. Another benefit of opting for this answer is that it promises a common medium for inquiry, one in which the many disparate pieces of our puzzling nature may be bound together in a unified whole.

When I look at other examples of instruments that people have used to extend their capacities, I see that two questions must be faced. First, what are the principles that enable human performance? Second, what are the principles that can be augmented by available technology? I will refer to these two issues as the question of original principles and the question of technical extensions, respectively. Following this model leads me to examine the human capacity for inquiry, asking which of its principles can be reflected in the computational medium, and which of its faculties can be sharpened in the process. It is not likely that everybody with the same interests and applications would answer these questions the same way, but I will describe how I approach them, what has resulted so far, and what directions I plan to explore next.

Initial Approach

The focus of this work will narrow in three steps:

  • First, I intend to concentrate on the design of intelligent software systems that support inquiry.
  • Next, I will select mathematical systems theory as an indispensable tool, both for the analysis of inquiry itself and for the design of programs to support it.
  • Finally, I plan to develop a theory of qualitative differential equations, implement methods for their computation and their solution, and then apply the resulting body of techniques to two kinds of recalcitrant problems:
    • Situations where an inquiry must begin with too little information to justify quantitative methods.
    • Situations where a complete logical analysis is necessary to identify critical assumptions.

The stages of work just described will gradually lead me to introduce the concept of an "inquiry driven system". In rough terms, this type of system is designed to integrate the functions of data-driven adaptive systems and rule-driven intelligent systems. The idea is to have a system whose adaptive transformations are determined, not by learning from observations alone nor by reasoning from concepts alone, but by the interactions between these two sources of knowledge. A system that combines different contributions to its knowledge base, not to mention the mixed modes of empirical and rational knowledge, will find that its next problem lies in reconciling the mismatches between these sources. Thus, we arrive at the concept of an adaptive knowledge base whose changes over time are driven by the differences that it encounters between what is observed in data and what is predicted by laws. This sounds, at the proper theoretical distance, like an echo of the error-controlled cybernetic system; moreover, it falls into line with classic descriptions of scientific inquiry. Finally, this suggests that good formulations of such "differences of opinion" might allow us to find differential laws for the temporal evolution of inquiry processes.
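To make the idea of a difference-driven adaptive knowledge base a bit more concrete, here is a minimal sketch in Python. It assumes, purely for illustration, that the system's "law" is a two-parameter linear rule and that revision nudges the law in proportion to the gap between prediction and observation; the names predict and revise are my own, not part of any established framework.

```python
def predict(law, state):
    """Apply the current law (a simple linear rule) to forecast an observation."""
    slope, intercept = law
    return slope * state + intercept

def revise(law, state, observed, rate=0.1):
    """Nudge the law's parameters in proportion to the prediction error,
    so the knowledge base changes only where data and law disagree."""
    slope, intercept = law
    error = observed - predict(law, state)
    return (slope + rate * error * state, intercept + rate * error)

# Toy run: the world actually follows s -> 2*s + 1; the law starts ignorant.
data = [(s, 2.0 * s + 1.0) for s in [0.1, 0.3, 0.5, 0.7, 0.9]]
law = (0.0, 0.0)
for _ in range(2000):
    for s, obs in data:
        law = revise(law, s, obs)   # inquiry narrows the difference
```

After enough passes the law's parameters approach (2, 1), at which point prediction matches observation and the driving difference vanishes, the toy analogue of a settled state of belief.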

There are several implications of my approach that I need to emphasize. Many distractions can be avoided if we guide our approach by the two questions that were raised above, of principles and extensions, and if we guard against confounding what they ask with what they do not ask. The issues that surround these points, concerning the actual nature and the possible nurture of the capacity for inquiry, will be taken up shortly. But first I need to deal with a preliminary source of confusion. This arises from the two vocabularies, the language of the application domain, which talks about higher order functions and intentions of software users, and the language of the resource domain, which describes the primitive computational elements to which software designers must try to reduce the problem. We are forced to use, or at least to mention, both of these terminologies in our effort to bridge the gap between them, but each of these languages plays a different role in the work.

In studies of formal specifications the designations "reduced language" and "reducing language" are often used to discuss the two roles that are encountered here, that of the "application", "practice", or "target" domain, on the one hand, and that of the "base", "method", or "(re)source" domain, on the other. I will be using all of these terms, with the following two qualifications.

First, I must note a trivial caution. Our sense of "source" and "target" will often get switched depending on our direction of work. Furthermore, these words are reserved in category theory to refer to the domain and the codomain of an "arrow", that is, a function, a mapping, a morphism, or a transformation. This will limit their use in the above sense to the more informal contexts.

Now, I must deal with a more substantive issue. In attempting to automate a fraction of such grand capacities as intelligence and inquiry, it is seldom that we totally succeed in reducing one domain to the other. The reduction attempt will usually result in our saying something like this: that we have reduced the capacity A in the application domain to the sum of the capacity B in our base domain plus some residue C of unanalyzed abilities that must be called in from outside the basic set. The residual abilities will then be assigned to the human side of the interface, that is, attributed to the conscious observation, common sense, or creative ingenuity of users and programmers. In the theory of recursive functions, we would say that A is "relatively computable", given an "oracle" for C. For this reason, I will often speak of "relating" a task to a method, rather than fully "reducing" it. A measure of initial success is often achieved when we can relate or connect an application task to a basic method, long before we can completely reduce one set of them to the other. The catch will always be whether the basic set of resources has already been implemented, or is just being promised, and whether the residual ability has a lower complexity than the original task, or is actually more difficult in practice.

Model of Inquiry

I can now return to the task of analyzing and extending the capacity for inquiry. Any effort to enhance a human capacity must lead off with a beginning comprehension of its nature and must develop concurrently with an evolving understanding of the underlying process that supports this capacity.

To extend a human capacity we need to know the critical functions that support that ability, and this involves us in a theory of the practice domain. This means that most of the language describing the target functions will come from sources outside the areas of systems theory and software engineering. The first thoughts that we take for our specifications will come from the common parlance that everybody uses to talk about learning and reasoning, and the rest will come from the special fields that study these abilities, from psychology, education, logic, and the philosophy of science. This particular hybrid of work easily fits under the broad banner of artificial intelligence, yet I need to repeat that my principal aim is not to build any kind of autonomous intelligence, but simply to amplify our own capacity for inquiry.

There are many well-reasoned and well-respected paradigms for the study of learning and reasoning, any one of which I might have chosen as a blueprint for the architecture of inquiry. The model of inquiry that works best for me is one with a solid standing in the philosophy of science and whose origins are entwined with the very beginnings of symbolic logic. Its practical applications to education and social issues have been studied in depth, and aspects of it have received attention in the AI literature (Refs. 1-8). This is the pragmatic model of inquiry, formulated by C.S. Peirce from his lifelong investigations of classical logic and experimental reasoning. For my purposes, all this certification means is that the model has survived many years of hard-knocks testing, and is therefore a good candidate for further trial. Since we are still near the beginning of efforts to computerize inquiry, it is not necessary to prove that this is the best of all possible models. At this early stage, any good ideas would help.

My purpose in looking to the practical arena of inquiry and to its associated literature is to extract a body of tasks in real demand and to start with a stock of plausible suggestions for ways to meet these requirements. Some of what we find depicted in contemporary pictures of learning and reasoning may turn out to be inconsistent postulations or unrealizable projections, beyond the scope of our present or any possible technology. But this is the very sort of thing that we should be interested in finding out! It is one of the benefits of submitting theories to trial by computer that we obtain just this brand of knowledge. Of course, the fact that no one can presently find a way to make a concept effectively computable does not in itself prove that it is unworkable, but it does place the idea in a different class.

This should be enough to say about why we sometimes need to cite the terms and critically reflect on the concepts of other fields in the process of doing work within the disciplines of systems theory and software engineering. To sum it up, it is not a question of entering another field or absorbing its materials, but of finding a good standpoint on our own grounds from which to tackle the problems that the outside world presents.

Sorting out which procedures are effective in inquiry and finding out which functions are feasible to implement is a job that we can do better in the hard light demanded by fully formalized programs. But there is nothing wrong in principle with a top-down approach, so long as we do come down to familiar ground. I will follow the analogy of a recursive program that progresses down steps to its base, stepwise refining the details of higher-level specifications. One of the best reinforcements for such a program is to maintain a parallel effort that builds up competencies from fundamental rudiments.

System-Theoretic Method

Having addressed the question of "what" principles enable human inquiry, I am brought to the question of "how" I would set out to improve the human capacity for inquiry by computational means.

Within the field of AI there are many ways of simulating and supporting learning and reasoning that would not of necessity involve us in systems theory proper, that is, in reflecting on mathematically defined systems or in considering the dynamical trajectories that automata trace out through abstract state spaces. However, I have chosen to take the system-theoretic route for several reasons, which I will now discuss.

First, if we succeed in understanding intelligent inquiry in terms of system-theoretic properties and processes, it equips this knowledge with the greatest degree of transferability between comparable systems. In short, it makes our knowledge robust, and keeps it from becoming too narrowly limited to a particular instantiation of the target capacity.

Second, if we organize our thinking in terms of a coherent system or an integral agent that carries out inquiries, it helps to manage the complexity of the design problem by splitting it into discrete stages. This strategy is especially useful in dealing with the recursive or the reflexive quality that bedevils all such inquiries into inquiry itself. This aspect of self-application to the problem is probably unavoidable, due to the following facts. Human beings are extremely complex agents, and any system that is likely to support significant human inquiry is bound to surpass the complexity of most systems that we are currently able to analyze in full. Research into complex systems is one of the jobs that will depend on intelligent software tools to advance in the future. For this we need programs that can follow the drift of inquiry and perhaps even help us to scout out fruitful directions of exploration. Programs to do this will need to acquire a heuristic model of the inquiry process that they are being designed to assist. And so it goes. Programs for inquiry will be required to pull themselves up by their own bootstraps.

Inquiry Driven Systems

Taking as given the system-theoretic approach from now on, I can focus and rephrase my question about the technical enhancement of inquiry:

  • How can we put computational foundations under the theoretical models of inquiry, at least, the ones that we discover to be accessible?

To ask the same question in greater detail:

  • What is the depth and the content of the task analysis that is needed to relate the higher order functions of inquiry with the primitive elements that are given in systems theory and software engineering?

Connecting the requirements of a formal theory of inquiry with the resources of mathematical systems theory has led me to the concept of an "inquiry driven system" (IDS).

The concept of an inquiry driven system is intended to capture the essential properties of a broad class of intelligent systems, and to highlight the crucial processes which support learning and reasoning in natural and cultural systems. The defining properties of inquiry driven systems are discussed in the next few paragraphs. I then consider what is needed to supply these abstractions with operational definitions, concentrating on the terms of mathematical systems theory as a suitable foundation. After this, I discuss my plans to implement a software system that is designed to help analyze the qualitative behavior of complex systems, inquiry driven systems in particular.

An inquiry driven system has components of state, accessible to the system itself, which characterize the norms of its experience. The idea of a norm has two meanings, both of which are useful here.

  • In one sense of the word "norm", we have the descriptive regularities that are observed in summaries of past experience. These norms govern the expectable sequences of future states, as determined by natural laws.
  • In another sense of the word "norm", we have the prescriptive policies that are selected with an eye to future experience. These norms govern the intendable goals of processes, as controlled by deliberate choices.

Collectively, these two orders of norms go to make up the "knowledge base", or the "intellectual component", of the intelligent agent or the inquiry driven system.

An inquiry driven system, in the simplest cases worth talking about, requires at least three different modalities of knowledge component, referred to as "expectations", "intentions", and "observations" of the system. Each of these components has the status of a theory, that is, a propositional code that the agent of the system carries along and maintains with itself through all of its changes of state, possibly updating it as the need arises in experience. However, all of these theories have reference to a common world, and they indicate under their varying lights more or less overlapping regions in the state space of the system, or in some derivative or extension of the basic state space.

The inquiry process is driven by the nature, the degree, and the extent of the differences that exist at any time among its operative theories, for example, the differences among the expectations, the intentions, and the observations of the inquiry agent or the relevant community of inquiry. These discrepancies are evidenced by differences in the assemblies of models, empirical or theoretical, that are held to satisfy the respective theories.

Normally, human beings experience a high level of disparity among these theories as a dissatisfying situation, a condition of cognitive discord. For people, the incongruity of cognitive elements is accompanied by an unsettled affective state, in Peirce's phrase, the "irritation of doubt". A person in this situation is usually motivated to reduce the annoying disturbance by some action, all of which activities we may classify under the heading of inquiry processes.

Without insisting on strict determinism, we may say that the inquiry process is lawful if there is any kind of informative relationship connecting the state of cognitive discord at each time with the ensuing state transitions of the system.

Expressed in human terms, a difference between expectations and observations is experienced as a surprise to be explained, a difference between intentions and observations is experienced as a problem to be solved. We begin to understand a particular example of inquiry when we can describe the relation between the momentary intellectual state of its agent and the subsequent action that the agent undertakes.
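The classification just stated can be sketched in a few lines of code. The sketch assumes, as a deliberate simplification, that each of the three theories is modeled by the set of states it admits, so that a discrepancy shows up as an observation falling outside a theory's set; the state names are invented for the example.

```python
def classify(expectations, intentions, observations):
    """Label the differences among the agent's three knowledge components:
    an unexpected observation is a surprise, an unintended one is a problem."""
    findings = []
    if not observations <= expectations:
        findings.append("surprise")   # observed but not expected: explain it
    if not observations <= intentions:
        findings.append("problem")    # observed but not intended: solve it
    return findings

expected = {"s1", "s2"}   # states the agent's expectations admit
intended = {"s1"}         # states the agent's intentions admit
observed = {"s2"}         # state actually observed

print(classify(expected, intended, observed))  # ['problem']
```

Here the observation was anticipated by the expectations but runs against the intentions, so the agent faces a problem to be solved rather than a surprise to be explained.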

These simple facts, the features of inquiry outlined above, already raise a number of issues, some of which are open problems that my research will have to address. Given the goal of constructing supports for inquiry on the grounds of systems theory, each of these difficulties is an obstacle to progress in the chosen direction, to understanding the capacity for inquiry as a systems property.

The Irritation of Doubt

In the next few paragraphs I discuss a critical problem to be solved in this approach, indicating its character to the extent I can succeed at present, and I suggest a reasonable way of proceeding.

In human inquiry there is always a relation between affective and cognitive features of experience. We have a sense of how much discord or harmony is present in a situation, and we rely on the intensity of this sensation as one measure of how to proceed with inquiry. This works so automatically that we have trouble distinguishing the affective and cognitive aspects of the irritating doubt that drives the process.

In the artificial systems we build to support inquiry, what measures can we take to supply this sense or arrange a substitute for it? If the proper measure of doubt cannot be formalized, then all responsibility for judging it will have to be assigned to the human side of the interface. This would greatly reduce the usefulness of the projected software.

The unsettled state that instigates inquiry is characterized by a high level of uncertainty. The settled state of knowledge at the end of inquiry is achieved by reducing this uncertainty to a minimum, at least to the point where action is not misguided.

Within the framework of information theory we already have a concept of uncertainty, the entropy of a probability distribution, as being something that we can measure. Certainly, how we feel about entropy does not enter the equation. Can we form a connection between the kind of doubt that drives human inquiry and the kind of uncertainty that is measured on scales of information content? If so, this would allow salient dynamic properties of inquiry driven systems to be studied in abstraction from the affective qualities of the anomalies, the disagreeabilities, and the incongruities that now drive them in the spheres of human experience. With respect to measurable qualities of uncertainty, inquiry driven systems could be taken as special types of control systems, where the variable to be controlled is the total amount of discrepancy, disparity, or dispersion in the knowledge base of the system.
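The proposed connection can be illustrated with a small sketch, under the assumption that the agent's state of belief is a probability distribution over rival hypotheses, that Shannon entropy measures its unsettledness, and that inquiry proceeds by Bayesian conditioning on evidence. The particular numbers are arbitrary.

```python
from math import log2

def entropy(dist):
    """Shannon entropy in bits: an affect-free measure of uncertainty."""
    return -sum(p * log2(p) for p in dist if p > 0)

def update(prior, likelihoods):
    """Bayesian conditioning on one piece of evidence; renormalize."""
    joint = [p * l for p, l in zip(prior, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

belief = [0.25, 0.25, 0.25, 0.25]   # maximal doubt over four hypotheses: 2 bits
evidence = [0.9, 0.1, 0.05, 0.05]   # how well each hypothesis predicts the data
for _ in range(5):
    belief = update(belief, evidence)   # repeated inquiry drives entropy down

print(round(entropy(belief), 3))
```

On this picture the "settled state" at the end of inquiry is just the distribution of minimal entropy that the evidence permits, which is what makes it plausible to treat an inquiry driven system as a control system for total uncertainty.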

The assumption of modularity, that the affective and the intellectual aspects of inquiry can be disentangled into separate components of the system, is a natural one to make. Whenever it holds, even approximately, it simplifies the task of understanding and permits the analyst or designer to assign responsibility for these factors to independent modules of the simulation or the implementation.

However, the assumption of modularity appears to be false in general, or true and useful only in approaching certain properties of inquiry. Many other features of inquiry are not completely understandable on this basis. To tackle these more refractory properties, I will be forced to examine the concept of a measure that separates the affective and intellectual impacts of disorder. To the extent that this issue can be resolved by analysis, I believe that it hinges on the characters that make a measure "objective", in effect, invariant over many perspectives and interpretations, as opposed to being merely the measure of a subjective moment, an impression that is limited to a special interpretation or a transient perspective.

The Orbit of Inquiry

The preceding discussion has indicated a few of the properties that are attributed to inquiry and its agents, and has initiated an analysis of their underlying principles. Now we engage the task of giving these processes operational definitions within the framework of mathematical systems theory.

Let us consider an inquiry driven system as described by a set of variables:

x1, …, xk, a1, …, am.

Here, the xi, for i = 1 to k, are regarded as ordinary state variables while the aj, for j = 1 to m, are regarded as variables codifying the state of knowledge with respect to a variety of issues. Many of the parameters aj will simply echo or anticipate the transient features of state that are swept out by the xi variables. However, in order for the system to possess a knowledge base that takes a propositional stance with respect to its own state space, other information variables aj will have to be utilized in less direct, that is, more symbolic ways.

The most general term that we can use to describe the informational parameters aj is to call them "signs". These are the syntactic building blocks that go into constructing the various knowledge bases of the inquiry driven system. Although these variables can be employed in a simple analogue fashion to represent information about past, present, or prospective states of the system, ultimately it becomes necessary for the system to have a formal syntax of expressions in which logical propositions about states can be represented and manipulated. I have implemented one fairly efficient way of doing this, using only three arbitrary symbols beyond the more passive arrays that are used to echo the ordinary features of state.

A task that remains for future work is to operationalize a suitable measure of difference between alternative propositions about the world, that is, about the state space of the system. A successful measure will gauge the differences in objective models and not be overly sensitive to unimportant variations in syntax. This means that the first priority of this measure is to recognize logical equivalence classes of expressions, responding equally to each of their individual members. This requirement brings the investigation back within the fold of logical inquiry. Along with finding such a measure of difference I will have to specify how these differences determine the state transitions of the inquiry driven system. At this juncture a number of suggestive analogies arise, connecting the logical, qualitative problem just stated with the questions treated in differential geometry and geometric dynamics.
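As a first, admittedly brute-force sketch of the kind of measure intended here, we can compare two propositions by their sets of satisfying assignments (their models), so that the measure is zero exactly when the propositions are logically equivalent, however differently they happen to be written. Representing propositions as Python predicates is my own convenience, not a proposal for the system's actual syntax.

```python
from itertools import product

def models(prop, variables):
    """Return the set of variable assignments that satisfy the proposition."""
    return {vals for vals in product([False, True], repeat=len(variables))
            if prop(dict(zip(variables, vals)))}

def difference(p, q, variables):
    """Size of the symmetric difference of model sets: 0 iff p and q are
    logically equivalent, regardless of syntactic variation."""
    return len(models(p, variables) ^ models(q, variables))

vs = ["x", "y"]
p = lambda v: not (v["x"] and v["y"])        # not (x and y)
q = lambda v: (not v["x"]) or (not v["y"])   # (not x) or (not y), De Morgan
r = lambda v: v["x"] or v["y"]               # x or y

print(difference(p, q, vs))  # 0: distinct expressions, same logical content
print(difference(p, r, vs))
```

Because the measure looks only at model sets, it automatically responds equally to every member of a logical equivalence class, which is the first requirement stated above; its exponential cost in the number of variables is, of course, one reason the task remains open.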

Approaches to Inquiry

In this Chapter I lay out the "pragmatic theory of inquiry" that I will use in my study of inquiry driven systems. I begin with the basic features of one standard model of inquiry processes. Then I outline two different approaches to the functional structure of inquiry. Finally, I discuss a collection of computational routines that I have implemented to study various aspects of this model of inquiry.

The Pragmatic Approach to Inquiry

This Division sketches the main features of a canonical model of inquiry that will be employed throughout the rest of this project.

The pragmatic model or theory of inquiry was extracted by Charles Sanders Peirce from its raw materials in classical logic and refined in parallel with the early development of symbolic logic to address problems about the nature of scientific reasoning. Borrowing a brace of concepts from Aristotle, Peirce examined three fundamental modes of reasoning that play a role in inquiry, commonly known as abductive, deductive, and inductive inference.

In rough terms, "abduction" is what we use to generate a likely hypothesis or an initial diagnosis in response to a phenomenon of interest or a problem of concern, while "deduction" is used to clarify, to derive, and to explicate the relevant consequences of the selected hypothesis, and "induction" is used to test the sum of the predictions against the sum of the data.

These three processes typically operate in a cyclic fashion, working systematically to reduce the uncertainties and the difficulties that initiated the inquiry in question, and in this way, to the extent that inquiry is successful, leading to an increase in knowledge or in skills.
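The cycle can be caricatured in code. The sketch below makes several simplifying assumptions: hypotheses are plain records listing what they would explain and what they predict, and inductive testing is just the fraction of predictions borne out by the data; the function names and the sprinkler example are invented for illustration.

```python
def abduce(surprise, hypotheses):
    """Guess: select candidate hypotheses that could account for the surprise."""
    return [h for h in hypotheses if surprise in h["explains"]]

def deduce(hypothesis):
    """Explicate: derive the testable consequences of the hypothesis."""
    return hypothesis["predicts"]

def induce(predictions, data):
    """Test: score a hypothesis by the fraction of its predictions the data bear out."""
    return sum(1 for p in predictions if p in data) / len(predictions)

hypotheses = [
    {"name": "rain",      "explains": {"wet lawn"}, "predicts": ["wet street", "clouds"]},
    {"name": "sprinkler", "explains": {"wet lawn"}, "predicts": ["sprinkler on"]},
]
data = {"wet lawn", "wet street", "clouds"}

for h in abduce("wet lawn", hypotheses):
    print(h["name"], induce(deduce(h), data))
```

Notice that each stage only makes sense in the context of the whole loop: abduction supplies exactly the sort of guesses that deduction can unfold and induction can score, which is the constraint on hypothesis generation discussed below.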

In the pragmatic way of thinking everything has a purpose, and the purpose of each thing is the first thing we should try to note about it. The purpose of inquiry is to reduce doubt and lead to a state of belief, which a person in that state will usually call "knowledge" or "certainty". As they contribute to the end of inquiry, we should appreciate that the three kinds of inference describe a cycle that can be understood only as a whole, and none of the three makes complete sense in isolation from the others. For instance, the purpose of abduction is to generate guesses of a kind that deduction can explicate and that induction can evaluate. This places a mild but meaningful constraint on the production of hypotheses, since it is not just any wild guess at explanation that submits itself to reason and bows out when defeated in a match with reality. In a similar fashion, each of the other types of inference realizes its purpose only in accord with its proper role in the whole cycle of inquiry. No matter how much it may be necessary to study these processes in abstraction from each other, the integrity of inquiry places strong limitations on the effective modularity of its principal components.

For our present purposes, the first feature to note in distinguishing the three principal modes of reasoning from each other is whether each of them is exact or approximate in character. In this light, deduction is the only one of the three types of reasoning that can be made exact, in essence, always deriving true conclusions from true premisses, while abduction and induction are unavoidably approximate in their modes of operation, involving elements of fallible judgment in practice and inescapable error in their application.

The reason for this is that deduction, in the ideal limit, can be rendered a purely internal process of the reasoning agent, while the other two modes of reasoning essentially demand a constant interaction with the outside world, a source of phenomena and problems that will no doubt continue to exceed the capacities of any finite resource, human or machine, to master. Situated in this larger reality, approximations can be judged appropriate only in relation to their context of use and can be judged fitting only with regard to a purpose in view.

A parallel distinction that is often made in this connection is to call deduction a "demonstrative" form of inference, while abduction and induction are classed as "non-demonstrative" forms of reasoning. Strictly speaking, the latter two modes of reasoning are not properly called inferences at all. They are more like controlled associations of words or ideas that just happen to be successful often enough to be preserved as useful heuristic strategies in the repertoire of the agent. But non-demonstrative ways of thinking are inherently subject to error, and must be constantly checked out and corrected as needed in practice.

In classical terminology, forms of judgment that require attention to the context and the purpose of the judgment are said to involve an element of "art", in a sense that is judged to distinguish them from "science", and in their renderings as expressive judgments to implicate arbiters in styles of rhetoric, as contrasted with logic.

In a figurative sense, this means that only deductive logic can be reduced to an exact theoretical science, while the practice of any empirical science will always remain to some degree an art. This has important implications for any attempt to support inquiry with automated or computable procedures, constraining both the manner and the degree of their likely success. Among the more obvious consequences of this contingency, we may observe the following:

  1. Inquiry support software will need to be highly interactive, sensitive to run-time conditions at no fewer than two kinds of interfaces, those with its human users and those with the real world.
  2. The main effect of automation, at least in the beginning, will be to speed up and to strengthen deductive reasoning.
  3. The chief assistance that computation can provide to induction is through measures of fit between empirically gathered data sets and theoretically conceived constructions.
  4. The limited guidance that formal and computable methods can bring to abduction, diagnosis, and hypothesis generation is restricted to checking the partly logical properties of consistency or feasibility and of defeasibility or falsifiability, and to speeding up the evaluation that follows the initial proposal of a hypothesis.
  5. All of the above notwithstanding, because inquiry is an iterative cycle, improving the rate of performance at any critical bottleneck can serve to accelerate the entire process.

As far as automating induction goes, we should not expect an inductive program to make up the data for us, no matter how sophisticated it gets! Inductive tests can provide measures of how well a theoretical construct fits a set of data, but no fit is perfect, nor is it ever intended to be. An inductive concept is supposed to present a simplification of a complex reality, otherwise it would serve no function over and above just staring at the data. In gauging the slippage between concept and data, the degree of tolerance acceptable in a given situation is a matter of discretionary judgments that have to be made under the actual conditions in the field.

When it comes to automating abductive reasoning, we should observe the historical circumstance that it is often the most "unlikely" set of hypotheses that turn out to form the correct conceptual framework, at least when that likelihood has been judged from the standpoint of the previous framework. Aside from their responsibilities to the inquiry process, abductive hypotheses can be freely generated in the most creative manner possible. Breaking the mind-set of the problem as stated and reformulating data descriptions from radically new perspectives are just some of the allowable strategies that are frequently required for ultimate success.

Abductive reasoning is the mode of operation which is involved in shifting from one paradigm to another. In order to reduce the overall tension of uncertainty in a knowledge base, it is often necessary to restructure our perspective on the data in radical ways, to change the channel that parcels out information to us. But the true value of a new paradigm is typically not appreciated from the standpoint of the alternative or established models, that is, not until it has had time to reorganize the knowledge base in ways that demonstrate clear advantages to the entire community of inquiry concerned.

The preceding survey has introduced a model of inquiry and charted a series of limits and obstacles to the prospects of automating a support for inquiry. We should not let ourselves be too discouraged by the acknowledgment of these limitations and obstructions. But we ought to recognize that these constraints are not so much limits on the computational extension of human inquiry as they are limits on the instrumental nature of inquiry itself, being matters of the specific adaptations of finite creatures to an infinite world. In effect, they are nothing else but the familiar limits of the scientific method. They are the limits that make it a method.

Inquiry is a form of reasoning process, in effect, a particular way of conducting thought, and thus it can be said to institute a specialized manner, style, or turn of thinking. Philosophers of the school that is commonly called "pragmatic" hold that all thought takes place in signs, where "sign" is the word they use for the broadest conceivable variety of characters, expressions, formulae, messages, signals, texts, ..., that might be imagined. Indeed, even intellectual concepts and mental ideas are held to be a special class of signs, corresponding to internal states of the thinking agent that both issue in and result from the interpretation of external signs.

The subsumption of inquiry within reasoning in general and the inclusion of thinking within the class of sign processes allows us to approach the subject of inquiry from two different perspectives:

  • The "syllogistic" approach treats inquiry as a logical species.
  • The "sign-theoretic" approach views inquiry as taking place within a more general setting of sign processes.

I would like to wrap up this preliminary survey of the inquiry domain by introducing a classic example of an everyday inquiry process, an example that I will take as canonical in the sequel, turning it around and viewing it from several different angles as a way to illustrate many generic aspects of the full inquiry process. In the process of doing this I will continue to introduce an array of basic terms and a host of critical issues that we will need to pick up and tackle in the larger discussion of inquiry. Here is John Dewey's "Sign of Rain" story:

A man is walking on a warm day. The sky was clear the last time he observed it; but presently he notes, while occupied primarily with other things, that the air is cooler. It occurs to him that it is probably going to rain; looking up, he sees a dark cloud between him and the sun, and he then quickens his steps. What, if anything, in such a situation can be called thought? Neither the act of walking nor the noting of the cold is a thought. Walking is one direction of activity; looking and noting are other modes of activity. The likelihood that it will rain is, however, something suggested. The pedestrian feels the cold; he thinks of clouds and a coming shower. (John Dewey, How We Think, pp. 6–7).

I now undertake a detailed study of the pragmatic theory of inquiry, treating its positive features in gradually increasing depth. Even though they can contribute but partial perspectives to the complete account, I regard it as wise to begin with the syllogistic and the sign-theoretic outlooks to get a foothold on the inquiry domain.

The Syllogistic Approach

In this Division I discuss the syllogistic approach to inquiry, considering it only insofar as the propositional or sentential properties of the associated reasoning processes are concerned.


In the case of propositional calculus or sentential logic, deduction comes down to applications of the transitive law for conditional implications and the approximate forms of inference hang on the properties that derive from these. In describing the various types of inference I will employ a few old "terms of art" from classical logic that are still of use in treating these kinds of simple problems in reasoning.

  Expressed in these terms, Deduction takes a Case,
  the minor premiss X => Y, and combines it with a Rule,
  the major premiss Y => Z, to arrive at a Fact, namely,
  the demonstrative conclusion X => Z.
  Contrasted with this pattern, Induction takes
  a Fact of the form X => Z and matches it with
  a Case of the form X => Y to guess that a Rule
  is possibly in play, one of the form Y => Z.
  Cast on the same template, Abduction takes
  a Fact of the form X => Z and matches it with
  a Rule of the form Y => Z to guess that a Case
  is presently in view, one of the form X => Y.

For ease of reference, Figure 1 and the Legend beneath it summarize the classical terminology for the three types of inference and the relationships among them.

|                                                 |
|                   Z                             |
|                   o                             |
|                   |\                            |
|                   | \                           |
|                   |  \                          |
|                   |   \                         |
|                   |    \                        |
|                   |     \   R U L E             |
|                   |      \                      |
|                   |       \                     |
|               F   |        \                    |
|                   |         \                   |
|               A   |          \                  |
|                   |           o Y               |
|               C   |          /                  |
|                   |         /                   |
|               T   |        /                    |
|                   |       /                     |
|                   |      /                      |
|                   |     /   C A S E             |
|                   |    /                        |
|                   |   /                         |
|                   |  /                          |
|                   | /                           |
|                   |/                            |
|                   o                             |
|                   X                             |
|                                                 |
| Deduction takes a Case of the form X => Y,      |
| matches it with a Rule of the form Y => Z,      |
| then adverts to a Fact of the form X => Z.      |
|                                                 |
| Induction takes a Case of the form X => Y,      |
| matches it with a Fact of the form X => Z,      |
| then adverts to a Rule of the form Y => Z.      |
|                                                 |
| Abduction takes a Fact of the form X => Z,      |
| matches it with a Rule of the form Y => Z,      |
| then adverts to a Case of the form X => Y.      |
|                                                 |
| Even more succinctly:                           |
|                                                 |
|           Abduction  Deduction  Induction       |
|                                                 |
| Premiss:     Fact       Rule       Case         |
| Premiss:     Rule       Case       Fact         |
| Outcome:     Case       Fact       Rule         |
|                                                 |
Figure 1.  Elementary Structure and Terminology
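The three patterns summarized in Figure 1 can be traced in code. The following Python fragment is my own minimal sketch, not part of the original account: it models a conditional X => Y as an ordered pair and shows how each mode of inference rearranges the same three terms. The function names and the tuple encoding are illustrative assumptions.

```python
# Model a conditional "X => Y" as the ordered pair (X, Y).

def deduce(case, rule):
    """Deduction: from Case X => Y and Rule Y => Z, conclude Fact X => Z."""
    x, y1 = case
    y2, z = rule
    assert y1 == y2, "Case and Rule must share the middle term"
    return (x, z)

def induce(case, fact):
    """Induction: from Case X => Y and Fact X => Z, guess Rule Y => Z."""
    x1, y = case
    x2, z = fact
    assert x1 == x2, "Case and Fact must share the minor term"
    return (y, z)

def abduce(fact, rule):
    """Abduction: from Fact X => Z and Rule Y => Z, guess Case X => Y."""
    x, z1 = fact
    y, z2 = rule
    assert z1 == z2, "Fact and Rule must share the major term"
    return (x, y)

# Only deduction is demonstrative; induce and abduce return guesses
# that remain to be tested against further experience.
print(deduce(("X", "Y"), ("Y", "Z")))  # ('X', 'Z')
print(induce(("X", "Y"), ("X", "Z")))  # ('Y', 'Z')
print(abduce(("X", "Z"), ("Y", "Z")))  # ('X', 'Y')
```

Note that the code marks the difference in kind only by its comments; formally all three operations just permute the roles of Fact, Rule, and Case, which is exactly what the Legend of Figure 1 records.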

In its original usage a statement of Fact has to do with a deed done or a record made, that is, a type of event that is openly observable and not riddled with speculation as to its very occurrence. In contrast, a statement of Case may refer to a hidden or a hypothetical cause, that is, a type of event that is not immediately observable to all concerned. Obviously, the distinction is a rough one and the question of which mode applies can depend on the points of view that different observers adopt over time. Finally, a statement of a Rule is called that because it states a regularity or a regulation that governs a whole class of situations, and not because of its syntactic form. So far in this discussion, all three types of constraint are expressed in the form of conditional propositions, but this is not a fixed requirement. In practice, these modes of statement are distinguished by the roles that they play within an argument, not by their style of expression. When the time comes to branch out from the syllogistic framework, we will find that propositional constraints can be discovered and represented in arbitrary syntactic forms.

In the normal course of inquiry, the elementary types of inference proceed in the order: Abduction, Deduction, Induction. However, the same building blocks can be assembled in other ways to yield different types of complex inferences. Of particular importance, reasoning by analogy can be analyzed as a combination of induction and deduction, in other words, as the abstraction and the application of a rule. Because a complicated pattern of analogical inference will be used in our example of a complete inquiry, it will help to prepare the ground if we first stop to consider an example of analogy in its simplest form.


The classic description of analogy in the syllogistic frame comes from Aristotle, who called this form of inference by the name "paradeigma", that is, reasoning by way of example or through the parallel comparison of cases.

We have an Example [analogy, 'paradeigma'] when the major extreme is shown to be applicable to the middle term by means of a term similar to the third. It must be known both that the middle applies to the third term and that the first applies to the term similar to the third. (Aristotle, Prior Analytics, 2.24).

Aristotle illustrates this pattern of argument with the following sample of reasoning. The setting is a discussion, taking place in Athens, on the issue of going to war with Thebes. It is apparently accepted that a war between Thebes and Phocis is or was a bad thing, perhaps from the objectivity lent by non-involvement or perhaps as a lesson of history.

E.g., let A be "bad", B "to make war on neighbors", C "Athens against Thebes", and D "Thebes against Phocis". Then if we require to prove that war against Thebes is bad, we must be satisfied that war against neighbors is bad. Evidence of this can be drawn from similar examples, e.g., that war by Thebes against Phocis is bad. Then since war against neighbors is bad, and war against Thebes is against neighbors, it is evident that war against Thebes is bad. (Aristotle, Prior Analytics, 2.24).

We may analyze this argument as follows:

First, a Rule is induced from the consideration of a similar Case and a relevant Fact:

  Case: D => B, Thebes vs Phocis is war against neighbors.
  Fact: D => A, Thebes vs Phocis is bad.
  Rule: B => A, War against neighbors is bad.

Next, the Fact to be proved is deduced from the application of this Rule to the present Case:

  Case: C => B, Athens vs Thebes is war against neighbors.
  Rule: B => A, War against neighbors is bad.
  Fact: C => A, Athens vs Thebes is bad.

In practice, of course, it would probably take a mass of comparable cases to establish a rule. As far as the logical structure goes, however, this quantitative confirmation only amounts to "gilding the lily". Perfectly valid rules can be guessed on the first try, abstracted from a single experience or adopted vicariously with no personal experience. Numerical factors only modify the degree of confidence and the strength of habit that govern the application of previously learned rules.

Figure 2 gives a graphical illustration of Aristotle's example of "Example", that is, the form of reasoning that proceeds by Analogy or according to a Paradigm.

|                                                           |
|                             A                             |
|                             o                             |
|                            /*\                            |
|                           / * \                           |
|                          /  *  \                          |
|                         /   *   \                         |
|                        /    *    \                        |
|                       /     *     \                       |
|                      /   R u l e   \                      |
|                     /       *       \                     |
|                    /        *        \                    |
|                   /         *         \                   |
|                  /          *          \                  |
|              F a c t        o        F a c t              |
|                /          * B *          \                |
|               /         *       *         \               |
|              /        *           *        \              |
|             /       *               *       \             |
|            /   C a s e            C a s e    \            |
|           /     *                       *     \           |
|          /    *                           *    \          |
|         /   *                               *   \         |
|        /  *                                   *  \        |
|       / *                                       * \       |
|      o                                             o      |
|     C                                               D     |
|                                                           |
| A  =  Atrocious, Adverse to All, A bad thing              |
| B  =  Belligerent Battle Between Brethren                 |
| C  =  Contest of Athens against Thebes                    |
| D  =  Debacle of Thebes against Phocis                    |
|                                                           |
| A is a major term                                         |
| B is a middle term                                        |
| C is a minor term                                         |
| D is a minor term, similar to C                           |
|                                                           |
Figure 2.  Aristotle's "War Against Neighbors" Example

In this analysis of reasoning by Analogy, it is a complex or a mixed form of inference that can be seen as taking place in two steps:

1. The first step is an Induction that abstracts a Rule from a Case and a Fact.

  Case: D => B, Thebes vs Phocis is a battle between neighbors.
  Fact: D => A, Thebes vs Phocis is adverse to all.
  Rule: B => A, A battle between neighbors is adverse to all.

2. The final step is a Deduction that applies this Rule to a Case to arrive at a Fact.

  Case: C => B, Athens vs Thebes is a battle between neighbors.
  Rule: B => A, A battle between neighbors is adverse to all.
  Fact: C => A, Athens vs Thebes is adverse to all.
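The two-step analysis just given can be run as a composition of an inductive step and a deductive step. This Python sketch is my own illustration, not Aristotle's procedure: conditionals are modeled as ordered pairs, and the `analogy` function merely chains the two steps, reproducing the passage from the facts about Thebes vs Phocis to the fact about Athens vs Thebes.

```python
# Conditionals "X => Y" modeled as ordered pairs (X, Y).

def induce(case, fact):
    """From Case X => Y and Fact X => Z, guess Rule Y => Z."""
    (x1, y), (x2, z) = case, fact
    assert x1 == x2, "Case and Fact must share the minor term"
    return (y, z)

def deduce(case, rule):
    """From Case X => Y and Rule Y => Z, conclude Fact X => Z."""
    (x, y1), (y2, z) = case, rule
    assert y1 == y2, "Case and Rule must share the middle term"
    return (x, z)

def analogy(similar_case, similar_fact, present_case):
    """Paradeigma: induce a rule from the similar example,
    then apply it deductively to the present case."""
    rule = induce(similar_case, similar_fact)
    return deduce(present_case, rule)

A = "adverse to all"
B = "battle between neighbors"
C = "Athens vs Thebes"
D = "Thebes vs Phocis"

# D => B and D => A yield the rule B => A; applied to C => B, we get C => A.
print(analogy((D, B), (D, A), (C, B)))  # ('Athens vs Thebes', 'adverse to all')
```

The first argument pair plays the role of Aristotle's "term similar to the third"; the induced rule is only as good as the similarity that licenses it.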


Getting back to our "Rainy Day" story, we find our peripatetic hero presented with a surprising Fact:

  Fact: C => A, In the Current situation the Air is cool.

Responding to an intellectual reflex of puzzlement about the situation, his resource of common knowledge about the world is impelled to seize on an approximate Rule:

  Rule: B => A, Just Before it rains, the Air is cool.

This Rule can be recognized as having a potential relevance to the situation because it matches the surprising Fact, C => A, in its consequential feature A.

All of this suggests that the present Case may be one in which it is just about to rain:

  Case: C => B, The Current situation is just Before it rains.

The whole mental performance, however automatic and semi-conscious it may be, that leads up from a problematic Fact and a previously settled knowledge base of Rules to the plausible suggestion of a Case description, is what we are calling an abductive inference.

The next phase of inquiry uses deductive inference to expand the implied consequences of the abductive hypothesis, with the aim of testing its truth. For this purpose, the inquirer needs to think of other things that would follow from the consequence of his precipitate explanation. Thus, he now reflects on the Case just assumed:

  Case: C => B, The Current situation is just Before it rains.

He looks up to scan the sky, perhaps in a random search for further information, but since the sky is a logical place to look for details of an imminent rainstorm, symbolized in our story by the letter B, we may safely suppose that our reasoner has already detached the consequence of the abduced Case, C => B, and has begun to expand on its further implications. So let us imagine that our up-looker has a more deliberate purpose in mind, and that his search for additional data is driven by the new-found, determinate Rule:

  Rule: B => D, Just Before it rains, Dark clouds appear.

Contemplating the assumed Case in combination with this new Rule leads him by an immediate deduction to predict an additional Fact:

  Fact: C => D, In the Current situation Dark clouds appear.

The reconstructed picture of reasoning assembled in this second phase of inquiry is true to the pattern of deductive inference.

Whatever the case, our subject observes a Dark cloud, just as he would expect on the basis of the new hypothesis. The explanation of imminent rain removes the discrepancy between observations and expectations and thereby reduces the shock of surprise that made this inquiry necessary.

Figure 3 gives a graphical illustration of Dewey's example of inquiry, isolating for the purposes of the present analysis the first two steps in the more extended proceedings that go to make up the whole inquiry.

|                                                           |
|     A                                               D     |
|      o                                             o      |
|       \ *                                       * /       |
|        \  *                                   *  /        |
|         \   *                               *   /         |
|          \    *                           *    /          |
|           \     *                       *     /           |
|            \   R u l e             R u l e   /            |
|             \       *               *       /             |
|              \        *           *        /              |
|               \         *       *         /               |
|                \          * B *          /                |
|              F a c t        o        F a c t              |
|                  \          *          /                  |
|                   \         *         /                   |
|                    \        *        /                    |
|                     \       *       /                     |
|                      \   C a s e   /                      |
|                       \     *     /                       |
|                        \    *    /                        |
|                         \   *   /                         |
|                          \  *  /                          |
|                           \ * /                           |
|                            \*/                            |
|                             o                             |
|                             C                             |
|                                                           |
| A  =  the Air is cool                                     |
| B  =  just Before it rains                                |
| C  =  the Current situation                               |
| D  =  a Dark cloud appears                                |
|                                                           |
| A is a major term                                         |
| B is a middle term                                        |
| C is a minor term                                         |
| D is a major term, associated with A                      |
|                                                           |
Figure 3.  Dewey's "Rainy Day" Inquiry

In this analysis of the first steps of Inquiry, we have a complex or a mixed form of inference that can be seen as taking place in two steps:

1. The first step is an Abduction that abstracts a Case from the consideration of a Fact and a Rule.

  Fact: C => A, In the Current situation the Air is cool.
  Rule: B => A, Just Before it rains, the Air is cool.
  Case: C => B, The Current situation is just Before it rains.

2. The final step is a Deduction that admits this Case to another Rule and so arrives at a novel Fact.

  Case: C => B, The Current situation is just Before it rains.
  Rule: B => D, Just Before it rains, a Dark cloud will appear.
  Fact: C => D, In the Current situation, a Dark cloud will appear.
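The abductive and deductive steps of the Rainy Day inquiry can be traced in the same style of sketch. This Python fragment is my own illustration of the pattern, with the term letters taken from Figure 3; it is a toy model of the inference forms, not of the psychology of the pedestrian.

```python
# Conditionals "X => Y" modeled as ordered pairs (X, Y).

def abduce(fact, rule):
    """From Fact C => A and Rule B => A, guess Case C => B."""
    (c, a1), (b, a2) = fact, rule
    assert a1 == a2, "Fact and Rule must share the major term"
    return (c, b)

def deduce(case, rule):
    """From Case C => B and Rule B => D, predict Fact C => D."""
    (c, b1), (b2, d) = case, rule
    assert b1 == b2, "Case and Rule must share the middle term"
    return (c, d)

A = "the Air is cool"
B = "just Before it rains"
C = "the Current situation"
D = "a Dark cloud appears"

case = abduce((C, A), (B, A))  # hypothesis: C => B, "it is about to rain"
fact = deduce(case, (B, D))    # prediction: C => D, "a dark cloud will appear"
print(case)
print(fact)
```

The prediction C => D is what sends the pedestrian's gaze skyward: it is the testable consequence that the subsequent observation either confirms or defeats.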

This is nowhere near a complete analysis of the Rainy Day inquiry, even insofar as it might be carried out within the constraints of the syllogistic framework, and it covers only the first two steps of the relevant inquiry process, but maybe it will do for a start.

One last thing ought to be noticed here, the formal duality between this portion of inquiry and the argument by analogy.

In order to comprehend the bearing of inductive reasoning on the closing phases of inquiry there are a couple of observations that we should make:

  • First, we need to recognize that smaller inquiries are typically woven into larger inquiries, whether we view the whole pattern of inquiry as carried on by a single agent or by a complex community.
  • Further, we need to consider the different ways in which the particular instances of inquiry can be related to ongoing inquiries at larger scales. Three modes of inductive interaction between the micro-inquiries and the macro-inquiries that are salient here can be described under the headings of the Learning, the Transfer, and the Testing of rules.

Throughout inquiry the reasoner makes use of rules that have to be transported across intervals of experience, from the masses of experience where they are learned to the moments of experience where they are applied. Inductive reasoning is involved in the learning and the transfer of these rules, both in accumulating a knowledge base and in carrying it through the times between acquisition and application.

  • Learning. The principal way that induction contributes to an ongoing inquiry is through the learning of rules, that is, by creating each of the rules that goes into the knowledge base, or ever gets used along the way.
  • Transfer. The continuing way that induction contributes to an ongoing inquiry is through the exploitation of analogy, a two-step combination of induction and deduction that serves to transfer rules from one context to another.
  • Testing. Finally, every inquiry that makes use of a knowledge base constitutes a "field test" of its accumulated contents. If the knowledge base fails to serve any live inquiry in a satisfactory manner, then there is a prima facie reason to reconsider and possibly to amend some of its rules.

I will next describe how these principles of learning, transfer, and testing apply to John Dewey's "Sign of Rain" example.


Rules in a knowledge base, as far as their effective content goes, can be obtained by any mode of inference.

For example, a rule like:

  Rule: B => A, Just Before it rains, the Air is cool,

is usually induced from a consideration of many past events, in a manner that can be rationally reconstructed as follows:

  Case: C => B, In Certain events, it is just Before it rains,
  Fact: C => A, In Certain events, the Air is cool,
  Rule: B => A, Just Before it rains, the Air is cool.

However, the very same proposition could also be abduced as an explanation of a singular occurrence or deduced as a conclusion of a presumptive theory.


What is it that gives a distinctively inductive character to the acquisition of a knowledge base? It is evidently the "analogy of experience" that underlies the useful application. Whenever we find ourselves prefacing an argument with the phrase "If past experience is any guide ..." then we can be sure that this principle has come into play. We are invoking an analogy between past experience, considered as a totality, and present experience, considered as a point of application. What we mean in practice is this: "If past experience is a fair sample of possible experience, then the knowledge gained in it applies to present experience". This is the mechanism that allows a knowledge base to be carried across gulfs of experience that are indifferent to the effective contents of its rules.

Here are the details of how this notion of transfer works out in the case of the "Sign of Rain" example:

Let us consider a fragment, K_pres, of the reasoner's knowledge base that is logically equivalent to the conjunction of two rules:

  K_pres = (B => A) and (B => D).

K_pres = present knowledge base, expressed in the form of a logical constraint on the present universe of discourse.

It is convenient to have the option of expressing all logical statements in terms of their models, that is, in terms of the primitive circumstances or the elements of experience over which they hold true.

  • Let E_past be the chosen set of experiences, or the circumstances that we have in mind when we refer to "past experience".
  • Let E_poss be the collective set of experiences, or the projective total of possible circumstances.
  • Let E_pres be the present experience, or the circumstances that are present to the reasoner at the current moment.

If we think of the knowledge base K_pres as referring to the "regime of experience" over which it is valid, then all of these sets of models can be compared by the simple relations of set inclusion or logical implication.

Figure 4 schematizes this way of viewing the "analogy of experience".

|                                                           |
|                          K_pres                           |
|                             o                             |
|                            /|\                            |
|                           / | \                           |
|                          /  |  \                          |
|                         /   |   \                         |
|                        /  Rule   \                        |
|                       /     |     \                       |
|                      /      |      \                      |
|                     /       |       \                     |
|                    /     E_poss      \                    |
|              Fact /         o         \ Fact              |
|                  /        *   *        \                  |
|                 /       *       *       \                 |
|                /      *           *      \                |
|               /     *               *     \               |
|              /    *                   *    \              |
|             /   *  Case           Case  *   \             |
|            /  *                           *  \            |
|           / *                               * \           |
|          /*                                   *\          |
|         o<<<---------------<<<---------------<<<o         |
|      E_past         Analogy Morphism         E_pres       |
|    More Known                              Less Known     |
|                                                           |
Figure 4.  Analogy of Experience

In these terms, the "analogy of experience" proceeds by inducing a Rule about the validity of a current knowledge base and then deducing a Fact, its applicability to a current experience, as in the following sequence:

Inductive Phase:

  Given Case: E_past => E_poss, Chosen events fairly sample Collective events.
  Given Fact: E_past => K_pres, Chosen events support the Knowledge regime.
  Induce Rule: E_poss => K_pres, Collective events support the Knowledge regime.

Deductive Phase:

  Given Case: E_pres => E_poss, Current events fairly sample Collective events.
  Given Rule: E_poss => K_pres, Collective events support the Knowledge regime.
  Deduce Fact: E_pres => K_pres, Current events support the Knowledge regime.
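The two phases can be mirrored with sets of models, reading a conditional E => K as set inclusion: every circumstance satisfying E also satisfies K. This Python sketch is my own illustration under that reading; the particular model sets are made-up stand-ins, chosen only so that the inclusions hold.

```python
# Read "X => Y" as: every model of X is a model of Y, i.e. X <= Y as sets.
# Toy universes of "circumstances", invented purely for illustration.
E_poss = frozenset(range(10))        # the projective total of possible circumstances
E_past = frozenset({0, 1, 2, 3, 4})  # circumstances already experienced
E_pres = frozenset({7})              # the circumstance present to the reasoner
K_pres = frozenset(range(10))        # circumstances where the knowledge base holds

# Inductive phase: E_past <= E_poss (fair sample) and E_past <= K_pres
# (past support) license the fallible guess that E_poss <= K_pres.
assert E_past <= E_poss and E_past <= K_pres
induced_rule_holds = E_poss <= K_pres  # true in this toy setup, not guaranteed

# Deductive phase: E_pres <= E_poss plus the induced rule yield
# E_pres <= K_pres by the transitivity of inclusion.
if induced_rule_holds and E_pres <= E_poss:
    print(E_pres <= K_pres)  # True
```

The inductive step is where the fallibility lives: nothing in set theory forces E_poss <= K_pres from the premisses; the sample is merely taken to be fair. The deductive step, by contrast, is just transitivity of inclusion.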

If the observer looks up and does not see dark clouds, or if he runs for shelter but it does not rain, then there is fresh occasion to question the validity of his knowledge base.

I defer discussing the logical basis of such a step until another occasion.

Nota Bene. I am going to skip ahead, as Items 2.3 through 2.5 of the Outline need some work that I am not ready to do just yet, so I'll move on to Chapter 3.

Inquiry and Analogy

This Chapter discusses C.S. Peirce's treatment of analogy, placing it in relation to his overall theory of inquiry. The first order of business is to introduce the three elementary types of reasoning that Peirce adopted from classical logic. In Peirce's analysis both inquiry and analogy are complex programs of reasoning which develop through stages of these three types, although normally in different orders.

Three Types of Reasoning

Types of Reasoning in Aristotle


Types of Reasoning in C.S. Peirce

Here we present one of Peirce's earliest treatments of the three types of reasoning, from his Harvard Lectures of 1865 "On the Logic of Science". It illustrates how one and the same proposition might be reached from three different directions, as the end result of an inference in each of the three modes.

  We have then three different kinds of inference:
  Deduction or inference à priori,
  Induction or inference à particularis,
  Hypothesis or inference à posteriori.
  (C.S. Peirce, CE 1, p. 267).
  If I reason that certain conduct is wise because it has a character which belongs only to wise things, I reason à priori.
  If I think it is wise because it once turned out to be wise, that is, if I infer that it is wise on this occasion because it was wise on that occasion, I reason inductively [à particularis].
  But if I think it is wise because a wise man does it, I then make the pure hypothesis that he does it because he is wise, and I reason à posteriori.
  (C.S. Peirce, CE 1, p. 180).

Suppose we make the following assignments:

  A = "Wisdom",
  B = "a certain character",
  C = "a certain conduct",
  D = "done by a wise man",
  E = "a certain occasion".

Recognizing that a little more concreteness will aid the understanding, let us make the following substitutions in Peirce's example:

  B = "Benevolence", a certain character,
  C = "Contributes to Charity", a certain conduct,
  E = "Earlier today", a certain occasion.

The converging operation of all three reasonings is shown in Figure 5.

|                                                                     |
|  D ("done by a wise man")                                           |
|   o                                                                 |
|    \*                                                               |
|     \ *                                                             |
|      \  *                                                           |
|       \   *                                                         |
|        \    *                                                       |
|         \     *                                                     |
|          \      * A ("a wise act")                                  |
|           \       o                                                 |
|            \     /| *                                               |
|             \   / |   *                                             |
|              \ /  |     *                                           |
|               .   |       o B ("benevolence", a certain character)  |
|              / \  |     *                                           |
|             /   \ |   *                                             |
|            /     \| *                                               |
|           /       o                                                 |
|          /      * C ("contributes to charity", a certain conduct)   |
|         /     *                                                     |
|        /    *                                                       |
|       /   *                                                         |
|      /  *                                                           |
|     / *                                                             |
|    /*                                                               |
|   o                                                                 |
|  E ("earlier today", a certain occasion)                            |
|                                                                     |
Figure 5.  A Thrice Wise Act

The common proposition that concludes each argument is AC, to wit, "contributing to charity is wise".

Deduction could have obtained the Fact AC from the Rule AB, "benevolence is wisdom", along with the Case BC, "contributing to charity is benevolent".

Induction could have gathered the Rule AC, after a manner of saying that "contributing to charity is exemplary of wisdom", from the Fact AE, "the act of earlier today is wise", along with the Case CE, "the act of earlier today was an instance of contributing to charity".

Abduction could have guessed the Case AC, in a style of expression stating that "contributing to charity is explained by wisdom", from the Fact DC, "contributing to charity is done by this wise man", and the Rule DA, "everything that is wise is done by this wise man". Thus, a wise man, who happens to do all of the wise things that there are to do, may nevertheless contribute to charity for no good reason, and even be known to be charitable to a fault. But all of this notwithstanding, on seeing the wise man contribute to charity we may find it natural to conjecture, in effect, to consider it as a possibility worth examining further, that charity is indeed a mark of his wisdom, and not just the accidental trait or the immaterial peculiarity of his character — in essence, that wisdom is the "reason" that he contributes to charity.
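The convergence of the three modes on the proposition AC can be checked with a small script, again treating each proposition as an (antecedent, consequent) pair. This is a sketch under naming assumptions of my own, not anything in Peirce's text.

```python
# Illustrative sketch: deduction, induction, and abduction each arriving at
# the proposition AC = "contributing to charity is wise" (C => A).
# All function and term names are assumptions made for this example.

def deduce(case, rule):
    """From Case C => B and Rule B => A, conclude the Fact C => A."""
    c, b1 = case
    b2, a = rule
    assert b1 == b2
    return (c, a)

def induce(case, fact):
    """From Case E => C and Fact E => A, gather the Rule C => A."""
    e1, c = case
    e2, a = fact
    assert e1 == e2
    return (c, a)

def abduce(fact, rule):
    """From Fact C => D and Rule A => D, guess the Case C => A."""
    c, d1 = fact
    a, d2 = rule
    assert d1 == d2
    return (c, a)

AC = ("Charity", "Wisdom")
assert deduce(("Charity", "Benevolence"), ("Benevolence", "Wisdom")) == AC
assert induce(("Earlier today", "Charity"), ("Earlier today", "Wisdom")) == AC
assert abduce(("Charity", "Done by wise man"), ("Wisdom", "Done by wise man")) == AC
```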

Comparison of the Analyses


Aristotle's "Apagogy" : Abductive Reasoning as Problem Reduction

Peirce's notion of abductive reasoning was derived from Aristotle's treatment of it in the Prior Analytics. Aristotle's discussion begins with an example that may appear incidental, but the question and its analysis are echoes of an important investigation that was pursued in one of Plato's Dialogues, the Meno. This inquiry is concerned with the possibility of knowledge and the relationship between knowledge and virtue, or between their objects, the true and the good. It is not just because it forms a recurring question in philosophy, but because it preserves a certain correspondence between its form and its content, that we shall find this example increasingly relevant to our study.

A couple of notes on the reading may be helpful. The Greek text seems to imply a geometric diagram, in which directed line segments AB, BC, AC are used to indicate logical relations between pairs of the terms in A, B, C. We have two options for reading these line labels, either as implications or as subsumptions, as in the following two paradigms for interpretation.

  Read as Implications:
  "AB" = "A <= B",
  "BC" = "B <= C",
  "AC" = "A <= C".

  Read as Subsumptions:
  "AB" = "A subsumes B",
  "BC" = "B subsumes C",
  "AC" = "A subsumes C".

Here, "X subsumes Y" means that "X applies to all Y", or that "X is predicated of all of Y". When there is no danger of confusion, we may write this as "X >= Y".

  We have Reduction ['apagoge', or 'abduction']: (1) when it is obvious that the first term applies to the middle, but that the middle applies to the last term is not obvious, yet nevertheless is more probable or not less probable than the conclusion; or (2) if there are not many intermediate terms between the last and the middle; for in all such cases the effect is to bring us nearer to knowledge.
  (1) E.g., let A stand for "that which can be taught", B for "knowledge", and C for "morality". Then that knowledge can be taught is evident; but whether virtue is knowledge is not clear. Then if BC is not less probable or is more probable than AC, we have reduction; for we are nearer to knowledge for having introduced an additional term, whereas before we had no knowledge that AC is true.
  (2) Or again we have reduction if there are not many intermediate terms between B and C; for in this case too we are brought nearer to knowledge. E.g., suppose that D is "to square", E "rectilinear figure", and F "circle". Assuming that between E and F there is only one intermediate term -- that the circle becomes equal to a rectilinear figure by means of lunules -- we should approximate to knowledge.
  Aristotle, "Prior Analytics" 2.25, in Aristotle, Volume 1, H.P. Cooke and H. Tredennick (trans.), Loeb Classical Library, William Heinemann, London, UK, 1938.

The method of abductive reasoning bears a close relation to the sense of reduction in which we speak of one question reducing to another. The question being asked is "Can virtue be taught?" The type of answer which develops is the following.

If virtue is a form of understanding, and if we are willing to grant that understanding can be taught, then virtue can be taught. In this way of approaching the problem, by detour and indirection, the form of abductive reasoning is used to shift the attack from the original question, whether virtue can be taught, to the hopefully easier question, whether virtue is a form of understanding.

The logical structure of the process of hypothesis formation in the first example follows the pattern of "abduction to a case", whose abstract form is diagrammed and schematized in Figure 6.

|                                                 |
|             T  =  Teachable                     |
|             o                                   |
|             ^^                                  |
|             | \                                 |
|             |  \                                |
|             |   \                               |
|             |    \                              |
|             |     \   R U L E                   |
|             |      \                            |
|             |       \                           |
|         F   |        \                          |
|             |         \                         |
|         A   |          \                        |
|             |           o U  =  Understanding   |
|         C   |          ^                        |
|             |         /                         |
|         T   |        /                          |
|             |       /                           |
|             |      /                            |
|             |     /   C A S E                   |
|             |    /                              |
|             |   /                               |
|             |  /                                |
|             | /                                 |
|             |/                                  |
|             o                                   |
|             V  =  Virtue                        |
|                                                 |
| T  =  Teachable (didacton)                      |
| U  =  Understanding (epistemé)                  |
| V  =  Virtue (areté)                            |
|                                                 |
| T is the Major term                             |
| U is the Middle term                            |
| V is the Minor term                             |
|                                                 |
| TV  =  "T of V"  =  Fact in Question            |
| TU  =  "T of U"  =  Rule in Evidence            |
| UV  =  "U of V"  =  Case in Question            |
|                                                 |
| Schema for Abduction to a Case:                 |
|                                                 |
|  Fact:  V => T?                                 |
|  Rule:  U => T.                                 |
| ----------------                                |
|  Case:  V => U?                                 |
Figure 6.  Teachability, Understanding, Virtue
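The schema of abduction to a case admits the same mechanical treatment: given the Fact in question V => T and the Rule in evidence U => T, the abductive step conjectures the Case V => U. A minimal sketch, with names assumed for illustration:

```python
# Illustrative sketch of "abduction to a case": conjecture V => U from the
# questioned Fact V => T and the evidenced Rule U => T. Names are assumed.

def abduce_case(fact, rule):
    v, t1 = fact
    u, t2 = rule
    assert t1 == t2, "Fact and Rule must share the major term"
    return (v, u)

case = abduce_case(("Virtue", "Teachable"), ("Understanding", "Teachable"))
```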

Toward a Functional Conception of Quantificational Logic

Up till now quantification theory has been based on the assumption of individual variables ranging over universal collections of perfectly determinate elements. Merely to write down quantified formulas like \(\forall_{x \in X} F(x)\) and \(\exists_{x \in X} F(x)\) involves a subscription to such notions, as shown by the membership relations invoked in their indices. Reflected on pragmatic and constructive principles, however, these ideas begin to appear as problematic hypotheses whose warrants are not beyond question, projects of exhaustive determination that overreach the powers of finite information and control to manage. Therefore, it is worth considering how we might shift the scene of quantification theory closer to familiar ground, toward the predicates themselves that represent our continuing acquaintance with phenomena.

Higher Order Propositional Expressions

By way of equipping this inquiry with a bit of concrete material, I begin with a consideration of higher order propositional expressions (HOPE's), in particular, those that stem from the propositions on 1 and 2 variables.

Higher Order Propositions and Logical Operators (n = 1)

A higher order proposition is, very roughly speaking, a proposition about propositions. If the original order of propositions is a class of indicator functions {F : X → B}, then the next higher order of propositions consists of maps of the type m : (X → B) → B, where, as usual, B = {0, 1}.

For example, consider the case where X = B1 = B. Then there are exactly four propositions of the form F : B → B, and exactly sixteen higher order propositions, all of the type m : (B → B) → B. Table 7 lists the sixteen higher order propositions about propositions on one boolean variable, organized in the following fashion:

Columns 1 and 2 form a truth table for the four F : B → B, perhaps turned on its side from the way one is accustomed to see truth tables, with the row leaders in Column 1 displaying the names of the functions Fi, i = 0 to 3, while the entries in Column 2 give the values of each function for the argument values that are listed in the column head. Column 3 displays one of the usual expressions for the proposition in question. The last sixteen columns are topped by a set of conventional names for the higher order propositions, also known as the "measures" mj, for j = 0 to 15, where the entries in the body of the Table record the values that each mj assigns to each Fi.

Table 7. Higher Order Propositions (n = 1)
F \ x   1   0   F(x)    m00 m01 m02 m03 m04 m05 m06 m07 m08 m09 m10 m11 m12 m13 m14 m15
F0      0   0    0       0   1   0   1   0   1   0   1   0   1   0   1   0   1   0   1
F1      0   1   (x)      0   0   1   1   0   0   1   1   0   0   1   1   0   0   1   1
F2      1   0    x       0   0   0   0   1   1   1   1   0   0   0   0   1   1   1   1
F3      1   1    1       0   0   0   0   0   0   0   0   1   1   1   1   1   1   1   1
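The scheme of Table 7 can be reproduced computationally. In the following sketch the indexing conventions (F(1) as the high bit of i, and m_j assigning to F_i the i-th bit of j) are assumptions that match the table above:

```python
# Illustrative sketch: the 4 propositions F_i : B -> B and the 16 measures
# m_j : (B -> B) -> B of Table 7, under the assumed indexing conventions.

def F(i):
    """The i-th proposition B -> B, with F(1) as the high bit of i."""
    return lambda x: (i >> x) & 1

def m(j):
    """The j-th measure: m_j assigns to F_i the i-th bit of j."""
    def measure(f):
        i = 2 * f(1) + f(0)
        return (j >> i) & 1
    return measure

# m_15 accepts every proposition, m_0 accepts none.
assert all(m(15)(F(i)) == 1 for i in range(4))
assert all(m(0)(F(i)) == 0 for i in range(4))
```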

I am going to put off explaining Table 8, which presents a sample of what I call interpretive categories for higher order propositions, until we get beyond the 1-dimensional case. These lower dimensional cases tend to be condensed or degenerate in their structures, and much of what is going on here will become clearer as soon as we have even two logical variables in the mix.

Table 8. Interpretive Categories for Higher Order Propositions (n = 1)
Measure   Happening          Exactness         Existence            Linearity          Uniformity          Information
m0        nothing happens
m1                           just false        nothing exists
m2                           just not x
m3                                             nothing is x
m4                           just x
m5                                             everything is x      F is linear
m6                                                                                     F is not uniform    F is informed
m7                           not just true
m8                           just true
m9                                                                                     F is uniform        F is not informed
m10                                            something is not x   F is not linear
m11                          not just x
m12                                            something is x
m13                          not just not x
m14                          not just false    something exists
m15       anything happens

Higher Order Propositions and Logical Operators (n = 2)

By way of reviewing notation and preparing to extend it to higher order universes of discourse, let us first consider the universe of discourse X° = [X] = [x1, x2] = [x, y], based on two logical features or boolean variables x and y.

1. The points of X° are collected in the space:
  X = <<x, y>> = {<x, y>} ≅ B2.
  In other words, written out in full:
  X = {<"(x)", "(y)">, <"(x)", "y">, <"x", "(y)">, <"x", "y">}
  X ≅ {<0, 0>, <0, 1>, <1, 0>, <1, 1>}
2. The propositions of X° make up the space:
  X↑ = (X → B) = {f : X → B} ≅ (B2 → B).

As always, it is frequently convenient to omit a few of the finer markings of distinctions among isomorphic structures, so long as one is aware of their presence and knows when it is crucial to call upon them again.

The next higher order universe of discourse that is built on X° is X°2 = [X°] = [[x, y]], which may be developed in the following way. The propositions of X° become the points of X°2, and the mappings of the type m : (X → B) → B become the propositions of X°2. In addition, it is convenient to equip the discussion with a selected set of higher order operators on propositions, all of which have the form w : (B2 → B)k → B.

To save a few words in the remainder of this discussion, I will use the terms "measure" and "qualifier" to refer to all types of higher order propositions and operators. To describe the present setting in picturesque terms, the propositions of [x, y] may be regarded as a gallery of sixteen venn diagrams, while the measures m : (X → B) → B are analogous to a body of judges or a panel of critical viewers, each of whom evaluates each of the pictures as a whole and reports which ones find favor or not. In this way, each judge mj partitions the gallery of pictures into two aesthetic portions, the pictures (mj)^(–1)(1) that mj likes and the pictures (mj)^(–1)(0) that mj dislikes.

There are 2^16 = 65536 measures of the type m : (B2 → B) → B. Table 9 introduces the first 24 of these measures in the fashion of the higher order truth table that I used before. The column headed "mj" shows the values of the measure mj on each of the propositions fi : B2 → B, for i = 0 to 15 and j = 0 to 23, with blank entries in the Table being optional for values of zero. The arrangement of measures that continues according to the plan indicated here is referred to as the "standard ordering" of these measures. In this scheme of things, the index j of the measure mj is the decimal equivalent of the bit string that is associated with mj's functional values, which can be obtained in turn by reading the jth column of binary digits in the Table as the corresponding range of boolean values, taking them up in the order from bottom to top.

Table 9. Higher Order Propositions (n = 2)
x : 1100 f m m m m m m m m m m m m m m m m m m m m m m m m
y : 1010   0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
f0 0000 ( ) 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1
f1 0001 (x)(y)     1 1 0 0 1 1 0 0 1 1 0 0 1 1 0 0 1 1 0 0 1 1
f2 0010 (x) y         1 1 1 1 0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1
f3 0011 (x)                 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0
f4 0100 x (y)                                 1 1 1 1 1 1 1 1
f5 0101 (y)                                                
f6 0110 (x, y)                                                
f7 0111 (x y)                                                
f8 1000 x y                                                
f9 1001 ((x, y))                                                
f10 1010 y                                                
f11 1011 (x (y))                                                
f12 1100 x                                                
f13 1101 ((x) y)                                                
f14 1110 ((x)(y))                                                
f15 1111 (( ))                                                

Umpire Operators

In order to get a handle on the space of higher order propositions and eventually to carry out a functional approach to quantification theory, it serves to construct some specialized tools. Specifically, I define a higher order operator Υ, called the "umpire operator", which takes up to three propositions as arguments and returns a single truth value as the result. Formally, this so-called "multi-grade" property of \(\Upsilon\!\) can be expressed as a union of function types, in the following manner:

\[\Upsilon : \bigcup_{m = 1}^{3} ((\mathbb{B}^k \to \mathbb{B})^m \to \mathbb{B}).\!\]

In contexts of application the intended sense can be discerned by the number of arguments that actually appear in the argument list. Often, the first and last arguments appear as indices, the one in the middle being treated as the main argument while the other two arguments serve to modify the sense of the operation in question. Thus, we have the following forms:

Υ_p^r q = Υ(p, q, r)
Υ_p^r : (Bk → B) → B

The intention of this operator is that we evaluate the proposition q on each model of the proposition p and combine the results according to the method indicated by the connective parameter r. In principle, the index r might specify any connective on as many as 2^k arguments, but usually we have in mind a much simpler form of combination, most often either collective products or collective sums. By convention, each of the accessory indices p, r is assigned a default value that is understood to be in force when the corresponding argument place is left blank, specifically, the constant proposition 1 : Bk → B for the lower index p, and the continued conjunction or continued product operation Π for the upper index r. Taking the upper default value gives license to the following readings:

1. Υp q = Υ(p, q) = Υ(p, q, Π).
2. Υp = Υ(p, __, Π) : (Bk → B) → B.

This means that Υp q = 1 if and only if q holds for all models of p. In propositional terms, this is tantamount to the assertion that p ⇒ q, or that (p (q)) = 1.

Throwing in the lower default value permits the following abbreviations:

3. Υq = Υ(q) = Υ1 q = Υ(1, q, Π).
4. Υ = Υ(1, __, Π) : (Bk → B) → B.

This means that Υq = 1 if and only if q holds for the whole universe of discourse in question, that is, if and only if q is the constantly true proposition 1 : Bk → B. The ambiguities of this usage are not a problem so long as we distinguish the context of definition from the context of application and restrict all shorthand notations to the latter.
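Under its default settings the umpire operator thus reduces to a test of implication over the models of p, which can be sketched as follows for k = 2 (the function and proposition names here are assumptions made for the example):

```python
# Illustrative sketch of the umpire operator with both defaults in force:
# upsilon(p, q) = 1 iff q holds on every model of p, i.e. iff p => q.

from itertools import product

def upsilon(p, q, k=2):
    return int(all(q(*cell) for cell in product((0, 1), repeat=k) if p(*cell)))

one  = lambda x, y: 1            # the constant proposition 1
conj = lambda x, y: x & y        # "x y"
disj = lambda x, y: x | y        # "((x)(y))"

assert upsilon(conj, disj) == 1  # x y => x or y, so the umpire approves
assert upsilon(disj, conj) == 0  # x or y does not imply x y
assert upsilon(one, conj) == 0   # with p = 1, the test ranges over the whole universe
```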

Measure for Measure

An acquaintance with the functions of the umpire operator can be gained from Tables 10 and 11, where the 2-dimensional case is worked out in full.

The auxiliary notations:

αi f = Υ(fi, f),
βi f = Υ(f, fi),

define two series of measures:

αi, βi : (B2 → B) → B,

incidentally providing compact names for the column headings of the next two Tables.

Table 10. Qualifiers of Implication Ordering: αi f = Υ(fi, f)
x : 1100 f α α α α α α α α α α α α α α α α
y : 1010   15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0
f0 0000 ( )                               1
f1 0001 (x)(y)                             1 1
f2 0010 (x) y                           1   1
f3 0011 (x)                         1 1 1 1
f4 0100 x (y)                       1       1
f5 0101 (y)                     1 1     1 1
f6 0110 (x, y)                   1   1   1   1
f7 0111 (x y)                 1 1 1 1 1 1 1 1
f8 1000 x y               1               1
f9 1001 ((x, y))             1 1             1 1
f10 1010 y           1   1           1   1
f11 1011 (x (y))         1 1 1 1         1 1 1 1
f12 1100 x       1       1       1       1
f13 1101 ((x) y)     1 1     1 1     1 1     1 1
f14 1110 ((x)(y))   1   1   1   1   1   1   1   1
f15 1111 (( )) 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1

Table 11. Qualifiers of Implication Ordering: βi f = Υ(f, fi)
x : 1100 f β β β β β β β β β β β β β β β β
y : 1010   0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
f0 0000 ( ) 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
f1 0001 (x)(y)   1   1   1   1   1   1   1   1
f2 0010 (x) y     1 1     1 1     1 1     1 1
f3 0011 (x)       1       1       1       1
f4 0100 x (y)         1 1 1 1         1 1 1 1
f5 0101 (y)           1   1           1   1
f6 0110 (x, y)             1 1             1 1
f7 0111 (x y)               1               1
f8 1000 x y                 1 1 1 1 1 1 1 1
f9 1001 ((x, y))                   1   1   1   1
f10 1010 y                     1 1     1 1
f11 1011 (x (y))                       1       1
f12 1100 x                         1 1 1 1
f13 1101 ((x) y)                           1   1
f14 1110 ((x)(y))                             1 1
f15 1111 (( ))                               1

Applied to a given proposition f, the qualifiers αi and βi tell whether f rests "above fi" or "below fi", respectively, in the implication ordering. By way of example, let us trace the effects of several such measures, namely, those that occupy the limiting positions of the Tables.

  α0 f = 1 iff f0 ⇒ f, iff 0 ⇒ f, hence α0 f = 1 for all f.
  α15 f = 1 iff f15 ⇒ f, iff 1 ⇒ f, hence α15 f = 1 iff f = 1.
  β0 f = 1 iff f ⇒ f0, iff f ⇒ 0, hence β0 f = 1 iff f = 0.
  β15 f = 1 iff f ⇒ f15, iff f ⇒ 1, hence β15 f = 1 for all f.

Thus, α0 = β15 is a totally indiscriminate measure, one that accepts all propositions f : B2B, whereas α15 and β0 are measures that value the constant propositions 1 : B2B and 0 : B2B, respectively, above all others.
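These limiting cases can be verified mechanically; the helper `implies` below is an assumed name for the implication test that each qualifier performs:

```python
# Illustrative sketch: checking the limiting qualifiers alpha_0, alpha_15,
# beta_0, beta_15 over B^2, where f_0 = 0 and f_15 = 1. Names are assumed.

from itertools import product

CELLS = list(product((0, 1), repeat=2))

def implies(f, g):
    """1 iff f => g, i.e. g holds on every model of f."""
    return int(all(g(*c) for c in CELLS if f(*c)))

f0  = lambda x, y: 0       # the constantly false proposition
f15 = lambda x, y: 1       # the constantly true proposition
g   = lambda x, y: x       # a sample proposition strictly between 0 and 1

assert implies(f0, g) == 1      # alpha_0 accepts every proposition
assert implies(g, f15) == 1     # beta_15 accepts every proposition
assert implies(f15, g) == 0     # alpha_15 rejects g, accepting only 1
assert implies(g, f0) == 0      # beta_0 rejects g, accepting only 0
```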

Finally, in conformity with the use of the fiber notation to indicate sets of models, it is natural to use notations like:

[| αi |] = (αi)^(–1)(1),
[| βi |] = (βi)^(–1)(1),
[| Υp |] = (Υp)^(–1)(1),

to denote sets of propositions that satisfy the umpires in question.

Extending the Existential Interpretation to Quantificational Logic

Previously I introduced a calculus for propositional logic, fixing its meaning according to what C.S. Peirce called the "existential interpretation". As far as propositional calculus is concerned, this interpretation settles the meanings of only the most basic symbols and logical connectives. Now we must extend and refine the existential interpretation to comprehend the analysis of "quantifications", that is, quantified propositions. In doing so we recognize two additional aspects of logic that need to be developed, over and above the material of propositional logic. At the formal extreme there is the aspect of higher order functional types, into which we have already ventured a little above. At the level of the fundamental content of the available propositions we have to introduce a different interpretation for what we may call "elemental" or "singular" propositions.

Let us return to the 2-dimensional case X° = [x, y]. In order to provide a bridge between propositions and quantifications it serves to define a set of qualifiers Luv : (B2 → B) → B that have the following characters:

  L00 f
  = L"(x)(y)" f
  = α1 f
  = Υ"(x)(y)" f
  = Υ"(x)(y) ⇒ f"
  = "f likes (x)(y)"

  L01 f
  = L"(x) y " f
  = α2 f
  = Υ"(x) y " f
  = Υ"(x) y ⇒ f"
  = "f likes (x) y "

  L10 f
  = L" x (y)" f
  = α4 f
  = Υ"x (y)" f
  = Υ"x (y) ⇒ f"
  = "f likes x (y)"

  L11 f
  = L" x y" f
  = α8 f
  = Υ"x y" f
  = Υ"x y ⇒ f"
  = "f likes x y"

Intuitively, the Luv operators may be thought of as qualifying propositions according to the elements of the universe of discourse that each proposition positively values. Taken together, these measures provide us with the means to express many useful observations about the propositions in X° = [x, y], and so they mediate a subtext [L00, L01, L10, L11] that takes place within the higher order universe of discourse X°2 = [X°] = [[x, y]]. Figure 12 summarizes the action of the Luv on the fi within X°2.
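In computational terms each Luv simply reads off the value of f on the cell (u, v); the following minimal sketch assumes that reading:

```python
# Illustrative sketch: L_uv f = 1 iff f positively values the cell (u, v),
# i.e. iff "f likes" the corresponding conjunction of features.

def L(u, v):
    return lambda f: f(u, v)

f = lambda x, y: int(x != y)      # the exclusive disjunction (x, y)

assert [L(u, v)(f) for (u, v) in ((0, 0), (0, 1), (1, 0), (1, 1))] == [0, 1, 1, 0]
```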

|                                                           |
|                             o                             |
|                            / \                            |
|                           /   \                           |
|                          /x   y\                          |
|                         / o---o \                         |
|                        o   \ /   o                        |
|                       / \   o   / \                       |
|                      /   \  |  /   \                      |
|                     /     \ @ /     \                     |
|                    / x   y \ / x   y \                    |
|                   o  o---o  o  o---o  o                   |
|                  / \  \    / \    /  / \                  |
|                 /   \  @  /   \  @  /   \                 |
|                /     \   /     \   /     \                |
|               /   y   \ /       \ /   y   \               |
|              o    @    o    @    o    o    o              |
|             / \       / \       / \   |   / \             |
|            /   \     /   \     /   \  @  /   \            |
|           /     \   /x   y\   /     \   /     \           |
|          /  x y  \ / o   o \ /  x y  \ / x   y \          |
|         o    @    o   \ /   o    o    o  o   o  o         |
|         |\       / \   o   / \   |   / \  \ /  /|         |
|         | \     /   \  |  /   \  @  /   \  @  / |         |
|         |  \   /     \ @ /     \   /     \   /  |         |
|         |   \ /   x   \ / x   y \ /   x   \ /   |         |
|         |    o    @    o  o---o  o    o    o    |         |
|         |    |\       / \  \ /  / \   |   /|    |         |
|         |    | \     /   \  @  /   \  @  / |    |         |
|         |    |  \   /     \   /     \   /  |    |         |
|         |L_11|   \ /   o y \ / x o   \ /   |L_00|         |
|         o---------o    |    o    |    o---------o         |
|              |     \ x @   / \   @ y /     |              |
|              |      \     /   \     /      |              |
|              |       \   /     \   /       |              |
|              |L_10    \ /   o   \ /    L_01|              |
|              o---------o    |    o---------o              |
|                         \   @   /                         |
|                          \     /                          |
|                           \   /                           |
|                            \ /                            |
|                             o                             |
|                                                           |
Figure 12.  Higher Order Universe of Discourse [L_uv] ⊆ [[x, y]]

Application of Higher Order Propositions to Quantification Theory

Our excursion into the vastening landscape of higher order propositions has finally come round to the stage where we can bring its returns to bear on opening up new perspectives for quantificational logic.

There is a question arising next that is still experimental in my mind. Whether it makes much difference from a purely formal point of view is not a question I can answer yet, but it does seem to aid the intuition to invent a slightly different interpretation for the two-valued space that we use as the target of our basic indicator functions. Therefore, let us declare a type of "existential-valued" functions f : Bk → E, where E = {–e, +e} = {"empty", "exist"} is a pair of values that we interpret as indicating whether or not anything exists in the cells of the underlying universe of discourse, venn diagram, or other domain. As usual, let us not be too strict about the coding of these functions, reverting to binary codes whenever the interpretation is clear enough.

With this interpretation in mind we note the following correspondences between classical quantifications and higher order indicator functions:

Table 13. Syllogistic Premisses as Higher Order Indicator Functions
A Universal Affirmative All x is y Indicator of " x (y)" = 0
E Universal Negative All x is (y) Indicator of " x y " = 0
I Particular Affirmative Some x is y Indicator of " x y " = 1
O Particular Negative Some x is (y) Indicator of " x (y)" = 1

Tables 14 and 15 develop these ideas in more detail.

Table 14. Relation of Quantifiers to Higher Order Propositions
Mnemonic Category Classical Form Alternate Form Symmetric Form Operator
All x is (y)   No x is y (L11)
All x is y   No x is (y) (L10)
    All y is x No y is (x) No (x) is y (L01)
    All (y) is x No (y) is (x) No (x) is (y) (L00)
    Some (x) is (y)   Some (x) is (y) L00
    Some (x) is y   Some (x) is y L01
Some x is (y)   Some x is (y) L10
Some x is y   Some x is y L11

Table 15. Simple Qualifiers of Propositions (n = 2)
x : 1100   f          (L11)    (L10)    (L01)    (L00)    L00      L01      L10      L11
y : 1010              no x     no x     no (x)   no (x)   some (x) some (x) some x   some x
                      is y     is (y)   is y     is (y)   is (y)   is y     is (y)   is y
f0 0000 ( ) 1 1 1 1 0 0 0 0
f1 0001 (x)(y) 1 1 1 0 1 0 0 0
f2 0010 (x) y 1 1 0 1 0 1 0 0
f3 0011 (x) 1 1 0 0 1 1 0 0
f4 0100 x (y) 1 0 1 1 0 0 1 0
f5 0101 (y) 1 0 1 0 1 0 1 0
f6 0110 (x, y) 1 0 0 1 0 1 1 0
f7 0111 (x y) 1 0 0 0 1 1 1 0
f8 1000 x y 0 1 1 1 0 0 0 1
f9 1001 ((x, y)) 0 1 1 0 1 0 0 1
f10 1010 y 0 1 0 1 0 1 0 1
f11 1011 (x (y)) 0 1 0 0 1 1 0 1
f12 1100 x 0 0 1 1 0 0 1 1
f13 1101 ((x) y) 0 0 1 0 1 0 1 1
f14 1110 ((x)(y)) 0 0 0 1 0 1 1 1
f15 1111 (( )) 0 0 0 0 1 1 1 1
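The reading of Table 13, as developed in Tables 14 and 15, can be tested in miniature; the helper names below are assumptions made for the example:

```python
# Illustrative sketch: classical quantifiers as indicators of single-cell
# occupancy for a proposition f over B^2 (helper names are assumptions).

def occupied(f, u, v):
    """Particular premiss: the cell (u, v) in the venn diagram of f is occupied."""
    return int(f(u, v) == 1)

def empty(f, u, v):
    """Universal premiss: the cell (u, v) in the venn diagram of f is empty."""
    return int(f(u, v) == 0)

f = lambda x, y: int(x <= y)   # f = (x (y)), read "all x is y"

assert empty(f, 1, 0) == 1     # A: All x is y       <=> indicator of "x (y)" = 0
assert occupied(f, 1, 1) == 1  # I: Some x is y      (cell "x y" occupied here)
assert empty(f, 1, 1) == 0     # E: No x is y        fails for this f
assert occupied(f, 1, 0) == 0  # O: Some x is (y)    fails for this f
```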


Primary sources

  • Dewey, John (1910), How We Think, D.C. Heath, Lexington, MA, 1910. Reprinted, Prometheus Books, Buffalo, NY, 1991.
  • Dewey, John (1938), Logic: The Theory of Inquiry, Henry Holt and Company, New York, NY, 1938. Reprinted, pp. 1–527 in John Dewey, The Later Works, 1925–1953, Volume 12: 1938, Jo Ann Boydston (ed.), Kathleen Poulos (text. ed.), Ernest Nagel (intro.), Southern Illinois University Press, Carbondale and Edwardsville, IL, 1986.
  • Peirce, C.S. (1931–1935, 1958), Collected Papers of Charles Sanders Peirce, vols. 1–6, Charles Hartshorne and Paul Weiss (eds.), vols. 7–8, Arthur W. Burks (ed.), Harvard University Press, Cambridge, MA. Cited as CP volume.paragraph.
  • Peirce, C.S. (1981–), Writings of Charles S. Peirce : A Chronological Edition, Peirce Edition Project (eds.), Indiana University Press, Bloomington and Indianapolis, IN. Cited as CE volume, page.
  • Peirce, C.S. (1865), "Harvard Lectures 'On the Logic of Science'", Writings of Charles S. Peirce : A Chronological Edition, Volume 1, 1857–1866, Peirce Edition Project (eds.), Indiana University Press, Bloomington, IN, 1982.

Secondary sources

  • Awbrey, S.M., and Awbrey, J.L. (May 2001), “Conceptual Barriers to Creating Integrative Universities”, Organization : The Interdisciplinary Journal of Organization, Theory, and Society 8(2), Sage Publications, London, UK, pp. 269–284.
  • Awbrey, S.M., and Awbrey, J.L. (September 1999), “Organizations of Learning or Learning Organizations : The Challenge of Creating Integrative Universities for the Next Century”, Second International Conference of the Journal ‘Organization’, Re-Organizing Knowledge, Trans-Forming Institutions : Knowing, Knowledge, and the University in the 21st Century, University of Massachusetts, Amherst, MA.
  • Awbrey, J.L., and Awbrey, S.M. (Autumn 1995), “Interpretation as Action : The Risk of Inquiry”, Inquiry : Critical Thinking Across the Disciplines 15(1), pp. 40–52.
  • Awbrey, J.L., and Awbrey, S.M. (June 1992), “Interpretation as Action : The Risk of Inquiry”, The Eleventh International Human Science Research Conference, Oakland University, Rochester, Michigan.
  • Awbrey, S.M., and Awbrey, J.L. (May 1991), “An Architecture for Inquiry : Building Computer Platforms for Discovery”, Proceedings of the Eighth International Conference on Technology and Education, Toronto, Canada, pp. 874–875.
  • Awbrey, J.L., and Awbrey, S.M. (January 1991), “Exploring Research Data Interactively : Developing a Computer Architecture for Inquiry”, Poster presented at the Annual Sigma Xi Research Forum, University of Texas Medical Branch, Galveston, TX.
  • Awbrey, J.L., and Awbrey, S.M. (August 1990), “Exploring Research Data Interactively. Theme One : A Program of Inquiry”, Proceedings of the Sixth Annual Conference on Applications of Artificial Intelligence and CD-ROM in Education and Training, Society for Applied Learning Technology, Washington, DC, pp. 9–15.
  • Haack, Susan (1993), Evidence and Inquiry : Towards Reconstruction in Epistemology, Blackwell Publishers, Oxford, UK.
  • Kneale, William, and Kneale, Martha, The Development of Logic, Oxford University Press, London, UK, 1962.

Document History

| Introduction to Inquiry Driven Systems
| Author:   Jon Awbrey
| Version:  Draft 12.03
| Created:  01 Aug 1996
| Revised:  20 Aug 2002

Amalgamates the following:

| Inquiry and Analogy
| Author:   Jon Awbrey
| Version:  Draft 3.24
| Created:  01 Jan 1995
| Revised:  28 Jul 2002
| Aspects of Inquiry
| Author:   Jon Awbrey
| Version:  Draft 11.30
| Created:  04 Aug 1996
| Revised:  31 Oct 2001
| Approaches to Inquiry
| Author:   Jon Awbrey
| Version:  Draft 6.30
| Created:  20 Aug 1996
| Revised:  26 Jul 2002