First, <math>L\!</math> can be associated with a logical predicate or a proposition that says something about the space of effects, being true of certain effects and false of all others.  In this way, <math>{}^{\backprime\backprime} L {}^{\prime\prime}</math> can be interpreted as naming a function from <math>\textstyle\prod_i X_i</math> to the domain of truth values <math>\mathbb{B} = \{ 0, 1 \}.</math>  With the appropriate understanding, it is permissible to let the notation <math>{}^{\backprime\backprime} L : X_1 \times \ldots \times X_k \to \mathbb{B} {}^{\prime\prime}</math> indicate this interpretation.
  
Second, <math>L\!</math> can be associated with a piece of information that allows one to complete various sorts of partial data sets in the space of effects.  In particular, if one is given a partial effect or an incomplete <math>k\!</math>-tuple, say, one that is missing a value in the <math>j^\text{th}\!</math> place, as indicated by the notation <math>{}^{\backprime\backprime} (x_1, \ldots, \hat{j}, \ldots, x_k) {}^{\prime\prime},</math> then <math>{}^{\backprime\backprime} L {}^{\prime\prime}</math> can be interpreted as naming a function from the cartesian product of the domains at the filled places to the power set of the domain at the missing place.  With this in mind, it is permissible to let <math>{}^{\backprime\backprime} L : X_1 \times \ldots \times \hat{j} \times \ldots \times X_k \to \mathrm{Pow}(X_j) {}^{\prime\prime}</math> indicate this use of <math>{}^{\backprime\backprime} L {}^{\prime\prime}.</math>  If the sets in the range of this function are all singletons, then it is permissible to let <math>{}^{\backprime\backprime} L : X_1 \times \ldots \times \hat{j} \times \ldots \times X_k \to X_j {}^{\prime\prime}</math> specify the corresponding use of <math>{}^{\backprime\backprime} L {}^{\prime\prime}.</math>
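
To make the two readings concrete, the following Python sketch (the relation, its domains, and the helper names are illustrative assumptions, not drawn from the text) treats a small dyadic relation first as a proposition on its space of effects and then as a map that completes a partial tuple.

<pre>
# Illustrative sketch only: a finite dyadic relation L on X1 x X2, read
# (1) as a predicate of type X1 x X2 -> B and (2) as a completion map
# of type X1 -> Pow(X2) that fills in the missing second place.
X1 = {1, 2, 3}
X2 = {'a', 'b'}
L  = {(1, 'a'), (2, 'a'), (2, 'b')}        # L as a subset of X1 x X2

def L_pred(x1, x2):
    """Reading 1: L as a proposition, true of certain effects."""
    return (x1, x2) in L                   # a truth value, an element of B

def L_fill(x1):
    """Reading 2: L as a map from the filled place to Pow(X2)."""
    return {x2 for x2 in X2 if (x1, x2) in L}

assert L_pred(2, 'b') and not L_pred(3, 'a')
assert L_fill(2) == {'a', 'b'} and L_fill(3) == set()
</pre>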
  
 
In general, the indicated degrees of freedom in the interpretation of relation symbols can be exploited properly only if one understands the consequences of this interpretive liberality and is prepared to deal with the problems that arise from its &ldquo;polymorphic&rdquo; practices &mdash; from using the same sign in different contexts to refer to different types of objects.  For example, one should consider what happens, and what sort of IF is demanded to deal with it, when the name <math>{}^{\backprime\backprime} L {}^{\prime\prime}</math> is used equivocally in a statement like <math>L = L^{-1}(1),\!</math> where a sensible reading requires it to denote the relational set <math>L \subseteq \textstyle\prod_i X_i</math> on the first appearance and the propositional function <math>L : \textstyle\prod_i X_i \to \mathbb{B}</math> on the second appearance.
  
A '''triadic relation''' is a relation on an ordered triple of nonempty sets.  Thus, <math>L\!</math> is a triadic relation on <math>(X, Y, Z)\!</math> if and only if <math>L \subseteq X \times Y \times Z.\!</math>  Exercising a proper degree of flexibility with notation, one can use the name of a triadic relation <math>L \subseteq X \times Y \times Z\!</math> to refer to a logical predicate or a propositional function, of the type <math>X \times Y \times Z \to \mathbb{B},\!</math> or any one of the derived binary operations, of the three types <math>X \times Y \to \mathrm{Pow}(Z),\!</math> <math>X \times Z \to \mathrm{Pow}(Y),\!</math> and <math>Y \times Z \to \mathrm{Pow}(X).\!</math>
  
A '''binary operation''' or '''law of composition''' (LOC) on a nonempty set <math>X\!</math> is a triadic relation <math>* \subseteq X \times X \times X\!</math> that is also a function <math>* : X \times X \to X.\!</math>  The notation <math>{}^{\backprime\backprime} x * y {}^{\prime\prime}\!</math> is used to indicate the functional value <math>*(x, y) \in X,~\!</math> which is also referred to as the '''product''' of <math>x\!</math> and <math>y\!</math> under <math>*.\!</math>
  
 
A binary operation or LOC <math>*\!</math> on <math>X\!</math> is '''associative''' if and only if <math>(x*y)*z = x*(y*z)\!</math> for every <math>x, y, z \in X.\!</math>
 
A '''monoid''' is a semigroup with a unit element.  Formally, a monoid <math>\underline{X}\!</math> is an ordered triple <math>(X, *, e),\!</math> where <math>X\!</math> is a set, <math>*\!</math> is an associative LOC on the set <math>X,\!</math> and <math>e\!</math> is the unit element in the semigroup <math>(X, *).\!</math>
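
For a finite system given by an explicit operation table, the semigroup and monoid conditions can be verified by brute force.  The following Python sketch uses a toy operation (assumed purely for illustration) to check associativity and to search for a unit element.

<pre>
# Illustrative sketch: brute-force checks that a finite LOC is associative
# and has a unit element, using addition mod 2 on X = {0, 1} as a toy table.
X = [0, 1]
star = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

associative = all(star[(star[(x, y)], z)] == star[(x, star[(y, z)])]
                  for x in X for y in X for z in X)
units = [e for e in X if all(star[(e, x)] == x == star[(x, e)] for x in X)]

assert associative and units == [0]        # (X, star, 0) is a monoid
</pre>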
  
An '''inverse''' of an element <math>x\!</math> in a monoid <math>\underline{X} = (X, *, e)\!</math> is an element <math>y \in X\!</math> such that <math>x*y = e = y*x.\!</math>  An element that has an inverse in <math>\underline{X}\!</math> is said to be '''invertible''' (relative to <math>*\!</math> and <math>e\!</math>).  If <math>x\!</math> has an inverse in <math>{\underline{X}},\!</math> then it is unique to <math>x.\!</math>  To see this, suppose that <math>y'\!</math> is also an inverse of <math>x.\!</math>  Then it follows that:
  
 
{| align="center" cellspacing="8" width="90%"
| <math>y' ~=~ y' * e ~=~ y' * (x * y) ~=~ (y' * x) * y ~=~ e * y ~=~ y.</math>
|}

It is customary to use a number of abbreviations and conventions in discussing semigroups, monoids, and groups.  A system <math>\underline{X} = (X, *)\!</math> is given the adjective ''commutative'' if and only if <math>*\!</math> is commutative.  Commutative groups, however, are traditionally called ''abelian groups''.  By way of making comparisons with familiar systems and operations, the following usages are also common.
  
One says that <math>\underline{X}\!</math> is '''written multiplicatively''' to mean that a raised dot <math>{(\cdot)}\!</math> or concatenation is used instead of a star for the LOC.  In this case, the unit element is commonly written as an ordinary algebraic one, <math>1,\!</math> while the inverse of an element <math>x\!</math> is written as <math>x^{-1}.\!</math>  The multiplicative manner of presentation is the one that is usually taken by default in the most general types of situations.  In the multiplicative idiom, the following definitions of ''powers'', ''cyclic groups'', and ''generators'' are also common.
  
 
: In a semigroup, the <math>n^\text{th}\!</math> '''power''' of an element <math>x\!</math> is notated as <math>x^n\!</math> and defined for every positive integer <math>n\!</math> in the following manner.  Proceeding recursively, let <math>x^1 = x\!</math> and let <math>x^n = x^{n-1} \cdot x\!</math> for all <math>n > 1.\!</math>
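
The recursion can be transcribed directly (a sketch only; the operation is passed in as a parameter and is assumed to be associative).

<pre>
# Illustrative sketch: the n-th power of x in a semigroup, computed by the
# recursion x^1 = x and x^n = x^(n-1) * x for n > 1.
def power(x, n, star):
    if n == 1:
        return x
    return star(power(x, n - 1, star), x)

# With integer addition as the semigroup operation, x^n comes out as n*x.
assert power(3, 4, lambda a, b: a + b) == 12
</pre>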
  
To sum up the development so far in a general way:  A ''homomorphism'' is a mapping from a system to a system that preserves an aspect of systematic structure, usually one that is relevant to an understood purpose or context.  When the pertinent aspect of structure for both the source and the target system is a binary operation or a LOC, then the condition that the LOCs be preserved in passing from the pre-image to the image of the mapping is frequently expressed by stating that ''the image of the product is the product of the images''.  That is, if <math>h : X_1 \to X_2\!</math> is a homomorphism from <math>{\underline{X}_1 = (X_1, *_1)}\!</math> to <math>{\underline{X}_2 = (X_2, *_2)},\!</math> then for every <math>x, y \in X_1\!</math> the following condition holds:
  
 
{| align="center" cellspacing="8" width="90%"
| <math>h(x *_1 y) ~=~ h(x) *_2 h(y).</math>
|}
 
Next, the concept of a homomorphism or ''structure-preserving map'' is specialized to the different kinds of structure of interest here.
  
A '''semigroup homomorphism''' from a semigroup <math>{\underline{X}_1 = (X_1, *_1)}\!</math> to a semigroup <math>{\underline{X}_2 = (X_2, *_2)}\!</math> is a mapping between the underlying sets that preserves the structure appropriate to semigroups, namely, the LOCs.  This makes it a map <math>h : X_1 \to X_2\!</math> whose induced action on the LOCs is such that it takes every element of <math>*_1\!</math> to an element of <math>*_2.\!</math>  That is:
  
 
{| align="center" cellspacing="8" width="90%"
| <math>(x, y, z) \in *_1 ~\Rightarrow~ (h(x), h(y), h(z)) \in *_2.</math>
|}

Finally, to introduce two pieces of language that are often useful:  an '''endomorphism''' is a homomorphism from a system into itself, while an '''automorphism''' is an isomorphism from a system onto itself.
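
By way of a small concrete check (a sketch with assumed data, anticipating the group <math>Z_4(+)\!</math> tabulated below), the doubling map on <math>Z_4\!</math> satisfies the homomorphism condition but is not a bijection, so it is an endomorphism that fails to be an automorphism.

<pre>
# Illustrative sketch: brute-force check of h(x * y) = h(x) * h(y) on Z_4(+)
# for the doubling map h(x) = 2x mod 4.
X = range(4)
star = lambda x, y: (x + y) % 4            # the LOC of Z_4(+)
h    = lambda x: (2 * x) % 4               # a map from the system into itself

is_homomorphism = all(h(star(x, y)) == star(h(x), h(y)) for x in X for y in X)
is_automorphism = is_homomorphism and sorted(map(h, X)) == sorted(X)

assert is_homomorphism and not is_automorphism   # an endomorphism only
</pre>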
  
If nothing more succinct is available, a group can be specified by means of its ''operation table'', usually styled either as a ''multiplication table'' or an ''addition table''.  Table&nbsp;32.1 illustrates the general scheme of a group operation table.  In this case the group operation, treated as a &ldquo;multiplication&rdquo;, is formally symbolized by a star <math>(*),\!</math> as in <math>x * y = z.\!</math>  In contexts where only algebraic operations are formalized it is common practice to omit the star, but when logical conjunctions (symbolized by a raised dot <math>{(\cdot)}\!</math> or by concatenation) appear in the same context, then the star is retained for the group operation.
  
 
Another way of approaching the study of a group, or of presenting its structure, is by means of a ''group representation'', in particular, one that represents the group in the special form of a ''transformation group''.  This is a set of transformations acting on a concrete space of &ldquo;points&rdquo; or a designated set of &ldquo;objects&rdquo;.  In providing an abstractly given group with a representation as a transformation group, one is seeking to know the group by its effects, that is, in terms of the action it induces, through the representation, on a concrete domain of objects.  In the type of representation known as a ''regular representation'', one is seeking to know the group by its effects on itself.
  
 
<math>\text{Table 32.1} ~~ \text{Scheme of a Group Operation Table}\!</math>
  
 
<math>\text{Table 32.2} ~~ \text{Scheme of the Regular Ante-Representation}\!</math>
  
 
<math>\text{Table 32.3} ~~ \text{Scheme of the Regular Post-Representation}\!</math>
 
For the sake of comparison, I give a discussion of both these groups.
  
The next series of Tables presents the group operations and regular representations for the groups <math>V_4\!</math> and <math>Z_4.\!</math>  If a group is abelian, as both of these groups are, then its <math>h_1\!</math> and <math>h_2\!</math> representations are indistinguishable, and a single form of regular representation <math>{h : G \to (G \to G)}\!</math> will do for both.
  
 
Table&nbsp;33.1 shows the multiplication table of the group <math>V_4,\!</math> while Tables&nbsp;33.2 and 33.3 present two versions of its regular representation.  The first version, somewhat hastily, gives the functional representation of each group element as a set of ordered pairs of group elements.  The second version, more circumspectly, gives the functional representative of each group element as a set of ordered pairs of element names, also referred to as ''objects'', ''points'', ''letters'', or ''symbols''.
  
 
{| align="center" cellpadding="0" cellspacing="0" style="border-left:1px solid black; border-top:1px solid black; border-right:1px solid black; border-bottom:1px solid black; text-align:center; width:60%"
|+ style="height:30px" | <math>\text{Table 33.1} ~~ \text{Multiplication Operation of the Group} ~ V_4\!</math>
|- style="height:50px"
| width="20%" style="border-bottom:1px solid black; border-right:1px solid black" | <math>\cdot\!</math>
| width="20%" style="border-bottom:1px solid black" | <math>\mathrm{e}\!</math>
| width="20%" style="border-bottom:1px solid black" | <math>\mathrm{f}\!</math>
| width="20%" style="border-bottom:1px solid black" | <math>\mathrm{g}\!</math>
| width="20%" style="border-bottom:1px solid black" | <math>\mathrm{h}\!</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{e}\!</math>
| <math>\mathrm{e}\!</math>
| <math>\mathrm{f}\!</math>
| <math>\mathrm{g}\!</math>
| <math>\mathrm{h}\!</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{f}\!</math>
| <math>\mathrm{f}\!</math>
| <math>\mathrm{e}\!</math>
| <math>\mathrm{h}\!</math>
| <math>\mathrm{g}\!</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{g}\!</math>
| <math>\mathrm{g}\!</math>
| <math>\mathrm{h}\!</math>
| <math>\mathrm{e}\!</math>
| <math>\mathrm{f}\!</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{h}\!</math>
| <math>\mathrm{h}\!</math>
| <math>\mathrm{g}\!</math>
| <math>\mathrm{f}\!</math>
| <math>\mathrm{e}\!</math>
|}
  
  
 
{| align="center" cellpadding="0" cellspacing="0" style="border-left:1px solid black; border-top:1px solid black; border-right:1px solid black; border-bottom:1px solid black; text-align:center; width:60%"
|+ style="height:30px" | <math>\text{Table 33.2} ~~ \text{Regular Representation of the Group} ~ V_4\!</math>
|- style="height:50px"
| style="border-bottom:1px solid black; border-right:1px solid black" | <math>\text{Element}\!</math>
| colspan="6" style="border-bottom:1px solid black" | <math>\text{Function as Set of Ordered Pairs of Elements}\!</math>
|- style="height:50px"
| width="20%" style="border-right:1px solid black" | <math>\mathrm{e}\!</math>
| width="4%" | <math>\{\!</math>
| width="16%" | <math>(\mathrm{e}, \mathrm{e}),\!</math>
| width="20%" | <math>(\mathrm{f}, \mathrm{f}),\!</math>
| width="20%" | <math>(\mathrm{g}, \mathrm{g}),\!</math>
| width="16%" | <math>(\mathrm{h}, \mathrm{h})\!</math>
| width="4%" | <math>\}\!</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{f}\!</math>
| <math>\{\!</math>
| <math>(\mathrm{e}, \mathrm{f}),\!</math>
| <math>(\mathrm{f}, \mathrm{e}),\!</math>
| <math>(\mathrm{g}, \mathrm{h}),\!</math>
| <math>(\mathrm{h}, \mathrm{g})\!</math>
| <math>\}\!</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{g}\!</math>
| <math>\{\!</math>
| <math>(\mathrm{e}, \mathrm{g}),\!</math>
| <math>(\mathrm{f}, \mathrm{h}),\!</math>
| <math>(\mathrm{g}, \mathrm{e}),\!</math>
| <math>(\mathrm{h}, \mathrm{f})\!</math>
| <math>\}\!</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{h}\!</math>
| <math>\{\!</math>
| <math>(\mathrm{e}, \mathrm{h}),\!</math>
| <math>(\mathrm{f}, \mathrm{g}),\!</math>
| <math>(\mathrm{g}, \mathrm{f}),\!</math>
| <math>(\mathrm{h}, \mathrm{e})\!</math>
| <math>\}\!</math>
|}
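
The data in Tables 33.1 and 33.2 can be generated mechanically.  The following Python sketch (variable names assumed) stores the multiplication table of <math>V_4\!</math> and produces the functional representative of each element, that is, the map <math>x \mapsto g \cdot x,\!</math> as a set of ordered pairs.

<pre>
# Illustrative sketch: the group V_4 from Table 33.1 and its regular
# representation, realizing each element g as the function x |-> g*x.
V4 = ['e', 'f', 'g', 'h']
mul = {
    ('e', 'e'): 'e', ('e', 'f'): 'f', ('e', 'g'): 'g', ('e', 'h'): 'h',
    ('f', 'e'): 'f', ('f', 'f'): 'e', ('f', 'g'): 'h', ('f', 'h'): 'g',
    ('g', 'e'): 'g', ('g', 'f'): 'h', ('g', 'g'): 'e', ('g', 'h'): 'f',
    ('h', 'e'): 'h', ('h', 'f'): 'g', ('h', 'g'): 'f', ('h', 'h'): 'e',
}

def representative(g):
    """The functional representative of g as a set of ordered pairs."""
    return {(x, mul[(g, x)]) for x in V4}

# Matches the row for f in Table 33.2.
assert representative('f') == {('e', 'f'), ('f', 'e'), ('g', 'h'), ('h', 'g')}
</pre>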
  
 
{| align="center" cellpadding="0" cellspacing="0" style="border-left:1px solid black; border-top:1px solid black; border-right:1px solid black; border-bottom:1px solid black; text-align:center; width:60%"
|+ style="height:30px" | <math>\text{Table 33.3} ~~ \text{Regular Representation of the Group} ~ V_4\!</math>
|- style="height:50px"
| style="border-bottom:1px solid black; border-right:1px solid black" | <math>\text{Element}\!</math>
| colspan="6" style="border-bottom:1px solid black" | <math>\text{Function as Set of Ordered Pairs of Symbols}\!</math>
|- style="height:50px"
| width="20%" style="border-right:1px solid black" | <math>\mathrm{e}\!</math>
| width="4%" | <math>\{\!</math>
| width="16%" | <math>({}^{\backprime\backprime}\text{e}{}^{\prime\prime}, {}^{\backprime\backprime}\text{e}{}^{\prime\prime}),\!</math>
| width="20%" | <math>({}^{\backprime\backprime}\text{f}{}^{\prime\prime}, {}^{\backprime\backprime}\text{f}{}^{\prime\prime}),\!</math>
| width="20%" | <math>({}^{\backprime\backprime}\text{g}{}^{\prime\prime}, {}^{\backprime\backprime}\text{g}{}^{\prime\prime}),\!</math>
| width="16%" | <math>({}^{\backprime\backprime}\text{h}{}^{\prime\prime}, {}^{\backprime\backprime}\text{h}{}^{\prime\prime})\!</math>
| width="4%" | <math>\}\!</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{f}\!</math>
| <math>\{\!</math>
| <math>({}^{\backprime\backprime}\text{e}{}^{\prime\prime}, {}^{\backprime\backprime}\text{f}{}^{\prime\prime}),\!</math>
| <math>({}^{\backprime\backprime}\text{f}{}^{\prime\prime}, {}^{\backprime\backprime}\text{e}{}^{\prime\prime}),\!</math>
| <math>({}^{\backprime\backprime}\text{g}{}^{\prime\prime}, {}^{\backprime\backprime}\text{h}{}^{\prime\prime}),\!</math>
| <math>({}^{\backprime\backprime}\text{h}{}^{\prime\prime}, {}^{\backprime\backprime}\text{g}{}^{\prime\prime})\!</math>
| <math>\}\!</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{g}\!</math>
| <math>\{\!</math>
| <math>({}^{\backprime\backprime}\text{e}{}^{\prime\prime}, {}^{\backprime\backprime}\text{g}{}^{\prime\prime}),\!</math>
| <math>({}^{\backprime\backprime}\text{f}{}^{\prime\prime}, {}^{\backprime\backprime}\text{h}{}^{\prime\prime}),\!</math>
| <math>({}^{\backprime\backprime}\text{g}{}^{\prime\prime}, {}^{\backprime\backprime}\text{e}{}^{\prime\prime}),\!</math>
| <math>({}^{\backprime\backprime}\text{h}{}^{\prime\prime}, {}^{\backprime\backprime}\text{f}{}^{\prime\prime})\!</math>
| <math>\}\!</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{h}\!</math>
| <math>\{\!</math>
| <math>({}^{\backprime\backprime}\text{e}{}^{\prime\prime}, {}^{\backprime\backprime}\text{h}{}^{\prime\prime}),\!</math>
| <math>({}^{\backprime\backprime}\text{f}{}^{\prime\prime}, {}^{\backprime\backprime}\text{g}{}^{\prime\prime}),\!</math>
| <math>({}^{\backprime\backprime}\text{g}{}^{\prime\prime}, {}^{\backprime\backprime}\text{f}{}^{\prime\prime}),\!</math>
| <math>({}^{\backprime\backprime}\text{h}{}^{\prime\prime}, {}^{\backprime\backprime}\text{e}{}^{\prime\prime})\!</math>
| <math>\}\!</math>
|}
  
 
{| align="center" cellpadding="0" cellspacing="0" style="border-left:1px solid black; border-top:1px solid black; border-right:1px solid black; border-bottom:1px solid black; text-align:center; width:60%"
|+ style="height:30px" | <math>\text{Table 34.1} ~~ \text{Multiplicative Presentation of the Group} ~ Z_4(\cdot)~\!</math>
|- style="height:50px"
| width="20%" style="border-bottom:1px solid black; border-right:1px solid black" | <math>\cdot\!</math>
| width="20%" style="border-bottom:1px solid black" | <math>\mathrm{1}</math>
| width="20%" style="border-bottom:1px solid black" | <math>\mathrm{a}</math>
| width="20%" style="border-bottom:1px solid black" | <math>\mathrm{b}</math>
| width="20%" style="border-bottom:1px solid black" | <math>\mathrm{c}</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{1}</math>
| <math>\mathrm{1}</math>
| <math>\mathrm{a}</math>
| <math>\mathrm{b}</math>
| <math>\mathrm{c}</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{a}</math>
| <math>\mathrm{a}</math>
| <math>\mathrm{b}</math>
| <math>\mathrm{c}</math>
| <math>\mathrm{1}</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{b}</math>
| <math>\mathrm{b}</math>
| <math>\mathrm{c}</math>
| <math>\mathrm{1}</math>
| <math>\mathrm{a}</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{c}</math>
| <math>\mathrm{c}</math>
| <math>\mathrm{1}</math>
| <math>\mathrm{a}</math>
| <math>\mathrm{b}</math>
|}
  
  
 
{| align="center" cellpadding="0" cellspacing="0" style="border-left:1px solid black; border-top:1px solid black; border-right:1px solid black; border-bottom:1px solid black; text-align:center; width:60%"
|+ style="height:30px" | <math>\text{Table 34.2} ~~ \text{Regular Representation of the Group} ~ Z_4(\cdot)\!</math>
|- style="height:50px"
| style="border-bottom:1px solid black; border-right:1px solid black" | <math>\text{Element}\!</math>
| colspan="6" style="border-bottom:1px solid black" | <math>\text{Function as Set of Ordered Pairs of Elements}\!</math>
|- style="height:50px"
| width="20%" style="border-right:1px solid black" | <math>\mathrm{1}\!</math>
| width="4%" | <math>\{\!</math>
| width="16%" | <math>(\mathrm{1}, \mathrm{1}),\!</math>
| width="20%" | <math>(\mathrm{a}, \mathrm{a}),\!</math>
| width="20%" | <math>(\mathrm{b}, \mathrm{b}),\!</math>
| width="16%" | <math>(\mathrm{c}, \mathrm{c})\!</math>
| width="4%" | <math>\}\!</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{a}\!</math>
| <math>\{\!</math>
| <math>(\mathrm{1}, \mathrm{a}),\!</math>
| <math>(\mathrm{a}, \mathrm{b}),\!</math>
| <math>(\mathrm{b}, \mathrm{c}),\!</math>
| <math>(\mathrm{c}, \mathrm{1})\!</math>
| <math>\}\!</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{b}\!</math>
| <math>\{\!</math>
| <math>(\mathrm{1}, \mathrm{b}),\!</math>
| <math>(\mathrm{a}, \mathrm{c}),\!</math>
| <math>(\mathrm{b}, \mathrm{1}),\!</math>
| <math>(\mathrm{c}, \mathrm{a})\!</math>
| <math>\}\!</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{c}\!</math>
| <math>\{\!</math>
| <math>(\mathrm{1}, \mathrm{c}),\!</math>
| <math>(\mathrm{a}, \mathrm{1}),\!</math>
| <math>(\mathrm{b}, \mathrm{a}),\!</math>
| <math>(\mathrm{c}, \mathrm{b})\!</math>
| <math>\}\!</math>
|}
  
 
{| align="center" cellpadding="0" cellspacing="0" style="border-left:1px solid black; border-top:1px solid black; border-right:1px solid black; border-bottom:1px solid black; text-align:center; width:60%"
|+ style="height:30px" | <math>\text{Table 35.1} ~~ \text{Additive Presentation of the Group} ~ Z_4(+)\!</math>
|- style="height:50px"
| width="20%" style="border-bottom:1px solid black; border-right:1px solid black" | <math>+\!</math>
| width="20%" style="border-bottom:1px solid black" | <math>\mathrm{0}\!</math>
| width="20%" style="border-bottom:1px solid black" | <math>\mathrm{1}\!</math>
| width="20%" style="border-bottom:1px solid black" | <math>\mathrm{2}\!</math>
| width="20%" style="border-bottom:1px solid black" | <math>\mathrm{3}\!</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{0}\!</math>
| <math>\mathrm{0}\!</math>
| <math>\mathrm{1}\!</math>
| <math>\mathrm{2}\!</math>
| <math>\mathrm{3}\!</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{1}\!</math>
| <math>\mathrm{1}\!</math>
| <math>\mathrm{2}\!</math>
| <math>\mathrm{3}\!</math>
| <math>\mathrm{0}\!</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{2}\!</math>
| <math>\mathrm{2}\!</math>
| <math>\mathrm{3}\!</math>
| <math>\mathrm{0}\!</math>
| <math>\mathrm{1}\!</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{3}\!</math>
| <math>\mathrm{3}\!</math>
| <math>\mathrm{0}\!</math>
| <math>\mathrm{1}\!</math>
| <math>\mathrm{2}\!</math>
|}
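
The multiplicative and additive presentations of <math>Z_4\!</math> describe the same abstract group.  As a quick check (a sketch with assumed names), the map <math>n \mapsto \mathrm{a}^n\!</math> carries the addition table of Table 35.1 onto the multiplication table of Table 34.1, cell for cell.

<pre>
# Illustrative sketch: n |-> a^n is an isomorphism from Z_4(+) to Z_4(.),
# checked against the two operation tables given above.
add = lambda m, n: (m + n) % 4
mul = {
    ('1', '1'): '1', ('1', 'a'): 'a', ('1', 'b'): 'b', ('1', 'c'): 'c',
    ('a', '1'): 'a', ('a', 'a'): 'b', ('a', 'b'): 'c', ('a', 'c'): '1',
    ('b', '1'): 'b', ('b', 'a'): 'c', ('b', 'b'): '1', ('b', 'c'): 'a',
    ('c', '1'): 'c', ('c', 'a'): '1', ('c', 'b'): 'a', ('c', 'c'): 'b',
}
phi = {0: '1', 1: 'a', 2: 'b', 3: 'c'}     # phi(n) = a**n

assert all(phi[add(m, n)] == mul[(phi[m], phi[n])]
           for m in range(4) for n in range(4))
</pre>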
  
  
 
{| align="center" cellpadding="0" cellspacing="0" style="border-left:1px solid black; border-top:1px solid black; border-right:1px solid black; border-bottom:1px solid black; text-align:center; width:60%"
|+ style="height:30px" | <math>\text{Table 35.2} ~~ \text{Regular Representation of the Group} ~ Z_4(+)\!</math>
|- style="height:50px"
| style="border-bottom:1px solid black; border-right:1px solid black" | <math>\text{Element}\!</math>
| colspan="6" style="border-bottom:1px solid black" | <math>\text{Function as Set of Ordered Pairs of Elements}\!</math>
|- style="height:50px"
| width="20%" style="border-right:1px solid black" | <math>\mathrm{0}\!</math>
| width="4%" | <math>\{\!</math>
| width="16%" | <math>(\mathrm{0}, \mathrm{0}),\!</math>
| width="20%" | <math>(\mathrm{1}, \mathrm{1}),\!</math>
| width="20%" | <math>(\mathrm{2}, \mathrm{2}),\!</math>
| width="16%" | <math>(\mathrm{3}, \mathrm{3})~\!</math>
| width="4%" | <math>\}\!</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{1}\!</math>
| <math>\{\!</math>
| <math>(\mathrm{0}, \mathrm{1}),\!</math>
| <math>(\mathrm{1}, \mathrm{2}),\!</math>
| <math>(\mathrm{2}, \mathrm{3}),\!</math>
| <math>(\mathrm{3}, \mathrm{0})\!</math>
| <math>\}\!</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{2}\!</math>
| <math>\{\!</math>
| <math>(\mathrm{0}, \mathrm{2}),\!</math>
| <math>(\mathrm{1}, \mathrm{3}),\!</math>
| <math>(\mathrm{2}, \mathrm{0}),\!</math>
| <math>(\mathrm{3}, \mathrm{1})\!</math>
| <math>\}\!</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{3}\!</math>
| <math>\{\!</math>
| <math>(\mathrm{0}, \mathrm{3}),\!</math>
| <math>(\mathrm{1}, \mathrm{0}),\!</math>
| <math>(\mathrm{2}, \mathrm{1}),\!</math>
| <math>(\mathrm{3}, \mathrm{2})\!</math>
| <math>\}\!</math>
|}
  
By convention for the case where <math>k = 0,\!</math> this gives <math>\underline{\underline{X}}^0 = \{ () \},</math> that is, the singleton set consisting of the empty sequence.  Depending on the setting, the empty sequence is referred to as the ''empty word'' or the ''empty sentence'', and is commonly denoted by an epsilon <math>{}^{\backprime\backprime} \varepsilon {}^{\prime\prime}</math> or a lambda <math>{}^{\backprime\backprime} \lambda {}^{\prime\prime}.</math>  In this text a variant epsilon symbol will be used for the empty sequence, <math>{\varepsilon = ()}.\!</math>  In addition, a singly underlined epsilon will be used for the language that consists of a single empty sequence, <math>\underline\varepsilon = \{ \varepsilon \} = \{ () \}.</math>
  
 
It is probably worth remarking at this point that all empty sequences are indistinguishable (in a one-level formal language, that is), and thus all sets that consist of a single empty sequence are identical.  Consequently, <math>\underline{\underline{X}}^0 = \{ () \} = \underline{\varepsilon} = \underline{\underline{Y}}^0,</math> for all resources <math>\underline{\underline{X}}</math> and <math>\underline{\underline{Y}}.</math>  However, the empty language <math>\varnothing = \{ \}</math> and the language that consists of a single empty sequence <math>\underline\varepsilon = \{ \varepsilon \} = \{ () \}</math> need to be distinguished from each other.
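
A small Python sketch (the function name is assumed) makes both points explicit: the set of length-zero sequences is the singleton <math>\{ () \}\!</math> whatever the resource, and that singleton is distinct from the empty language.

<pre>
# Illustrative sketch: the length-k sequences over an alphabet X, as tuples.
def sequences(X, k):
    if k == 0:
        return {()}
    return {s + (x,) for s in sequences(X, k - 1) for x in X}

assert sequences({'a', 'b'}, 0) == sequences(set(), 0) == {()}
assert sequences({'a', 'b'}, 2) == {('a', 'a'), ('a', 'b'), ('b', 'a'), ('b', 'b')}
assert set() != {()}    # the empty language versus the language of the empty word
</pre>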
 
By way of definition, a sign <math>q\!</math> in a sign relation <math>L \subseteq O \times S \times I\!</math> is said to be, to constitute, or to make a '''plural indefinite reference''' ('''PIR''') to (every element in) a set of objects, <math>X \subseteq O,\!</math> if and only if <math>q\!</math> denotes every element of <math>X.\!</math>  This relationship can be expressed in a succinct formula by making use of one additional definition.
  
The '''denotation''' of <math>q\!</math> in <math>L,\!</math> written <math>\mathrm{De}(q, L),\!</math> is defined as follows:
  
 
{| align="center" cellspacing="8" width="90%"
| <math>\mathrm{De}(q, L) ~=~ \mathrm{Den}(L) \cdot q ~=~ L_{OS} \cdot q ~=~ \{ o \in O : (o, q, i) \in L, ~\text{for some}~ i \in I \}.</math>
 
|}
 
  
Then <math>q\!</math> makes a PIR to <math>X\!</math> in <math>L\!</math> if and only if <math>X \subseteq \mathrm{De}(q, L).\!</math>  Of course, this includes the limiting case where <math>X\!</math> is a singleton, say <math>X = \{ o \}.\!</math>  In this case the reference is neither plural nor indefinite, properly speaking, but <math>q\!</math> denotes <math>o\!</math> uniquely.
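Read over a finite sign relation, the definitions just given are directly computable.  The following sketch, whose helper names are invented for the occasion, stores a sign relation as a set of <math>(o, s, i)\!</math> triples, computes the denotation of a sign, and tests the PIR condition <math>X \subseteq \mathrm{De}(q, L).\!</math>

<pre>
def denotation(q, L):
    """De(q, L): the objects o such that (o, q, i) is in L for some interpretant i."""
    return {o for (o, s, i) in L if s == q}

def makes_pir(q, X, L):
    """q makes a PIR to X in L if and only if X is a subset of De(q, L)."""
    return set(X) <= denotation(q, L)

# A toy sign relation over the objects {1, 2, 3}.
L = {
    (1, "odd", "odd"),
    (3, "odd", "odd"),
    (1, "one", "one"),
}

print(denotation("odd", L))            # {1, 3}
print(makes_pir("odd", {1, 3}, L))     # True:  {1, 3} is a subset of De("odd", L)
print(makes_pir("odd", {1, 2, 3}, L))  # False: 2 is not denoted by "odd"
print(makes_pir("one", {1}, L))        # True:  the limiting, singular case
</pre>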
  
 
The proper exploitation of PIRs in sign relations makes it possible to finesse the distinction between HI signs and HU signs, in other words, to provide a ready means of translating between the two kinds of signs that preserves all the relevant information, at least, for many purposes.  This is accomplished by connecting the sides of the distinction in two directions.  First, a HI sign that makes a PIR to many triples of the form <math>(o, s, i)\!</math> can be taken as tantamount to a HU sign that denotes the corresponding sign relation.  Second, a HU sign that denotes a singleton sign relation can be taken as tantamount to a HI sign that denotes its single triple.  The relation of one sign being &ldquo;tantamount to&rdquo; another is not exactly a full-fledged semantic equivalence, but it is a reasonable approximation to it, and one that serves a number of practical purposes.
 
 
  
The intent of this succession, as interpreted in FL environments, is that <math>{}^{\langle\langle} x {}^{\rangle\rangle}\!</math> denotes or refers to <math>{}^{\langle} x {}^{\rangle},\!</math> which denotes or refers to <math>x.\!</math>  Moreover, its computational realization, as implemented in CL environments, is that <math>{}^{\langle\langle} x {}^{\rangle\rangle}\!</math> addresses or evaluates to <math>{}^{\langle} x {}^{\rangle},\!</math> which addresses or evaluates to <math>x.\!</math>
  
 
The designations ''higher order'' and ''lower order'' are attributed to signs in a casual, local, and transitory way.  At this point they signify nothing beyond the occurrence in a sign relation of a pair of triples having the form shown in Table&nbsp;37.
 
 
In ordinary discourse HA signs are usually generated by means of linguistic devices for quoting pieces of text.  In computational frameworks these quoting mechanisms are implemented as functions that map syntactic arguments into numerical or syntactic values.  A quoting function, given a sign or expression as its single argument, needs to accomplish two things:  first, to defer the reference of that sign, in other words, to inhibit, delay, or prevent the evaluation of its argument expression, and then, to exhibit or produce another sign whose object is precisely that argument expression.
 
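A rough computational analogue of such a quoting device, offered only as an illustration and not as anything the text itself prescribes, can be given by a miniature evaluator in which a quoting operator withholds its argument from evaluation and returns that argument expression as the value.

<pre>
def evaluate(expr, env):
    """Evaluate a tiny expression language in which quote defers evaluation."""
    if isinstance(expr, str):      # a bare sign: look up the object it denotes
        return env[expr]
    op, arg = expr                 # otherwise a pair (operator, argument)
    if op == "quote":
        return arg                 # exhibit a sign whose object is the argument expression
    if op == "eval":
        return evaluate(evaluate(arg, env), env)   # strip one layer of quotation
    raise ValueError("unknown operator: " + repr(op))

env = {"x": 42}

print(evaluate("x", env))                        # 42 : the sign evaluates to its object
print(evaluate(("quote", "x"), env))             # 'x': evaluation deferred, the expression returned
print(evaluate(("eval", ("quote", "x")), env))   # 42 : quotation removed, then evaluated
</pre>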
  
The rest of this section considers the development of sign relations that have moderate capacities to reference their own signs as objects.  In each case, these extensions are assumed to begin with sign relations like <math>L(\text{A})\!</math> and <math>L(\text{B})\!</math> that have disjoint sets of objects and signs and thus have no reflective capacity at the outset.  The status of <math>L(\text{A})\!</math> and <math>L(\text{B})\!</math> as the reflective origins of the associated reflective developments is recalled by saying that <math>L(\text{A})\!</math> and <math>L(\text{B})\!</math> themselves are the ''zeroth order reflective extensions'' of <math>L(\text{A})\!</math> and <math>L(\text{B}),\!</math> in symbols, <math>L(\text{A}) = \mathrm{Ref}^0 L(\text{A})\!</math> and <math>L(\text{B}) = \mathrm{Ref}^0 L(\text{B}).\!</math>
  
 
The next set of Tables illustrates a few of the most common ways that sign relations can begin to develop reflective extensions.  For ease of reference, Tables&nbsp;40 and 41 repeat the contents of Tables&nbsp;1 and 2, respectively, merely replacing ordinary quotes with arch quotes.
  
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
|+ style="height:30px" | <math>\text{Table 40.} ~~ \text{Reflective Origin} ~ \operatorname{Ref}^0 L(\text{A})\!</math>
+
|+ style="height:30px" | <math>\text{Table 40.} ~~ \text{Reflective Origin} ~ \mathrm{Ref}^0 L(\text{A})\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| <math>\text{Object}\!</math>
 
| <math>\text{Object}\!</math>
Line 1,823: Line 1,833:
  
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
|+ style="height:30px" | <math>\text{Table 41.} ~~ \text{Reflective Origin} ~ \operatorname{Ref}^0 L(\text{B})\!</math>
+
|+ style="height:30px" | <math>\text{Table 41.} ~~ \text{Reflective Origin} ~ \mathrm{Ref}^0 L(\text{B})\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| <math>\text{Object}\!</math>
 
| <math>\text{Object}\!</math>
Line 1,894: Line 1,904:
 
<br>
 
  
Tables&nbsp;42 and 43 show one way that the sign relations <math>L(\text{A})\!</math> and <math>L(\text{B})\!</math> can be extended in a reflective sense through the use of quotational devices, yielding the ''first order reflective extensions'', <math>\mathrm{Ref}^1 L(\text{A})\!</math> and <math>\mathrm{Ref}^1 L(\text{B}).\!</math>  These extensions add one layer of HA signs and their objects to the sign relations <math>L(\text{A})\!</math> and <math>L(\text{B}),\!</math> respectively.  The new triples specify that, for each <math>{}^{\langle} x {}^{\rangle}\!</math> in the set <math>\{ {}^{\langle} \text{A} {}^{\rangle}, {}^{\langle} \text{B} {}^{\rangle}, {}^{\langle} \text{i} {}^{\rangle}, {}^{\langle} \text{u} {}^{\rangle} \},\!</math> the HA sign of the form <math>{}^{\langle\langle} x {}^{\rangle\rangle}\!</math> connotes itself while denoting <math>{}^{\langle} x {}^{\rangle}.\!</math>
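On this description the first order reflective extension is a mechanical construction: for each sign already in use one adds a triple in which the arch quoted sign denotes that sign and connotes itself.  The sketch below encodes arch quotes as angle bracketed strings; the encoding and the function names are assumptions made only for the illustration.

<pre>
def arch(sign):
    """Form the arch quotation of a sign, encoded here as an angle-bracketed string."""
    return "<" + sign + ">"

def ref1(L):
    """First order reflective extension: for each sign s occurring in L,
    add a triple in which <s> denotes s and connotes itself."""
    signs = {s for (o, s, i) in L} | {i for (o, s, i) in L}
    return set(L) | {(s, arch(s), arch(s)) for s in signs}

# A miniature stand-in for a sign relation such as L(A), with "<A>", "<B>",
# "<i>", "<u>" playing the roles of the quoted nouns and pronouns.
L_A = {
    ("A", "<A>", "<A>"),
    ("A", "<i>", "<i>"),
    ("B", "<B>", "<B>"),
    ("B", "<u>", "<u>"),
}

extended = ref1(L_A)
print(len(L_A), len(extended))                   # 4 8 : one new triple per sign
print(("<A>", "<<A>>", "<<A>>") in extended)     # True : <<A>> denotes <A> and connotes itself
</pre>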
  
 
Notice that the semantic equivalences of nouns and pronouns referring to each interpreter do not extend to semantic equivalences of their higher order signs, exactly as demanded by the literal character of quotations.  Also notice that the reflective extensions of the sign relations <math>L(\text{A})\!</math> and <math>L(\text{B})\!</math> coincide in their reflective parts, since exactly the same triples were added to each set.
 
  
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
|+ style="height:30px" | <math>\text{Table 42.} ~~ \text{Higher Ascent Sign Relation} ~ \operatorname{Ref}^1 L(\text{A})\!</math>
+
|+ style="height:30px" | <math>\text{Table 42.} ~~ \text{Higher Ascent Sign Relation} ~ \mathrm{Ref}^1 L(\text{A})\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| <math>\text{Object}\!</math>
 
| <math>\text{Object}\!</math>
Line 2,004: Line 2,014:
  
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
|+ style="height:30px" | <math>\text{Table 43.} ~~ \text{Higher Ascent Sign Relation} ~ \operatorname{Ref}^1 L(\text{B})\!</math>
+
|+ style="height:30px" | <math>\text{Table 43.} ~~ \text{Higher Ascent Sign Relation} ~ \mathrm{Ref}^1 L(\text{B})\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| <math>\text{Object}\!</math>
 
| <math>\text{Object}\!</math>
Line 2,106: Line 2,116:
 
<br>
 
  
There are many ways to extend sign relations in an effort to develop their reflective capacities.  The implicit goal of a reflective project is to reach a condition of ''reflective closure'', a configuration satisfying the inclusion <math>S \subseteq O,\!</math> where every sign is an object.  It is important to note that not every process of reflective extension can achieve a reflective closure in a finite sign relation.  This can only happen if there are additional equivalence relations that keep the effective orders of signs within finite bounds.  As long as there are higher order signs that remain distinct from all lower order signs, the sign relation driven by a reflective process is forced to keep expanding.  In particular, the process that is ''freely'' suggested by the formation of <math>\mathrm{Ref}^1 L(\text{A})~\!</math> and <math>\mathrm{Ref}^1 L(\text{B})~\!</math> cannot reach closure if it continues as indicated, without further constraints.
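Both the closure condition and the threat of unbounded growth can be checked directly on finite encodings of the kind sketched earlier.  In the fragment below, again using angle bracketed strings as a stand-in for arch quotes, the freely iterated extension mints new higher order signs at every step, so the test <math>S \subseteq O\!</math> never succeeds.

<pre>
def arch(sign):
    return "<" + sign + ">"

def ref_step(L):
    """One step of the free reflective extension: quote every current sign."""
    signs = {s for (_, s, _) in L} | {i for (_, _, i) in L}
    return set(L) | {(s, arch(s), arch(s)) for s in signs}

def reflectively_closed(L):
    """Reflective closure: every sign and interpretant is also an object."""
    objects = {o for (o, _, _) in L}
    signs = {s for (_, s, _) in L} | {i for (_, _, i) in L}
    return signs <= objects

L = {("A", "<A>", "<A>"), ("B", "<B>", "<B>")}
for step in range(4):
    print(step, len(L), reflectively_closed(L))
    L = ref_step(L)
# Each step adds signs that are not yet objects, so the size grows
# without bound and the closure test keeps returning False.
</pre>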
  
 
Tables&nbsp;44 and 45 present ''higher import extensions'' of <math>L(\text{A})\!</math> and <math>L(\text{B}),\!</math> respectively.  These are just higher order sign relations that add selections of higher import signs and their objects to the underlying set of triples in <math>L(\text{A})\!</math> and <math>L(\text{B}).\!</math>  One way to understand these extensions is as follows.  The interpreters <math>\text{A}\!</math> and <math>\text{B}\!</math> each use nouns and pronouns just as before, except that the nouns are given additional denotations that refer to the interpretive conduct of the interpreter named.  In this form of development, using a noun as a canonical form that refers indifferently to all the <math>(o, s, i)\!</math> triples of a sign relation is a pragmatic way that a sign relation can refer to itself and to other sign relations.
 
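One plausible way to mechanize this description, assuming (as the wording suggests) that the noun for an interpreter is given every <math>(o, s, i)\!</math> transaction of that interpreter's sign relation as an additional object, is sketched below.  The reading and the helper names are conjectural conveniences for the illustration, not a transcription of Tables&nbsp;44 and 45.

<pre>
def hi1(L, noun):
    """Higher import extension, on one reading: the interpreter's noun is
    given each (o, s, i) transaction of L as a further object, while its
    ordinary uses are left in place."""
    return set(L) | {(triple, noun, noun) for triple in L}

# An abridged stand-in for a sign relation such as L(A).
L_A = {
    ("A", "<A>", "<A>"),
    ("A", "<i>", "<i>"),
    ("B", "<B>", "<B>"),
    ("B", "<u>", "<u>"),
}

HI_A = hi1(L_A, "<A>")
print(len(HI_A))                                     # 8 : four old triples, four new ones
print((("B", "<u>", "<u>"), "<A>", "<A>") in HI_A)   # True : the noun denotes a whole transaction
</pre>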
  
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
|+ style="height:30px" | <math>\text{Table 44.} ~~ \text{Higher Import Sign Relation} ~ \operatorname{HI}^1 L(\text{A})\!</math>
+
|+ style="height:30px" | <math>\text{Table 44.} ~~ \text{Higher Import Sign Relation} ~ \mathrm{HI}^1 L(\text{A})\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| <math>\text{Object}\!</math>
 
| <math>\text{Object}\!</math>
Line 2,309: Line 2,319:
  
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
|+ style="height:30px" | <math>\text{Table 45.} ~~ \text{Higher Import Sign Relation} ~ \operatorname{HI}^1 L(\text{B})\!</math>
+
|+ style="height:30px" | <math>\text{Table 45.} ~~ \text{Higher Import Sign Relation} ~ \mathrm{HI}^1 L(\text{B})\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| <math>\text{Object}\!</math>
 
| <math>\text{Object}\!</math>
Line 2,504: Line 2,514:
 
<br>
 
  
Several important facts about the class of higher order sign relations in general are illustrated by these examples.  First, the notations appearing in the object columns of <math>\mathrm{HI}^1 L(\text{A})\!</math> and <math>\mathrm{HI}^1 L(\text{B})\!</math> are not the terms that these newly extended interpreters are depicted as using to describe their objects, but the kinds of language that you and I, or other external observers, would typically make available to distinguish them.  The sign relations <math>L(\text{A})\!</math> and <math>L(\text{B}),\!</math> as extended by the transactions of <math>\mathrm{HI}^1 L(\text{A})\!</math> and <math>\mathrm{HI}^1 L(\text{B}),\!</math> respectively, are still restricted to their original syntactic domain <math>\{ {}^{\langle} \text{A} {}^{\rangle}, {}^{\langle} \text{B} {}^{\rangle}, {}^{\langle} \text{i} {}^{\rangle}, {}^{\langle} \text{u} {}^{\rangle} \}.\!</math>  This means that there need be nothing especially articulate about a HI sign relation just because it qualifies as higher order.  Indeed, the sign relations <math>\mathrm{HI}^1 L(\text{A})\!</math> and <math>\mathrm{HI}^1 L(\text{B})\!</math> are not very discriminating in their descriptions of the sign relations <math>L(\text{A})\!</math> and <math>L(\text{B}),\!</math> referring to many different things under the very same signs that you and I and others would explicitly distinguish, especially in marking the distinction between an interpretive agent and any one of its individual transactions.
  
 
In practice, it does an interpreter little good to have the higher import signs for referring to triples of objects, signs, and interpretants if it does not also have the higher ascent signs for referring to each triple's syntactic portions.  Consequently, the higher order sign relations that one is likely to observe in practice are typically a mixed bag, having both higher ascent and higher import sections.  Moreover, the ambiguity involved in having signs that refer equivocally to simple world elements and also to complex structures formed from these ingredients would most likely be resolved by drawing additional information from context and fashioning more distinctive signs.
 
 
The technique illustrated here represents a general strategy, one that can be exploited to derive certain benefits of set theory without having to pay the overhead that is needed to maintain sets as abstract objects.  Using an identified type of a sign as a canonical form that can refer indifferently to all the members of a set is a pragmatic way of making plural reference to the members of a set without invoking the set itself as an abstract object.  Of course, it is not that one can get something for nothing by these means.  One is merely banking on one's recurring investment in the setting of a certain sign relation, a particular set of elementary transactions that is taken for granted as already funded.
 
  
As a rule, it is desirable for the grammatical system that one uses to construct and interpret higher order signs, that is, signs for referring to signs as objects, to mesh in a comfortable fashion with the overall pragmatic system that one uses to assign syntactic codes to objects in general.  For future reference, I call this requirement the problem of creating a ''conformally reflective extension'' (CRE) for a given sign relation.  A good way to think about this task is to imagine oneself beginning with a sign relation <math>L \subseteq O \times S \times I,\!</math> and to consider its denotative component <math>\mathrm{Den}_L = L_{OS} \subseteq O \times S.\!</math>  Typically one has a ''naming function'', say <math>\mathrm{Nom},\!</math> that maps objects into signs:
  
 
{| align="center" cellspacing="8" width="90%"
 
{| align="center" cellspacing="8" width="90%"
| <math>\operatorname{Nom} \subseteq \operatorname{Den}_L \subseteq O \times S ~\text{such that}~ \operatorname{Nom} : O \to S.\!</math>
+
| <math>\mathrm{Nom} \subseteq \mathrm{Den}_L \subseteq O \times S ~\text{such that}~ \mathrm{Nom} : O \to S.\!</math>
 
|}
 
  
Part of the task of making a sign relation more reflective is to extend it in ways that turn more of its signs into objects.  This is the reason for creating higher order signs, which are just signs for making objects out of signs.  One effect of progressive reflection is to extend the initial naming function <math>\mathrm{Nom}\!</math> through a succession of new naming functions <math>\mathrm{Nom}',\!</math> <math>\mathrm{Nom}'',\!</math> and so on, assigning unique names to larger allotments of the original and subsequent signs.  With respect to the difficulties of construction, the ''hard core'' or ''adamant part'' of creating extended naming functions resides in the initial portion <math>\mathrm{Nom}\!</math> that maps objects of the &ldquo;external world&rdquo; to signs in the &ldquo;internal world&rdquo;.  The subsequent task of assigning conventional names to signs is supposed to be comparatively natural and ''easy'', perhaps on account of the ''nominal'' nature of signs themselves.
  
 
The effect of reflection on the original sign relation <math>L \subseteq O \times S \times I\!</math> can be analyzed as follows.  Suppose that a step of reflection creates higher order signs for a subset of <math>S.\!</math>  Then this step involves the construction of a newly extended sign relation:
 
  
 
{| align="center" cellspacing="8" width="90%"
 
{| align="center" cellspacing="8" width="90%"
| <math>\operatorname{Nom}_1 : O_1 \to S_1 ~\text{such that}~ \operatorname{Nom}_1 : x \mapsto {}^{\langle} x {}^{\rangle}.\!</math>
+
| <math>\mathrm{Nom}_1 : O_1 \to S_1 ~\text{such that}~ \mathrm{Nom}_1 : x \mapsto {}^{\langle} x {}^{\rangle}.\!</math>
 
|}
 
  
Finally, the reflectively extended naming function <math>\mathrm{Nom}' : O' \to S'\!</math> is defined as <math>\mathrm{Nom}' = \mathrm{Nom} \cup \mathrm{Nom}_1.\!</math>
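A concrete rendering may help.  In the sketch below, which assumes the same angle bracket encoding of arch quotes used earlier, the base naming function is a finite look-up table from objects to signs, the increment <math>\mathrm{Nom}_1\!</math> is obtained by quoting the signs already named, and the extension <math>\mathrm{Nom}'\!</math> is simply the union of the two tables; all identifiers are illustrative.

<pre>
def arch(sign):
    """Arch quotation encoded as a string operation: x |-> <x>."""
    return "<" + sign + ">"

# Nom : a finite look-up table naming objects of the "external world".
Nom = {"Ann": "<A>", "Bob": "<B>"}

# Nom_1 : names for the signs just introduced, produced by quoting them.
# No further look-up table is required, only a syntactic rule on existing signs.
Nom_1 = {s: arch(s) for s in Nom.values()}

# Nom' = Nom union Nom_1 : the reflectively extended naming function.
Nom_prime = {**Nom, **Nom_1}

print(Nom_prime["Ann"])    # <A>
print(Nom_prime["<A>"])    # <<A>> : the higher order sign that names the sign <A>
</pre>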
  
 
A few remarks are necessary to see how this way of defining a CRE can be regarded as legitimate.
 
 
In the present context an application of the arch notation, for example, <math>{}^{\langle} x {}^{\rangle},\!</math> is read on analogy with the use of any other functional notation, for example, <math>f(x),\!</math> where <math>{}^{\backprime\backprime} f {}^{\prime\prime}\!</math> is the name of a function <math>f,\!</math> <math>{}^{\backprime\backprime} f(~) {}^{\prime\prime}\!</math> is the context of its application, <math>{}^{\backprime\backprime} x {}^{\prime\prime}\!</math> is the name of an argument <math>x,\!</math> and where the functional abstraction <math>{}^{\backprime\backprime} x \mapsto f(x) {}^{\prime\prime}\!</math> is just another name for the function <math>f.\!</math>
 
  
It is clear that some form of functional abstraction is being invoked in the above definition of <math>\mathrm{Nom}_1.\!</math>  Otherwise, the expression <math>x \mapsto {}^{\langle} x {}^{\rangle}\!</math> would indicate a constant function, one that maps every <math>x\!</math> in its domain to the same code or sign for the letter <math>{}^{\backprime\backprime} x {}^{\prime\prime}.\!</math>  But if this is allowed, then it appears to pose a dilemma, either to invoke a more powerful concept of functional abstraction than the concept being defined, or else to attempt an improper definition of the naming function in terms of itself.
  
 
Although it appears that this form of functional abstraction is being used to define the CRE in terms of itself, trying to extend the definition of the naming function in terms of a definition that is already assumed to be available, in reality this only uses a finite function, a finite table look up, to define the naming function for an unlimited number of higher order signs.
 
 
===6.11. Higher Order Sign Relations : Application===
 
  
Given the language in which a notation like <math>{}^{\backprime\backprime} \mathrm{De}(q, L) {}^{\prime\prime}\!</math> makes sense, or in prospect of being given such a language, it is instructive to ask:  &ldquo;What must be assumed about the context of interpretation in which this language is supposed to make sense?&rdquo;  According to the theory of signs that is being examined here, the relevant formal aspects of that context are embodied in a particular sign relation, call it <math>{}^{\backprime\backprime} Q {}^{\prime\prime}.\!</math>  With respect to the hypothetical sign relation <math>Q,\!</math> commonly personified as the prospective reader or the ideal interpreter of the intended language, the denotation of the expression <math>{}^{\backprime\backprime} \mathrm{De}(q, L) {}^{\prime\prime}\!</math> is given by:
  
 
{| align="center" cellspacing="8" width="90%"
 
{| align="center" cellspacing="8" width="90%"
| <math>\operatorname{De}( {}^{\backprime\backprime} \operatorname{De}(q, L) {}^{\prime\prime}, Q ).\!</math>
+
| <math>\mathrm{De}( {}^{\backprime\backprime} \mathrm{De}(q, L) {}^{\prime\prime}, Q ).\!</math>
 
|}
 
  
{| align="center" cellspacing="8" width="90%"
|
<math>\begin{array}{lccc}
\mathrm{De}( & {}^{\backprime\backprime} \mathrm{De} {}^{\prime\prime} & , & Q)
\\[6pt]
\mathrm{De}( & {}^{\backprime\backprime} q {}^{\prime\prime} & , & Q)
\\[6pt]
\mathrm{De}( & {}^{\backprime\backprime} L {}^{\prime\prime} & , & Q)
\end{array}</math>
|}
  
What are the roles of the signs <math>{}^{\backprime\backprime} \mathrm{De} {}^{\prime\prime},\!</math> <math>{}^{\backprime\backprime} q {}^{\prime\prime},\!</math> <math>{}^{\backprime\backprime} L {}^{\prime\prime}\!</math> and what are they supposed to mean to <math>Q\!</math>?  Evidently, <math>{}^{\backprime\backprime} \mathrm{De} {}^{\prime\prime}\!</math> is a constant name that refers to a particular function, <math>{}^{\backprime\backprime} q {}^{\prime\prime}\!</math> is a variable name that makes a PIR to a collection of signs, and <math>{}^{\backprime\backprime} L {}^{\prime\prime}\!</math> is a variable name that makes a PIR to a collection of sign relations.
  
 
This is not the place to take up the possibility of an ideal, universal, or even a very comprehensive interpreter for the language indicated here, so I specialize the account to consider an interpreter <math>Q_{\text{AB}} = Q(\text{A}, \text{B})\!</math> that is competent to cover the initial level of reflections that arise from the dialogue of <math>\text{A}\!</math> and <math>\text{B}.\!</math>
 
  
For the interpreter <math>Q_\text{AB},\!</math> the sign variable <math>q\!</math> need only range over the syntactic domain <math>S = \{ {}^{\backprime\backprime} \text{A} {}^{\prime\prime}, {}^{\backprime\backprime} \text{B} {}^{\prime\prime}, {}^{\backprime\backprime} \text{i} {}^{\prime\prime}, {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \}\!</math> and the relation variable <math>L\!</math> need only range over the set of sign relations <math>\{ L(\text{A}), L(\text{B}) \}.\!</math>  These requirements can be accomplished as follows:
  
# The variable name <math>{}^{\backprime\backprime} q {}^{\prime\prime}</math> is a HA sign that makes a PIR to the elements of <math>S.~\!</math>
# The variable name <math>{}^{\backprime\backprime} L {}^{\prime\prime}</math> is a HU sign that makes a PIR to the elements of <math>\{ L(\text{A}), L(\text{B}) \}.~\!</math>
# The constant name <math>{}^{\backprime\backprime} L(\text{A}) {}^{\prime\prime}</math> is a HI sign that makes a PIR to the elements of <math>L(\text{A}).~\!</math>
# The constant name <math>{}^{\backprime\backprime} L(\text{B}) {}^{\prime\prime}</math> is a HI sign that makes a PIR to the elements of <math>L(\text{B}).~\!</math>
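Under the encodings used in the earlier sketches, these four requirements amount to stipulating a handful of denotation sets for <math>Q_\text{AB}.\!</math>  The fragment below records them as plain data and checks the two identities stated at the end of this subsection; the names are, as before, invented for the illustration.

<pre>
# The syntactic domain shared by A and B, plus abridged stand-ins for the
# sign relations L(A) and L(B), stored as frozensets so that they can in
# turn be members of other sets.
S = {'"A"', '"B"', '"i"', '"u"'}
L_A = frozenset({("A", '"A"', '"A"'), ("B", '"u"', '"u"')})
L_B = frozenset({("B", '"B"', '"B"'), ("A", '"u"', '"u"')})

# The denotation sets that Q_AB must provide, one for each requirement above.
De_QAB = {
    '"q"':    set(S),          # 1. the HA sign "q" makes a PIR to the elements of S
    '"L"':    {L_A, L_B},      # 2. the HU sign "L" makes a PIR to {L(A), L(B)}
    '"L(A)"': set(L_A),        # 3. the HI sign "L(A)" makes a PIR to the elements of L(A)
    '"L(B)"': set(L_B),        # 4. the HI sign "L(B)" makes a PIR to the elements of L(B)
}

# The identities recorded at the close of this subsection.
assert De_QAB['"q"'] == S
assert De_QAB['"L"'] == {L_A, L_B}
</pre>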
  
This results in a higher order sign relation for <math>Q_\text{AB},\!</math> which is shown in Table&nbsp;46.
  
 
<br>
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
|+ style="height:30px" | <math>\text{Table 46.} ~~ \text{Higher Order Sign Relation for the Interpreter} ~ Q_\text{AB}\!</math>
|}
  
 
<br>
 
  
Following the manner of construction in this extremely reduced example, it is possible to see how answers to the above questions, concerning the meaning of <math>{}^{\backprime\backprime} \mathrm{De}(q, L) {}^{\prime\prime},\!</math> might be worked out.  In the present instance:
  
 
{| align="center" cellspacing="8" width="90%"
 
{| align="center" cellspacing="8" width="90%"
 
|
 
|
 
<math>\begin{array}{lll}
 
<math>\begin{array}{lll}
\operatorname{De} ({}^{\backprime\backprime} q {}^{\prime\prime}, Q_{\text{AB}})
+
\mathrm{De} ({}^{\backprime\backprime} q {}^{\prime\prime}, Q_{\text{AB}})
 
& = & S
 
& = & S
 
\\[6pt]
 
\\[6pt]
\operatorname{De} ({}^{\backprime\backprime} L {}^{\prime\prime}, Q_{\text{AB}})
+
\mathrm{De} ({}^{\backprime\backprime} L {}^{\prime\prime}, Q_{\text{AB}})
 
& = & \{ L(\text{A}), L(\text{B}) \}
 
& = & \{ L(\text{A}), L(\text{B}) \}
 
\end{array}</math>
 
\end{array}</math>
Line 2,904: Line 2,914:
 
<p>The ''nominal resource'' (''nominal alphabet'' or ''nominal lexicon'') for <math>X\!</math> is a set of signs that is notated and defined as follows:</p>
 
<p><math>X^{\backprime\backprime\prime\prime} = \mathrm{Nom}(X) = \{ {}^{\backprime\backprime} x_1 {}^{\prime\prime}, \ldots, {}^{\backprime\backprime} x_n {}^{\prime\prime} \}.</math></p>
  
 
<p>This concept is intended to capture the ordinary usage of this set of signs in one familiar context or another.</p></li>
 
 
<p>The ''mediate resource'' (''mediate alphabet'' or ''mediate lexicon'') for <math>X\!</math> is a set of signs that is notated and defined as follows:</p>
 
<p><math>X^{\langle\rangle} = \mathrm{Med}(X) = \{ {}^{\langle} x_1 {}^{\rangle}, \ldots, {}^{\langle} x_n {}^{\rangle} \}.</math></p>
  
 
<p>This concept provides a middle ground between the nominal resource above and the literal resource described next.</p></li>
 
 
<p>The ''literal resource'' (''literal alphabet'' or ''literal lexicon'') for <math>X\!</math> is a set of signs that is notated and defined as follows:</p>
 
<p><math>X = \mathrm{Lit}(X) = \{ x_1, \ldots, x_n \}.</math></p>
  
 
<p>This concept is intended to supply a set of signs that can be used in ways analogous to familiar usages, but which are more subject to free variation and thematic control.</p></li></ol>
 
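For a finite set of objects the three resources just listed can be generated mechanically.  The following sketch, which encodes raised quotes and arch brackets as ordinary string delimiters purely for the sake of the illustration, builds each lexicon from a list of base names.

<pre>
def nominal(names):
    """Nom(X): the quotation-marked names "x1", ..., "xn"."""
    return {'"' + x + '"' for x in names}

def mediate(names):
    """Med(X): the arch-quoted names <x1>, ..., <xn>."""
    return {"<" + x + ">" for x in names}

def literal(names):
    """Lit(X): the names employed as signs in their own right."""
    return set(names)

X = ["x1", "x2", "x3"]
print(nominal(X))   # {'"x1"', '"x2"', '"x3"'}
print(mediate(X))   # {'<x1>', '<x2>', '<x3>'}
print(literal(X))   # {'x1', 'x2', 'x3'}
</pre>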
 
In the ''elemental construal'' of variables, a variable <math>x\!</math> is just an existing object <math>x\!</math> that is an element of a set <math>X,\!</math> the catch being &ldquo;which element?&rdquo;  In spite of this lack of information, one is still permitted to write <math>{}^{\backprime\backprime} x \in X {}^{\prime\prime}\!</math> as a syntactically well-formed expression and otherwise treat the variable name <math>{}^{\backprime\backprime} x {}^{\prime\prime}\!</math> as a pronoun on a grammatical par with a noun.  Given enough information about the contexts of usage and interpretation, this explanation of the variable <math>x\!</math> as an unknown object would complete itself in a determinate indication of the element intended, just as if a constant object had always been named by <math>{}^{\backprime\backprime} x {}^{\prime\prime}.\!</math>
 
  
In the ''functional construal'' of variables, a variable is a function of unknown circumstances that results in a known range of definite values.  This tactic pushes the ostensible location of the uncertainty back a bit, into the domain of a named function, but it cannot eliminate it entirely.  Thus, a variable is a function <math>x : X \to Y\!</math> that maps a domain of unknown circumstances, or a ''sample space'' <math>X,\!</math> into a range <math>Y\!</math> of outcome values.  Typically, variables of this sort come in sets of the form <math>\{ x_i : X \to Y \},\!</math> collectively called ''coordinate projections'' and together constituting a basis for a whole class of functions <math>x : X \to Y\!</math> sharing a similar type.  This construal succeeds in giving each variable name <math>{}^{\backprime\backprime} x_i {}^{\prime\prime}\!</math> an objective referent, namely, the coordinate projection <math>{x_i},\!</math> but the explanation is partial to the extent that the domain of unknown circumstances remains to be explained.  Completing this explanation of variables, to the extent that it can be accomplished, requires an account of how these unknown circumstances can be known exactly to the extent that they are in fact described, that is, in terms of their effects under the given projections.
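The functional construal is easy to picture computationally: a variable <math>x_i\!</math> is the <math>i^\text{th}\!</math> coordinate projection on a sample space, here taken to be the space of bit tuples <math>\mathbb{B}^n.\!</math>  The names in the sketch are illustrative only.

<pre>
from itertools import product

n = 3
B_n = list(product((0, 1), repeat=n))   # the sample space B^n

def projection(i):
    """The i-th coordinate projection x_i : B^n -> B (1-indexed, as in the text)."""
    return lambda point: point[i - 1]

x1, x2, x3 = (projection(i) for i in (1, 2, 3))

print(x2((0, 1, 0)))                    # 1 : the value of x_2 at this sample point
print({p for p in B_n if x1(p) == 1})   # the event on which x_1 takes the value 1
</pre>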
  
 
As suggested by the whole direction of the present work, the ultimate explanation of variables is to be given by the pragmatic theory of signs, where variables are treated as a special class of signs called ''indices''.
 
  
 
{| align="center" cellspacing="8" width="90%"
 
{| align="center" cellspacing="8" width="90%"
| <math>\underline{\underline{X}} = \operatorname{Lit}(X) = \{ \underline{\underline{x_1}}, \ldots, \underline{\underline{x_n}} \}.\!</math>
+
| <math>\underline{\underline{X}} = \mathrm{Lit}(X) = \{ \underline{\underline{x_1}}, \ldots, \underline{\underline{x_n}} \}.\!</math>
 
|}
 
  
 
# The reflective (or critical) acceptation is to see the list before all else as a list of signs, each of which may or may not have a EU-object.  This is the attitude that must be taken in formal language theory and in any setting where computational constraints on interpretation are being contemplated.  In these contexts it cannot be assumed without question that every sign, whose participation in a denotation relation would have to be indicated by a recursive function and implemented by an effective program, does in fact have an existential denotation, much less a unique object.  The entire body of implicit assumptions that go to make up this acceptation, although they operate more like interpretive suspicions than automatic dispositions, will be referred to as the ''sign convention''.
 
  
In the present context, I can answer questions about the ontology of a &ldquo;variable&rdquo; by saying that each variable <math>x_i\!</math> is a kind of a sign, in the boolean case capable of denoting an element of <math>{\mathbb{B} = \{ 0, 1 \}}\!</math> as its object, with the actual value depending on the interpretation of the moment.  Note that <math>x_i\!</math> is a sign, and that <math>{}^{\backprime\backprime} x_i {}^{\prime\prime}\!</math> is another sign that denotes it.  This acceptation of the list <math>X = \{ x_i \}\!</math> corresponds to what was just called the ''sign convention''.
  
 
In a context where all the signs that ought to have EU-objects are in fact safely assured to do so, then it is usually less bothersome to assume the object convention.  Otherwise, discussion must resort to the less natural but more careful sign convention.  This convention is only &ldquo;artificial&rdquo; in the sense that it recalls the artifactual nature and the instrumental purpose of signs, and does nothing more out of the way than to call an implement &ldquo;an implement&rdquo;.
 
In a context where all the signs that ought to have EU-objects are in fact safely assured to do so, then it is usually less bothersome to assume the object convention.  Otherwise, discussion must resort to the less natural but more careful sign convention.  This convention is only &ldquo;artificial&rdquo; in the sense that it recalls the artifactual nature and the instrumental purpose of signs, and does nothing more out of the way than to call an implement &ldquo;an implement&rdquo;.
Line 3,029: Line 3,039:
  
 
<li>
<p>The sign <math>{}^{\backprime\backprime} x_i {}^{\prime\prime},\!</math> appearing in the contextual frame <math>{}^{\backprime\backprime} \underline{~~~} : \mathbb{B}^n \to \mathbb{B} {}^{\prime\prime},\!</math> or interpreted as belonging to that frame, denotes the <math>i^\text{th}\!</math> coordinate function <math>\underline{\underline{x_i}} : \mathbb{B}^n \to \mathbb{B}.</math>  The entire collection of coordinate maps in <math>{\underline{\underline{X}} = \{ \underline{\underline{x_i}} \}}\!</math> contributes to the definition of the ''coordinate space'' or ''vector space'' <math>\underline{X} : \mathbb{B}^n,\!</math> notated as follows:</p>

<p><math>\underline{X} = \langle \underline{\underline{X}} \rangle = \langle \underline{\underline{x_1}}, \ldots, \underline{\underline{x_n}} \rangle = \{ (\underline{\underline{x_1}}, \ldots, \underline{\underline{x_n}}) \} : \mathbb{B}^n.\!</math></p>
 
Next, it is necessary to consider the stylistic differences among the logical, functional, and geometric conceptions of propositional logic.  Logically, a domain of properties or propositions is known by the axioms it is subject to.  Concretely, one thinks of a particular property or proposition as applying to the things or situations it is true of.  With the synthesis just indicated, this can be expressed in a unified form:  In abstract logical terms, a DOP is known by the axioms to which it is subject.  In concrete functional or geometric terms, a particular element of a DOP is known by the things of which it is true.

With the appropriate correspondences between these three domains in mind, the general term ''proposition'' can be interpreted in a flexible manner to cover logical, functional, and geometric types of objects.  Thus, a locution like <math>{}^{\backprime\backprime} \text{the proposition}~ F {}^{\prime\prime}\!</math> can be interpreted in three ways:  (1) literally, to denote a logical proposition, (2) functionally, to denote a mapping from a space <math>X\!</math> of propertied or proposed objects to the domain <math>{\mathbb{B} = \{ 0, 1 \}}\!</math> of truth values, and (3) geometrically, to denote the so-called ''fiber of truth'' <math>F^{-1}(1)\!</math> as a region or a subset of <math>X.\!</math>  For all of these reasons, it is desirable to set up a suitably flexible interpretive framework for propositional logic, where an object introduced as a logical proposition <math>F\!</math> can be recast as a boolean function <math>F : X \to \mathbb{B},\!</math> and understood to indicate the region of the space <math>X\!</math> that is ruled by <math>F.\!</math>
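
A brief sketch of the functional and geometric readings, assuming a toy universe of coordinate tuples and a proposition chosen only for the sake of illustration:

<pre>
# Hypothetical illustration: a proposition as a boolean function and its fiber of truth.
from itertools import product

# A small universe X, encoded here as the set of all 3-bit coordinate tuples.
X = list(product((0, 1), repeat=3))

# An illustrative proposition F : X -> B, true when exactly one feature is on.
def F(x):
    return 1 if sum(x) == 1 else 0

# Geometric reading: the fiber of truth F^(-1)(1) is the region of X ruled by F.
fiber_of_truth = [x for x in X if F(x) == 1]

print(fiber_of_truth)   # [(0, 0, 1), (0, 1, 0), (1, 0, 0)]
</pre>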
  
 
Generally speaking, it does not seem possible to disentangle these three domains from each other or to determine which one is more fundamental.  In practice, due to its concern with the computational implementations of every concept it uses, the present work is biased toward the functional interpretation of propositions.  From this point of view, the abstract intention of a logical proposition <math>F\!</math> is regarded as being realized only when a program is found that computes the function <math>F : X \to \mathbb{B}.\!</math>
 
One of the reasons for pursuing a pragmatic hybrid of semantic and syntactic approaches, rather than keeping to the purely syntactic ways of manipulating meaningless tokens according to abstract rules of proof, is that the model theoretic strategy preserves the form of connection that exists between an agent's concrete particular experiences and the abstract propositions and general properties that it uses to describe its experience.  This makes it more likely that a hybrid approach will serve in the realistic pursuits of inquiry, since these efforts involve the integration of deductive, inductive, and abductive sources of knowledge.

In this approach to propositional logic, with a view toward computational realization, one begins with a space <math>X,\!</math> called a ''universe of discourse'', whose points can be reasonably well described by means of a finite set of logical features.  Since the points of the space <math>X\!</math> are effectively known only in terms of their computable features, one can assume that there is a finite set of computable coordinate projections <math>x_i : X \to \mathbb{B},\!</math> for <math>{i = 1 ~\text{to}~ n,}\!</math> for some <math>n,\!</math> that can serve to describe the points of <math>X.\!</math>  This means that there is a computable coordinate representation for <math>X,\!</math> in other words, a computable map <math>T : X \to \mathbb{B}^n\!</math> that describes the points of <math>X\!</math> insofar as they are known.  Thus, each proposition <math>F : X \to \mathbb{B}\!</math> can be factored through the coordinate representation <math>T : X \to \mathbb{B}^n\!</math> to yield a related proposition <math>f : \mathbb{B}^n \to \mathbb{B},\!</math> one that speaks directly about coordinate <math>n\!</math>-tuples but indirectly about points of <math>X.\!</math>  Composing maps on the right, the mapping <math>f\!</math> is defined by the equation <math>F = T \circ f.\!</math>  For all practical purposes served by the representation <math>T,\!</math> the proposition <math>f\!</math> can be taken as a proxy for the proposition <math>F,\!</math> saying things about the points of <math>X\!</math> by means of <math>X\!</math>'s encoding to <math>\mathbb{B}^n.\!</math>
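
The factorization can be sketched as follows; the universe, the coordinate map, and the propositions are all invented for the example rather than drawn from the text.

<pre>
# Hypothetical illustration: factoring a proposition F on X through a coordinate map T : X -> B^2.
X = ["this", "that", "what", "when"]            # an arbitrary toy universe of discourse

def T(x):
    """Coordinate representation T : X -> B^2, one illustrative feature per bit."""
    return (1 if "th" in x else 0, 1 if x.endswith("t") else 0)

def f(bits):
    """A proposition f : B^2 -> B on coordinate tuples."""
    return 1 if bits == (1, 1) else 0

def F(x):
    """The induced proposition F : X -> B, written F = T o f in the text's right-composition order."""
    return f(T(x))

print([(x, F(x)) for x in X])
</pre>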
  
Working under the functional perspective, the formal system known as ''propositional calculus'' is introduced as a general system of notations for referring to boolean functions.  Typically, one takes a space <math>X\!</math> and a coordinate representation <math>T : X \to \mathbb{B}^n\!</math> as parameters of a particular system and speaks of the propositional calculus on a finite set of variables <math>\{ \underline{\underline{x_i}} \}.\!</math>  In objective terms, this constitutes the ''domain of propositions'' on the basis <math>\{ \underline{\underline{x_i}} \},\!</math> notated as <math>\mathrm{DOP}\{ \underline{\underline{x_i}} \}.\!</math>  Ideally, one does not want to become too fixed on a particular set of logical features or to let the momentary dimensions of the space be cast in stone.  In practice, this means that the formalism and its computational implementation should allow for the automatic embedding of <math>\mathrm{DOP}(\underline{\underline{X}})\!</math> into <math>\mathrm{DOP}(\underline{\underline{Y}})\!</math> whenever <math>\underline{\underline{X}} \subseteq \underline{\underline{Y}}.\!</math>
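
One way to realize the automatic embedding is to let a proposition on a smaller basis act on a larger basis simply by ignoring the extra features.  The following sketch, with invented feature names, is meant only to illustrate that idea.

<pre>
# Hypothetical illustration: embedding a proposition on basis {x1, x2} into the larger
# basis {x1, x2, x3} by letting it ignore the extra feature.
def embed(f, small_basis, large_basis):
    """Return the proposition on large_basis that applies f to the small_basis features."""
    assert set(small_basis) <= set(large_basis)
    def g(assignment):                              # assignment: dict feature -> 0 or 1
        return f({v: assignment[v] for v in small_basis})
    return g

f = lambda a: a["x1"] & a["x2"]                     # a proposition in DOP{x1, x2}
g = embed(f, ["x1", "x2"], ["x1", "x2", "x3"])

print(g({"x1": 1, "x2": 1, "x3": 0}))               # 1: the value of x3 makes no difference
</pre>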
  
 
The rest of this section presents the elements of a particular calculus for propositional logic.  First, I establish the basic notations and summarize the axiomatic presentation of the calculus, and then I give special attention to its functional and geometric interpretations.
 
<p>In order to render this MON instructive for the development of a RIF, something intended to be a deliberately ''self-conscious'' construction, it is important to remedy the excessive lucidity of this MON's reflections, the confusing mix of opacity and transparency that comes in proportion to one's very familiarity with an object and that is compounded by one's very fluency in a language.  To do this, it is incumbent on a proper analysis of the situation to slow the MON down, to interrupt one's own comprehension of its developing intent, and to articulate the details of the sign process that mediates it much more carefully than is customary.</p>

<p>These goals can be achieved by singling out the formal language that is used by this MON to denote its set theoretic objects.  This involves separating the object domain <math>{O = O_\text{MON}}\!</math> from the sign domain <math>{S = S_\text{MON}},\!</math> paying closer attention to the naive level of set notation that is actually used by this MON, and treating its primitive set theoretic expressions as a formal language all its own.</p>

<p>Thus, I need to discuss a variety of formal languages on the following alphabet:</p>
 
I close this section by discussing the relationship among the three views of systems that are relevant to the example of <math>\text{A}\!</math> and <math>\text{B}.\!</math>

'''[Variant]''' How do these three perspectives bear on the example of <math>\text{A}\!</math> and <math>\text{B}\!</math>?

'''[Variant]''' In order to show how these three perspectives bear on the present inquiry, I will now discuss the relationship they exhibit in the example of <math>\text{A}\!</math> and <math>\text{B}.\!</math>

In the present example, concerned with the form of communication that takes place between the interpreters <math>\text{A}\!</math> and <math>\text{B},\!</math> the topic of interest is not the type of dynamics that would change one of the original objects, <math>\text{A}\!</math> or <math>\text{B},\!</math> into the other.  Thus, the object system is nothing more than the object domain <math>O = \{ \text{A}, \text{B} \}\!</math> shared between the sign relations <math>L(\text{A})\!</math> and <math>L(\text{B}).\!</math>  In this case, where the object system reduces to an abstract set, falling under the action of a trivial dynamics, one says that the object system is ''stable'' or ''static''.  In more developed examples, when the dynamics at the level of the object system becomes more interesting, the ''objects'' in the object system are usually referred to as ''objective configurations'' or ''object states''.  Later examples will take on object systems that enjoy significant variations in the sequences of their objective states.
 
A sign relation is a complex object and its representations, insofar as they faithfully preserve its structure, are complex signs.  Accordingly, the problems of translating between ERs and IRs of sign relations, of detecting when representations alleged to be of sign relations do indeed represent objects of the specified character, and of recognizing whether different representations do or do not represent the same sign relation as their common object &mdash; these are the familiar questions that would be asked of the signs and interpretants in a simple sign relation, but this time asked at a higher level, in regard to the complex signs and complex interpretants that are posed by the different stripes of representation.  At the same time, it should be obvious that these are also the natural questions to be faced in building a bridge between representations.

How many different sorts of entities are conceivably involved in translating between ERs and IRs of sign relations?  To address this question it helps to introduce a system of type notations that can be used to keep track of the various sorts of things, or the varieties of objects of thought, that are generated in the process of answering it.  Table&nbsp;47.1 summarizes the basic types of things that are needed in this pursuit, while the rest can be derived by constructions of the form <math>X ~\mathrm{of}~ Y,\!</math> notated <math>X(Y)\!</math> or just <math>XY,\!</math> for any basic types <math>X\!</math> and <math>Y.\!</math>  The constructed types of things involved in the ERs and IRs of sign relations are listed in Tables&nbsp;47.2 and 47.3, respectively.

<br>
 
|-
| <math>\text{Relation}\!</math>
| <math>{R}\!</math>
| <math>{S(T(U))}\!</math>
|}
 
Let <math>\underline{S}\!</math> be the type of signs, <math>S\!</math> the type of sets, <math>T\!</math> the type of triples, and <math>U\!</math> the type of underlying objects.  Now consider the various sorts of things, or the varieties of objects of thought, that are invoked on each side, annotating each type as it is mentioned:

ERs of sign relations describe them as sets <math>(Ss)\!</math> of triples <math>(Ts)\!</math> of underlying elements <math>(Us).\!</math>  This makes for three levels of objective structure that must be put in coordination with each other, a task that is projected to be carried out in the appropriate OF of sign relations.  Corresponding to this aspect of structure in the OF, there is a parallel aspect of structure in the IF of sign relations.  Namely, the accessory sign relations that are used to discuss a targeted sign relation need to have signs for sets <math>{(\underline{S}Ss)},\!</math> signs for triples <math>{(\underline{S}Ts)},\!</math> and signs for the underlying elements <math>{(\underline{S}Us)}.\!</math>  This accounts for three levels of syntactic structure in the IF of sign relations that must be coordinated with each other and also with the targeted levels of objective structure.

'''[Variant]''' IRs of sign relations describe them in terms of properties <math>(Ps)\!</math> that are taken as primitive entities in their own right.  /// refer to properties <math>(Ps)\!</math> of transactions <math>(Ts)\!</math> of underlying elements <math>(Us).\!</math>

'''[Variant]''' IRs of sign relations refer to properties of sets <math>(PSs),\!</math> properties of triples <math>(PTs),\!</math> and properties of underlying elements <math>(PUs).\!</math>  This amounts to three more levels of objective structure in the OF of the IR that need to be coordinated with each other and interlaced with the OF of the ER if the two are to be brought into the same discussion, possibly for the purpose of translating either into the other.  Accordingly, the accessory sign relations that are used to discuss an IR of a targeted sign relation need to have <math>\underline{S}PSs,\!</math> <math>\underline{S}PTs,\!</math> and <math>\underline{S}PUs.\!</math>

===6.22. Extensional Representations of Sign Relations===
 
Starting from a standpoint in concrete constructions, the easiest way to begin developing an explicit treatment of ERs is to gather the relevant materials in the forms already presented, to fill out the missing details and expand the abbreviated contents of these forms, and to review their full structures in a more formal light.  Consequently, this section inaugurates the formal discussion of ERs by taking a second look at the interpreters <math>\text{A}\!</math> and <math>\text{B},\!</math> recollecting the Tables of their sign relations and finishing up the Tables of their dyadic components.  Since the form of the sign relations <math>L(\text{A})\!</math> and <math>L(\text{B})\!</math> no longer presents any novelty, I can exploit their second presentation as a first opportunity to examine a selection of finer points, previously overlooked.  Also, in the process of reviewing this material it is useful to anticipate a number of incidental issues that are reaching the point of becoming critical within this discussion and to begin introducing the generic types of technical devices that are needed to deal with them.

The next set of Tables summarizes the ERs of <math>L(\text{A})\!</math> and <math>L(\text{B}).\!</math>  For ease of reference, Tables&nbsp;48.1 and 49.1 repeat the contents of Tables&nbsp;1 and 2, respectively, the only difference being that appearances of ordinary quotation marks <math>({}^{\backprime\backprime} \ldots {}^{\prime\prime})\!</math> are transcribed as invocations of the ''arch operator'' <math>({}^{\langle} \ldots {}^{\rangle}).\!</math>  The reason for this slight change of notation will be explained shortly.  The denotative components <math>\mathrm{Den}(\text{A})\!</math> and <math>\mathrm{Den}(\text{B})\!</math> are shown in the first two columns of Tables&nbsp;48.2 and 49.2, respectively, while the third column gives the transition from sign to object as an ordered pair <math>(s, o).\!</math>  The connotative components <math>\mathrm{Con}(\text{A})\!</math> and <math>\mathrm{Con}(\text{B})\!</math> are shown in the first two columns of Tables&nbsp;48.3 and 49.3, respectively, while the third column gives the transition from sign to interpretant as an ordered pair <math>(s, i).\!</math>
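
In computational terms, an ER of a sign relation is just a finite set of <math>(o, s, i)\!</math> triples, and the dyadic components arise as projections of that set.  The sketch below assumes only a few illustrative triples, written in plain text with <code>"&lt;A&gt;"</code> standing in for the arch-quoted sign, rather than reproducing the full Tables.

<pre>
# Hypothetical illustration: an ER as a set of (o, s, i) triples and its dyadic projections.
# Only a few sample triples are shown; the full relation is given by the Tables in the text.
L_A = {
    ("A", "<A>", "<A>"),
    ("A", "<A>", "<i>"),
    ("B", "<B>", "<u>"),
}

# Denotative component: transitions from sign to object, recorded as (s, o) pairs.
Den_A = {(s, o) for (o, s, i) in L_A}

# Connotative component: transitions from sign to interpretant, recorded as (s, i) pairs.
Con_A = {(s, i) for (o, s, i) in L_A}

print(sorted(Den_A))
print(sorted(Con_A))
</pre>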
  
 
<br>
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
|+ style="height:30px" |
<math>\text{Table 48.1} ~~ \mathrm{ER}(L_\text{A}) : \text{Extensional Representation of} ~ L_\text{A}\!</math>
|- style="height:40px; background:#f0f0ff"
| <math>\text{Object}\!</math>
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
|+ style="height:30px" |
<math>\text{Table 48.2} ~~ \mathrm{ER}(\mathrm{Den}(L_\text{A})) : \text{Denotative Component of} ~ L_\text{A}\!</math>
|- style="height:40px; background:#f0f0ff"
| <math>\text{Object}\!</math>
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
|+ style="height:30px" |
<math>\text{Table 48.3} ~~ \mathrm{ER}(\mathrm{Con}(L_\text{A})) : \text{Connotative Component of} ~ L_\text{A}\!</math>
|- style="height:40px; background:#f0f0ff"
| <math>\text{Sign}\!</math>
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
|+ style="height:30px" |
<math>\text{Table 49.1} ~~ \mathrm{ER}(L_\text{B}) : \text{Extensional Representation of} ~ L_\text{B}\!</math>
|- style="height:40px; background:#f0f0ff"
| <math>\text{Object}\!</math>
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
|+ style="height:30px" |
<math>\text{Table 49.2} ~~ \mathrm{ER}(\mathrm{Den}(L_\text{B})) : \text{Denotative Component of} ~ L_\text{B}~\!</math>
|- style="height:40px; background:#f0f0ff"
| <math>\text{Object}\!</math>
 
\\
({}^{\langle} \text{u} {}^{\rangle}, \text{A})
\end{matrix}\!</math>
|-
| valign="bottom" width="33%" |
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
|+ style="height:30px" |
<math>\text{Table 49.3} ~~ \mathrm{ER}(\mathrm{Con}(L_\text{B})) : \text{Connotative Component of} ~ L_\text{B}\!</math>
|- style="height:40px; background:#f0f0ff"
| <math>\text{Sign}\!</math>
 
For the sake of maximum clarity and reusability of results, I begin by articulating the abstract skeleton of the paradigm structure, treating the sign relations <math>L(\text{A})\!</math> and <math>L(\text{B})\!</math> as sundry aspects of a single, unitary, but still uninterpreted object.  Then I return at various successive stages to differentiate and individualize the two interpreters, to arrange more functional flesh on the basis provided by their structural bones, and to illustrate how their bare forms can be arrayed in many different styles of qualitative detail.

In building connections between ERs and IRs of sign relations the discussion turns on two types of partially ordered sets, or ''posets''.  Suppose that <math>L\!</math> is one of the sign relations <math>L(\text{A})\!</math> and <math>L(\text{B}),\!</math> and let <math>\mathrm{ER}(L)\!</math> be an ER of <math>L.\!</math>

In the sign relations <math>L(\text{A})\!</math> and <math>L(\text{B}),\!</math> both of their ERs are based on a common world set:
 
To devise an IR of any relation <math>L\!</math> one needs to describe <math>L\!</math> in terms of properties of its ingredients.  Broadly speaking, the ingredients of a relation include its elementary relations or <math>n\!</math>-tuples and the elementary components of these <math>n\!</math>-tuples that reside in the relational domains.

The poset <math>\mathrm{Pos}(W)\!</math> of interest here is the power set <math>\mathcal{P}(W) = \mathrm{Pow}(W).\!</math>

The elements of these posets are abstractly regarded as ''properties'' or ''propositions'' that apply to the elements of <math>W.\!</math>  These properties and propositions are independently given entities.  In other words, they are primitive elements in their own right, and cannot in general be defined in terms of points, but they exist in relation to these points, and their extensions can be represented as sets of points.

'''[Variant]''' For a variety of foundational reasons that I do not fully understand, perhaps most of all because theoretically given structures have their real foundations outside the realm of theory, in empirically given structures, it is best to regard points, properties, and propositions as equally primitive elements, related to each other but not defined in terms of each other, analogous to the undefined elements of a geometry.

'''[Variant]''' There is a foundational issue arising in this context that I do not pretend to fully understand and cannot attempt to finally dispatch.  What I do understand I will try to express in terms of an aesthetic principle:  On balance, it seems best to regard extensional elements and intensional features as independently given entities.  This involves treating points and properties as fundamental realities in their own rights, placing them on an equal basis with each other, and seeking their relation to each other, but not trying to reduce one to the other.

The discussion is now specialized to consider the IRs of the sign relations <math>L(\text{A})\!</math> and <math>L(\text{B}),\!</math> their denotative projections as the digraphs <math>\mathrm{Den}(L_\text{A})\!</math> and <math>\mathrm{Den}(L_\text{B}),\!</math> and their connotative projections as the digraphs <math>\mathrm{Con}(L_\text{A})\!</math> and <math>\mathrm{Con}(L_\text{B}).\!</math>  In doing this I take up two different strategies of representation:

# The first strategy is called the ''literal coding'', because it sticks to obvious features of each syntactic element to contrive its code, or the ''<math>{\mathcal{O}(n)}\!</math> coding'', because it uses a number on the order of <math>n\!</math> logical features to represent a domain of <math>n\!</math> elements.
# The second strategy is called the ''analytic coding'', because it attends to the nuances of each sign's interpretation to fashion its code, or the ''<math>\log (n)\!</math> coding'', because it uses roughly <math>\log_2 (n)\!</math> binary features to represent a domain of <math>n\!</math> elements.
 
Using two different strategies of representation:

'''Literal Coding.'''  The first strategy is called the ''literal coding'' because it sticks to obvious features of each syntactic element to contrive its code, or the ''<math>{\mathcal{O}(n)}\!</math> coding'', because it uses a number on the order of <math>n\!</math> logical features to represent a domain of <math>n\!</math> elements.

Being superficial as a matter of principle, or adhering to the surface appearances of signs, enjoys the initial advantage that the very same codes can be used by any interpreter that is capable of observing them.  The down side of resorting to this technique is that it typically uses an excessive number of logical dimensions to get each point of the intended space across.

Even while operating within the general lines of the literal, superficial, or <math>{\mathcal{O}(n)}\!</math> strategy, there are still a number of choices to be made in the style of coding to be employed.  For example, if there is an obvious distinction between different components of the world, like that between the objects in <math>O = \{ \text{A}, \text{B} \}\!</math> and the signs in <math>S = \{ {}^{\backprime\backprime} \text{A} {}^{\prime\prime}, {}^{\backprime\backprime} \text{B} {}^{\prime\prime}, {}^{\backprime\backprime} \text{i} {}^{\prime\prime}, {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \},\!</math> then it is common to let this distinction go formally unmarked in the LIR, that is, to omit the requirement of declaring an explicit logical feature to make a note of it in the formal coding.  The distinction itself, as a property of reality, is in no danger of being obliterated or permanently erased, but it can be obscured and temporarily ignored.  In practice, the distinction is not so much ignored as it is casually observed and informally attended to, usually being marked by incidental indices in the context of the representation.
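
To make the contrast between the two strategies concrete, here is a small sketch of both codings applied to a six-element domain; the element names and bit assignments are invented for the example and do not reproduce the codes defined in the Tables.

<pre>
# Hypothetical illustration: literal (one feature per element) vs analytic (binary) coding
# of a six-element world set.  The world elements are written in plain text for simplicity.
import math

W = ["A", "B", "'A'", "'B'", "'i'", "'u'"]

# Literal / O(n) coding: one logical feature per element (a one-hot bit vector).
literal = {w: tuple(1 if i == j else 0 for j in range(len(W)))
           for i, w in enumerate(W)}

# Analytic / log(n) coding: about log2(n) binary features suffice to separate the elements.
width = math.ceil(math.log2(len(W)))                 # 3 bits for 6 elements
analytic = {w: tuple((i >> b) & 1 for b in reversed(range(width)))
            for i, w in enumerate(W)}

print(literal["'i'"])    # (0, 0, 0, 0, 1, 0) : six features, one per element
print(analytic["'i'"])   # (1, 0, 0)          : three features cover all six elements
</pre>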
  
 
'''Literal Coding'''
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:75%"
|+ style="height:30px" |
<math>\text{Table 53.1} ~~ \text{Elements of} ~ \mathrm{ER}(W)\!</math>
|- style="background:#f0f0ff"
| <math>\text{Mnemonic Element}\!</math> <br><br> <math>w \in W\!</math>
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:75%"
|+ style="height:30px" |
<math>\text{Table 53.2} ~~ \text{Features of} ~ \mathrm{LIR}(W)\!</math>
|- style="background:#f0f0ff"
|
 
<br>

If the world of <math>\text{A}\!</math> and <math>\text{B},\!</math> the set <math>W = \{ \text{A}, \text{B}, {}^{\backprime\backprime} \text{A} {}^{\prime\prime}, {}^{\backprime\backprime} \text{B} {}^{\prime\prime}, {}^{\backprime\backprime} \text{i} {}^{\prime\prime}, {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \},\!</math> is viewed abstractly, as an arbitrary set of six atomic points, then there are exactly <math>2^6 = 64\!</math> ''abstract properties'' or ''potential attributes'' that might be applied to or recognized in these points.  The elements of <math>W\!</math> that possess a given property form a subset of <math>W\!</math> called the ''extension'' of that property.  Thus the extensions of abstract properties are exactly the subsets of <math>W.\!</math>  The set of all subsets of <math>W\!</math> is called the ''power set'' of <math>W,\!</math> notated as <math>\mathrm{Pow}(W)\!</math> or <math>\mathcal{P}(W).\!</math>  In order to make this way of talking about properties consistent with the previous definition of reality, it is necessary to say that one potential property is never realized, since no point has it, and its extension is the empty set <math>\varnothing = \{ \}.\!</math>  All the ''natural'' properties of points that one observes in a concrete situation, properties whose extensions are known as ''natural kinds'', can be recognized among the ''abstract'', ''arbitrary'', or ''set-theoretic'' properties that are systematically generated in this way.  Typically, however, many of these abstract properties will not be recognized as falling among the more natural kinds.
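
A short sketch of the count, enumerating the abstract properties of the six-point world by their extensions; the element names are written as plain strings purely for convenience.

<pre>
# Hypothetical illustration: the 2^6 = 64 abstract properties of a six-point world,
# each property identified with its extension, a subset of W.
from itertools import combinations

W = ["A", "B", "'A'", "'B'", "'i'", "'u'"]

properties = [set(c) for r in range(len(W) + 1) for c in combinations(W, r)]

print(len(properties))        # 64, including the empty extension that no point realizes
</pre>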
  
Tables&nbsp;54.1, 54.2, and 54.3 show three different ways of representing the elements of the world set <math>W\!</math> as vectors in the coordinate space <math>\underline{W}\!</math> and as singular propositions in the universe of discourse <math>W^\Box.\!</math>  Altogether, these Tables present the ''literal'' codes for the elements of <math>\underline{W}\!</math> and <math>W^\circ\!</math> in their ''mnemonic'', ''pragmatic'', and ''abstract'' versions, respectively.  In each Table, Column&nbsp;1 lists the element <math>w \in W,\!</math> while Column&nbsp;2 gives the corresponding coordinate vector <math>\underline{w} \in \underline{W}\!</math> in the form of a bit string.  The next two Columns represent each <math>w \in W\!</math> as a proposition in <math>W^\circ\!,</math> in effect, reconstituting it as a function <math>w : \underline{W} \to \mathbb{B}.</math>  Column&nbsp;3 shows the propositional expression of each element in the form of a conjunct term, in other words, as a logical product of positive and negative features.  Column&nbsp;4 gives the compact code for each element, using a conjunction of positive features in subscripted angle brackets to represent the singular proposition corresponding to each element.
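
The step from a coordinate vector to a singular proposition can be sketched as follows; the bit assignment is invented for the example and does not reproduce the codes in the Tables.

<pre>
# Hypothetical illustration: an element of W, an invented coordinate vector for it under a
# literal coding, and the singular proposition that is true of that vector alone.
coordinate = {"'i'": (0, 0, 0, 0, 1, 0)}        # made-up literal code for the sign "i"

def singular_proposition(w):
    """Return the function w : W_underline -> B that is 1 exactly at w's coordinate vector."""
    target = coordinate[w]
    return lambda vector: 1 if vector == target else 0

p = singular_proposition("'i'")
print(p((0, 0, 0, 0, 1, 0)))    # 1
print(p((0, 0, 0, 1, 0, 0)))    # 0
</pre>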
  
 
<br>
 
<br>
Line 4,848: Line 4,858:
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 55.1} ~~ \operatorname{LIR}_1 (L_\text{A}) : \text{Literal Representation of} ~ L_\text{A}\!</math>
+
<math>\text{Table 55.1} ~~ \mathrm{LIR}_1 (L_\text{A}) : \text{Literal Representation of} ~ L_\text{A}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Object}\!</math>
 
| width="33%" | <math>\text{Object}\!</math>
Line 4,914: Line 4,924:
 
\\[4pt]
 
\\[4pt]
 
{\langle\underline{\underline{\text{u}}}\rangle}_W
 
{\langle\underline{\underline{\text{u}}}\rangle}_W
\end{matrix}</math>
+
\end{matrix}\!</math>
 
|}
 
|}
  
Line 4,921: Line 4,931:
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 55.2} ~~ \operatorname{LIR}_1 (\operatorname{Den}(L_\text{A})) : \text{Denotative Component of} ~ L_\text{A}\!</math>
+
<math>\text{Table 55.2} ~~ \mathrm{LIR}_1 (\mathrm{Den}(L_\text{A})) : \text{Denotative Component of} ~ L_\text{A}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Object}\!</math>
 
| width="33%" | <math>\text{Object}\!</math>
Line 4,938: Line 4,948:
 
\\[4pt]
 
\\[4pt]
 
{\langle\underline{\underline{\text{i}}}\rangle}_W
 
{\langle\underline{\underline{\text{i}}}\rangle}_W
\end{matrix}</math>
+
\end{matrix}\!</math>
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
Line 4,974: Line 4,984:
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 55.3} ~~ \operatorname{LIR}_1 (\operatorname{Con}(L_\text{A})) : \text{Connotative Component of} ~ L_\text{A}\!</math>
+
<math>\text{Table 55.3} ~~ \mathrm{LIR}_1 (\mathrm{Con}(L_\text{A})) : \text{Connotative Component of} ~ L_\text{A}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Sign}\!</math>
 
| width="33%" | <math>\text{Sign}\!</math>
Line 5,002: Line 5,012:
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
0_{\operatorname{d}W}
+
0_{\mathrm{d}W}
 
\\[4pt]
 
\\[4pt]
 
{\langle
 
{\langle
\operatorname{d}\underline{\underline{\text{a}}}
+
\mathrm{d}\underline{\underline{\text{a}}}
 
~
 
~
\operatorname{d}\underline{\underline{\text{i}}}
+
\mathrm{d}\underline{\underline{\text{i}}}
\rangle}_{\operatorname{d}W}
+
\rangle}_{\mathrm{d}W}
 
\\[4pt]
 
\\[4pt]
 
{\langle
 
{\langle
\operatorname{d}\underline{\underline{\text{a}}}
+
\mathrm{d}\underline{\underline{\text{a}}}
 
~
 
~
\operatorname{d}\underline{\underline{\text{i}}}
+
\mathrm{d}\underline{\underline{\text{i}}}
\rangle}_{\operatorname{d}W}
+
\rangle}_{\mathrm{d}W}
 
\\[4pt]
 
\\[4pt]
0_{\operatorname{d}W}
+
0_{\mathrm{d}W}
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|-
 
|-
Line 5,038: Line 5,048:
 
\\[4pt]
 
\\[4pt]
 
{\langle\underline{\underline{\text{u}}}\rangle}_W
 
{\langle\underline{\underline{\text{u}}}\rangle}_W
\end{matrix}</math>
+
\end{matrix}\!</math>
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
0_{\operatorname{d}W}
+
0_{\mathrm{d}W}
 
\\[4pt]
 
\\[4pt]
 
{\langle
 
{\langle
\operatorname{d}\underline{\underline{\text{b}}}
+
\mathrm{d}\underline{\underline{\text{b}}}
 
~
 
~
\operatorname{d}\underline{\underline{\text{u}}}
+
\mathrm{d}\underline{\underline{\text{u}}}
\rangle}_{\operatorname{d}W}
+
\rangle}_{\mathrm{d}W}
 
\\[4pt]
 
\\[4pt]
 
{\langle
 
{\langle
\operatorname{d}\underline{\underline{\text{b}}}
+
\mathrm{d}\underline{\underline{\text{b}}}
 
~
 
~
\operatorname{d}\underline{\underline{\text{u}}}
+
\mathrm{d}\underline{\underline{\text{u}}}
\rangle}_{\operatorname{d}W}
+
\rangle}_{\mathrm{d}W}
 
\\[4pt]
 
\\[4pt]
0_{\operatorname{d}W}
+
0_{\mathrm{d}W}
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|}
 
|}
Line 5,063: Line 5,073:
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 56.1} ~~ \operatorname{LIR}_1 (L_\text{B}) : \text{Literal Representation of} ~ L_\text{B}\!</math>
+
<math>\text{Table 56.1} ~~ \mathrm{LIR}_1 (L_\text{B}) : \text{Literal Representation of} ~ L_\text{B}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Object}\!</math>
 
| width="33%" | <math>\text{Object}\!</math>
Line 5,129: Line 5,139:
 
\\[4pt]
 
\\[4pt]
 
{\langle\underline{\underline{\text{i}}}\rangle}_W
 
{\langle\underline{\underline{\text{i}}}\rangle}_W
\end{matrix}</math>
+
\end{matrix}\!</math>
 
|}
 
|}
  
Line 5,136: Line 5,146:
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 56.2} ~~ \operatorname{LIR}_1 (\operatorname{Den}(L_\text{B})) : \text{Denotative Component of} ~ L_\text{B}\!</math>
+
<math>\text{Table 56.2} ~~ \mathrm{LIR}_1 (\mathrm{Den}(L_\text{B})) : \text{Denotative Component of} ~ L_\text{B}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Object}\!</math>
 
| width="33%" | <math>\text{Object}\!</math>
Line 5,189: Line 5,199:
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 56.3} ~~ \operatorname{LIR}_1 (\operatorname{Con}(L_\text{B})) : \text{Connotative Component of} ~ L_\text{B}\!</math>
+
<math>\text{Table 56.3} ~~ \mathrm{LIR}_1 (\mathrm{Con}(L_\text{B})) : \text{Connotative Component of} ~ L_\text{B}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Sign}\!</math>
 
| width="33%" | <math>\text{Sign}\!</math>
Line 5,217: Line 5,227:
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
0_{\operatorname{d}W}
+
0_{\mathrm{d}W}
 
\\[4pt]
 
\\[4pt]
 
{\langle
 
{\langle
\operatorname{d}\underline{\underline{\text{a}}}
+
\mathrm{d}\underline{\underline{\text{a}}}
 
~
 
~
\operatorname{d}\underline{\underline{\text{u}}}
+
\mathrm{d}\underline{\underline{\text{u}}}
\rangle}_{\operatorname{d}W}
+
\rangle}_{\mathrm{d}W}
 
\\[4pt]
 
\\[4pt]
 
{\langle
 
{\langle
\operatorname{d}\underline{\underline{\text{a}}}
+
\mathrm{d}\underline{\underline{\text{a}}}
 
~
 
~
\operatorname{d}\underline{\underline{\text{u}}}
+
\mathrm{d}\underline{\underline{\text{u}}}
\rangle}_{\operatorname{d}W}
+
\rangle}_{\mathrm{d}W}
 
\\[4pt]
 
\\[4pt]
0_{\operatorname{d}W}
+
0_{\mathrm{d}W}
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|-
 
|-
Line 5,253: Line 5,263:
 
\\[4pt]
 
\\[4pt]
 
{\langle\underline{\underline{\text{i}}}\rangle}_W
 
{\langle\underline{\underline{\text{i}}}\rangle}_W
\end{matrix}</math>
+
\end{matrix}\!</math>
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
0_{\operatorname{d}W}
+
0_{\mathrm{d}W}
 
\\[4pt]
 
\\[4pt]
 
{\langle
 
{\langle
\operatorname{d}\underline{\underline{\text{b}}}
+
\mathrm{d}\underline{\underline{\text{b}}}
 
~
 
~
\operatorname{d}\underline{\underline{\text{i}}}
+
\mathrm{d}\underline{\underline{\text{i}}}
\rangle}_{\operatorname{d}W}
+
\rangle}_{\mathrm{d}W}
 
\\[4pt]
 
\\[4pt]
 
{\langle
 
{\langle
\operatorname{d}\underline{\underline{\text{b}}}
+
\mathrm{d}\underline{\underline{\text{b}}}
 
~
 
~
\operatorname{d}\underline{\underline{\text{i}}}
+
\mathrm{d}\underline{\underline{\text{i}}}
\rangle}_{\operatorname{d}W}
+
\rangle}_{\mathrm{d}W}
 
\\[4pt]
 
\\[4pt]
0_{\operatorname{d}W}
+
0_{\mathrm{d}W}
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|}
 
|}
Line 5,336: Line 5,346:
 
\underline{\underline{{}^{\backprime\backprime} \text{u} {}^{\prime\prime}}}
 
\underline{\underline{{}^{\backprime\backprime} \text{u} {}^{\prime\prime}}}
 
& \}
 
& \}
\end{array}</math>
+
\end{array}\!</math>
 
|}
 
|}
  
Line 5,594: Line 5,604:
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 58.1} ~~ \operatorname{LIR}_2 (L_\text{A}) : \text{Lateral Representation of} ~ L_\text{A}\!</math>
+
<math>\text{Table 58.1} ~~ \mathrm{LIR}_2 (L_\text{A}) : \text{Lateral Representation of} ~ L_\text{A}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Object}\!</math>
 
| width="33%" | <math>\text{Object}\!</math>
Line 5,635: Line 5,645:
 
~\underline{\underline{\text{i}}}~
 
~\underline{\underline{\text{i}}}~
 
(\underline{\underline{\text{u}}})
 
(\underline{\underline{\text{u}}})
\end{matrix}</math>
+
\end{matrix}\!</math>
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
Line 5,723: Line 5,733:
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 58.2} ~~ \operatorname{LIR}_2 (\operatorname{Den}(L_\text{A})) : \text{Denotative Component of} ~ L_\text{A}\!</math>
+
<math>\text{Table 58.2} ~~ \mathrm{LIR}_2 (\mathrm{Den}(L_\text{A})) : \text{Denotative Component of} ~ L_\text{A}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Object}\!</math>
 
| width="33%" | <math>\text{Object}\!</math>
Line 5,777: Line 5,787:
 
(\underline{\underline{\text{i}}})
 
(\underline{\underline{\text{i}}})
 
~\underline{\underline{\text{u}}}~
 
~\underline{\underline{\text{u}}}~
\end{matrix}</math>
+
\end{matrix}\!</math>
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
Line 5,792: Line 5,802:
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 58.3} ~~ \operatorname{LIR}_2 (\operatorname{Con}(L_\text{A})) : \text{Connotative Component of} ~ L_\text{A}\!</math>
+
<math>\text{Table 58.3} ~~ \mathrm{LIR}_2 (\mathrm{Con}(L_\text{A})) : \text{Connotative Component of} ~ L_\text{A}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Sign}\!</math>
 
| width="33%" | <math>\text{Sign}\!</math>
Line 5,819: Line 5,829:
 
~\underline{\underline{\text{i}}}~
 
~\underline{\underline{\text{i}}}~
 
(\underline{\underline{\text{u}}})
 
(\underline{\underline{\text{u}}})
\end{matrix}</math>
+
\end{matrix}\!</math>
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
Line 5,937: Line 5,947:
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 59.1} ~~ \operatorname{LIR}_2 (L_\text{B}) : \text{Lateral Representation of} ~ L_\text{B}\!</math>
+
<math>\text{Table 59.1} ~~ \mathrm{LIR}_2 (L_\text{B}) : \text{Lateral Representation of} ~ L_\text{B}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Object}\!</math>
 
| width="33%" | <math>\text{Object}\!</math>
Line 6,066: Line 6,076:
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 59.2} ~~ \operatorname{LIR}_2 (\operatorname{Den}(L_\text{B})) : \text{Denotative Component of} ~ L_\text{B}\!</math>
+
<math>\text{Table 59.2} ~~ \mathrm{LIR}_2 (\mathrm{Den}(L_\text{B})) : \text{Denotative Component of} ~ L_\text{B}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Object}\!</math>
 
| width="33%" | <math>\text{Object}\!</math>
Line 6,135: Line 6,145:
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 59.3} ~~ \operatorname{LIR}_2 (\operatorname{Con}(L_\text{B})) : \text{Connotative Component of} ~ L_\text{B}\!</math>
+
<math>\text{Table 59.3} ~~ \mathrm{LIR}_2 (\mathrm{Con}(L_\text{B})) : \text{Connotative Component of} ~ L_\text{B}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Sign}\!</math>
 
| width="33%" | <math>\text{Sign}\!</math>
Line 6,206: Line 6,216:
 
(\underline{\underline{\text{di}}})
 
(\underline{\underline{\text{di}}})
 
(\underline{\underline{\text{du}}})
 
(\underline{\underline{\text{du}}})
\end{matrix}</math>
+
\end{matrix}\!</math>
 
|-
 
|-
 
| valign="bottom" |
 
| valign="bottom" |
Line 6,280: Line 6,290:
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 60.1} ~~ \operatorname{LIR}_3 (L_\text{A}) : \text{Lateral Representation of} ~ L_\text{A}\!</math>
+
<math>\text{Table 60.1} ~~ \mathrm{LIR}_3 (L_\text{A}) : \text{Lateral Representation of} ~ L_\text{A}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Object}\!</math>
 
| width="33%" | <math>\text{Object}\!</math>
Line 6,353: Line 6,363:
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 60.2} ~~ \operatorname{LIR}_3 (\operatorname{Den}(L_\text{A})) : \text{Denotative Component of} ~ L_\text{A}\!</math>
+
<math>\text{Table 60.2} ~~ \mathrm{LIR}_3 (\mathrm{Den}(L_\text{A})) : \text{Denotative Component of} ~ L_\text{A}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Object}\!</math>
 
| width="33%" | <math>\text{Object}\!</math>
Line 6,406: Line 6,416:
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 60.3} ~~ \operatorname{LIR}_3 (\operatorname{Con}(L_\text{A})) : \text{Connotative Component of} ~ L_\text{A}\!</math>
+
<math>\text{Table 60.3} ~~ \mathrm{LIR}_3 (\mathrm{Con}(L_\text{A})) : \text{Connotative Component of} ~ L_\text{A}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Sign}\!</math>
 
| width="33%" | <math>\text{Sign}\!</math>
Line 6,434: Line 6,444:
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
0_{\operatorname{d}Y}
+
0_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
 
{\langle
 
{\langle
\operatorname{d}\underline{\underline{\text{a}}}
+
\mathrm{d}\underline{\underline{\text{a}}}
 
~
 
~
\operatorname{d}\underline{\underline{\text{i}}}
+
\mathrm{d}\underline{\underline{\text{i}}}
\rangle}_{\operatorname{d}Y}
+
\rangle}_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
 
{\langle
 
{\langle
\operatorname{d}\underline{\underline{\text{a}}}
+
\mathrm{d}\underline{\underline{\text{a}}}
 
~
 
~
\operatorname{d}\underline{\underline{\text{i}}}
+
\mathrm{d}\underline{\underline{\text{i}}}
\rangle}_{\operatorname{d}Y}
+
\rangle}_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
0_{\operatorname{d}Y}
+
0_{\mathrm{d}Y}
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|-
 
|-
Line 6,473: Line 6,483:
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
0_{\operatorname{d}Y}
+
0_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
 
{\langle
 
{\langle
\operatorname{d}\underline{\underline{\text{b}}}
+
\mathrm{d}\underline{\underline{\text{b}}}
 
~
 
~
\operatorname{d}\underline{\underline{\text{u}}}
+
\mathrm{d}\underline{\underline{\text{u}}}
\rangle}_{\operatorname{d}Y}
+
\rangle}_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
 
{\langle
 
{\langle
\operatorname{d}\underline{\underline{\text{b}}}
+
\mathrm{d}\underline{\underline{\text{b}}}
 
~
 
~
\operatorname{d}\underline{\underline{\text{u}}}
+
\mathrm{d}\underline{\underline{\text{u}}}
\rangle}_{\operatorname{d}Y}
+
\rangle}_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
0_{\operatorname{d}Y}
+
0_{\mathrm{d}Y}
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|}
 
|}
Line 6,495: Line 6,505:
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 61.1} ~~ \operatorname{LIR}_3 (L_\text{B}) : \text{Lateral Representation of} ~ L_\text{B}\!</math>
+
<math>\text{Table 61.1} ~~ \mathrm{LIR}_3 (L_\text{B}) : \text{Lateral Representation of} ~ L_\text{B}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Object}\!</math>
 
| width="33%" | <math>\text{Object}\!</math>
Line 6,568: Line 6,578:
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 61.2} ~~ \operatorname{LIR}_3 (\operatorname{Den}(L_\text{B})) : \text{Denotative Component of} ~ L_\text{B}\!</math>
+
<math>\text{Table 61.2} ~~ \mathrm{LIR}_3 (\mathrm{Den}(L_\text{B})) : \text{Denotative Component of} ~ L_\text{B}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Object}\!</math>
 
| width="33%" | <math>\text{Object}\!</math>
Line 6,621: Line 6,631:
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 61.3} ~~ \operatorname{LIR}_3 (\operatorname{Con}(L_\text{B})) : \text{Connotative Component of} ~ L_\text{B}\!</math>
+
<math>\text{Table 61.3} ~~ \mathrm{LIR}_3 (\mathrm{Con}(L_\text{B})) : \text{Connotative Component of} ~ L_\text{B}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Sign}\!</math>
 
| width="33%" | <math>\text{Sign}\!</math>
Line 6,649: Line 6,659:
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
0_{\operatorname{d}Y}
+
0_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
 
{\langle
 
{\langle
\operatorname{d}\underline{\underline{\text{a}}}
+
\mathrm{d}\underline{\underline{\text{a}}}
 
~
 
~
\operatorname{d}\underline{\underline{\text{u}}}
+
\mathrm{d}\underline{\underline{\text{u}}}
\rangle}_{\operatorname{d}Y}
+
\rangle}_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
 
{\langle
 
{\langle
\operatorname{d}\underline{\underline{\text{a}}}
+
\mathrm{d}\underline{\underline{\text{a}}}
 
~
 
~
\operatorname{d}\underline{\underline{\text{u}}}
+
\mathrm{d}\underline{\underline{\text{u}}}
\rangle}_{\operatorname{d}Y}
+
\rangle}_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
0_{\operatorname{d}Y}
+
0_{\mathrm{d}Y}
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|-
 
|-
Line 6,688: Line 6,698:
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
0_{\operatorname{d}Y}
+
0_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
 
{\langle
 
{\langle
\operatorname{d}\underline{\underline{\text{b}}}
+
\mathrm{d}\underline{\underline{\text{b}}}
 
~
 
~
\operatorname{d}\underline{\underline{\text{i}}}
+
\mathrm{d}\underline{\underline{\text{i}}}
\rangle}_{\operatorname{d}Y}
+
\rangle}_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
 
{\langle
 
{\langle
\operatorname{d}\underline{\underline{\text{b}}}
+
\mathrm{d}\underline{\underline{\text{b}}}
 
~
 
~
\operatorname{d}\underline{\underline{\text{i}}}
+
\mathrm{d}\underline{\underline{\text{i}}}
\rangle}_{\operatorname{d}Y}
+
\rangle}_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
0_{\operatorname{d}Y}
+
0_{\mathrm{d}Y}
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|}
 
|}
Line 6,989: Line 6,999:
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 65.1} ~~ \operatorname{AIR}_1 (L_\text{A}) : \text{Analytic Representation of} ~ L_\text{A}\!</math>
+
<math>\text{Table 65.1} ~~ \mathrm{AIR}_1 (L_\text{A}) : \text{Analytic Representation of} ~ L_\text{A}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Object}\!</math>
 
| width="33%" | <math>\text{Object}\!</math>
Line 7,062: Line 7,072:
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 65.2} ~~ \operatorname{AIR}_1 (\operatorname{Den}(L_\text{A})) : \text{Denotative Component of} ~ L_\text{A}\!</math>
+
<math>\text{Table 65.2} ~~ \mathrm{AIR}_1 (\mathrm{Den}(L_\text{A})) : \text{Denotative Component of} ~ L_\text{A}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Object}\!</math>
 
| width="33%" | <math>\text{Object}\!</math>
Line 7,111: Line 7,121:
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 65.3} ~~ \operatorname{AIR}_1 (\operatorname{Con}(L_\text{A})) : \text{Connotative Component of} ~ L_\text{A}\!</math>
+
<math>\text{Table 65.3} ~~ \mathrm{AIR}_1 (\mathrm{Con}(L_\text{A})) : \text{Connotative Component of} ~ L_\text{A}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Sign}\!</math>
 
| width="33%" | <math>\text{Sign}\!</math>
Line 7,184: Line 7,194:
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 66.1} ~~ \operatorname{AIR}_1 (L_\text{B}) : \text{Analytic Representation of} ~ L_\text{B}\!</math>
+
<math>\text{Table 66.1} ~~ \mathrm{AIR}_1 (L_\text{B}) : \text{Analytic Representation of} ~ L_\text{B}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Object}\!</math>
 
| width="33%" | <math>\text{Object}\!</math>
Line 7,257: Line 7,267:
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 66.2} ~~ \operatorname{AIR}_1 (\operatorname{Den}(L_\text{B})) : \text{Denotative Component of} ~ L_\text{B}\!</math>
+
<math>\text{Table 66.2} ~~ \mathrm{AIR}_1 (\mathrm{Den}(L_\text{B})) : \text{Denotative Component of} ~ L_\text{B}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Object}\!</math>
 
| width="33%" | <math>\text{Object}\!</math>
Line 7,306: Line 7,316:
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 66.3} ~~ \operatorname{AIR}_1 (\operatorname{Con}(L_\text{B})) : \text{Connotative Component of} ~ L_\text{B}\!</math>
+
<math>\text{Table 66.3} ~~ \mathrm{AIR}_1 (\mathrm{Con}(L_\text{B})) : \text{Connotative Component of} ~ L_\text{B}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Sign}\!</math>
 
| width="33%" | <math>\text{Sign}\!</math>
Line 7,379: Line 7,389:
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 67.1} ~~ \operatorname{AIR}_2 (L_\text{A}) : \text{Analytic Representation of} ~ L_\text{A}\!</math>
+
<math>\text{Table 67.1} ~~ \mathrm{AIR}_2 (L_\text{A}) : \text{Analytic Representation of} ~ L_\text{A}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Object}\!</math>
 
| width="33%" | <math>\text{Object}\!</math>
Line 7,452: Line 7,462:
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 67.2} ~~ \operatorname{AIR}_2 (\operatorname{Den}(L_\text{A})) : \text{Denotative Component of} ~ L_\text{A}\!</math>
+
<math>\text{Table 67.2} ~~ \mathrm{AIR}_2 (\mathrm{Den}(L_\text{A})) : \text{Denotative Component of} ~ L_\text{A}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Object}\!</math>
 
| width="33%" | <math>\text{Object}\!</math>
Line 7,501: Line 7,511:
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 67.3} ~~ \operatorname{AIR}_2 (\operatorname{Con}(L_\text{A})) : \text{Connotative Component of} ~ L_\text{A}\!</math>
+
<math>\text{Table 67.3} ~~ \mathrm{AIR}_2 (\mathrm{Con}(L_\text{A})) : \text{Connotative Component of} ~ L_\text{A}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Sign}\!</math>
 
| width="33%" | <math>\text{Sign}\!</math>
Line 7,529: Line 7,539:
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
{\langle\operatorname{d}!\rangle}_{\operatorname{d}Y}
+
{\langle\mathrm{d}!\rangle}_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
{\langle\operatorname{d}\text{n}\rangle}_{\operatorname{d}Y}
+
{\langle\mathrm{d}\text{n}\rangle}_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
{\langle\operatorname{d}\text{n}\rangle}_{\operatorname{d}Y}
+
{\langle\mathrm{d}\text{n}\rangle}_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
{\langle\operatorname{d}!\rangle}_{\operatorname{d}Y}
+
{\langle\mathrm{d}!\rangle}_{\mathrm{d}Y}
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|-
 
|-
Line 7,560: Line 7,570:
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
{\langle\operatorname{d}!\rangle}_{\operatorname{d}Y}
+
{\langle\mathrm{d}!\rangle}_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
{\langle\operatorname{d}\text{n}\rangle}_{\operatorname{d}Y}
+
{\langle\mathrm{d}\text{n}\rangle}_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
{\langle\operatorname{d}\text{n}\rangle}_{\operatorname{d}Y}
+
{\langle\mathrm{d}\text{n}\rangle}_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
{\langle\operatorname{d}!\rangle}_{\operatorname{d}Y}
+
{\langle\mathrm{d}!\rangle}_{\mathrm{d}Y}
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|}
 
|}
Line 7,574: Line 7,584:
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 68.1} ~~ \operatorname{AIR}_2 (L_\text{B}) : \text{Analytic Representation of} ~ L_\text{B}\!</math>
+
<math>\text{Table 68.1} ~~ \mathrm{AIR}_2 (L_\text{B}) : \text{Analytic Representation of} ~ L_\text{B}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Object}\!</math>
 
| width="33%" | <math>\text{Object}\!</math>
Line 7,647: Line 7,657:
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 68.2} ~~ \operatorname{AIR}_2 (\operatorname{Den}(L_\text{B})) : \text{Denotative Component of} ~ L_\text{B}\!</math>
+
<math>\text{Table 68.2} ~~ \mathrm{AIR}_2 (\mathrm{Den}(L_\text{B})) : \text{Denotative Component of} ~ L_\text{B}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Object}\!</math>
 
| width="33%" | <math>\text{Object}\!</math>
Line 7,696: Line 7,706:
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 68.3} ~~ \operatorname{AIR}_2 (\operatorname{Con}(L_\text{B})) : \text{Connotative Component of} ~ L_\text{B}\!</math>
+
<math>\text{Table 68.3} ~~ \mathrm{AIR}_2 (\mathrm{Con}(L_\text{B})) : \text{Connotative Component of} ~ L_\text{B}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Sign}\!</math>
 
| width="33%" | <math>\text{Sign}\!</math>
Line 7,724: Line 7,734:
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
{\langle\operatorname{d}!\rangle}_{\operatorname{d}Y}
+
{\langle\mathrm{d}!\rangle}_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
{\langle\operatorname{d}\text{n}\rangle}_{\operatorname{d}Y}
+
{\langle\mathrm{d}\text{n}\rangle}_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
{\langle\operatorname{d}\text{n}\rangle}_{\operatorname{d}Y}
+
{\langle\mathrm{d}\text{n}\rangle}_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
{\langle\operatorname{d}!\rangle}_{\operatorname{d}Y}
+
{\langle\mathrm{d}!\rangle}_{\mathrm{d}Y}
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|-
 
|-
Line 7,755: Line 7,765:
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
{\langle\operatorname{d}!\rangle}_{\operatorname{d}Y}
+
{\langle\mathrm{d}!\rangle}_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
{\langle\operatorname{d}\text{n}\rangle}_{\operatorname{d}Y}
+
{\langle\mathrm{d}\text{n}\rangle}_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
{\langle\operatorname{d}\text{n}\rangle}_{\operatorname{d}Y}
+
{\langle\mathrm{d}\text{n}\rangle}_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
{\langle\operatorname{d}!\rangle}_{\operatorname{d}Y}
+
{\langle\mathrm{d}!\rangle}_{\mathrm{d}Y}
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|}
 
|}
Line 7,799: Line 7,809:
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
~x~ ~\operatorname{at}~ t
+
~x~ ~\mathrm{at}~ t
 
\\[4pt]
 
\\[4pt]
~x~ ~\operatorname{at}~ t
+
~x~ ~\mathrm{at}~ t
 
\\[4pt]
 
\\[4pt]
(x) ~\operatorname{at}~ t
+
(x) ~\mathrm{at}~ t
 
\\[4pt]
 
\\[4pt]
(x) ~\operatorname{at}~ t
+
(x) ~\mathrm{at}~ t
 
\end{matrix}</math>
 
\end{matrix}</math>
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
~\operatorname{d}x~ ~\operatorname{at}~ t
+
~\mathrm{d}x~ ~\mathrm{at}~ t
 
\\[4pt]
 
\\[4pt]
(\operatorname{d}x) ~\operatorname{at}~ t
+
(\mathrm{d}x) ~\mathrm{at}~ t
 
\\[4pt]
 
\\[4pt]
~\operatorname{d}x~ ~\operatorname{at}~ t
+
~\mathrm{d}x~ ~\mathrm{at}~ t
 
\\[4pt]
 
\\[4pt]
(\operatorname{d}x) ~\operatorname{at}~ t
+
(\mathrm{d}x) ~\mathrm{at}~ t
 
\end{matrix}</math>
 
\end{matrix}</math>
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
(x) ~\operatorname{at}~ t'
+
(x) ~\mathrm{at}~ t'
 
\\[4pt]
 
\\[4pt]
~x~ ~\operatorname{at}~ t'
+
~x~ ~\mathrm{at}~ t'
 
\\[4pt]
 
\\[4pt]
~x~ ~\operatorname{at}~ t'
+
~x~ ~\mathrm{at}~ t'
 
\\[4pt]
 
\\[4pt]
(x) ~\operatorname{at}~ t'
+
(x) ~\mathrm{at}~ t'
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|}
 
|}
Line 7,833: Line 7,843:
 
It might be thought that a notion of real time <math>(t \in \mathbb{R})\!</math> is needed at this point to fund the account of sequential processes.  From a logical point of view, however, I think it will be found that it is precisely out of such data that the notion of time has to be constructed.
  
The symbol <math>{}^{\backprime\backprime} \ominus\!\!- {}^{\prime\prime},</math> read ''thus'', ''then'', or ''yields'', can be used to mark sequential inferences, allowing for expressions like <math>x \land \mathrm{d}x \ominus\!\!-~ (x).\!</math>  In each case, a suitable context of temporal moments <math>(t, t')\!</math> is understood to underlie the inference.
  
A ''sequential inference constraint'' is a logical condition that applies to a temporal system, providing information about the kinds of sequential inference that apply to the system in what one hopes is a large number of situations.  Typically, a sequential inference constraint is formulated in intensional terms and expressed by means of a collection of sequential inference rules or schemata that tell what sequential inferences apply to the system in particular situations.  Since it has the status of a logical theory about an empirical system, a sequential inference constraint is subject to being reformulated in terms of its set-theoretic extension, and it can be established as existing in the customary sort of dual relationship with this extension.  Logically, it determines, and, empirically, it is determined by, the corresponding set of ''sequential inference triples'', the <math>(x, y, z)\!</math> such that <math>x \land y \ominus\!\!-~ z.\!</math>  The set-theoretic extension of a sequential inference constraint is thus a triadic relation, generically notated as <math>\ominus,\!</math> where <math>\ominus \subseteq X \times \mathrm{d}X \times X\!</math> is defined as follows.
  
 
{| align="center" cellspacing="8" width="90%"
| <math>\ominus ~=~ \{ (x, y, z) \in X \times \mathrm{d}X \times X : x \land y \ominus\!\!-~ z \}.\!</math>
|}
  
Using the appropriate isomorphisms, or recognizing that, in terms of the information given, each of several descriptions amounts to the same object, the triadic relation <math>\ominus \subseteq X \times \mathrm{d}X \times X\!</math> constituted by a sequential inference constraint can be interpreted as a proposition <math>\ominus : X \times \mathrm{d}X \times X \to \mathbb{B}\!</math> about sequential inference triples, and thus as a map <math>\ominus : \mathrm{d}X \to (X \times X \to \mathbb{B})\!</math> from the space <math>\mathrm{d}X\!</math> of differential states to the space of propositions about transitions in <math>X.\!</math>
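
A minimal computational reading of this passage is sketched below.  The triadic relation <math>\ominus\!</math> is stored as a set of triples and then repackaged both as a proposition on <math>X \times \mathrm{d}X \times X\!</math> and as a map from differential states to propositions about transitions in <math>X.\!</math>  The particular states and triples are placeholders invented for the sketch, not data drawn from the text.

<pre>
# The toy state space and triples below are placeholders; only the
# repackaging steps mirror the text: the triadic relation is read as a
# proposition on X x dX x X and as a map dX -> (X x X -> B).

X  = {'0', '1'}            # hypothetical states
dX = {'dx', '(dx)'}        # hypothetical differential states

# SEQ holds the triples (x, y, z) such that x and y yield z.
SEQ = {('1', 'dx', '0'), ('1', '(dx)', '1'),
       ('0', 'dx', '1'), ('0', '(dx)', '0')}
assert all(x in X and y in dX and z in X for (x, y, z) in SEQ)

def as_proposition(triples):
    """The relation as a proposition on X x dX x X."""
    return lambda x, y, z: (x, y, z) in triples

def curry_on_dX(triples):
    """The relation as a map dX -> (X x X -> B)."""
    def at(y):
        return lambda x, z: (x, y, z) in triples
    return at

p = as_proposition(SEQ)
q = curry_on_dX(SEQ)
assert p('1', 'dx', '0') and q('dx')('1', '0')
</pre>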
  
 
<br>
 
<br>
  
'''Question.'''  Group Actions?  <math>r : \mathrm{d}X \to (X \to X)\!</math>
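
One way to flesh out the question, purely as a hypothetical reading, is to let each differential state name a transformation of <math>X,\!</math> so that <math>r\!</math> sends <math>\mathrm{d}x\!</math> to a toggle and <math>(\mathrm{d}x)\!</math> to the identity.  The bit coding below is an assumption made only for this sketch.

<pre>
# Hypothetical reading of the question: each differential state names a
# transformation of X, giving a map r : dX -> (X -> X).  States are
# coded as bits only for this sketch.

def r(dstate):
    """Return the state transformation named by a differential state."""
    if dstate == 'dx':
        return lambda x: 1 - x      # change indicated: toggle the bit
    else:                           # '(dx)': no change indicated
        return lambda x: x

# Toggling twice returns to the start, the simplest group-action flavor.
assert r('dx')(r('dx')(0)) == 0
assert r('(dx)')(1) == 1
</pre>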
  
 
<br>
 
<br>
Line 7,851: Line 7,861:
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:90%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:90%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 70.1} ~~ \text{Group Representation} ~ \operatorname{Rep}^\text{A} (V_4)\!</math>
+
<math>\text{Table 70.1} ~~ \text{Group Representation} ~ \mathrm{Rep}^\text{A} (V_4)\!</math>
 
|- style="background:#f0f0ff"
 
|- style="background:#f0f0ff"
 
| width="16%" | <math>\begin{matrix} \text{Abstract} \\ \text{Element} \end{matrix}</math>
 
| width="16%" | <math>\begin{matrix} \text{Abstract} \\ \text{Element} \end{matrix}</math>
Line 7,871: Line 7,881:
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
(\operatorname{d}\underline{\underline{\text{a}}})
+
(\mathrm{d}\underline{\underline{\text{a}}})
(\operatorname{d}\underline{\underline{\text{b}}})
+
(\mathrm{d}\underline{\underline{\text{b}}})
(\operatorname{d}\underline{\underline{\text{i}}})
+
(\mathrm{d}\underline{\underline{\text{i}}})
(\operatorname{d}\underline{\underline{\text{u}}})
+
(\mathrm{d}\underline{\underline{\text{u}}})
 
\\[4pt]
 
\\[4pt]
~\operatorname{d}\underline{\underline{\text{a}}}~
+
~\mathrm{d}\underline{\underline{\text{a}}}~
(\operatorname{d}\underline{\underline{\text{b}}})
+
(\mathrm{d}\underline{\underline{\text{b}}})
~\operatorname{d}\underline{\underline{\text{i}}}~
+
~\mathrm{d}\underline{\underline{\text{i}}}~
(\operatorname{d}\underline{\underline{\text{u}}})
+
(\mathrm{d}\underline{\underline{\text{u}}})
 
\\[4pt]
 
\\[4pt]
(\operatorname{d}\underline{\underline{\text{a}}})
+
(\mathrm{d}\underline{\underline{\text{a}}})
~\operatorname{d}\underline{\underline{\text{b}}}~
+
~\mathrm{d}\underline{\underline{\text{b}}}~
(\operatorname{d}\underline{\underline{\text{i}}})
+
(\mathrm{d}\underline{\underline{\text{i}}})
~\operatorname{d}\underline{\underline{\text{u}}}~
+
~\mathrm{d}\underline{\underline{\text{u}}}~
 
\\[4pt]
 
\\[4pt]
~\operatorname{d}\underline{\underline{\text{a}}}~
+
~\mathrm{d}\underline{\underline{\text{a}}}~
~\operatorname{d}\underline{\underline{\text{b}}}~
+
~\mathrm{d}\underline{\underline{\text{b}}}~
~\operatorname{d}\underline{\underline{\text{i}}}~
+
~\mathrm{d}\underline{\underline{\text{i}}}~
~\operatorname{d}\underline{\underline{\text{u}}}~
+
~\mathrm{d}\underline{\underline{\text{u}}}~
 
\end{matrix}</math>
 
\end{matrix}</math>
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
\langle \operatorname{d}! \rangle
+
\langle \mathrm{d}! \rangle
 
\\[4pt]
 
\\[4pt]
 
\langle
 
\langle
\operatorname{d}\underline{\underline{\text{a}}} ~
+
\mathrm{d}\underline{\underline{\text{a}}} ~
\operatorname{d}\underline{\underline{\text{i}}}
+
\mathrm{d}\underline{\underline{\text{i}}}
 
\rangle
 
\rangle
 
\\[4pt]
 
\\[4pt]
 
\langle
 
\langle
\operatorname{d}\underline{\underline{\text{b}}} ~
+
\mathrm{d}\underline{\underline{\text{b}}} ~
\operatorname{d}\underline{\underline{\text{u}}}
+
\mathrm{d}\underline{\underline{\text{u}}}
 
\rangle
 
\rangle
 
\\[4pt]
 
\\[4pt]
\langle \operatorname{d}* \rangle
+
\langle \mathrm{d}* \rangle
 
\end{matrix}</math>
 
\end{matrix}</math>
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
\operatorname{d}!
+
\mathrm{d}!
 
\\[4pt]
 
\\[4pt]
\operatorname{d}\underline{\underline{\text{a}}} \cdot
+
\mathrm{d}\underline{\underline{\text{a}}} \cdot
\operatorname{d}\underline{\underline{\text{i}}} ~ !
+
\mathrm{d}\underline{\underline{\text{i}}} ~ !
 
\\[4pt]
 
\\[4pt]
\operatorname{d}\underline{\underline{\text{b}}} \cdot
+
\mathrm{d}\underline{\underline{\text{b}}} \cdot
\operatorname{d}\underline{\underline{\text{u}}} ~ !
+
\mathrm{d}\underline{\underline{\text{u}}} ~ !
 
\\[4pt]
 
\\[4pt]
\operatorname{d}*
+
\mathrm{d}*
 
\end{matrix}</math>
 
\end{matrix}</math>
 
| valign="bottom" |
 
| valign="bottom" |
Line 7,923: Line 7,933:
 
1
 
1
 
\\[4pt]
 
\\[4pt]
\operatorname{d}_{\text{ai}}
+
\mathrm{d}_{\text{ai}}
 
\\[4pt]
 
\\[4pt]
\operatorname{d}_{\text{bu}}
+
\mathrm{d}_{\text{bu}}
 
\\[4pt]
 
\\[4pt]
\operatorname{d}_{\text{ai}} * \operatorname{d}_{\text{bu}}
+
\mathrm{d}_{\text{ai}} * \mathrm{d}_{\text{bu}}
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|}
 
|}
Line 7,935: Line 7,945:
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:90%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:90%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 70.2} ~~ \text{Group Representation} ~ \operatorname{Rep}^\text{B} (V_4)\!</math>
+
<math>\text{Table 70.2} ~~ \text{Group Representation} ~ \mathrm{Rep}^\text{B} (V_4)\!</math>
 
|- style="background:#f0f0ff"
 
|- style="background:#f0f0ff"
 
| width="16%" | <math>\begin{matrix} \text{Abstract} \\ \text{Element} \end{matrix}</math>
 
| width="16%" | <math>\begin{matrix} \text{Abstract} \\ \text{Element} \end{matrix}</math>
Line 7,955: Line 7,965:
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
(\operatorname{d}\underline{\underline{\text{a}}})
+
(\mathrm{d}\underline{\underline{\text{a}}})
(\operatorname{d}\underline{\underline{\text{b}}})
+
(\mathrm{d}\underline{\underline{\text{b}}})
(\operatorname{d}\underline{\underline{\text{i}}})
+
(\mathrm{d}\underline{\underline{\text{i}}})
(\operatorname{d}\underline{\underline{\text{u}}})
+
(\mathrm{d}\underline{\underline{\text{u}}})
 
\\[4pt]
 
\\[4pt]
~\operatorname{d}\underline{\underline{\text{a}}}~
+
~\mathrm{d}\underline{\underline{\text{a}}}~
(\operatorname{d}\underline{\underline{\text{b}}})
+
(\mathrm{d}\underline{\underline{\text{b}}})
(\operatorname{d}\underline{\underline{\text{i}}})
+
(\mathrm{d}\underline{\underline{\text{i}}})
~\operatorname{d}\underline{\underline{\text{u}}}~
+
~\mathrm{d}\underline{\underline{\text{u}}}~
 
\\[4pt]
 
\\[4pt]
(\operatorname{d}\underline{\underline{\text{a}}})
+
(\mathrm{d}\underline{\underline{\text{a}}})
~\operatorname{d}\underline{\underline{\text{b}}}~
+
~\mathrm{d}\underline{\underline{\text{b}}}~
~\operatorname{d}\underline{\underline{\text{i}}}~
+
~\mathrm{d}\underline{\underline{\text{i}}}~
(\operatorname{d}\underline{\underline{\text{u}}})
+
(\mathrm{d}\underline{\underline{\text{u}}})
 
\\[4pt]
 
\\[4pt]
~\operatorname{d}\underline{\underline{\text{a}}}~
+
~\mathrm{d}\underline{\underline{\text{a}}}~
~\operatorname{d}\underline{\underline{\text{b}}}~
+
~\mathrm{d}\underline{\underline{\text{b}}}~
~\operatorname{d}\underline{\underline{\text{i}}}~
+
~\mathrm{d}\underline{\underline{\text{i}}}~
~\operatorname{d}\underline{\underline{\text{u}}}~
+
~\mathrm{d}\underline{\underline{\text{u}}}~
 
\end{matrix}</math>
 
\end{matrix}</math>
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
\langle \operatorname{d}! \rangle
+
\langle \mathrm{d}! \rangle
 
\\[4pt]
 
\\[4pt]
 
\langle
 
\langle
\operatorname{d}\underline{\underline{\text{a}}} ~
+
\mathrm{d}\underline{\underline{\text{a}}} ~
\operatorname{d}\underline{\underline{\text{u}}}
+
\mathrm{d}\underline{\underline{\text{u}}}
 
\rangle
 
\rangle
 
\\[4pt]
 
\\[4pt]
 
\langle
 
\langle
\operatorname{d}\underline{\underline{\text{b}}} ~
+
\mathrm{d}\underline{\underline{\text{b}}} ~
\operatorname{d}\underline{\underline{\text{i}}}
+
\mathrm{d}\underline{\underline{\text{i}}}
 
\rangle
 
\rangle
 
\\[4pt]
 
\\[4pt]
\langle \operatorname{d}* \rangle
+
\langle \mathrm{d}* \rangle
 
\end{matrix}</math>
 
\end{matrix}</math>
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
\operatorname{d}!
+
\mathrm{d}!
 
\\[4pt]
 
\\[4pt]
\operatorname{d}\underline{\underline{\text{a}}} \cdot
+
\mathrm{d}\underline{\underline{\text{a}}} \cdot
\operatorname{d}\underline{\underline{\text{u}}} ~ !
+
\mathrm{d}\underline{\underline{\text{u}}} ~ !
 
\\[4pt]
 
\\[4pt]
\operatorname{d}\underline{\underline{\text{b}}} \cdot
+
\mathrm{d}\underline{\underline{\text{b}}} \cdot
\operatorname{d}\underline{\underline{\text{i}}} ~ !
+
\mathrm{d}\underline{\underline{\text{i}}} ~ !
 
\\[4pt]
 
\\[4pt]
\operatorname{d}*
+
\mathrm{d}*
 
\end{matrix}</math>
 
\end{matrix}</math>
 
| valign="bottom" |
 
| valign="bottom" |
Line 8,007: Line 8,017:
 
1
 
1
 
\\[4pt]
 
\\[4pt]
\operatorname{d}_{\text{au}}
+
\mathrm{d}_{\text{au}}
 
\\[4pt]
 
\\[4pt]
\operatorname{d}_{\text{bi}}
+
\mathrm{d}_{\text{bi}}
 
\\[4pt]
 
\\[4pt]
\operatorname{d}_{\text{au}} * \operatorname{d}_{\text{bi}}
+
\mathrm{d}_{\text{au}} * \mathrm{d}_{\text{bi}}
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|}
 
|}
Line 8,019: Line 8,029:
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:90%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:90%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 70.3} ~~ \text{Group Representation} ~ \operatorname{Rep}^\text{C} (V_4)\!</math>
+
<math>{\text{Table 70.3} ~~ \text{Group Representation} ~ \mathrm{Rep}^\text{C} (V_4)}\!</math>
 
|- style="background:#f0f0ff"
 
|- style="background:#f0f0ff"
 
| width="16%" | <math>\begin{matrix} \text{Abstract} \\ \text{Element} \end{matrix}</math>
 
| width="16%" | <math>\begin{matrix} \text{Abstract} \\ \text{Element} \end{matrix}</math>
Line 8,039: Line 8,049:
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
(\operatorname{d}\text{m})
+
(\mathrm{d}\text{m})
(\operatorname{d}\text{n})
+
(\mathrm{d}\text{n})
 
\\[4pt]
 
\\[4pt]
~\operatorname{d}\text{m}~
+
~\mathrm{d}\text{m}~
(\operatorname{d}\text{n})
+
(\mathrm{d}\text{n})
 
\\[4pt]
 
\\[4pt]
(\operatorname{d}\text{m})
+
(\mathrm{d}\text{m})
~\operatorname{d}\text{n}~
+
~\mathrm{d}\text{n}~
 
\\[4pt]
 
\\[4pt]
~\operatorname{d}\text{m}~
+
~\mathrm{d}\text{m}~
~\operatorname{d}\text{n}~
+
~\mathrm{d}\text{n}~
 
\end{matrix}</math>
 
\end{matrix}</math>
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
\langle\operatorname{d}!\rangle
+
\langle\mathrm{d}!\rangle
 
\\[4pt]
 
\\[4pt]
\langle\operatorname{d}\text{m}\rangle
+
\langle\mathrm{d}\text{m}\rangle
 
\\[4pt]
 
\\[4pt]
\langle\operatorname{d}\text{n}\rangle
+
\langle\mathrm{d}\text{n}\rangle
 
\\[4pt]
 
\\[4pt]
\langle\operatorname{d}*\rangle
+
\langle\mathrm{d}*\rangle
 
\end{matrix}</math>
 
\end{matrix}</math>
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
\operatorname{d}!
+
\mathrm{d}!
 
\\[4pt]
 
\\[4pt]
\operatorname{d}\text{m}!
+
\mathrm{d}\text{m}!
 
\\[4pt]
 
\\[4pt]
\operatorname{d}\text{n}!
+
\mathrm{d}\text{n}!
 
\\[4pt]
 
\\[4pt]
\operatorname{d}*
+
\mathrm{d}*
 
\end{matrix}</math>
 
\end{matrix}</math>
 
| valign="bottom" |
 
| valign="bottom" |
Line 8,075: Line 8,085:
 
1
 
1
 
\\[4pt]
 
\\[4pt]
\operatorname{d}_{\text{m}}
+
\mathrm{d}_{\text{m}}
 
\\[4pt]
 
\\[4pt]
\operatorname{d}_{\text{n}}
+
\mathrm{d}_{\text{n}}
 
\\[4pt]
 
\\[4pt]
\operatorname{d}_{\text{m}} * \operatorname{d}_{\text{n}}
+
\mathrm{d}_{\text{m}} * \mathrm{d}_{\text{n}}
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|}
 
|}
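
The group structure tabulated above can be spot-checked in a few lines.  The sketch below reads the concrete representation <math>\mathrm{Rep}^\text{C} (V_4)\!</math> as acting on the two differential features <math>\mathrm{d}\text{m}\!</math> and <math>\mathrm{d}\text{n}\!</math> by toggling, an interpretive assumption rather than a quotation from the tables, and verifies the defining marks of the Klein four-group: every element is its own inverse and the operation is commutative.

<pre>
# Spot check of the Klein four-group structure, reading Rep^C(V_4) as
# toggles of the two differential features dm and dn (an interpretive
# assumption for this sketch, not a quotation from the tables).

elements = {            # which of (dm, dn) each group element toggles
    '1'       : (0, 0),
    'd_m'     : (1, 0),
    'd_n'     : (0, 1),
    'd_m*d_n' : (1, 1),
}

def mul(g, h):
    """Compose two toggle patterns coordinatewise mod 2."""
    (a, b), (c, d) = elements[g], elements[h]
    pattern = ((a + c) % 2, (b + d) % 2)
    return next(k for k, v in elements.items() if v == pattern)

# Defining marks of V_4: every element is self-inverse, and the
# operation is commutative.
assert all(mul(g, g) == '1' for g in elements)
assert all(mul(g, h) == mul(h, g) for g in elements for h in elements)
</pre>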
Line 8,107: Line 8,117:
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
(\operatorname{d}\text{m})
+
(\mathrm{d}\text{m})
(\operatorname{d}\text{n})
+
(\mathrm{d}\text{n})
 
\\[4pt]
 
\\[4pt]
~\operatorname{d}\text{m}~
+
~\mathrm{d}\text{m}~
(\operatorname{d}\text{n})
+
(\mathrm{d}\text{n})
 
\\[4pt]
 
\\[4pt]
(\operatorname{d}\text{m})
+
(\mathrm{d}\text{m})
~\operatorname{d}\text{n}~
+
~\mathrm{d}\text{n}~
 
\\[4pt]
 
\\[4pt]
~\operatorname{d}\text{m}~
+
~\mathrm{d}\text{m}~
~\operatorname{d}\text{n}~
+
~\mathrm{d}\text{n}~
 
\end{matrix}</math>
 
\end{matrix}</math>
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
\langle\operatorname{d}!\rangle
+
\langle\mathrm{d}!\rangle
 
\\[4pt]
 
\\[4pt]
\langle\operatorname{d}\text{m}\rangle
+
\langle\mathrm{d}\text{m}\rangle
 
\\[4pt]
 
\\[4pt]
\langle\operatorname{d}\text{n}\rangle
+
\langle\mathrm{d}\text{n}\rangle
 
\\[4pt]
 
\\[4pt]
\langle\operatorname{d}*\rangle
+
\langle\mathrm{d}*\rangle
 
\end{matrix}</math>
 
\end{matrix}</math>
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
\operatorname{d}!
+
\mathrm{d}!
 
\\[4pt]
 
\\[4pt]
\operatorname{d}\text{m}!
+
\mathrm{d}\text{m}!
 
\\[4pt]
 
\\[4pt]
\operatorname{d}\text{n}!
+
\mathrm{d}\text{n}!
 
\\[4pt]
 
\\[4pt]
\operatorname{d}*
+
\mathrm{d}*
 
\end{matrix}</math>
 
\end{matrix}</math>
 
| valign="bottom" |
 
| valign="bottom" |
Line 8,143: Line 8,153:
 
1
 
1
 
\\[4pt]
 
\\[4pt]
\operatorname{d}_{\text{m}}
+
\mathrm{d}_{\text{m}}
 
\\[4pt]
 
\\[4pt]
\operatorname{d}_{\text{n}}
+
\mathrm{d}_{\text{n}}
 
\\[4pt]
 
\\[4pt]
\operatorname{d}_{\text{m}} * \operatorname{d}_{\text{n}}
+
\mathrm{d}_{\text{m}} * \mathrm{d}_{\text{n}}
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|}
 
|}
Line 8,158: Line 8,168:
 
|- style="background:#f0f0ff"
 
|- style="background:#f0f0ff"
 
| width="25%" | <math>\text{Group Coset}\!</math>
 
| width="25%" | <math>\text{Group Coset}\!</math>
| width="25%" | <math>\text{Logical Coset}\!</math>
+
| width="25%" | <math>\text{Logical Coset}~\!</math>
 
| width="25%" | <math>\text{Logical Element}\!</math>
 
| width="25%" | <math>\text{Logical Element}\!</math>
 
| width="25%" | <math>\text{Group Element}\!</math>
 
| width="25%" | <math>\text{Group Element}\!</math>
 
|-
 
|-
 
| <math>G_\text{m}\!</math>
 
| <math>G_\text{m}\!</math>
| <math>(\operatorname{d}\text{m})\!</math>
+
| <math>(\mathrm{d}\text{m})\!</math>
 
|
 
|
 
<math>\begin{matrix}
 
<math>\begin{matrix}
(\operatorname{d}\text{m})(\operatorname{d}\text{n})
+
(\mathrm{d}\text{m})(\mathrm{d}\text{n})
 
\\[4pt]
 
\\[4pt]
(\operatorname{d}\text{m})~\operatorname{d}\text{n}~
+
(\mathrm{d}\text{m})~\mathrm{d}\text{n}~
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|
 
|
Line 8,174: Line 8,184:
 
1
 
1
 
\\[4pt]
 
\\[4pt]
\operatorname{d}_\text{n}
+
\mathrm{d}_\text{n}
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|-
 
|-
| <math>G_\text{m} * \operatorname{d}_\text{m}\!</math>
+
| <math>G_\text{m} * \mathrm{d}_\text{m}\!</math>
| <math>\operatorname{d}\text{m}\!</math>
+
| <math>\mathrm{d}\text{m}\!</math>
 
|
 
|
 
<math>\begin{matrix}
 
<math>\begin{matrix}
~\operatorname{d}\text{m}~(\operatorname{d}\text{n})
+
~\mathrm{d}\text{m}~(\mathrm{d}\text{n})
 
\\[4pt]
 
\\[4pt]
~\operatorname{d}\text{m}~~\operatorname{d}\text{n}~
+
~\mathrm{d}\text{m}~~\mathrm{d}\text{n}~
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|
 
|
 
<math>\begin{matrix}
 
<math>\begin{matrix}
\operatorname{d}_\text{m}
+
\mathrm{d}_\text{m}
 
\\[4pt]
 
\\[4pt]
\operatorname{d}_\text{n} * \operatorname{d}_\text{m}
+
\mathrm{d}_\text{n} * \mathrm{d}_\text{m}
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|}
 
|}
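
Continuing the same toy coding, and reading the subgroup <math>G_\text{m}\!</math> of the table above as <math>\{ 1, \mathrm{d}_\text{n} \}\!</math> (again an assumption made for illustration), the sketch below checks that <math>G_\text{m}\!</math> and its coset <math>G_\text{m} * \mathrm{d}_\text{m}\!</math> partition the group, mirroring the two logical cosets <math>(\mathrm{d}\text{m})\!</math> and <math>\mathrm{d}\text{m}.\!</math>

<pre>
# Continuing the toggle coding above: check that the subgroup read off
# as G_m = {1, d_n} and its coset G_m * d_m partition V_4, matching the
# two logical cosets (dm) and dm.  The reading is an assumption made
# for illustration.

elements = {'1': (0, 0), 'd_m': (1, 0), 'd_n': (0, 1), 'd_m*d_n': (1, 1)}

def mul(g, h):
    """Compose two toggle patterns coordinatewise mod 2."""
    (a, b), (c, d) = elements[g], elements[h]
    pattern = ((a + c) % 2, (b + d) % 2)
    return next(k for k, v in elements.items() if v == pattern)

G_m   = {'1', 'd_n'}                      # subgroup leaving dm untouched
coset = {mul(g, 'd_m') for g in G_m}      # coset G_m * d_m

assert coset == {'d_m', 'd_m*d_n'}
assert G_m | coset == set(elements) and not (G_m & coset)
</pre>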
Line 8,200: Line 8,210:
 
|- style="background:#f0f0ff"
 
|- style="background:#f0f0ff"
 
| width="25%" | <math>\text{Group Coset}\!</math>
 
| width="25%" | <math>\text{Group Coset}\!</math>
| width="25%" | <math>\text{Logical Coset}\!</math>
+
| width="25%" | <math>\text{Logical Coset}~\!</math>
 
| width="25%" | <math>\text{Logical Element}\!</math>
 
| width="25%" | <math>\text{Logical Element}\!</math>
 
| width="25%" | <math>\text{Group Element}\!</math>
 
| width="25%" | <math>\text{Group Element}\!</math>
 
|-
 
|-
 
| <math>G_\text{n}\!</math>
 
| <math>G_\text{n}\!</math>
| <math>(\operatorname{d}\text{n})\!</math>
+
| <math>({\mathrm{d}\text{n})}\!</math>
 
|
 
|
 
<math>\begin{matrix}
 
<math>\begin{matrix}
(\operatorname{d}\text{m})(\operatorname{d}\text{n})
+
(\mathrm{d}\text{m})(\mathrm{d}\text{n})
 
\\[4pt]
 
\\[4pt]
~\operatorname{d}\text{m}~(\operatorname{d}\text{n})
+
~\mathrm{d}\text{m}~(\mathrm{d}\text{n})
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|
 
|
Line 8,216: Line 8,226:
 
1
 
1
 
\\[4pt]
 
\\[4pt]
\operatorname{d}_\text{m}
+
\mathrm{d}_\text{m}
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|-
 
|-
| <math>G_\text{n} * \operatorname{d}_\text{n}\!</math>
+
| <math>G_\text{n} * \mathrm{d}_\text{n}\!</math>
| <math>\operatorname{d}\text{n}\!</math>
+
| <math>\mathrm{d}\text{n}\!</math>
 
|
 
|
 
<math>\begin{matrix}
 
<math>\begin{matrix}
(\operatorname{d}\text{m})~\operatorname{d}\text{n}~
+
(\mathrm{d}\text{m})~\mathrm{d}\text{n}~
 
\\[4pt]
 
\\[4pt]
~\operatorname{d}\text{m}~~\operatorname{d}\text{n}~
+
~\mathrm{d}\text{m}~~\mathrm{d}\text{n}~
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|
 
|
 
<math>\begin{matrix}
 
<math>\begin{matrix}
\operatorname{d}_\text{n}
+
\mathrm{d}_\text{n}
 
\\[4pt]
 
\\[4pt]
\operatorname{d}_\text{m} * \operatorname{d}_\text{n}
+
\mathrm{d}_\text{m} * \mathrm{d}_\text{n}
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|}
 
|}
Line 8,259: Line 8,269:
 
|}
 
|}
  
In other words, <math>P\!\!\And\!\!Q</math> is the intersection of the ''inverse projections'' <math>P' = \operatorname{Pr}_{12}^{-1}(P)\!</math> and <math>Q' = \operatorname{Pr}_{23}^{-1}(Q),\!</math> which are defined as follows:
+
In other words, <math>P\!\!\And\!\!Q</math> is the intersection of the ''inverse projections'' <math>P' = \mathrm{Pr}_{12}^{-1}(P)\!</math> and <math>Q' = \mathrm{Pr}_{23}^{-1}(Q),\!</math> which are defined as follows:
  
 
{| align="center" cellspacing="8" width="90%"
 
{| align="center" cellspacing="8" width="90%"
 
|
 
|
 
<math>\begin{matrix}
 
<math>\begin{matrix}
\operatorname{Pr}_{12}^{-1}(P) & = & P \times Z & = & \{ (x, y, z) \in X \times Y \times Z : (x, y) \in P \}.
+
\mathrm{Pr}_{12}^{-1}(P) & = & P \times Z & = & \{ (x, y, z) \in X \times Y \times Z : (x, y) \in P \}.
 
\\[4pt]
 
\\[4pt]
\operatorname{Pr}_{23}^{-1}(Q) & = & X \times Q & = & \{ (x, y, z) \in X \times Y \times Z : (y, z) \in Q \}.
+
\mathrm{Pr}_{23}^{-1}(Q) & = & X \times Q & = & \{ (x, y, z) \in X \times Y \times Z : (y, z) \in Q \}.
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|}
 
|}
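The construction of the inverse projections lends itself to a direct computational paraphrase.  The following Python sketch is illustrative only &mdash; the domains and relations in it are invented, not drawn from the text &mdash; but it shows how the two cylinders are formed and intersected to obtain the contension.

<syntaxhighlight lang="python">
# Sketch of the inverse projections Pr_12^{-1}(P) = P x Z and Pr_23^{-1}(Q) = X x Q,
# using small sample domains that stand in for X, Y, Z.

X = {1, 2}
Y = {'a', 'b'}
Z = {0, 1}

P = {(1, 'a'), (2, 'b')}     # a dyadic relation P, subset of X x Y
Q = {('a', 0), ('b', 1)}     # a dyadic relation Q, subset of Y x Z

def inv_proj_12(P, Z):
    """Cylinder over P: all triples (x, y, z) with (x, y) in P."""
    return {(x, y, z) for (x, y) in P for z in Z}

def inv_proj_23(Q, X):
    """Cylinder over Q: all triples (x, y, z) with (y, z) in Q."""
    return {(x, y, z) for x in X for (y, z) in Q}

# The contension P & Q is the intersection of the two inverse projections.
contension = inv_proj_12(P, Z) & inv_proj_23(Q, X)
print(sorted(contension))    # [(1, 'a', 0), (2, 'b', 1)]
</syntaxhighlight>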
Line 8,276: Line 8,286:
 
Strictly speaking, the logical entity <math>p_S\!</math> is the intensional representation of the tribe, presiding at the highest level of abstraction, while <math>f_S\!</math> and <math>S\!</math> are its more concrete extensional representations, rendering its concept in functional and geometric materials, respectively.  Whenever it is possible to do so without confusion, I try to use identical or similar names for the corresponding objects and species of each type, and I generally ignore the distinctions that otherwise set them apart.  For instance, in moving toward computational settings, <math>f_S\!</math> makes the best computational proxy for <math>p_S,\!</math> so I commonly refer to the mapping <math>f_S : X \to \mathbb{B}\!</math> as a proposition on <math>X.\!</math>
 
Strictly speaking, the logical entity <math>p_S\!</math> is the intensional representation of the tribe, presiding at the highest level of abstraction, while <math>f_S\!</math> and <math>S\!</math> are its more concrete extensional representations, rendering its concept in functional and geometric materials, respectively.  Whenever it is possible to do so without confusion, I try to use identical or similar names for the corresponding objects and species of each type, and I generally ignore the distinctions that otherwise set them apart.  For instance, in moving toward computational settings, <math>f_S\!</math> makes the best computational proxy for <math>p_S,\!</math> so I commonly refer to the mapping <math>f_S : X \to \mathbb{B}\!</math> as a proposition on <math>X.\!</math>
  
Regarded as logical models, the elements of the contension <math>P\!\!\And\!\!Q</math> satisfy the proposition referred to as the ''conjunction of extensions'' <math>P'\!</math> and <math>Q'.\!</math>
+
Regarded as logical models, the elements of the contension <math>P\!\!\And\!\!Q</math> satisfy the proposition referred to as the ''conjunction of extensions'' <math>P^\prime\!</math> and <math>Q^\prime.\!</math>
  
 
Next, the ''composition'' of <math>P\!</math> and <math>Q\!</math> is a dyadic relation <math>R' \subseteq X \times Z\!</math> that is notated as <math>R' = P \circ Q\!</math> and defined as follows.
 
Next, the ''composition'' of <math>P\!</math> and <math>Q\!</math> is a dyadic relation <math>R' \subseteq X \times Z\!</math> that is notated as <math>R' = P \circ Q\!</math> and defined as follows.
  
 
{| align="center" cellspacing="8" width="90%"
 
{| align="center" cellspacing="8" width="90%"
| <math>P \circ Q ~=~ \operatorname{Pr}_{13} (P\!\!\And\!\!Q) ~=~ \{ (x, z) \in X \times Z : (x, y, z) \in P\!\!\And\!\!Q \}.</math>
+
| <math>P \circ Q ~=~ \mathrm{Pr}_{13} (P\!\!\And\!\!Q) ~=~ \{ (x, z) \in X \times Z : (x, y, z) \in P\!\!\And\!\!Q \}.</math>
 
|}
 
|}
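Read computationally, the same definition gives a two-step recipe for relational composition: form the contension, then project it onto the first and third places.  The sketch below reuses invented sample data of the same shape as above.

<syntaxhighlight lang="python">
# Sketch of P o Q = Pr_13(P & Q) for dyadic relations P in X x Y and Q in Y x Z.

P = {(1, 'a'), (2, 'b')}     # sample relation, subset of X x Y
Q = {('a', 0), ('b', 1)}     # sample relation, subset of Y x Z

def compose(P, Q):
    """Match the middle coordinate, then keep only the outer coordinates."""
    contension = {(x, y, z) for (x, y) in P for (w, z) in Q if y == w}
    return {(x, z) for (x, y, z) in contension}

print(sorted(compose(P, Q)))   # [(1, 0), (2, 1)]
</syntaxhighlight>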
  
Line 8,398: Line 8,408:
 
In order to speak of generalized orders of relations I need to outline the dimensions of variation along which I intend the characters of already familiar orders of relations to be broadened.  Generally speaking, the taxonomic features of <math>n\!</math>-place relations that I wish to liberalize can be read off from their ''local incidence properties'' (LIPs).
 
In order to speak of generalized orders of relations I need to outline the dimensions of variation along which I intend the characters of already familiar orders of relations to be broadened.  Generally speaking, the taxonomic features of <math>n\!</math>-place relations that I wish to liberalize can be read off from their ''local incidence properties'' (LIPs).
  
'''Definition.'''  A ''local incidence property'' of a <math>k\!</math>-place relation <math>L \subseteq X_1 \times \ldots \times X_k\!</math> is one that is based on the following type of data.  Pick an element <math>x\!</math> in one of the domains <math>X_j\!</math> of <math>L.\!</math>  Let <math>L_{x \,\text{at}\, j}\!</math> be a subset of <math>L\!</math> called the ''flag of <math>L\!</math> with <math>x\!</math> at <math>j,\!</math>'' or the ''<math>x \,\text{at}\, j\!</math> flag of <math>L.\!</math>''  The ''local flag'' <math>L_{x \,\text{at}\, j} \subseteq L\!</math> is defined as follows.
+
'''Definition.'''  A ''local incidence property'' of a <math>k\!</math>-place relation <math>L \subseteq X_1 \times \ldots \times X_k\!</math> is one that is based on the following type of data.  Pick an element <math>x\!</math> in one of the domains <math>{X_j}\!</math> of <math>L.\!</math>  Let <math>L_{x \,\text{at}\, j}\!</math> be a subset of <math>L\!</math> called the ''flag of <math>L\!</math> with <math>x\!</math> at <math>{j},\!</math>'' or the ''<math>x \,\text{at}\, j\!</math> flag of <math>L.\!</math>''  The ''local flag'' <math>L_{x \,\text{at}\, j} \subseteq L\!</math> is defined as follows.
  
 
{| align="center" cellspacing="8" width="90%"
 
{| align="center" cellspacing="8" width="90%"
Line 8,406: Line 8,416:
 
Any property <math>P\!</math> of <math>L_{x \,\text{at}\, j}\!</math> constitutes a ''local incidence property'' of <math>L\!</math> with reference to the locus <math>x \,\text{at}\, j.\!</math>
 
Any property <math>P\!</math> of <math>L_{x \,\text{at}\, j}\!</math> constitutes a ''local incidence property'' of <math>L\!</math> with reference to the locus <math>x \,\text{at}\, j.\!</math>
  
'''Definition.'''  A <math>k\!</math>-place relation <math>L \subseteq X_1 \times \ldots \times X_k\!</math> is ''<math>P\!</math>-regular at <math>j\!</math>'' if and only if every flag of <math>L\!</math> with <math>x\!</math> at <math>j\!</math> is <math>P,\!</math> letting <math>x\!</math> range over the domain <math>X_j,\!</math> in symbols, if and only if <math>P(L_{x \,\text{at}\, j})\!</math> is true for all <math>x \in X_j.\!</math>
+
'''Definition.'''  A <math>k\!</math>-place relation <math>L \subseteq X_1 \times \ldots \times X_k\!</math> is ''<math>P\!</math>-regular at <math>j\!</math>'' if and only if every flag of <math>L\!</math> with <math>x\!</math> at <math>j\!</math> is <math>P,\!</math> letting <math>x\!</math> range over the domain <math>X_j,\!</math> in symbols, if and only if <math>P(L_{x \,\text{at}\, j})\!</math> is true for all <math>{x \in X_j}.\!</math>
  
 
Of particular interest are the local incidence properties of relations that can be calculated from the cardinalities of their local flags, and these are naturally called ''numerical incidence properties'' (NIPs).
 
Of particular interest are the local incidence properties of relations that can be calculated from the cardinalities of their local flags, and these are naturally called ''numerical incidence properties'' (NIPs).
  
For example, <math>L\!</math> is <math>c\text{-regular at}~ j\!</math> if and only if the cardinality of the local flag <math>L_{x \,\text{at}\, j}\!</math> is equal to <math>c\!</math> for all <math>x \in X_j,\!</math> coded in symbols, if and only if <math>|L_{x \,\text{at}\, j}| = c\!</math> for all <math>x \in X_j.\!</math>
+
For example, <math>L\!</math> is <math>c\text{-regular at}~ j\!</math> if and only if the cardinality of the local flag <math>L_{x \,\text{at}\, j}\!</math> is equal to <math>c\!</math> for all <math>x \in X_j,\!</math> coded in symbols, if and only if <math>|L_{x \,\text{at}\, j}| = c\!</math> for all <math>{x \in X_j}.\!</math>
  
 
In a similar fashion, it is possible to define the numerical incidence properties <math>(< c)\text{-regular at}~ j,\!</math> <math>(> c)\text{-regular at}~ j,\!</math> and so on.  For ease of reference, a few of these definitions are recorded below.
 
In a similar fashion, it is possible to define the numerical incidence properties <math>(< c)\text{-regular at}~ j,\!</math> <math>(> c)\text{-regular at}~ j,\!</math> and so on.  For ease of reference, a few of these definitions are recorded below.
Line 8,436: Line 8,446:
 
& \iff &
 
& \iff &
 
|L_{x \,\text{at}\, j}| \ge c ~\text{for all}~ x \in X_j.
 
|L_{x \,\text{at}\, j}| \ge c ~\text{for all}~ x \in X_j.
\end{array}</math>
+
\end{array}\!</math>
 
|}
 
|}
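A hedged computational paraphrase may help fix the idea.  The Python sketch below uses an invented 3-place relation over <math>\mathbb{B} = \{ 0, 1 \}</math> to build the local flag <math>L_{x \,\text{at}\, j}</math> and to test <math>c</math>-regularity by checking the cardinality of every flag.

<syntaxhighlight lang="python">
# Sketch of local flags and c-regularity for a k-place relation L,
# illustrated with a small 3-place relation over B = {0, 1}.

def local_flag(L, x, j):
    """The flag of L with x at j: the tuples of L whose j-th place is x (j is 1-based)."""
    return {t for t in L if t[j - 1] == x}

def is_c_regular_at(L, c, j, X_j):
    """L is c-regular at j iff every local flag at j has exactly c elements."""
    return all(len(local_flag(L, x, j)) == c for x in X_j)

B = {0, 1}
L = {(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)}   # sample triadic relation

print(is_c_regular_at(L, 2, 1, B))   # True: each value in the 1st place heads exactly 2 triples
print(is_c_regular_at(L, 2, 3, B))   # True: likewise for the 3rd place
</syntaxhighlight>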
  
The definition of local flags can be broadened to give a definition of ''regional flags''.  Suppose <math>L \subseteq X_1 \times \ldots \times X_k\!</math> and choose a subset <math>M \subseteq X_j.\!</math>  Let <math>L_{M \,\text{at}\, j}\!</math> be a subset of <math>L\!</math> called the ''flag of <math>L\!</math> with <math>M\!</math> at <math>j,\!</math>'' or the ''<math>M \,\text{at}\, j\!</math> flag of <math>L,\!</math>'' defined as follows.
+
The definition of local flags can be broadened to give a definition of ''regional flags''.  Suppose <math>L \subseteq X_1 \times \ldots \times X_k\!</math> and choose a subset <math>M \subseteq X_j.\!</math>  Let <math>L_{M \,\text{at}\, j}\!</math> be a subset of <math>L\!</math> called the ''flag of <math>L\!</math> with <math>M\!</math> at <math>{j},\!</math>'' or the ''<math>M \,\text{at}\, j\!</math> flag of <math>L,\!</math>'' defined as follows.
  
 
{| align="center" cellspacing="8" width="90%"
 
{| align="center" cellspacing="8" width="90%"
Line 8,495: Line 8,505:
 
& \iff &
 
& \iff &
 
L ~\text{is}~ 1\text{-regular at}~ Y.
 
L ~\text{is}~ 1\text{-regular at}~ Y.
\end{array}</math>
+
\end{array}\!</math>
 
|}
 
|}
  
Line 8,529: Line 8,539:
 
For a <math>k\!</math>-place relation <math>L \subseteq X_1 \times \ldots \times X_k,\!</math> we have the following usages.
 
For a <math>k\!</math>-place relation <math>L \subseteq X_1 \times \ldots \times X_k,\!</math> we have the following usages.
  
# The notation <math>{}^{\backprime\backprime} \operatorname{Dom}_j (L) {}^{\prime\prime}\!</math> denotes the set <math>X_j,\!</math> called the ''domain of <math>L\!</math> at <math>j\!</math>'' or the ''<math>j^\text{th}\!</math> domain of <math>L.\!</math>''
+
# The notation <math>{}^{\backprime\backprime} \mathrm{Dom}_j (L) {}^{\prime\prime}\!</math> denotes the set <math>X_j,\!</math> called the ''domain of <math>L\!</math> at <math>j\!</math>'' or the ''<math>j^\text{th}\!</math> domain of <math>L.\!</math>''
# The notation <math>{}^{\backprime\backprime} \operatorname{Quo}_j (L) {}^{\prime\prime}\!</math> denotes a subset of <math>X_j\!</math> called the ''quorum of <math>L\!</math> at <math>j\!</math>'' or the ''<math>j^\text{th}\!</math> quorum of <math>L,\!</math>'' defined as follows.
+
# The notation <math>{}^{\backprime\backprime} \mathrm{Quo}_j (L) {}^{\prime\prime}\!</math> denotes a subset of <math>{X_j}\!</math> called the ''quorum of <math>L\!</math> at <math>j\!</math>'' or the ''<math>j^\text{th}\!</math> quorum of <math>L,\!</math>'' defined as follows.
  
 
{| align="center" cellspacing="8" width="90%"
 
{| align="center" cellspacing="8" width="90%"
 
|
 
|
 
<math>\begin{array}{lll}
 
<math>\begin{array}{lll}
\operatorname{Quo}_j (L)
+
\mathrm{Quo}_j (L)
 
& = &
 
& = &
 
\text{the largest}~ Q \subseteq X_j ~\text{such that}~ ~L_{Q \,\text{at}\, j}~ ~\text{is}~ (> 1)\text{-regular at}~ j,
 
\text{the largest}~ Q \subseteq X_j ~\text{such that}~ ~L_{Q \,\text{at}\, j}~ ~\text{is}~ (> 1)\text{-regular at}~ j,
Line 8,547: Line 8,557:
  
 
# The arbitrarily designated domains <math>X_1 = X\!</math> and <math>X_2 = Y\!</math> that form the widest sets admitted to the dyadic relation are referred to as the ''domain'' or ''source'' and the ''codomain'' or ''target'', respectively, of the relation in question.
 
# The arbitrarily designated domains <math>X_1 = X\!</math> and <math>X_2 = Y\!</math> that form the widest sets admitted to the dyadic relation are referred to as the ''domain'' or ''source'' and the ''codomain'' or ''target'', respectively, of the relation in question.
# The terms ''quota'' and ''range'' are reserved for those uniquely defined sets whose elements actually appear as the first and second members, respectively, of the ordered pairs in that relation.  Thus, for a dyadic relation <math>L \subseteq X \times Y,\!</math> we identify <math>\operatorname{Quo} (L) = \operatorname{Quo}_1 (L) \subseteq X\!</math> with what is usually called the ''domain of definition'' of <math>L\!</math> and we identify <math>\operatorname{Ran} (L) = \operatorname{Quo}_2 (L) \subseteq Y\!</math> with the usual ''range'' of <math>L.\!</math>
+
# The terms ''quota'' and ''range'' are reserved for those uniquely defined sets whose elements actually appear as the first and second members, respectively, of the ordered pairs in that relation.  Thus, for a dyadic relation <math>L \subseteq X \times Y,\!</math> we identify <math>\mathrm{Quo} (L) = \mathrm{Quo}_1 (L) \subseteq X\!</math> with what is usually called the ''domain of definition'' of <math>L\!</math> and we identify <math>\mathrm{Ran} (L) = \mathrm{Quo}_2 (L) \subseteq Y\!</math> with the usual ''range'' of <math>L.\!</math>
  
A ''partial equivalence relation'' (PER) on a set <math>X\!</math> is a relation <math>L \subseteq X \times X\!</math> that is an equivalence relation on its domain of definition <math>\operatorname{Quo} (L) \subseteq X.\!</math>  In this situation, <math>[x]_L\!</math> is empty for each <math>x\!</math> in <math>X\!</math> that is not in <math>\operatorname{Quo} (L).\!</math>  Another way of reaching the same concept is to call a PER a dyadic relation that is symmetric and transitive, but not necessarily reflexive.  Like the &ldquo;self-identical elements&rdquo; of old that epitomized the very definition of self-consistent existence in classical logic, the property of being a self-related or self-equivalent element in the purview of a PER on <math>X\!</math> singles out the members of <math>\operatorname{Quo} (L)\!</math> as those for which a properly meaningful existence can be contemplated.
+
A ''partial equivalence relation'' (PER) on a set <math>X\!</math> is a relation <math>L \subseteq X \times X\!</math> that is an equivalence relation on its domain of definition <math>\mathrm{Quo} (L) \subseteq X.\!</math>  In this situation, <math>[x]_L\!</math> is empty for each <math>x\!</math> in <math>X\!</math> that is not in <math>\mathrm{Quo} (L).\!</math>  Another way of reaching the same concept is to call a PER a dyadic relation that is symmetric and transitive, but not necessarily reflexive.  Like the &ldquo;self-identical elements&rdquo; of old that epitomized the very definition of self-consistent existence in classical logic, the property of being a self-related or self-equivalent element in the purview of a PER on <math>X\!</math> singles out the members of <math>\mathrm{Quo} (L)\!</math> as those for which a properly meaningful existence can be contemplated.
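As a rough computational check of the idea &mdash; the relation below is a made-up example, not one from the text &mdash; a PER can be recognized by testing symmetry and transitivity alone, and its domain of definition coincides with the set of self-related elements.

<syntaxhighlight lang="python">
# Sketch of a partial equivalence relation (PER): symmetric and transitive,
# but not necessarily reflexive on the whole underlying set.

def is_symmetric(L):
    return all((y, x) in L for (x, y) in L)

def is_transitive(L):
    return all((x, w) in L for (x, y) in L for (z, w) in L if y == z)

def is_per(L):
    return is_symmetric(L) and is_transitive(L)

def domain_of_definition(L):
    """For a PER these are exactly the self-related elements."""
    return {x for (x, y) in L if x == y}

X = {'a', 'b', 'c'}                                   # 'c' is left out of the relation
L = {('a', 'a'), ('a', 'b'), ('b', 'a'), ('b', 'b')}

print(is_per(L))                     # True
print(domain_of_definition(L))       # {'a', 'b'}
print(X - domain_of_definition(L))   # {'c'}: no "meaningful existence" under this PER
</syntaxhighlight>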
  
 
A ''moderate equivalence relation'' (MER) on the ''modus'' <math>M \subseteq X\!</math> is a relation on <math>X\!</math> whose restriction to <math>M\!</math> is an equivalence relation on <math>M.\!</math>  In symbols, <math>L \subseteq X \times X\!</math> such that <math>L|M \subseteq M \times M\!</math> is an equivalence relation.  Notice that the subset of restriction, or modus <math>M,\!</math> is a part of the definition, so the same relation <math>L\!</math> on <math>X\!</math> could be a MER or not depending on the choice of <math>M.\!</math>  In spite of how it sounds, a moderate equivalence relation can have more ordered pairs in it than the ordinary sort of equivalence relation on the same set.
 
A ''moderate equivalence relation'' (MER) on the ''modus'' <math>M \subseteq X\!</math> is a relation on <math>X\!</math> whose restriction to <math>M\!</math> is an equivalence relation on <math>M.\!</math>  In symbols, <math>L \subseteq X \times X\!</math> such that <math>L|M \subseteq M \times M\!</math> is an equivalence relation.  Notice that the subset of restriction, or modus <math>M,\!</math> is a part of the definition, so the same relation <math>L\!</math> on <math>X\!</math> could be a MER or not depending on the choice of <math>M.\!</math>  In spite of how it sounds, a moderate equivalence relation can have more ordered pairs in it than the ordinary sort of equivalence relation on the same set.
Line 8,555: Line 8,565:
 
In applying the equivalence class notation to a sign relation <math>L,\!</math> the definitions and examples considered so far cover only the case where the connotative component <math>L_{SI}\!</math> is a total equivalence relation on the whole syntactic domain <math>S.\!</math>  The next job is to adapt this usage to PERs.
 
In applying the equivalence class notation to a sign relation <math>L,\!</math> the definitions and examples considered so far cover only the case where the connotative component <math>L_{SI}\!</math> is a total equivalence relation on the whole syntactic domain <math>S.\!</math>  The next job is to adapt this usage to PERs.
  
If <math>L\!</math> is a sign relation whose syntactic projection <math>L_{SI}\!</math> is a PER on <math>S\!</math> then we may still write <math>{}^{\backprime\backprime} [s]_L {}^{\prime\prime}\!</math> for the &ldquo;equivalence class of <math>s\!</math> under <math>L_{SI}\!</math>&rdquo;.  But now, <math>[s]_L\!</math> can be empty if <math>s\!</math> has no interpretant, that is, if <math>s\!</math> lies outside the &ldquo;adequately meaningful&rdquo; subset of the syntactic domain, where synonymy and equivalence of meaning are defined.  Otherwise, if <math>s\!</math> has an <math>i\!</math> then it also has an <math>o,\!</math> by the definition of <math>L_{SI}.\!</math>  In this case, there is a triple <math>(o, s, i) \in L,\!</math> and it is permissible to let <math>[o]_L = [s]_L.\!</math>
+
If <math>L\!</math> is a sign relation whose syntactic projection <math>L_{SI}\!</math> is a PER on <math>S\!</math> then we may still write <math>{}^{\backprime\backprime} [s]_L {}^{\prime\prime}\!</math> for the &ldquo;equivalence class of <math>s\!</math> under <math>L_{SI}\!</math>&rdquo;.  But now, <math>[s]_L\!</math> can be empty if <math>s\!</math> has no interpretant, that is, if <math>s\!</math> lies outside the &ldquo;adequately meaningful&rdquo; subset of the syntactic domain, where synonymy and equivalence of meaning are defined.  Otherwise, if <math>s\!</math> has an <math>i\!</math> then it also has an <math>o,\!</math> by the definition of <math>L_{SI}.\!</math>  In this case, there is a triple <math>{(o, s, i) \in L},\!</math> and it is permissible to let <math>[o]_L = [s]_L.\!</math>
  
 
===6.32. Partiality : Selective Operations===
 
===6.32. Partiality : Selective Operations===
Line 8,731: Line 8,741:
 
<p>The smaller and shorter-term index sets, typically having the form <math>I = \{ 1, \ldots, n \},\!</math> are used to keep tabs on the terms of finite sets and sequences, unions and intersections, sums and products.</p>
 
<p>The smaller and shorter-term index sets, typically having the form <math>I = \{ 1, \ldots, n \},\!</math> are used to keep tabs on the terms of finite sets and sequences, unions and intersections, sums and products.</p>
  
<p>In this context and elsewhere, the notation <math>[n] = \{ 1, \ldots, n \}\!</math> will be used to refer to a ''standard segment'' (finite initial subset) of the natural numbers <math>\mathbb{N} = \{ 1, 2, 3, \ldots \}.\!</math></p></li>
+
<p>In this context and elsewhere, the notation <math>{[n] = \{ 1, \ldots, n \}}\!</math> will be used to refer to a ''standard segment'' (finite initial subset) of the natural numbers <math>\mathbb{N} = \{ 1, 2, 3, \ldots \}.\!</math></p></li>
  
 
<li>
 
<li>
Line 8,762: Line 8,772:
 
<math>L\!</math> assigns a unique set of &ldquo;local habitations&rdquo; <math>L(s)\!</math> to each element <math>s\!</math> in the underlying set <math>S.\!</math>
 
<math>L\!</math> assigns a unique set of &ldquo;local habitations&rdquo; <math>L(s)\!</math> to each element <math>s\!</math> in the underlying set <math>S.\!</math>
  
'''Definition.'''  A ''numbered set'' <math>(S, f),\!</math> based on the set <math>S\!</math> and the injective function <math>f : S \to \mathbb{N},</math> is defined as follows. &hellip;
+
'''Definition.'''  A ''numbered set'' <math>(S, f),\!</math> based on the set <math>S\!</math> and the injective function <math>{f : S \to \mathbb{N}},</math> is defined as follows. &hellip;
  
 
'''Definition.'''  An ''enumerated set'' <math>(S, f)\!</math> is a numbered set with a bijective <math>f.\!</math> &hellip;
 
'''Definition.'''  An ''enumerated set'' <math>(S, f)\!</math> is a numbered set with a bijective <math>f.\!</math> &hellip;
Line 8,795: Line 8,805:
  
 
{| align="center" cellspacing="8" width="90%"
 
{| align="center" cellspacing="8" width="90%"
| <math>\operatorname{Proj}^{(2)} L ~=~ (\operatorname{proj}_{12} L, ~ \operatorname{proj}_{13} L, ~ \operatorname{proj}_{23} L).\!</math>
+
| <math>\mathrm{Proj}^{(2)} L ~=~ (\mathrm{proj}_{12} L, ~ \mathrm{proj}_{13} L, ~ \mathrm{proj}_{23} L).\!</math>
 
|}
 
|}
  
If <math>L\!</math> is visualized as a solid body in the 3-dimensional space <math>X \times Y \times Z,\!</math> then <math>\operatorname{Proj}^{(2)} L\!</math> can be visualized as the arrangement or ordered collection of shadows it throws on the <math>XY, ~ XZ, ~ YZ\!</math> planes, respectively.
+
If <math>L\!</math> is visualized as a solid body in the 3-dimensional space <math>X \times Y \times Z,\!</math> then <math>\mathrm{Proj}^{(2)} L\!</math> can be visualized as the arrangement or ordered collection of shadows it throws on the <math>XY, ~ XZ, ~ YZ\!</math> planes, respectively.
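The shadow metaphor translates directly into a computation.  The following sketch, with sample data only, takes the three pairwise projections of a triadic relation, one for each coordinate plane.

<syntaxhighlight lang="python">
# Sketch of Proj^(2) L: the triple of dyadic projections (shadows) of a triadic relation.

def proj2(L):
    L_xy = {(x, y) for (x, y, z) in L}   # shadow on the XY plane
    L_xz = {(x, z) for (x, y, z) in L}   # shadow on the XZ plane
    L_yz = {(y, z) for (x, y, z) in L}   # shadow on the YZ plane
    return L_xy, L_xz, L_yz

L = {(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)}     # a sample triadic relation on B^3

for plane, shadow in zip(('XY', 'XZ', 'YZ'), proj2(L)):
    print(plane, sorted(shadow))
</syntaxhighlight>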
  
Two more set-theoretic constructions are worth introducing at this point, in particular for describing the source and target domains of the projection operator <math>\operatorname{Proj}^{(2)}.\!</math>
+
Two more set-theoretic constructions are worth introducing at this point, in particular for describing the source and target domains of the projection operator <math>\mathrm{Proj}^{(2)}.\!</math>
  
The set of subsets of a set <math>S\!</math> is called the ''power set'' of <math>S.\!</math>  This object is denoted by either of the forms <math>\operatorname{Pow}(S)\!</math> or <math>2^S\!</math> and defined as follows:
+
The set of subsets of a set <math>S\!</math> is called the ''power set'' of <math>S.\!</math>  This object is denoted by either of the forms <math>\mathrm{Pow}(S)\!</math> or <math>2^S\!</math> and defined as follows:
  
 
{| align="center" cellspacing="8" width="90%"
 
{| align="center" cellspacing="8" width="90%"
| <math>\operatorname{Pow}(S) ~=~ 2^S ~=~ \{ T : T \subseteq S \}.\!</math>
+
| <math>\mathrm{Pow}(S) ~=~ 2^S ~=~ \{ T : T \subseteq S \}.\!</math>
 
|}
 
|}
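For finite sets the power set can be enumerated directly, as the following small sketch illustrates, confirming in passing that <math>|\mathrm{Pow}(S)| = 2^{|S|}.</math>

<syntaxhighlight lang="python">
# Sketch: enumerating Pow(S) = 2^S for a small finite S.

from itertools import chain, combinations

def power_set(S):
    """All subsets of S, returned as frozensets."""
    S = list(S)
    return {frozenset(c)
            for c in chain.from_iterable(combinations(S, r) for r in range(len(S) + 1))}

S = {'x', 'y', 'z'}
print(len(power_set(S)), 2 ** len(S))   # 8 8
</syntaxhighlight>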
  
The power set notation can be used to provide an alternative description of relations.  In the case where <math>S\!</math> is a cartesian product, say <math>S = X_1 \times \ldots \times X_n,\!</math> then each <math>n\!</math>-place relation <math>L\!</math> described as a subset of <math>S,\!</math> say <math>L \subseteq X_1 \times \ldots \times X_n,\!</math> is equally well described as an element of <math>\operatorname{Pow}(S),\!</math> in other words, as <math>L \in \operatorname{Pow}(X_1 \times \ldots \times X_n).\!</math>
+
The power set notation can be used to provide an alternative description of relations.  In the case where <math>S\!</math> is a cartesian product, say <math>{S = X_1 \times \ldots \times X_n},\!</math> then each <math>n\!</math>-place relation <math>L\!</math> described as a subset of <math>S,\!</math> say <math>L \subseteq X_1 \times \ldots \times X_n,\!</math> is equally well described as an element of <math>\mathrm{Pow}(S),\!</math> in other words, as <math>L \in \mathrm{Pow}(X_1 \times \ldots \times X_n).\!</math>
  
The set of triples of dyadic relations, with pairwise cartesian products chosen in a pre-arranged order from a triple of three sets <math>(X, Y, Z),\!</math> is called the ''dyadic explosion'' of <math>X \times Y \times Z.\!</math>  This object is denoted <math>\operatorname{Explo}(X, Y, Z ~|~ 2),\!</math> read as the ''explosion of <math>X \times Y \times Z\!</math> by twos'', or more simply as <math>X, Y, Z ~\operatorname{choose}~ 2,\!</math> and defined as follows:
+
The set of triples of dyadic relations, with pairwise cartesian products chosen in a pre-arranged order from a triple of three sets <math>(X, Y, Z),\!</math> is called the ''dyadic explosion'' of <math>X \times Y \times Z.\!</math>  This object is denoted <math>\mathrm{Explo}(X, Y, Z ~|~ 2),\!</math> read as the ''explosion of <math>X \times Y \times Z\!</math> by twos'', or more simply as <math>X, Y, Z ~\mathrm{choose}~ 2,\!</math> and defined as follows:
  
 
{| align="center" cellspacing="8" width="90%"
 
{| align="center" cellspacing="8" width="90%"
| <math>\operatorname{Explo}(X, Y, Z ~|~ 2) ~=~ \operatorname{Pow}(X \times Y) \times \operatorname{Pow}(X \times Z) \times \operatorname{Pow}(Y \times Z).\!</math>
+
| <math>\mathrm{Explo}(X, Y, Z ~|~ 2) ~=~ \mathrm{Pow}(X \times Y) \times \mathrm{Pow}(X \times Z) \times \mathrm{Pow}(Y \times Z).\!</math>
 
|}
 
|}
  
 
This domain is defined well enough to serve the immediate purposes of this section, but later it will become necessary to examine its construction more closely.
 
This domain is defined well enough to serve the immediate purposes of this section, but later it will become necessary to examine its construction more closely.
  
By means of these constructions the operation that forms <math>\operatorname{Proj}^{(2)} L\!</math> for each triadic relation <math>L \subseteq X \times Y \times Z\!</math> can be expressed as a function:
+
By means of these constructions the operation that forms <math>\mathrm{Proj}^{(2)} L\!</math> for each triadic relation <math>L \subseteq X \times Y \times Z\!</math> can be expressed as a function:
  
 
{| align="center" cellspacing="8" width="90%"
 
{| align="center" cellspacing="8" width="90%"
| <math>\operatorname{Proj}^{(2)} : \operatorname{Pow}(X \times Y \times Z) \to \operatorname{Explo}(X, Y, Z ~|~ 2).\!</math>
+
| <math>\mathrm{Proj}^{(2)} : \mathrm{Pow}(X \times Y \times Z) \to \mathrm{Explo}(X, Y, Z ~|~ 2).\!</math>
 
|}
 
|}
  
In this setting the issue of whether triadic relations are ''reducible to'' or ''reconstructible from'' their dyadic projections, both in general and in specific cases, can be identified with the question of whether <math>\operatorname{Proj}^{(2)}\!</math> is injective.  The mapping <math>\operatorname{Proj}^{(2)}\!</math> is said to ''preserve information'' about the triadic relations <math>L \in \operatorname{Pow}(X \times Y \times Z)\!</math> if and only if it is injective, otherwise one says that some loss of information has occurred in taking the projections.  Given a specific instance of a triadic relation <math>L \in \operatorname{Pow}(X \times Y \times Z),\!</math> it can be said that <math>L\!</math> is ''determined by'' (''reducible to'' or ''reconstructible from'') its dyadic projections if and only if <math>(\operatorname{Proj}^{(2)})^{-1}(\operatorname{Proj}^{(2)}L)\!</math> is the singleton set <math>\{ L \}.\!</math>  Otherwise, there exists an <math>L'\!</math> such that <math>\operatorname{Proj}^{(2)}L = \operatorname{Proj}^{(2)}L',\!</math> and in this case <math>L\!</math> is said to be ''irreducibly triadic'' or ''genuinely triadic''.  Notice that irreducible or genuine triadic relations, when they exist, naturally occur in sets of two or more, the whole collection of them being equated or confounded with one another under <math>\operatorname{Proj}^{(2)}.\!</math>
+
In this setting the issue of whether triadic relations are ''reducible to'' or ''reconstructible from'' their dyadic projections, both in general and in specific cases, can be identified with the question of whether <math>\mathrm{Proj}^{(2)}\!</math> is injective.  The mapping <math>\mathrm{Proj}^{(2)}\!</math> is said to ''preserve information'' about the triadic relations <math>L \in \mathrm{Pow}(X \times Y \times Z)\!</math> if and only if it is injective, otherwise one says that some loss of information has occurred in taking the projections.  Given a specific instance of a triadic relation <math>L \in \mathrm{Pow}(X \times Y \times Z),\!</math> it can be said that <math>L\!</math> is ''determined by'' (''reducible to'' or ''reconstructible from'') its dyadic projections if and only if <math>(\mathrm{Proj}^{(2)})^{-1}(\mathrm{Proj}^{(2)}L)\!</math> is the singleton set <math>\{ L \}.\!</math>  Otherwise, there exists an <math>L'\!</math> such that <math>\mathrm{Proj}^{(2)}L = \mathrm{Proj}^{(2)}L',\!</math> and in this case <math>L\!</math> is said to be ''irreducibly triadic'' or ''genuinely triadic''.  Notice that irreducible or genuine triadic relations, when they exist, naturally occur in sets of two or more, the whole collection of them being equated or confounded with one another under <math>\mathrm{Proj}^{(2)}.\!</math>
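For small finite domains the injectivity question can be settled by brute force: enumerate <math>\mathrm{Pow}(X \times Y \times Z)</math> and count how many triadic relations share a given projective triple.  The sketch below is illustrative only; it declares <math>L\!</math> dyadically reducible exactly when that count is one.

<syntaxhighlight lang="python">
# Brute-force sketch of dyadic reducibility over small domains:
# L is reducible iff it is the only triadic relation with its projections.

from itertools import chain, combinations, product

def proj2(L):
    return (frozenset((x, y) for (x, y, z) in L),
            frozenset((x, z) for (x, y, z) in L),
            frozenset((y, z) for (x, y, z) in L))

def is_dyadically_reducible(L, X, Y, Z):
    space = list(product(X, Y, Z))
    target = proj2(L)
    subsets = chain.from_iterable(combinations(space, r) for r in range(len(space) + 1))
    matches = sum(1 for s in subsets if proj2(s) == target)
    return matches == 1

L = {(0, 'a', 0), (1, 'b', 1)}
print(is_dyadically_reducible(L, (0, 1), ('a', 'b'), (0, 1)))   # True: L is recovered from its shadows
</syntaxhighlight>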
  
The next series of Tables illustrates the operation of <math>\operatorname{Proj}^{(2)}\!</math> by means of its actions on the sign relations <math>L_\text{A}\!</math> and <math>L_\text{B}.\!</math>  For ease of reference, Tables&nbsp;72.1 and 73.1 repeat the contents of Tables&nbsp;1 and 2, respectively, while the dyadic relations comprising <math>\operatorname{Proj}^{(2)}L_\text{A}\!</math> and <math>\operatorname{Proj}^{(2)}L_\text{B}\!</math> are shown in Tables&nbsp;72.2 to 72.4 and Tables&nbsp;73.2 to 73.4, respectively.
+
The next series of Tables illustrates the operation of <math>\mathrm{Proj}^{(2)}\!</math> by means of its actions on the sign relations <math>L_\text{A}\!</math> and <math>L_\text{B}.\!</math>  For ease of reference, Tables&nbsp;72.1 and 73.1 repeat the contents of Tables&nbsp;1 and 2, respectively, while the dyadic relations comprising <math>\mathrm{Proj}^{(2)}L_\text{A}\!</math> and <math>\mathrm{Proj}^{(2)}L_\text{B}\!</math> are shown in Tables&nbsp;72.2 to 72.4 and Tables&nbsp;73.2 to 73.4, respectively.
  
 
<br>
 
<br>
  
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
|+ style="height:30px" | <math>\text{Table 72.1} ~~ \text{Sign Relation of Interpreter A}\!</math>
+
|+ style="height:30px" | <math>\text{Table 72.1} ~~ \text{Sign Relation of Interpreter A}~\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Object}\!</math>
 
| width="33%" | <math>\text{Object}\!</math>
Line 9,024: Line 9,034:
  
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
|+ style="height:30px" | <math>\text{Table 73.1} ~~ \text{Sign Relation of Interpreter B}\!</math>
+
|+ style="height:30px" | <math>\text{Table 73.1} ~~ \text{Sign Relation of Interpreter B}~\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Object}\!</math>
 
| width="33%" | <math>\text{Object}\!</math>
Line 9,216: Line 9,226:
 
<br>
 
<br>
  
A comparison of the corresponding projections in <math>\operatorname{Proj}^{(2)} L(\text{A})\!</math> and <math>\operatorname{Proj}^{(2)} L(\text{B})\!</math> shows that the distinction between the triadic relations <math>L(\text{A})\!</math> and <math>L(\text{B})\!</math> is preserved by <math>\operatorname{Proj}^{(2)},\!</math> and this circumstance allows one to say that this much information, at least, can be derived from the dyadic projections.  However, to say that a triadic relation <math>L \in \operatorname{Pow} (O \times S \times I)\!</math> is reducible in this sense, it is necessary to show that no distinct <math>L' \in \operatorname{Pow} (O \times S \times I)\!</math> exists such that <math>\operatorname{Proj}^{(2)} L = \operatorname{Proj}^{(2)} L',\!</math> and this may require a rather more exhaustive or comprehensive investigation of the space <math>\operatorname{Pow} (O \times S \times I).\!</math>
+
A comparison of the corresponding projections in <math>\mathrm{Proj}^{(2)} L(\text{A})\!</math> and <math>\mathrm{Proj}^{(2)} L(\text{B})\!</math> shows that the distinction between the triadic relations <math>L(\text{A})\!</math> and <math>L(\text{B})\!</math> is preserved by <math>\mathrm{Proj}^{(2)},\!</math> and this circumstance allows one to say that this much information, at least, can be derived from the dyadic projections.  However, to say that a triadic relation <math>L \in \mathrm{Pow} (O \times S \times I)\!</math> is reducible in this sense, it is necessary to show that no distinct <math>L' \in \mathrm{Pow} (O \times S \times I)\!</math> exists such that <math>\mathrm{Proj}^{(2)} L = \mathrm{Proj}^{(2)} L',\!</math> and this may require a rather more exhaustive or comprehensive investigation of the space <math>\mathrm{Pow} (O \times S \times I).\!</math>
  
As it happens, each of the relations <math>L = L(\text{A})\!</math> or <math>L = L(\text{B})\!</math> is uniquely determined by its projective triple <math>\operatorname{Proj}^{(2)} L.\!</math>  This can be seen as follows.
+
As it happens, each of the relations <math>L = L(\text{A})\!</math> or <math>L = L(\text{B})\!</math> is uniquely determined by its projective triple <math>\mathrm{Proj}^{(2)} L.\!</math>  This can be seen as follows.
  
Consider any coordinate position <math>(s, i)\!</math> in the plane <math>S \times I.\!</math>  If <math>(s, i)\!</math> is not in <math>L_{SI}\!</math> then there can be no element <math>(o, s, i)\!</math> in <math>L,\!</math> therefore we may restrict our attention to positions <math>(s, i)\!</math> in <math>L_{SI},\!</math> knowing that there exist at least <math>|L_{SI}| = 8\!</math> elements in <math>L,\!</math> and seeking only to determine what objects <math>o\!</math> exist such that <math>(o, s, i)\!</math> is an element in the objective ''fiber'' of <math>(s, i).\!</math>  In other words, for what <math>o \in O\!</math> is <math>(o, s, i) \in \operatorname{proj}_{SI}^{-1}((s, i))?\!</math>  The fact that <math>L_{OS}\!</math> has exactly one element <math>(o, s)\!</math> for each coordinate <math>s \in S\!</math> and that <math>L_{OI}\!</math> has exactly one element <math>(o, i)\!</math> for each coordinate <math>i \in I,\!</math> plus the &ldquo;coincidence&rdquo; of it being the same <math>o\!</math> at any one choice for <math>(s, i),\!</math> tells us that <math>L\!</math> has just the one element <math>(o, s, i)\!</math> over each point of <math>S \times I.\!</math>  This proves that both <math>L(\text{A})\!</math> and <math>L(\text{B})\!</math> are reducible in an informational sense to triples of dyadic relations, that is, they are ''dyadically reducible''.
+
Consider any coordinate position <math>(s, i)\!</math> in the plane <math>S \times I.\!</math>  If <math>(s, i)\!</math> is not in <math>L_{SI}\!</math> then there can be no element <math>(o, s, i)\!</math> in <math>L,\!</math> therefore we may restrict our attention to positions <math>(s, i)\!</math> in <math>L_{SI},\!</math> knowing that there exist at least <math>|L_{SI}| = 8\!</math> elements in <math>L,\!</math> and seeking only to determine what objects <math>o\!</math> exist such that <math>(o, s, i)\!</math> is an element in the objective ''fiber'' of <math>(s, i).\!</math>  In other words, for what <math>{o \in O}\!</math> is <math>(o, s, i) \in \mathrm{proj}_{SI}^{-1}((s, i))?\!</math>  The fact that <math>L_{OS}\!</math> has exactly one element <math>(o, s)\!</math> for each coordinate <math>s \in S\!</math> and that <math>L_{OI}\!</math> has exactly one element <math>(o, i)\!</math> for each coordinate <math>i \in I,\!</math> plus the &ldquo;coincidence&rdquo; of it being the same <math>o\!</math> at any one choice for <math>(s, i),\!</math> tells us that <math>L\!</math> has just the one element <math>(o, s, i)\!</math> over each point of <math>S \times I.\!</math>  This proves that both <math>L(\text{A})\!</math> and <math>L(\text{B})\!</math> are reducible in an informational sense to triples of dyadic relations, that is, they are ''dyadically reducible''.
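The reconstruction argument can be phrased as a small procedure.  In the sketch below the data are invented stand-ins, not the entries of Tables&nbsp;72 and 73: given the three dyadic projections, each sign and each interpretant picks out a unique object, and the triples are reassembled over the pairs of <math>L_{SI}.</math>

<syntaxhighlight lang="python">
# Sketch of rebuilding a triadic relation from its dyadic projections,
# under the hypothesis that each sign and each interpretant determines a unique object.

def reconstruct(L_OS, L_OI, L_SI):
    obj_of_sign = {s: o for (o, s) in L_OS}
    obj_of_interp = {i: o for (o, i) in L_OI}
    L = set()
    for (s, i) in L_SI:
        o = obj_of_sign[s]
        # The two determinations must coincide for the reconstruction to succeed.
        assert o == obj_of_interp[i], "projections disagree: relation not recoverable this way"
        L.add((o, s, i))
    return L

# Invented data in the general spirit of the A and B examples.
L_OS = {('o1', 's1'), ('o1', 's2'), ('o2', 's3')}
L_OI = {('o1', 's1'), ('o1', 's2'), ('o2', 's3')}
L_SI = {('s1', 's1'), ('s1', 's2'), ('s2', 's1'), ('s2', 's2'), ('s3', 's3')}

print(sorted(reconstruct(L_OS, L_OI, L_SI)))
</syntaxhighlight>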
  
 
===6.36. Irreducibly Triadic Relations===
 
===6.36. Irreducibly Triadic Relations===
Line 9,228: Line 9,238:
 
In order to show what an irreducibly triadic relation looks like, this Section presents a pair of triadic relations that have the same dyadic projections, and thus cannot be distinguished from each other on this basis alone.  As it happens, these examples of triadic relations can be discussed independently of sign relational concerns, but structures of their general ilk are frequently found arising in signal-theoretic applications, and they are undoubtedly closely associated with problems of reliable coding and communication.
 
In order to show what an irreducibly triadic relation looks like, this Section presents a pair of triadic relations that have the same dyadic projections, and thus cannot be distinguished from each other on this basis alone.  As it happens, these examples of triadic relations can be discussed independently of sign relational concerns, but structures of their general ilk are frequently found arising in signal-theoretic applications, and they are undoubtedly closely associated with problems of reliable coding and communication.
  
Tables&nbsp;74.1 and 75.1 show a pair of irreducibly triadic relations <math>L_0\!</math> and <math>L_1,\!</math> respectively.  Tables&nbsp;74.2 to 74.4 and Tables&nbsp;75.2 to 75.4 show the dyadic relations comprising <math>\operatorname{Proj}^{(2)} L_0\!</math> and <math>\operatorname{Proj}^{(2)} L_1,\!</math> respectively.
+
Tables&nbsp;74.1 and 75.1 show a pair of irreducibly triadic relations <math>L_0\!</math> and <math>L_1,\!</math> respectively.  Tables&nbsp;74.2 to 74.4 and Tables&nbsp;75.2 to 75.4 show the dyadic relations comprising <math>\mathrm{Proj}^{(2)} L_0\!</math> and <math>\mathrm{Proj}^{(2)} L_1,\!</math> respectively.
  
 
<br>
 
<br>
Line 9,334: Line 9,344:
 
The relations <math>L_0, L_1 \subseteq \mathbb{B}^3\!</math> are defined by the following equations, with algebraic operations taking place as in <math>\text{GF}(2),\!</math> that is, with <math>1 + 1 = 0.\!</math>
 
The relations <math>L_0, L_1 \subseteq \mathbb{B}^3\!</math> are defined by the following equations, with algebraic operations taking place as in <math>\text{GF}(2),\!</math> that is, with <math>1 + 1 = 0.\!</math>
  
# The triple <math>(x, y, z)\!</math> in <math>\mathbb{B}^3\!</math> belongs to <math>L_0\!</math> if and only if <math>x + y + z = 0.\!</math>  Thus, <math>L_0\!</math> is the set of even-parity bit vectors, with <math>x + y = z.\!</math>
+
# The triple <math>(x, y, z)\!</math> in <math>\mathbb{B}^3\!</math> belongs to <math>L_0\!</math> if and only if <math>{x + y + z = 0}.\!</math>  Thus, <math>L_0\!</math> is the set of even-parity bit vectors, with <math>x + y = z.\!</math>
# The triple <math>(x, y, z)\!</math> in <math>\mathbb{B}^3\!</math> belongs to <math>L_1\!</math> if and only if <math>x + y + z = 1.\!</math>  Thus, <math>L_1\!</math> is the set of odd-parity bit vectors, with <math>x + y = z + 1.\!</math>
+
# The triple <math>(x, y, z)\!</math> in <math>\mathbb{B}^3\!</math> belongs to <math>L_1\!</math> if and only if <math>{x + y + z = 1}.\!</math>  Thus, <math>L_1\!</math> is the set of odd-parity bit vectors, with <math>x + y = z + 1.\!</math>
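A quick computational check, sketched in Python from the definitions just given, confirms that the two parity relations cast exactly the same shadows.

<syntaxhighlight lang="python">
# The even- and odd-parity relations on B^3 and their dyadic projections.

from itertools import product

B = (0, 1)
L0 = {(x, y, z) for (x, y, z) in product(B, B, B) if (x + y + z) % 2 == 0}
L1 = {(x, y, z) for (x, y, z) in product(B, B, B) if (x + y + z) % 2 == 1}

def proj2(L):
    return ({(x, y) for (x, y, z) in L},
            {(x, z) for (x, y, z) in L},
            {(y, z) for (x, y, z) in L})

print(proj2(L0) == proj2(L1))                              # True: identical shadows
print(all(p == set(product(B, B)) for p in proj2(L0)))     # True: each shadow is all of B x B
</syntaxhighlight>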
  
The corresponding projections of <math>\operatorname{Proj}^{(2)} L_0\!</math> and <math>\operatorname{Proj}^{(2)} L_1\!</math> are identical.  In fact, all six projections, taken at the level of logical abstraction, constitute precisely the same dyadic relation, isomorphic to the whole of <math>\mathbb{B} \times \mathbb{B}\!</math> and expressed by the universal constant proposition <math>1 : \mathbb{B} \times \mathbb{B} \to \mathbb{B}.\!</math>  In summary:
+
The corresponding projections of <math>\mathrm{Proj}^{(2)} L_0\!</math> and <math>\mathrm{Proj}^{(2)} L_1\!</math> are identical.  In fact, all six projections, taken at the level of logical abstraction, constitute precisely the same dyadic relation, isomorphic to the whole of <math>\mathbb{B} \times \mathbb{B}\!</math> and expressed by the universal constant proposition <math>1 : \mathbb{B} \times \mathbb{B} \to \mathbb{B}.\!</math>  In summary:
  
 
{| align="center" cellspacing="8" width="90%"
 
{| align="center" cellspacing="8" width="90%"
Line 9,386: Line 9,396:
 
[The following piece occurs in &sect; 6.35.]
 
[The following piece occurs in &sect; 6.35.]
  
The set of triples of dyadic relations, with pairwise cartesian products chosen in a pre-arranged order from a triple of three sets <math>(X, Y, Z),\!</math> is called the ''dyadic explosion'' of <math>X \times Y \times Z.\!</math>  This object is denoted <math>\operatorname{Explo}(X, Y, Z ~|~ 2),\!</math> read as the ''explosion of <math>X \times Y \times Z\!</math> by twos'', or more simply as <math>X, Y, Z ~\operatorname{choose}~ 2,\!</math> and defined as follows:
+
The set of triples of dyadic relations, with pairwise cartesian products chosen in a pre-arranged order from a triple of three sets <math>(X, Y, Z),\!</math> is called the ''dyadic explosion'' of <math>X \times Y \times Z.\!</math>  This object is denoted <math>\mathrm{Explo}(X, Y, Z ~|~ 2),\!</math> read as the ''explosion of <math>X \times Y \times Z\!</math> by twos'', or more simply as <math>X, Y, Z ~\mathrm{choose}~ 2,\!</math> and defined as follows:
  
 
{| align="center" cellspacing="8" width="90%"
 
{| align="center" cellspacing="8" width="90%"
| <math>\operatorname{Explo}(X, Y, Z ~|~ 2) ~=~ \operatorname{Pow}(X \times Y) \times \operatorname{Pow}(X \times Z) \times \operatorname{Pow}(Y \times Z)\!</math>
+
| <math>\mathrm{Explo}(X, Y, Z ~|~ 2) ~=~ \mathrm{Pow}(X \times Y) \times \mathrm{Pow}(X \times Z) \times \mathrm{Pow}(Y \times Z)\!</math>
 
|}
 
|}
  
Line 9,450: Line 9,460:
 
|}
 
|}
  
Table&nbsp;76 displays the results of indexing every sign of the <math>\text{A}\!</math> and <math>\text{B}\!</math> example with a superscript indicating its source or ''exponent'', namely, the interpreter who actively communicates or transmits the sign.  The operation of attribution produces two new sign relations, but it turns out that both sign relations have the same form and content, so a single Table will do.  The new sign relation generated by this operation will be denoted <math>\operatorname{At} (\text{A}, \text{B})\!</math> and called the ''attributed sign relation'' for the <math>\text{A}\!</math> and <math>\text{B}\!</math> example.
+
Table&nbsp;76 displays the results of indexing every sign of the <math>\text{A}\!</math> and <math>\text{B}\!</math> example with a superscript indicating its source or ''exponent'', namely, the interpreter who actively communicates or transmits the sign.  The operation of attribution produces two new sign relations, but it turns out that both sign relations have the same form and content, so a single Table will do.  The new sign relation generated by this operation will be denoted <math>\mathrm{At} (\text{A}, \text{B})\!</math> and called the ''attributed sign relation'' for the <math>\text{A}\!</math> and <math>\text{B}\!</math> example.
  
 
<br>
 
<br>
Line 9,825: Line 9,835:
 
With this last modification, angle quotes become like ascribed quotes or attributed remarks, indexed with the name of the interpretive agent that issued the message in question.  In sum, the notation <math>{}^{\backprime\backprime ~ \langle\langle} \text{A} {}^\rangle \text{B} {}^{\rangle ~ \prime\prime}\!</math> is intended to situate the sign <math>{}^{\backprime\backprime} \text{A} {}^{\prime\prime}\!</math> in the context of its contemplated use and to index the sign <math>{}^{\backprime\backprime} \text{A} {}^{\prime\prime}\!</math> with the name of the interpreter that is considered to be using it on a given occasion.
 
With this last modification, angle quotes become like ascribed quotes or attributed remarks, indexed with the name of the interpretive agent that issued the message in question.  In sum, the notation <math>{}^{\backprime\backprime ~ \langle\langle} \text{A} {}^\rangle \text{B} {}^{\rangle ~ \prime\prime}\!</math> is intended to situate the sign <math>{}^{\backprime\backprime} \text{A} {}^{\prime\prime}\!</math> in the context of its contemplated use and to index the sign <math>{}^{\backprime\backprime} \text{A} {}^{\prime\prime}\!</math> with the name of the interpreter that is considered to be using it on a given occasion.
  
The notation <math>{}^{\backprime\backprime ~ \langle\langle} \text{A} {}^\rangle \text{B} {}^{\rangle ~ \prime\prime},\!</math> read <math>{}^{\backprime\backprime ~ \langle} \text{A} {}^\rangle ~\text{quoth}~ \text{B} {}^{\prime\prime}\!</math> or <math>{}^{\backprime\backprime ~ \langle} \text{A} {}^\rangle ~\text{used by}~ \text{B} {}^{\prime\prime},\!</math> is an expression that indicates the use of the sign <math>{}^{\backprime\backprime} \text{A} {}^{\prime\prime}\!</math> by the interpreter <math>\text{B}.\!</math>  The expression inside the outer quotes is referred to as an ''indexed quotation'', since it is indexed by the name of the interpreter to which it is referred.
+
The notation <math>{}^{\backprime\backprime ~ \langle\langle} \text{A} {}^\rangle \text{B} {}^{\rangle ~ \prime\prime},~\!</math> read <math>{}^{\backprime\backprime ~ \langle} \text{A} {}^\rangle ~\text{quoth}~ \text{B} {}^{\prime\prime}\!</math> or <math>{}^{\backprime\backprime ~ \langle} \text{A} {}^\rangle ~\text{used by}~ \text{B} {}^{\prime\prime},\!</math> is an expression that indicates the use of the sign <math>{}^{\backprime\backprime} \text{A} {}^{\prime\prime}\!</math> by the interpreter <math>\text{B}.\!</math>  The expression inside the outer quotes is referred to as an ''indexed quotation'', since it is indexed by the name of the interpreter to which it is referred.
  
 
Since angle quotes with a blank index are equivalent to ordinary quotes, we have the following equivalence.  [Not sure about this.]
 
Since angle quotes with a blank index are equivalent to ordinary quotes, we have the following equivalence.  [Not sure about this.]
Line 10,210: Line 10,220:
 
Working from these principles alone, there are numerous ways that a plausible dynamics can be invented for a given sign relation.  I will concentrate on two principal forms of dynamic realization, or two ways of interpreting and augmenting sign relations as sign processes.
 
Working from these principles alone, there are numerous ways that a plausible dynamics can be invented for a given sign relation.  I will concentrate on two principal forms of dynamic realization, or two ways of interpreting and augmenting sign relations as sign processes.
  
One form of realization lets each element of the object domain <math>O\!</math> correspond to the observed presence of an object in the environment of the systematic agent.  In this interpretation, the object <math>x\!</math> acts as an input datum that causes the system <math>Y\!</math> to shift from whatever sign state it happens to occupy at a given moment to a random sign state in <math>[x]_Y.\!</math>  Expressed in a cognitive vein, <math>{}^{\backprime\backprime} Y ~\operatorname{notes}~ x {}^{\prime\prime}.</math>
+
One form of realization lets each element of the object domain <math>O\!</math> correspond to the observed presence of an object in the environment of the systematic agent.  In this interpretation, the object <math>x\!</math> acts as an input datum that causes the system <math>Y\!</math> to shift from whatever sign state it happens to occupy at a given moment to a random sign state in <math>[x]_Y.\!</math>  Expressed in a cognitive vein, <math>{}^{\backprime\backprime} Y ~\mathrm{notes}~ x {}^{\prime\prime}.</math>
  
 
Another form of realization lets each element of the object domain <math>O\!</math> correspond to the autonomous intention of the systematic agent to denote an object, achieve an objective, or broadly speaking to accomplish any other purpose with respect to an object in its domain.  In this interpretation, the object <math>x\!</math> is a control parameter that brings the system <math>Y\!</math> into line with realizing a target set <math>[x]_Y.\!</math>
 
Another form of realization lets each element of the object domain <math>O\!</math> correspond to the autonomous intention of the systematic agent to denote an object, achieve an objective, or broadly speaking to accomplish any other purpose with respect to an object in its domain.  In this interpretation, the object <math>x\!</math> is a control parameter that brings the system <math>Y\!</math> into line with realizing a target set <math>[x]_Y.\!</math>
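A toy simulation may make the contrast concrete.  In the sketch below the semantic partition and the state names are invented, not the author's: <code>note</code> realizes the first form, jumping to a random sign state in <math>[x]_Y,</math> while <code>intend</code> realizes the second, holding the system inside the target class once it is reached.

<syntaxhighlight lang="python">
# Toy sketch of two dynamic realizations of a sign relation.
# The semantic partition below is a made-up example.

import random

classes = {
    'x1': ['s1', 's2'],      # hypothetical equivalence class [x1]_Y
    'x2': ['s3', 's4'],      # hypothetical equivalence class [x2]_Y
}

def note(state, x):
    """First form: observing x throws Y into a random sign state in [x]_Y."""
    return random.choice(classes[x])

def intend(state, x):
    """Second form: x is a control parameter; stay put if already on target,
    otherwise move into the target class [x]_Y."""
    return state if state in classes[x] else random.choice(classes[x])

state = 's3'
for x in ['x1', 'x1', 'x2']:
    state = note(state, x)
    print(x, '->', state)

print('intend x1 ->', intend('s1', 'x1'))   # stays at 's1', already in the target class
</syntaxhighlight>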
Line 10,460: Line 10,470:
 
Treated in accord with these interpretations, the sign relations <math>L(\text{A})\!</math> and <math>L(\text{B})\!</math> constitute partially degenerate cases of dynamic processes, in which the transitions are totally non-deterministic up to semantic equivalence classes but still manage to preserve those classes.  Whether construed as present observation or projective speculation, the most significant feature to note about a sign process is how the contemplation of an object or objective leads the system from a less determined to a more determined condition.
 
Treated in accord with these interpretations, the sign relations <math>L(\text{A})\!</math> and <math>L(\text{B})\!</math> constitute partially degenerate cases of dynamic processes, in which the transitions are totally non-deterministic up to semantic equivalence classes but still manage to preserve those classes.  Whether construed as present observation or projective speculation, the most significant feature to note about a sign process is how the contemplation of an object or objective leads the system from a less determined to a more determined condition.
  
On reflection, one observes that these processes are not completely trivial since they preserve the structure of their semantic partitions.  In fact, each sign process preserves the entire topology &mdash; the family of sets closed under finite intersections and arbitrary unions &mdash; that is generated by its semantic equivalence classes.  These topologies, <math>\operatorname{Top}(\text{A})\!</math> and <math>\operatorname{Top}(\text{B}),\!</math> can be viewed as partially ordered sets, <math>\operatorname{Poset}(\text{A})\!</math> and <math>\operatorname{Poset}(\text{B}),\!</math> by taking the inclusion ordering <math>(\subseteq)\!</math> as <math>(\le).\!</math>  For each of the interpreters <math>\text{A}\!</math> and <math>\text{B},\!</math> as things stand in their respective orderings <math>\operatorname{Poset}(\text{A})\!</math> and <math>\operatorname{Poset}(\text{B}),\!</math> the semantic equivalence classes of <math>{}^{\backprime\backprime} \text{A} {}^{\prime\prime}\!</math> and <math>{}^{\backprime\backprime} \text{B} {}^{\prime\prime}\!</math> are situated as intermediate elements that are incomparable to each other.
+
On reflection, one observes that these processes are not completely trivial since they preserve the structure of their semantic partitions.  In fact, each sign process preserves the entire topology &mdash; the family of sets closed under finite intersections and arbitrary unions &mdash; that is generated by its semantic equivalence classes.  These topologies, <math>\mathrm{Top}(\text{A})\!</math> and <math>\mathrm{Top}(\text{B}),\!</math> can be viewed as partially ordered sets, <math>\mathrm{Poset}(\text{A})\!</math> and <math>\mathrm{Poset}(\text{B}),\!</math> by taking the inclusion ordering <math>(\subseteq)\!</math> as <math>(\le).\!</math>  For each of the interpreters <math>\text{A}\!</math> and <math>\text{B},\!</math> as things stand in their respective orderings <math>\mathrm{Poset}(\text{A})\!</math> and <math>\mathrm{Poset}(\text{B}),\!</math> the semantic equivalence classes of <math>{}^{\backprime\backprime} \text{A} {}^{\prime\prime}\!</math> and <math>{}^{\backprime\backprime} \text{B} {}^{\prime\prime}\!</math> are situated as intermediate elements that are incomparable to each other.
  
 
{| align="center" cellspacing="6" width="90%"
 
{| align="center" cellspacing="6" width="90%"
 
|
 
|
 
<math>\begin{array}{lllll}
 
<math>\begin{array}{lllll}
\operatorname{Top}(\text{A})
+
\mathrm{Top}(\text{A})
 
& = &
 
& = &
\operatorname{Poset}(\text{A})
+
\mathrm{Poset}(\text{A})
 
& = &
 
& = &
 
\{
 
\{
Line 10,482: Line 10,492:
 
\}.
 
\}.
 
\\[6pt]
 
\\[6pt]
\operatorname{Top}(\text{B})
+
\mathrm{Top}(\text{B})
 
& = &
 
& = &
\operatorname{Poset}(\text{B})
+
\mathrm{Poset}(\text{B})
 
& = &
 
& = &
 
\{ \varnothing,
 
\{ \varnothing,
Line 10,512: Line 10,522:
 
{| align="center" cellspacing="6" width="90%"
 
{| align="center" cellspacing="6" width="90%"
 
|
 
|
<math>Y ~\text{at}~ x ~=~ \operatorname{At}[x]_Y ~=~ [x]_Y \cup \{ \text{arcs into}~ [x]_Y \}.</math>
+
<math>Y ~\text{at}~ x ~=~ \mathrm{At}[x]_Y ~=~ [x]_Y \cup \{ \text{arcs into}~ [x]_Y \}.</math>
 
|}
 
|}
  
Line 10,521: Line 10,531:
 
This section takes up the topic of reflective extensions in a more systematic fashion, starting from the sign relations <math>L(\text{A})\!</math> and <math>L(\text{B})\!</math> once again and keeping its focus within their vicinity, but exploring the space of nearby extensions in greater detail.
 
This section takes up the topic of reflective extensions in a more systematic fashion, starting from the sign relations <math>L(\text{A})\!</math> and <math>L(\text{B})\!</math> once again and keeping its focus within their vicinity, but exploring the space of nearby extensions in greater detail.
  
Tables&nbsp;80 and 81 show one way that the sign relations <math>L(\text{A})\!</math> and <math>L(\text{B})\!</math> can be extended in a reflective sense through the use of quotational devices, yielding the ''first order reflective extensions'', <math>\operatorname{Ref}^1 (\text{A})\!</math> and <math>\operatorname{Ref}^1 (\text{B}).\!</math>
+
Tables&nbsp;80 and 81 show one way that the sign relations <math>L(\text{A})\!</math> and <math>L(\text{B})\!</math> can be extended in a reflective sense through the use of quotational devices, yielding the ''first order reflective extensions'', <math>\mathrm{Ref}^1 (\text{A})\!</math> and <math>\mathrm{Ref}^1 (\text{B}).\!</math>
  
 
<br>
 
<br>
  
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
|+ style="height:30px" | <math>\text{Table 80.} ~~ \text{Reflective Extension} ~ \operatorname{Ref}^1 (\text{A})\!</math>
+
|+ style="height:30px" |
 +
<math>{\text{Table 80.} ~~ \text{Reflective Extension} ~ \mathrm{Ref}^1 (\text{A})}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Object}\!</math>
 
| width="33%" | <math>\text{Object}\!</math>
Line 10,629: Line 10,640:
  
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
|+ style="height:30px" | <math>{\text{Table 81.} ~~ \text{Reflective Extension} ~ \mathrm{Ref}^1 (\text{B})}\!</math>
|- style="height:40px; background:#f0f0ff"
| width="33%" | <math>\text{Object}\!</math>
| width="33%" | <math>\text{Sign}\!</math>
| width="33%" | <math>\text{Interpretant}\!</math>
|}
 
<br>

The common ''world'' <math>W\!</math> of the reflective extensions <math>\mathrm{Ref}^1 (\text{A})\!</math> and <math>\mathrm{Ref}^1 (\text{B})\!</math> is the totality of objects and signs they contain, namely, the following set of 10 elements.

{| align="center" cellspacing="8" width="90%"
|
<math>W ~=~ \{ \text{A}, \text{B}, {}^{\langle} \text{A} {}^{\rangle}, {}^{\langle} \text{B} {}^{\rangle}, {}^{\langle} \text{i} {}^{\rangle}, {}^{\langle} \text{u} {}^{\rangle}, {}^{\langle\langle} \text{A} {}^{\rangle\rangle}, {}^{\langle\langle} \text{B} {}^{\rangle\rangle}, {}^{\langle\langle} \text{i} {}^{\rangle\rangle}, {}^{\langle\langle} \text{u} {}^{\rangle\rangle} \}\!</math>
|}
 
Raised angle brackets or ''supercilia'' <math>({}^{\langle} \ldots {}^{\rangle})\!</math> are here being used on a par with ordinary quotation marks <math>({}^{\backprime\backprime} \ldots {}^{\prime\prime})\!</math> to construct a new sign whose object is precisely the sign they enclose.

Regarded as sign relations in their own right, <math>\mathrm{Ref}^1 (\text{A})\!</math> and <math>\mathrm{Ref}^1 (\text{B})\!</math> are formed on the following relational domains.

{| align="center" cellspacing="6" width="90%"
|
<math>\begin{array}{lll}
O & = & \{ \text{A}, \text{B} \} ~\cup~ S^{(1)}
\\[6pt]
S ~=~ I & = & S^{(1)} \cup S^{(2)}
\\[6pt]
S^{(1)} & = & \{ {}^{\langle} \text{A} {}^{\rangle}, {}^{\langle} \text{B} {}^{\rangle}, {}^{\langle} \text{i} {}^{\rangle}, {}^{\langle} \text{u} {}^{\rangle} \}
\\[6pt]
S^{(2)} & = & \{ {}^{\langle\langle} \text{A} {}^{\rangle\rangle}, {}^{\langle\langle} \text{B} {}^{\rangle\rangle}, {}^{\langle\langle} \text{i} {}^{\rangle\rangle}, {}^{\langle\langle} \text{u} {}^{\rangle\rangle} \}
\end{array}</math>
|}
 
{| align="center" cellspacing="6" width="90%"
|
<math>\begin{array}{lllll}
\mathrm{Den}^1 (L)
& = &
(\mathrm{Ref}^1 (L))_{SO}
& = &
\mathrm{proj}_{OS} (\mathrm{Ref}^1 (L))
\\[6pt]
\mathrm{Con}^1 (L)
& = &
(\mathrm{Ref}^1 (L))_{SI}
& = &
\mathrm{proj}_{SI} (\mathrm{Ref}^1 (L))
\end{array}\!</math>
|}
  
 
The dyadic components of sign relations can be given graph-theoretic representations, namely, as ''digraphs'' (directed graphs), that provide concise pictures of their structural and potential dynamic properties.  By way of terminology, a directed edge <math>(x, y)\!</math> is called an ''arc'' from point <math>x\!</math> to point <math>y,\!</math> and a self-loop <math>(x, x)\!</math> is called a ''sling'' at <math>x.\!</math>

The denotative components <math>\mathrm{Den}^1 (L_\text{A})\!</math> and <math>\mathrm{Den}^1 (L_\text{B})\!</math> can be viewed as digraphs on the 10 points of the world set <math>W.\!</math>  The arcs of these digraphs are given as follows, with a computational sketch after the list.
  
 
<ol>
<li><math>\mathrm{Den}^1 (L_\text{A})\!</math> has an arc from each point of <math>[\text{A}]_\text{A} = \{ {}^{\langle} \text{A} {}^{\rangle}, {}^{\langle} \text{i}{}^{\rangle} \}\!</math> to <math>\text{A}\!</math> and from each point of <math>[\text{B}]_\text{A} = \{ {}^{\langle} \text{B} {}^{\rangle}, {}^{\langle} \text{u} {}^{\rangle} \}\!</math> to <math>\text{B}.\!</math></li>

<li><math>\mathrm{Den}^1 (L_\text{B})\!</math> has an arc from each point of <math>[\text{A}]_\text{B} = \{ {}^{\langle} \text{A} {}^{\rangle}, {}^{\langle} \text{u}{}^{\rangle} \}\!</math> to <math>\text{A}\!</math> and from each point of <math>[\text{B}]_\text{B} = \{ {}^{\langle} \text{B} {}^{\rangle}, {}^{\langle} \text{i} {}^{\rangle} \}\!</math> to <math>\text{B}.\!</math></li>

<li>In the parts added by reflective extension, <math>\mathrm{Den}^1 (L_\text{A})\!</math> and <math>\mathrm{Den}^1 (L_\text{B})\!</math> both have arcs from <math>{}^{\langle} s {}^{\rangle}\!</math> to <math>s,\!</math> for each <math>s \in S^{(1)}.\!</math></li>
</ol>
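
The arc lists above translate directly into adjacency data.  The following sketch (Python; the angle-bracket rendering of the supercilia and all helper names are illustrative assumptions, not notation from the text) builds the two denotative digraphs as lists of arcs.

<pre>
def q(s):
    """Quotation: form the higher order sign <s> for a given sign s."""
    return "<" + s + ">"

S1 = [q(x) for x in ("A", "B", "i", "u")]   # first order signs  <A>, <B>, <i>, <u>
S2 = [q(s) for s in S1]                     # second order signs <<A>>, <<B>>, <<i>>, <<u>>

def den1_arcs(semantic_classes):
    """Arcs of a denotative digraph Den^1(L): every sign in the semantic class
    of an object points to that object, and every quotation <s> points to s."""
    arcs = [(sign, obj) for obj, signs in semantic_classes.items() for sign in signs]
    arcs += [(q(s), s) for s in S1]
    return arcs

# Semantic classes read off the list above:  [A]_A = {<A>, <i>},  [B]_A = {<B>, <u>}, etc.
den_A = den1_arcs({"A": [q("A"), q("i")], "B": [q("B"), q("u")]})
den_B = den1_arcs({"A": [q("A"), q("u")], "B": [q("B"), q("i")]})

print(len(den_A), len(den_B))   # 8 arcs each, on the 10 points of the world set W
</pre>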
  
Taken as transition digraphs, <math>\mathrm{Den}^1 (L_\text{A})\!</math> and <math>\mathrm{Den}^1 (L_\text{B})\!</math> summarize the upshots, end results, or effective steps of computation that are involved in the respective evaluations of signs in <math>S\!</math> by <math>\mathrm{Ref}^1 (\text{A})\!</math> and <math>\mathrm{Ref}^1 (\text{B}).\!</math>

The connotative components <math>\mathrm{Con}^1 (L_\text{A})~\!</math> and <math>\mathrm{Con}^1 (L_\text{B})~\!</math> can be viewed as digraphs on the eight points of the syntactic domain <math>S.\!</math>  The arcs of these digraphs are given as follows, with a computational sketch after the list.
  
 
<ol>
<li><math>\mathrm{Con}^1 (L_\text{A})\!</math> inherits from <math>L_\text{A}\!</math> the structure of a semiotic equivalence relation on <math>S^{(1)},\!</math> having a sling on each point of <math>S^{(1)},\!</math> arcs in both directions between <math>{}^{\langle} \text{A} {}^{\rangle}\!</math> and <math>{}^{\langle} \text{i}{}^{\rangle},\!</math> and arcs in both directions between <math>{}^{\langle} \text{B} {}^{\rangle}~\!</math> and <math>{}^{\langle} \text{u}{}^{\rangle}.~\!</math>  The reflective extension <math>\mathrm{Ref}^1 (L_\text{A})\!</math> adds a sling on each point of <math>S^{(2)},\!</math> creating a semiotic equivalence relation on <math>S.\!</math></li>

<li><math>\mathrm{Con}^1 (L_\text{B})~\!</math> inherits from <math>L_\text{B}\!</math> the structure of a semiotic equivalence relation on <math>S^{(1)},\!</math> having a sling on each point of <math>S^{(1)},\!</math> arcs in both directions between <math>{}^{\langle} \text{A} {}^{\rangle}\!</math> and <math>{}^{\langle} \text{u}{}^{\rangle},\!</math> and arcs in both directions between <math>{}^{\langle} \text{B} {}^{\rangle}~\!</math> and <math>{}^{\langle} \text{i}{}^{\rangle}.~\!</math>  The reflective extension <math>\mathrm{Ref}^1 (L_\text{B})\!</math> adds a sling on each point of <math>S^{(2)},\!</math> creating a semiotic equivalence relation on <math>S.\!</math></li>
</ol>
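
A parallel sketch for the connotative digraphs, assuming the same illustrative encoding as before, collects the semiotic equivalence classes by closing the listed arcs under merging; the helper names are hypothetical.

<pre>
def q(s):
    return "<" + s + ">"

S1 = [q(x) for x in ("A", "B", "i", "u")]
S2 = [q(s) for s in S1]

def con1_arcs(synonym_pairs):
    """Arcs of a connotative digraph Con^1: a sling on every sign in S^(1) and
    S^(2), plus two-way arcs between each pair of synonymous first order signs."""
    arcs = [(s, s) for s in S1 + S2]
    for a, b in synonym_pairs:
        arcs += [(a, b), (b, a)]
    return arcs

def equivalence_classes(arcs):
    """Merge points connected by arcs into classes (naive transitive closure)."""
    groups = []
    for a, b in arcs:
        hits = [g for g in groups if a in g or b in g]
        merged = {a, b}.union(*hits)
        groups = [g for g in groups if g not in hits] + [merged]
    return groups

con_A = con1_arcs([(q("A"), q("i")), (q("B"), q("u"))])
print(equivalence_classes(con_A))
# six classes: {<A>, <i>}, {<B>, <u>}, and a singleton class for each second order sign
</pre>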
  
Taken as transition digraphs, <math>\mathrm{Con}^1 (L_\text{A})~\!</math> and <math>\mathrm{Con}^1 (L_\text{B})~\!</math> highlight the associations between signs in <math>\mathrm{Ref}^1 (L_\text{A})\!</math> and <math>\mathrm{Ref}^1 (L_\text{B}),\!</math> respectively.

The semiotic equivalence relation given by <math>\mathrm{Con}^1 (L_\text{A})\!</math> for interpreter <math>\text{A}\!</math> has the following semiotic equations.
  
 
{| cellpadding="10"
|-
| width="10%" | or
| &nbsp;<math>{}^{\langle} \text{A} {}^{\rangle}~\!</math>
| <math>=_\text{A}\!</math>
| &nbsp;<math>{}^{\langle} \text{i} {}^{\rangle}~\!</math>
| width="20%" | &nbsp;
| &nbsp;<math>{}^{\langle} \text{B} {}^{\rangle}~\!</math>
| <math>=_\text{A}\!</math>
| &nbsp;<math>{}^{\langle} \text{u} {}^{\rangle}~\!</math>
|}
  
The semiotic equivalence relation given by <math>\mathrm{Con}^1 (L_\text{B})~\!</math> for interpreter <math>\text{B}\!</math> has the following semiotic equations.
  
 
{| cellpadding="10"
|-
| width="10%" | or
| &nbsp;<math>{}^{\langle} \text{A} {}^{\rangle}~\!</math>
| <math>=_\text{B}\!</math>
| &nbsp;<math>{}^{\langle} \text{u} {}^{\rangle}~\!</math>
| width="20%" | &nbsp;
| &nbsp;<math>{}^{\langle} \text{B} {}^{\rangle}~\!</math>
| <math>=_\text{B}\!</math>
| &nbsp;<math>{}^{\langle} \text{i} {}^{\rangle}~\!</math>
|}
  
There are many ways to extend sign relations in an effort to increase their reflective capacities.  The implicit goal of a reflective project is to achieve ''reflective closure'', <math>S \subseteq O,\!</math> where every sign is an object.

Considered as reflective extensions, there is nothing unique about the constructions of <math>\mathrm{Ref}^1 (\text{A})\!</math> and <math>\mathrm{Ref}^1 (\text{B}),\!</math> but their common pattern of development illustrates a typical approach toward reflective closure.  In a sense it epitomizes the project of ''free'', ''naive'', or ''uncritical'' reflection, since continuing this mode of production to its closure would generate an infinite sign relation, passing through infinitely many higher orders of signs, but without examining critically to what purpose the effort is directed or evaluating alternative constraints that might be imposed on the initial generators toward this end.
  
 
At first sight it seems as though the imposition of reflective closure has multiplied a finite sign relation into an infinite profusion of highly distracting and largely redundant signs, all by itself and all in one step.  But this explosion of orders happens only with the complicity of another requirement, that of deterministic interpretation.

<ol>
<li>A sign relation <math>L\!</math> has a non-deterministic denotation if its dyadic component <math>{L_{SO}}\!</math> is not a function <math>L_{SO} : S \to O,\!</math> in other words, if there are signs in <math>S\!</math> with missing or multiple objects in <math>O.\!</math></li>

<li>A sign relation <math>L\!</math> has a non-deterministic connotation if its dyadic component <math>L_{SI}\!</math> is not a function <math>L_{SI} : S \to I,\!</math> in other words, if there are signs in <math>S\!</math> with missing or multiple interpretants in <math>I.\!</math>  As a rule, sign relations are rife with this variety of non-determinism, but it is usually felt to be under control so long as <math>L_{SI}\!</math> remains close to being an equivalence relation.  A computational check of both conditions is sketched after this list.</li>
</ol>
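
Whether a given dyadic component is deterministic can be checked mechanically.  The sketch below (Python; the sample triples are illustrative, not a table from the text) tests whether the SO and SI projections of a set of triples are functions on the sign domain.

<pre>
def component(triples, i, j):
    """Project (object, sign, interpretant) triples onto the places i and j."""
    return {(t[i], t[j]) for t in triples}

def is_functional(pairs):
    """True when no first coordinate is paired with two different second
    coordinates.  (Checking for missing values would also need the full
    sign domain, which is omitted here.)"""
    seen = {}
    for x, y in pairs:
        if x in seen and seen[x] != y:
            return False
        seen[x] = y
    return True

# Illustrative fragment of a sign relation.
L = {("A", "<A>", "<A>"), ("A", "<A>", "<i>"),
     ("A", "<i>", "<A>"), ("A", "<i>", "<i>")}

L_SO = component(L, 1, 0)   # sign -> object
L_SI = component(L, 1, 2)   # sign -> interpretant
print(is_functional(L_SO), is_functional(L_SI))   # True False
</pre>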
 
As a flexible and fairly general strategy for describing reflective extensions, it is convenient to take the following tack.  Given a syntactic domain <math>S,\!</math> there is an independent formal language <math>F = F(S) = S \langle {}^{\langle\rangle} \rangle,\!</math> called the ''free quotational extension of <math>S,\!</math>'' that can be generated from <math>S\!</math> by embedding each of its signs to any depth of quotation marks.  Within <math>F,\!</math> the quoting operation can be regarded as a syntactic generator that is inherently free of constraining relations.  In other words, for every <math>s \in S,\!</math> the sequence <math>s, {}^{\langle} s {}^{\rangle}, {}^{\langle\langle} s {}^{\rangle\rangle}, \ldots\!</math> contains nothing but pairwise distinct elements in <math>F\!</math> no matter how far it is produced.  The set <math>F(s) = s \langle {}^{\langle\rangle} \rangle \subseteq F\!</math> that collects the elements of this sequence is called the ''subset of <math>F\!</math> generated from <math>s\!</math> by quotation''.
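
A sketch of the free generator just described, assuming a simple string rendering of the quotation marks: iterating the quoting operation on any sign produces pairwise distinct elements at every depth.

<pre>
def quote(s):
    """Embed a sign in one more layer of quotation marks."""
    return "<" + s + ">"

def F(s, depth):
    """An initial segment of F(s): the sign s together with its quotations
    up to the given depth."""
    out = [s]
    for _ in range(depth):
        out.append(quote(out[-1]))
    return out

print(F("A", 3))             # ['A', '<A>', '<<A>>', '<<<A>>>']
print(len(set(F("A", 30))))  # 31 -- no collisions, the generator is free
</pre>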
  
Against this background, other varieties of reflective extension can be specified by means of semantic equations that are considered to be imposed on the elements of <math>F.\!</math>  Taking the reflective extensions <math>\mathrm{Ref}^1 (\text{A})\!</math> and <math>\mathrm{Ref}^1 (\text{B})\!</math> as the first orders of a &ldquo;free&rdquo; project toward reflective closure, variant extensions can be described by relating their entries with those of comparable members in the standard sequences <math>\mathrm{Ref}^n (\text{A})\!</math> and <math>\mathrm{Ref}^n (\text{B}).\!</math>
  
A variant pair of reflective extensions, <math>\mathrm{Ref}^1 (\text{A} | E_1)\!</math> and <math>\mathrm{Ref}^1 (\text{B} | E_1),\!</math> is presented in Tables&nbsp;82 and 83, respectively.  These are identical to the corresponding free variants, <math>\mathrm{Ref}^1 (\text{A})~\!</math> and <math>\mathrm{Ref}^1 (\text{B}),~\!</math> with the exception of those entries that are constrained by the following system of semantic equations.
  
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
|+ style="height:30px" | <math>\text{Table 82.} ~~ \text{Reflective Extension} ~ \mathrm{Ref}^1 (\text{A} | E_1)\!</math>
|- style="height:40px; background:#f0f0ff"
| width="33%" | <math>\text{Object}\!</math>
| width="33%" | <math>\text{Sign}\!</math>
| width="33%" | <math>\text{Interpretant}\!</math>
|}
  
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
|+ style="height:30px" | <math>\text{Table 83.} ~~ \text{Reflective Extension} ~ \mathrm{Ref}^1 (\text{B} | E_1)\!</math>
|- style="height:40px; background:#f0f0ff"
| width="33%" | <math>\text{Object}\!</math>
| width="33%" | <math>\text{Sign}\!</math>
| width="33%" | <math>\text{Interpretant}\!</math>
|}
 
<br>

Another pair of reflective extensions, <math>\mathrm{Ref}^1 (\text{A} | E_2)\!</math> and <math>\mathrm{Ref}^1 (\text{B} | E_2),\!</math> is presented in Tables&nbsp;84 and 85, respectively.  These are identical to the corresponding free variants, <math>\mathrm{Ref}^1 (\text{A})~\!</math> and <math>\mathrm{Ref}^1 (\text{B}),~\!</math> except for the entries constrained by the following semantic equations.
  
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
|+ style="height:30px" | <math>\text{Table 84.} ~~ \text{Reflective Extension} ~ \mathrm{Ref}^1 (\text{A} | E_2)\!</math>
|- style="height:40px; background:#f0f0ff"
| width="33%" | <math>\text{Object}\!</math>
| width="33%" | <math>\text{Sign}\!</math>
| width="33%" | <math>\text{Interpretant}\!</math>
|}
  
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
|+ style="height:30px" | <math>\text{Table 85.} ~~ \text{Reflective Extension} ~ \mathrm{Ref}^1 (\text{B} | E_2)\!</math>
|- style="height:40px; background:#f0f0ff"
| width="33%" | <math>\text{Object}\!</math>
| width="33%" | <math>\text{Sign}\!</math>
| width="33%" | <math>\text{Interpretant}\!</math>
|}
 
The mark of an intelligent interpreter that is relevant in this context is the ability to face (encounter, countenance) a non-deterministic juncture of choices in a sign relation and to respond to it as such with actions appropriate to the uncertain nature of the situation.

'''[Variants]'''

An intelligent interpreter is one that can follow up several different interpretations at once, experimenting with the denotations and connotations that are available in a non-deterministic sign relation, &hellip;
In principle, the successive grades of complexity enumerated above could be ascended in a straightforward way, if only the steps did not go straight up the cliffs of abstraction.  As always, the kinds of intentional objects that are the toughest to face are those whose realization is so distant that even the gear needed to approach their construction is not yet in existence.
 
===6.47. Mutually Intelligible Codes===
 
 
Before this complex of relationships can be formalized in much detail, I must introduce linguistic devices for generating ''higher order signs'', used to indicate other signs, and ''situated signs'', indexed by the names of their users, their contexts of use, and other types of information incidental to their usage in general.  This leads to the consideration of ''systems of interpretation'' (SOIs) that maintain recursive mechanisms for naming everything within their purview.  This &ldquo;nominal generosity&rdquo; gives them a new order of generative capacity, producing a sufficient number of distinctive signs to name all the objects and then name the names that are needed in a given discussion.
 
 
Symbolic systems for quoting inscriptions and ascribing quotations are associated in metamathematics with ''gödel numberings'' of formal objects, enumerative functions that provide systematic but ostensibly arbitrary reference numbers for the signs and expressions in a formal language.  Assuming these signs and expressions denote anything at all, their formal enumerations become the ''codes'' of formal objects, just as programs taken literally are code names for certain mathematical objects known as computable functions.  Partial forms of specification notwithstanding, these codes are the only complete modes of representation that formal objects can have in the medium of mechanical activity.
 
 
In the dialogue of <math>\text{A}\!</math> and <math>\text{B}\!</math> there happens to be an exact coincidence between signs and states.  That is, the states of the interpretive systems <math>\text{A}\!</math> and <math>\text{B}\!</math> are not distinguished from the signs in <math>S\!</math> that are imagined to be mediating, moment by moment, the attentions of the interpretive agents <math>\text{A}\!</math> and <math>\text{B}\!</math> toward their respective objects in <math>O.\!</math>  So the question arises:  Is this identity bound to be a general property of all useful sign relations, or is it only a degenerate feature occurring by chance or unconscious design in the immediate example?
 
 
To move toward a resolution of this question I reason as follows.  In one direction, it seems obvious that a ''sign in use'' (SIU) by a particular interpreter constitutes a component of that agent's state.  In other words, the very notion of an identifiable SIU refers to numerous instances of a particular interpreter's state that share in the abstract property of being such instances, whether or not anyone can give a more concise or illuminating characterization of the concept under which these momentary states are gathered.  Conversely, it is at least conceivable that the whole state of a system, constituting its transitory response to the entirety of its environment, history, and goals, can be interpreted as a sign of something to someone.  In sum, there remains an outside chance of signs and states being precisely the same things, since nothing precludes the existence of an ''interpretive framework'' (IF) that could make it so.
 
 
Still, if the question about the distinction or coincidence between signs and states is restricted to the domains where existential realizations are conceivable, no matter whether in biological or computational media, then the prerequisites of the task become more severe, due to the narrower scope of materials that are admitted to answer them.  In focusing on this arena the problem is threefold:
 
 
# The crucial point is not just whether it is possible to imagine an ideal SOI, an external perspective or an independent POV, for which all states are signs, but whether this is so for the prospective SOI of the very agent that passes through these states.
 
# To what extent can the transient states and persistent conduct of each agent in a community of interpretation take on a moderately public and objective aspect in relation to the other participants?
 
# How far in this respect, in the common regard for this species of outward demeanor, can each agent's behavior act as a sign of genuine objects in the eyes of other interpreters?
 
 
The special task of a nuanced hermeneutic approach to computational interpretation is to realize the relativity of all formal codes to their formal coders, and to seek ways of facilitating mutual intelligibility among interpreters whose internal codes can be thoroughly private, synchronistically keyed to external events, and even a bit idiosyncratic.
 
 
<pre>
 
Ultimately, working through this maze of "meta" questions, as posed on the tentative grounds of the present project, leads to a question about the "logical reference frames" or "metamathematical coordinate systems" that are supposed to distinguish "objective" from "symbolic" entities and are imagined to discriminate a range of gradations along their lines.  The question is:  Whether any gauge of objectivity or scale of virtuality has invariant properties discoverable by all independent interpreters, or whether all is vanity and inane relativism, and everything concerning a subjective point of view is sheer caprice?
 
 
Thus, the problem of mutual intelligibility turns on the question of "common significance":  How can there be signs that are truly public, when the most natural signs that distinct agents can know, their own internal states, have no guarantee and very little likelihood of being related in systematically fathomable ways?  As a partial answer to this, I am willing to contemplate certain forms of pre established harmony, like the common evolution of a biological species or the shared culture of an interpretive community, but my experience has been that harmony, once established, quickly corrupts unless active means are available to maintain it.  So there still remains the task of identifying these means.  With or without the benefit of a prior consensus, or the assumption of an initial, but possibly fragile equilibrium, an explanation of robust harmony must detail the modes of maintaining communication that enable coordinated action to persist in the meanest of times.
 
 
The formal character of these questions, in the potential complexities that can be forced on contemplation in the pursuit of their answers, is independent of the species of interpreters that are chosen for the termini of comparison, whether person to person, person to computer, or computer to computer.  As always, the truth of this kind of thesis is formal, all too formal.  What it brings is a new refrain of an old motif:  Are there meaningful, if necessarily formal series of analogies that can be strung from the patterns of whizzing electrons and humming protons, whose controlled modes of collective excitation form and inform the conducts of computers, all the way to the rather different patterns of wizened electrons and humbled protons, whose deliberate energies of communal striving substantiate the forms of life known to be intelligible?
 
 
A full consideration of the geometries available for the spaces in which these levels of reflective abstraction are commonly imagined to reside leads to the conclusion that familiar distinctions of "top down" versus "bottom up" are being taken for granted in an arena that has not even been established to be orientable.  Thus, it needs to be recognized that the distinction between objects and signs is relative to a definite SOI.  The pragmatic theory of signs is designed, in part, precisely to deal with the circumstance that thoroughly objective states of systems can be signs of each other, undermining any pretended distinction between objects and signs that one might propose to draw on essential grounds.
 
 
From now on, I will reuse the ancient term "gnomon" in a technical sense to refer to the godel numbers or code names of formal objects.  In other words, a gnomon is a godel numbering or enumeration function that maps a domain of objects into a domain of signs, Gno : O -> S.  When the syntactic domain S is contained within the object domain O, then the part of the gnomon that maps S into S, providing names for signs and expressions, is usually regarded as a "quoting function".
 
 
In the pluralistic contexts that go with pragmatic theories of signs, it is no longer entirely appropriate to refer to "the" gnomon of any object.  At any moment of discussion, I can only have so and so's gnomon or code word for each thing under the sun.  Thus, apparent references to a uniquely determined gnomon only make sense if taken as enthymemic invocations of the ordinary context and of all that is comprehended to be implied in it, promising to convert tacit common sense into definite articulations of what is understood.  Actually achieving this requires each elliptic reference to the gnomon to be explicitly grounded in the context of informal discussion, interpreted with respect to the conventional basis of understanding assumed in it, and relayed to the indexing function taken for granted by all parties to it.
 
 
In computational terms, this brand of pluralism means that neither the gnomon nor the quoting function that forms a part of it can be viewed as well defined unless it is indexed, explicitly or implicitly, by the name of a particular interpreter.  I will use the notations "Gno_i(x)" = "<x, i>" to indicate the gnomon of the object x with respect to the interpreter i.  The value Gno_i(x) = <x, i> C S is the "nominal sign in use" or the "name in use" (NIU) of the object x with respect to the interpreter i, and thus it constitutes a component of i's state.
 
 
In the special case where x is a sign or expression in the syntactic domain, then Gno_i(x) = <x, i> is tantamount to the quotation of x by and for the use of the ith interpreter, in short, the nominal sign to i that makes x an object for i.  For signs and expressions, it is usually only the quoting function that makes them objects.  But nothing is an object in any sense for an interpreter unless it is an object of a sign relation for that interpreter.  Therefore, ...
 
 
If it is now asked what measure of invariant understanding can be enjoyed by diverse parties of interpretive agents, then the discussion has come upon an issue with a familiar echo in mathematical analysis.  The organization of many local coordinate frames into systems capable of supporting communicative references to relatively "objective" objects is usually handled by means of the concept of a "manifold".  Therefore, the analogous task that is suggested for this project is to arrive at a workable definition of "sign relational manifolds".
 
 
The discrete nature of the A and B dialogue renders moot the larger share of issues of interest in continuous and differentiable manifolds.  However, it is still possible to get things moving in this direction by looking at simple structural analogies that connect the pragmatic theory of sign relations with the basic notions of analysis on manifolds.
 
</pre>
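
The draft above leaves the indexing of names to interpreters informal.  A minimal sketch of the idea, under the assumption that the pair <x, i> itself can serve as the code, might run as follows; the function and argument names are hypothetical.

<pre>
def gnomon(x, i):
    """The name in use (NIU) that interpreter i has for the object x,
    here realized simply as the pair (x, i)."""
    return (x, i)

niu = gnomon("A", "interp_1")          # interp_1's code for the object "A"
quoted = gnomon(niu, "interp_2")       # interp_2's code for interp_1's code
print(niu, quoted)                     # ('A', 'interp_1') (('A', 'interp_1'), 'interp_2')
</pre>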
 
 
===6.48. Discourse Analysis : Ways and Means===
 
 
<pre>
 
Before the discussion of the A and B dialogue can proceed to richer veins of semantic structure it will be necessary to extract the relevant traces of embedded sign relations from their environments of informally interpreted syntax.
 
 
On the substantive front, sign relations serving as raw materials of discourse need to be refined and their content assayed, but first their identifying signatures must be sounded out, carved out, and lifted from their embroiling inclusions in the dense strata of obscure intuitions that sediment ordinary discussion.  On the instrumental front, sign relations serving as primitive tools of discourse analysis need to be identified and improved by a deliberate examination of their designs and purposes.
 
 
So far, the models and methods made available to formal treatment were borrowed outright, with little hesitation and less recognition, from the context of casual discussion.  Thus, these materials and mechanisms have come to the threshold of critical reflection already in play, devoid of concern for the presuppositions and consequences associated with their use, and only belatedly turned to the effortful work and odious formalities of self conscious exposition.
 
 
To reflect on the properties of complex and higher order sign relations with any degree of clarity it is necessary to arrange a clearer field of investigation and a less cluttered staging area for analytic work than is commonly provided.  Habitual processes of interpretation that typically operate as automatic routines and uncritical defaults in the informal context of discussion have to be selectively inhibited, slowed down, and critically examined as objective possibilities, instead of being taken for granted as absolute necessities.
 
 
In other words, an apparatus for critical reflection does not merely add more mirrors to the kaleidoscopic fun house of interpretive discourse, but it provides transient moments of equanimity, or balanced neutrality, and a moderately detached perspective on alternative points of view.  A scope so limited does not by any means grant a God's Eye View (GEV), but permits a sufficient quantity of light to consider how the original array of sights and reflections might have been created otherwise.
 
 
Ordinarily, the extra degree of attention to syntax that is needed for critical reflection on interpretive processes is called into play by means of syntactic operators and diacritical devices acting at the level of individual signs and elementary expressions.  For example, quotation marks are used to force one type of "semantic ascent", causing signs to be treated as objects and marking points of interpretive shift as they occur in the syntactic medium.  But these operators and devices must be symbolized, and these symbols must be interpreted.  Consequently, there is no way to avoid the invocation of a cohering interpretive framework, one that needs to be specialized for analytic purposes.
 
 
The best way to achieve the desired type of reflective capacity is by attaching a parameter to the IF used as an instrument of formal study, specifying certain choices or interpretive presumptions that affect the entire context of discussion.  The aesthetic distance needed to arrive at a formal perspective on sign relations is maintained, not by jury rigging ordinary discussion with locally effective syntactic devices, but by asking the reader to consider certain dimensions of parametric variation in the global IFs used to comprehend the sign relations under study.
 
 
The interpretive parameter of paramount importance to this work is one that is critical to reflection.  It can be presented as a choice between two alternative conventions, affecting the way one reflexively regards each sign in a text:  (1) as a sign provoking interest only in passing, exchanged for the sake of a meaningful object it is always taken for granted to have, or (2) as a sign comprising an interest in and of itself, a state of a system or a modification of a medium that can signify an external value but does not necessarily denote anything else at all.  I will name these options for responding to signs according to the aspects of character that are most appreciated in their net effects, whether signs for the sake of objects, or signs for their own sake, respectively.
 
 
The first option I call the "object convention", recognizing it as the natural default of informal language use.  In the ordinary language context it is the automatic assumption that signs and expressions are intended to denote something external to themselves, and even though it is quite obvious to all interpreters that the medium is filled with the appearances of signs and not with the objects themselves, this fact passes for little more than transitory interest in the rush to cash out tokens for their indicated values.
 
 
The object convention, as appropriate to an introduction that needs to begin in the context of ordinary discussion, is the parametric choice that was left in force throughout the treatment of the A and B example.  Doing things this way is like trying to roller skate in a buffalo herd, that is, it attempts to formalize a fragment of discussion on a patchwork of local scales without interrupting the automatic routines and default assumptions that prevail on a global basis in the informal context.  Ultimately, one cannot avoid stumbling over the hoofprints ("...") of overly cited and opaquely enthymemic textual deposits.
 
 
The second option I call the "sign convention", observing it to be the treatment of choice in programming and formal language studies.  In the formal language context it is necessary to consider the possibility that not all signs and expressions are assured to denote or even connote much of anything at all.  This danger is amplified in computational frameworks where it resonates with a related theme, that not all programs are guaranteed to terminate normally with a definite result.  In order to deal with these eventualities, a more cautious approach to sign relations is demanded to cover the risk of generating nonsense, in other words, to guard against degenerate forms of sign relations that fail to serve any significant purpose in communication or inquiry.
 
 
Whenever a greater degree of care is required, it becomes necessary to replace the object convention with the sign convention, which presumes to take for granted only what can be obvious to all observers, namely, the phenomenal appearances and temporal occurrences of objectified states of systems.  To be sure, these modulations of media are still presented as signs, but only potentially as signs of other things.  It goes with the territory of the formal language context to constantly check the inveterate impulses of the literate mind, to reflect on its automatic reflex toward meaning, to inhibit its uncontrolled operation, and to pause long enough in the rush to judgment to question whether its constant presumption of a motive is itself innocent.
 
 
In order to deal with these issues of discourse analysis in an explicit way, it is necessary to have in place a technical notation for marking the very kinds of interpretive assumptions that normally go unmarked.  Thus, I will describe a set of devices for annotating certain kinds of interpretive contingencies, called the "discourse analysis frames" (DAFs) or the "global interpretive frames" (GIFs), that can be operative at any given moment in a particular context of discussion.
 
 
To mark a context of discussion where a particular set J of interpretive conventions is being maintained, I use labeled brackets of the following two forms:  "unitary", as "{J| ... |J}", or "divided", as "{J| ... | ... |J}".  The unitary form encloses a context of discussion by delimiting a range of text whose reading is subject to the interpretive constraints J.  The divided form specifies the objects, signs, and interpretive information in accord with which a species of discussion is generated.  Labeled brackets enclosing contexts can be nested in their scopes, with interpretive data on each outer envelope applying to every inclusion.  Labeled brackets arranging the "conversation pieces" or the "generators and relations" of a topic can lead to discussions that spill outside their frames, and thus are permitted to constitute overlapping contexts.
 
 
For the present, I will consider two types of interpretive parameters to be used as indices of labeled brackets.
 
 
1. Names of interpreters or other references to context can be used to indicate the provenance of the objects and signs that make up the assorted contents of brackets.  On occasion, I will use the first person singular pronoun to signify the immediate context of informal discussion, as in "{I| ... |I}", but more often than not this context goes unmarked.
 
 
2. Two other modifiers can be used to toggle between the options of the object convention, more common in casual or ordinary contexts, and the sign convention, more useful in formal or sign theoretic contexts.
 
 
a. The brackets "{o| ... |o}" mark a context of informal language use or ordinary discussion, where the object convention applies.  To specify the elements of a sign relation under these conditions, I use a form of presentation like the following:
 
 
{o|  A,  B  |||  "A", "B", "i", "u"  |o}.
 
 
Here, the names of objects are placed on the left side and the names of signs on the right side of the central divide, and the outer brackets stipulate that the object convention is in force throughout the discussion of a sign relation that is generated on these elements.
 
 
b. The brackets "{s| ... |s}" mark a context of formal language use or controlled discussion, where the sign convention applies.  To specify the elements of a sign relation in this case, I use a form like:
 
 
{s|  [A], [B]  |||  A,  B,  i,  u  |s}.
 
 
Again, expressions for objects are placed on the left and expressions of signs on the right, but formal language conventions are now invoked to let the alphabet letters and the lexical items of a formal vocabulary stand for themselves, and denotation brackets "[]" are placed around signs to indicate the corresponding objects, when they exist.
 
 
When the information carried by labeled brackets becomes more involved and more extensive, a set of convenient abbreviations and suggestions for "pretty printing" can be followed.  When the bracket labels become too long to bother repeating, I will leave the last label blank or use ditto marks, as with {a, b, c| ... |"}.  When it is necessary to break labeled brackets over several lines, multiple dividers "|" and dittos """ can be used to fill out corresponding columns, as in the following text.
 
 
{I, o| A ,  B
 
|||||| "A", "B", "i", "u"
 
|""""}
 
 
A notation for discourse analysis ought to find a crucial test of its usefulness in whether it can help to disclose structural properties of interpretive frameworks that would otherwise escape the attention due.  If the dimensions of interpretive choice that are represented by these devices are to serve a useful function, then ...
 
 
Although these devices for discourse analysis are bound to seem a bit ad hoc at this point, they have been designed with a sign relational bootstrap in mind, that is, with a view to being formalized and recognized as a species within the domain of sign relations itself, where this is the very domain that is laid out as their field of application.
 
 
One note of caution may help to prevent a common misunderstanding.  It is futile to imagine that any system of interpretive markers for discourse can become totally self sufficient, like the Worm Uroboros, determining all aspects of interpretation and eliminating all ambiguity.  The ultimate appeal of signs, and signs upon signs, is always to an intelligent interpreter, a reader who knows there are more interpretive choices to make than could ever be surrendered to signs, and whose free responsibility to appropriate interpretations cannot be abdicated to any text or abridged by any gloss on it, no matter how fit or finished.
 
 
In a sense, at least at first, nothing is being created that could not have been noticed without signs.  It is merely that actions are being articulated that were not articulated before, and hopefully in ways that make transient insights easier to remember and reuse on new occasions.  Instead, the requirement here is to devise a language, the marks of which can reflect the ambient light of observation on its own process.  It is not unusual to succeed at this in artificial environments crafted especially for the purpose, but to achieve the critical angle in vivo, in the living context of a natural language, takes more art.
 
</pre>
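
One way to make the bracket notation above operational is to treat each context of discussion as a small data structure carrying its convention flag and its generators.  The sketch below is only an illustration of that reading; the class and field names are not part of the draft's notation.

<pre>
from dataclasses import dataclass, field
from typing import List

@dataclass
class Frame:
    """A discourse analysis frame: the convention in force ('o' for the object
    convention, 's' for the sign convention) and the elements offered as
    objects and as signs."""
    convention: str
    objects: List[str] = field(default_factory=list)
    signs: List[str] = field(default_factory=list)

    def render(self) -> str:
        tag = self.convention
        return "{%s| %s ||| %s |%s}" % (
            tag, ", ".join(self.objects), ", ".join(self.signs), tag)

print(Frame("o", ["A", "B"], ['"A"', '"B"', '"i"', '"u"']).render())
# {o| A, B ||| "A", "B", "i", "u" |o}
</pre>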
 
 
===6.49. Combinations of Sign Relations===
 
 
<pre>
 
At a point like this in the development of a formal subject matter, it is customary to introduce elements of a logical calculus that can be used to describe relevant aspects of the formal structures involved and to expedite reasoning about their manifold combinations and decompositions.  I will hold off from doing this for sign relations in any formal way at present.  Instead, I consider the informal requirements and the foreseeable ends that a suitable calculus for sign relations might be expected to meet, and I present as tentative alternatives a few different ways of proceeding to formalize these intentions.
 
 
The first order of business for the "comparative anatomy" and the "developmental biology" of sign relations is to undertake a pair of closely related tasks:  (1) to examine the structural articulation of highly complex sign relations in terms of the primitive constituents that are found available, and (2) to explain the functional genesis of formal (that is, reflectively considered and critically regarded) sign relations as they naturally arise within the informal context of representational and communicational activities.
 
 
Converting to a political metaphor, how does the "republic" constituted by a sign relation — the representational community of agents invested with a congeries of legislative, executive, and interpretive powers, employing a consensual body of conventional languages, encompassing a commonwealth of comprehensible meanings, diversely but flexibly manifested in the practical administration of abiding and shared representations — how does all of this first come into being?
 
 
... and their development from primitive/ rudimentary to highly structured ...
 
 
The grasp of the discussion between A and B that is represented in the separate sign relations given for them can best be described as fragmentary.  It fails to capture what everyone knows A and B would know about each other's language use.
 
 
How can the fragmentary system of interpretation (SOI) constituted by the juxtaposition of individual sign relations A and B be combined or developed into a new SOI that represents what agents like A and B are sure to know about each other's language use?  In order to make it clear that this is a non trivial question, and in the process to illustrate different ways of combining sign relations, I begin by considering a couple of obvious suggestions for their integration that immediate reflection will show to miss the mark.
 
 
The first thing to try is the set theoretic union of the sign relations.  This commonly leads to a "confused" or "confounded" combination of the component sign relations.  For example, the sign relation defined as C = A U B is shown in Table&nbsp;86.  Interpreted as a transition digraph on the four points of the syntactic domain S = {"A", "B", "i", "u"}, the sign relation C specifies the following behavior for the conduct of its interpreter:
 
 
1. AC has a sling at each point of {"A", "i", "u"} and two way arcs on the pairs {"A", "i"} and {"A", "u"}.
 
 
2. BC has a sling at each point of {"B", "i", "u"} and two way arcs on the pairs {"B", "i"} and {"B", "u"}.
 
 
These sub-relations do not form equivalence relations on the relevant sets of signs.  If closed up under transitive compositions, then {"A", "i", "u"} are all equivalent in the presence of object A, but {"B", "i", "u"} are all equivalent in the presence of object B.  This may accurately represent certain types of political thinking, but it does not constitute the kind of sign relation that is wanted here.
 
 
Reflecting on this disappointing experience with using simple unions to combine sign relations, it appears that some type of indexed union or categorical co product might be demanded.  Table&nbsp;87 presents the results of taking the disjoint union D = A U B to constitute a new sign relation.
 
</pre>
 
 
<br>
 
 
<pre>
 
Table 86.  Confounded Sign Relation C
 
Object Sign Interpretant
 
A "A" "A"
 
A "A" "i"
 
A "A" "u"
 
A "i" "A"
 
A "i" "i"
 
A "u" "A"
 
A "u" "u"
 
B "B" "B"
 
B "B" "i"
 
B "B" "u"
 
B "i" "B"
 
B "i" "i"
 
B "u" "B"
 
B "u" "u"
 
</pre>
 
 
<br>
 
 
<pre>
 
Table 87.  Disjointed Sign Relation D
 
Object Sign Interpretant
 
AA "A"A "A"A
 
AA "A"A "i"A
 
AA "i"A "A"A
 
AA "i"A "i"A
 
AB "A"B "A"B
 
AB "A"B "u"B
 
AB "u"B "A"B
 
AB "u"B "u"B
 
BA "B"A "B"A
 
BA "B"A "u"A
 
BA "u"A "B"A
 
BA "u"A "u"A
 
BB "B"B "B"B
 
BB "B"B "i"B
 
BB "i"B "B"B
 
BB "i"B "i"B
 
</pre>
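
The contrast between Tables 86 and 87 can be reproduced directly in code.  The sketch below reconstructs L(A) and L(B) as sets of (object, sign, interpretant) triples (the split of Table 86 into the two summands follows the draft's description and should be read as an assumption), then forms both the plain union and the tagged, disjoint union.

<pre>
L_A = {("A", '"A"', '"A"'), ("A", '"A"', '"i"'), ("A", '"i"', '"A"'), ("A", '"i"', '"i"'),
       ("B", '"B"', '"B"'), ("B", '"B"', '"u"'), ("B", '"u"', '"B"'), ("B", '"u"', '"u"')}
L_B = {("A", '"A"', '"A"'), ("A", '"A"', '"u"'), ("A", '"u"', '"A"'), ("A", '"u"', '"u"'),
       ("B", '"B"', '"B"'), ("B", '"B"', '"i"'), ("B", '"i"', '"B"'), ("B", '"i"', '"i"')}

C = L_A | L_B                       # confounded union, as in Table 86
D = {((o, t), (s, t), (i, t))       # disjoint union, every element tagged by its source,
     for t, L in (("A", L_A), ("B", L_B))
     for (o, s, i) in L}            # as in Table 87

print(len(C), len(D))               # 14 16
print({o for (o, s, i) in C if s == '"i"'})   # the sign "i" denotes both objects in C
</pre>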
 
  
 
===6.50. Revisiting the Source===
</div>

----

[[Category:Artificial Intelligence]]



6. Reflective Interpretive Frameworks

We continue the discussion of formalization in terms of concrete examples and detail the construction of a reflective interpretive framework (RIF). This is a special type of sign-theoretic setting, illustrated in the present case by building on the sign relations \(L(\text{A})\!\) and \(L(\text{B}),\!\) but intended more generally to form a fully-developed environment of objective and interpretive resources, in the likes of which an “inquiry into inquiry” can reasonably be expected to find its home. We begin by presenting an outline of the developments ahead, working through the motivation, construction, and application of a RIF that is broad enough to mediate the dialogue of the interpreters \(\text{A}\!\) and \(\text{B}.\!\) The first fifteen Sections (§§ 1–15) deal with a selection of preliminary topics and techniques that are involved in approaching the construction of a RIF. The topics of these sections are described in greater detail below.

The first section (§ 1) takes up the phenomenology of reflection. The next three sections (§§ 2–4) are allotted to surveying the site of the planned construction, presenting it from three different points of view. An introductory discussion (§ 2) presents the main ideas that lead up to the genesis of a RIF. These ideas are treated at first acquaintance in an informal manner, located within a broader cultural context, and put in relation to the ways that intelligent agents can come to develop characteristic belief systems and communal perspectives on the world. The next section (§ 3) points out a specialized mechanism that serves to make inobvious types of observation of a reflective character. The last section (§ 4) takes steps to formalize the concepts of a point of view (POV) and a point of development (POD). These ideas characterize the outlooks, perspectives, world views, and other systems of belief, knowledge, or opinion that are employed by agents of inquiry, with especial regard to the ways that these outlooks develop over time.

A further discussion (§ 5), in preparation for the task of reflection, identifies three styles of linguistic usage that deploy increasing grades of formalization in their approaches to any given subject matter.

In the next three sections (§§ 6–8), the features that distinguish each style of usage are taken up individually and elaborated in detail. This is done by presenting the basic ideas of three theoretical subjects that develop under the corresponding points of view and that exemplify their respective ideals. The next three sections (§§ 9–11) take up the classes of higher order sign relations that play an important role in reflexive inquiries and then apply the battery of concepts arising with higher order sign relations to an example that anticipates many features of a realistic interpreter. In the light of the experience gained with the foregoing styles and subjects, the next three sections (§§ 12–14) are able to take up important issues regarding the status of theoretical entities that are needed in this work.

Finally (§ 15), the relevance of these styles, subjects, and issues is made concrete by bringing their various considerations to bear on a single example of a formal system that serves to integrate their concerns, namely, propositional calculus.

A point by point outline follows:

§ 1.   An approach to the phenomenology of reflective experience, as it bears on the conduct of reflective activity, is given its first explicit discussion.

§ 2.   The main ideas leading up to the development of a RIF are presented, starting from the bare necessity of applying inquiry to itself. I introduce the idea of a point of view (POV) in an informal way, as it arises from natural considerations about the relationship of an immanent system of interpretation (SOI) to a generated text of inquiry (TOI). In this connection, I pursue the idea of a point of development (POD), that captures a POV at a particular moment of its own proper time.

§ 3.   A Projective POV

§ 4.   The idea of a POV, as manifested from moment to moment in a series of PODs, is taken up in greater detail.

A formalization for talking about a diversity of POVs and their development through time is introduced and its consequences explored. Finally, this formalization is applied to an issue of pressing concern for the present project, namely, the status of the distinction between dynamic and symbolic aspects of intelligent systems.

§ 5.   The symbolic forms employed in the construction of a RIF are found at the nexus of several different interpretive influences. This section picks out three distinctive styles of usage that this work needs to draw on throughout its progress, usually without explicit notice, and discusses their relationships to each other in general terms. These three styles of usage, distinguished according to whether they encourage an ordinary language (OL), a formal language (FL), or a computational language (CL) approach, have their relevant properties illustrated in the next three sections (§§ 6–8), each style being exemplified by a theoretical subject that thrives under its guidance.

§ 6.   For ease of reference, the basic ideas of group theory used in this project are separated out and presented in this section. Throughout this work as a whole, the subject of group theory serves in both illustrative and instrumental roles, providing, besides a rough stock of exemplary materials to work on, a ready array of precision tools to work with.

Group theory, as a methodological subject, is used to illustrate the mathematical language (ML) approach, which ordinarily takes it for granted that signs denote something, if not always the objects intended. It is therefore recognizable as a special case of the OL style of usage.

To the basic assumption of the OL approach the ML style adds only the faith that every object one desires to name has a unique proper name to do it with, and thus that all the various expressions for an object can be traded duty free and without much ado for a suitably compact name to denote it. This means that the otherwise considerable work of practical computation, that is needed to associate arbitrarily obscure expressions with their clearest possible representatives, is not taken seriously as a feature that deserves theoretical attention, and is thus ignored as a factor of theoretical concern. This is appropriate to the mathematical level, which abstracts away from pragmatic factors and is intended precisely to do so.

More instrumentally to the aims of this investigation, and not entirely accidentally, group theory is one of the most adaptable of mathematical tools that can be used to understand the relation between general forms and particular instantiations, in other words, the relationship between abstract commonalities and their concrete diversities.

§ 7.   The basic notions of formal language theory are presented. Not surprisingly, formal language theory is used to illustrate the FL style of usage. Instrumentally, it is one of the most powerful tools available to clear away both the understandable confusions and the unjustifiable presuppositions of informal discourse.

§ 8.   The notion of computation that makes sense in this setting is one of a process that replaces an arbitrary sign with a better sign of the same object. In other words, computation is an interpretive process that improves the indications of intentions. To deal with computational processes it is necessary to extend the pragmatic theory of signs in a couple of new but coordinated directions. To the basic conception of a sign relation is added a notion of progress, which implies a notion of process together with a notion of quality.

§ 9.   This section introduces higher order sign relations, which are used to formalize the process of reflection on interpretation. The discussion is approaching a point where multiple levels of signs are becoming necessary, mainly for referring to previous levels of signs as the objects of an extended sign relation, and thereby enabling a process of reflection on interpretive conduct. To begin dealing with this issue, I take advantage of a second look at \(\text{A}\!\) and \(\text{B}\!\) to introduce the use of raised angle brackets \(({}^{\langle}~{}^{\rangle}),\) also called supercilia or arches, as quotation marks. Ordinary quotation marks \(({}^{\backprime\backprime}~{}^{\prime\prime})\) have the disadvantage, for formal purposes, of being used informally for many different tasks. To get around this obstacle, I use the arch operator to formalize one specific function of quotation marks in a computational context, namely, to create distinctive names for syntactic expressions, or what amounts to the same thing, to signify the generation of their Gödel numbers.
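
A rough computational gloss on this use of the arch may help fix the idea. The sketch below is only illustrative: the function name arch, the registry, and the naming scheme it prints are inventions of this gloss, standing in for whatever device generates distinctive names or Gödel numbers for expressions.

<pre>
# Illustrative only: a stand-in for the arch operator, which assigns each
# syntactic expression a distinctive higher order sign that names it.
# The registry and the naming scheme below are hypothetical devices.

_registry = {}

def arch(expression):
    """Return a distinctive name for the given expression,
    a stand-in for the generation of its Goedel number."""
    number = _registry.setdefault(expression, len(_registry))
    return "⟨" + expression + "⟩_" + str(number)

print(arch('"A"'))   # the same expression always gets the same name
print(arch('"u"'))   # distinct expressions get distinct names
print(arch('"A"'))   # repeats the first name
</pre>

All that is asked of such an operator here is that it pick out each expression by a name of its own, so that a higher order sign reliably refers to the lower order sign it quotes.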

§ 10.   Returning to the sign relations \(L(\text{A})\!\) and \(L(\text{B}),\!\) various kinds of higher order signs are exemplified by considering a series of higher order sign relations based on these two examples.

§ 11.   In this section the tools that come with the theory of higher order sign relations are applied to an illustrative exercise, roughing out the shape of a complex form of interpreter.

The next three sections (§§ 12–14) discuss how the identified styles of usage bear on three important issues in the use of a technical language, namely, the respective theoretical statuses of signs, sets, and variables.

§ 12.   The Status of Signs

§ 13.   The Status of Sets

§ 14.   At this point the discussion touches on a topic, concerning the being of a so-called variable, that issues in many unanswered questions. Although this worry over the nature and use of a variable may seem like a trivial matter, it is not. It needs to be remembered that the first adequate accounts of formal computation, Schönfinkel's combinator calculus and Church's lambda calculus, both developed out of programmes intended to clarify the concept of a variable, indeed, even to the point of eliminating it altogether as a primitive notion from the basis of mathematical logic (van Heijenoort, 355–366).

The pragmatic theory of sign relations has a part of its purpose in addressing these same questions about the natural utility of variables, and even though its application to computation has not enjoyed the same level of development as these other models, it promises in good time to have a broader scope. Later, I will illustrate its potential by examining a form of the combinator calculus from a sign relational point of view.

§ 15.   There is an order of logical reasoning that is typically described as propositional or sentential and represented in a type of formal system that is commonly known as a propositional calculus or a sentential logic (SL). Any one of these calculi forms an interesting example of a formal language, one that can be used to illustrate all of the preceding issues of style and technique, but one that can also serve this inquiry in a more instrumental fashion. This section presents the elements of a calculus for propositional logic that I described in earlier work (Awbrey, 1989 and 1994). The imminent use of this calculus is to construct and analyze logical representations of sign relations, and the treatment here focuses on the concepts and notation that are most relevant to this task.

The next four sections (§§ 16–19) treat the theme of self-reference that is invoked in the overture to a RIF. To inspire confidence in the feasibility and the utility of well chosen reflective constructions and to allay a suspicion of self-reference in general, it is useful to survey the varieties of self-reference that arise in this work and to distinguish the forms of circular referrals that are likely to vitiate consistent reasoning from those that are relatively innocuous and even beneficial.

§ 16.   Recursive Aspects

§ 17.   Patterns of Self-Reference

§ 18.   Practical Intuitions

§ 19.   Examples of Self-Reference

The intertwined themes of logic and time will occupy center stage for the next eight sections (§§ 20–27).

§ 20.   First, I discuss three distinct ways that the word system is used in this work, reflecting the variety of approaches, aspects, or perspectives that present themselves in dealing with what are often the same underlying objects in reality.

§ 21.   There is a general set of situations where the task arises to “build a bridge” between significantly different types of representation. In these situations, the problem is to translate between the signs and expressions of two formal systems that have radically different levels of interpretation, and to do it in a way that makes appropriate connections between diverse descriptions of the same objects. More to the point of the present project, formal systems for mediating inquiry, if they are intended to remain viable in both empirical and theoretical uses, need the capacity to negotiate between an extensional representation (ER) and an intensional representation (IR) of the same domain of objects. It turns out that a cardinal or pivotal issue in this connection is how to convert between ERs and IRs of the same objective domain, working all the while within the practical constraints of a computational medium and preserving the equivalence of information. To illustrate the kinds of technical issues that are involved in these considerations, I bring them to bear on the topic of representing sign relations and their dyadic projections in various forms.
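
To make the intended contrast concrete, the following sketch (a toy under stated assumptions, not the representations developed in §§ 22–25) treats an ER as a finite set of triples and an IR as a Boolean-valued proposition on the same space, and converts between them over a small finite universe without loss of information.

<pre>
# A minimal sketch: ER = a finite set of (object, sign, interpretant) triples;
# IR = a predicate true of exactly those triples.  The particular universe and
# the tiny relation below are chosen only for illustration.
from itertools import product

objects = ["A", "B"]
signs = interpretants = ['"A"', '"B"', '"i"', '"u"']
universe = set(product(objects, signs, interpretants))

ER = {("A", '"A"', '"A"'), ("A", '"A"', '"i"')}   # a toy fragment

def er_to_ir(er):
    """From a set of triples to a proposition on the space of triples."""
    return lambda o, s, i: (o, s, i) in er

def ir_to_er(ir, universe):
    """From a proposition back to a set of triples, by sweeping a finite universe."""
    return {t for t in universe if ir(*t)}

IR = er_to_ir(ER)
assert ir_to_er(IR, universe) == ER   # the round trip preserves the information
</pre>

The computational issue raised in the text begins where this sketch leaves off, namely, when the universe is too large to sweep and the IR has to do analytic work of its own.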

The next four sections (§§ 22–25) give examples of ERs and IRs, indicate the importance of forming a computational bridge between them, and discuss the conceptual and technical obstacles that will have to be faced in doing so.

§ 22.   For ease of reference, this section collects previous materials that are relevant to discussing the ERs of the sign relations \(L(\text{A})\!\) and \(L(\text{B}),\!\) and explicitly details their dyadic projections.

§ 23.   This section discusses a number of general issues that are associated with the IRs of sign relations. Because of the great degree of freedom there is in selecting among the potentially relevant properties of any real object, especially when the context of relevance to the selection is not known in advance, there are many different ways, perhaps an indefinite multitude of ways, to represent the sign relations \(L(\text{A})\!\) and \(L(\text{B})\!\) in terms of salient properties of their elementary constituents. In this connection, the next two sections explore a representative sample of these possibilities, and illustrate several different styles of approach that can be used in their presentation.

§ 24.   A transitional case between ERs and IRs of sign relations is found in the concept of a literal intensional representation (LIR).

§ 25.   A fully fledged IR is one that accomplishes some measure of analytic work, bringing to the point of salient notice a selected array of implicit and otherwise hidden features of its object. This section presents a variety of these analytic intensional representations (AIRs) for the sign relations \(L(\text{A})\!\) and \(L(\text{B}).\!\)

Note for future reference. The problem so naturally encountered here, due to the embarrassment of riches that presents itself in choosing a suitable IR, and tracing its origin to the wealth of properties that any real object typically has, is a precursor to one of the deepest issues in the pragmatic theory of inquiry: the problem of abductive reasoning. This topic will be discussed at several later stages of this investigation, where it typically involves the problem of choosing, among the manifold aspects of an objective phenomenon or a problematic objective, only the features that are: (1) relevant to explaining a present fact, or (2) pertinent to achieving a current purpose.

§ 26.   Differential Logic and Directed Graphs

§ 27.   Differential Logic and Group Operations

§ 28.   The Bridge : From Obstruction to Opportunity

§ 29.   Projects of Representation

§ 30.   Connected, Integrated, Reflective Symbols

The next seven sections (§§ 31–37) are designed to motivate the idea that a language as simple as propositional calculus can be used to articulate significant properties of \(n\!\)-place relations. The course of the discussion will proceed as follows:

§ 31.   First, I introduce concepts and notation designed to expand and generalize the orders of relations that are available to be discussed in an adequate fashion.

§ 32.   Second, I elaborate a particular mode of abstraction, that is, a systematic strategy for generalizing the collections of formal objects that are initially given to discussion. This dimension of abstraction or direction of generalization will be described under the thematic heading of partiality.

§ 33.   Third, I present an alternative approach to the issue of defective, degenerate, or fragmentary \(n\!\)-place relations, proceeding by way of generalized objects known as \(n\!\)-place relational complexes. Illustrating these ideas with respect to their bearing on sign relations, the discussion arrives at a notion of sign-relational complexes, or sign complexes.

In the next three sections (§§ 34–36) I consider a collection of identification tasks for \(n\!\)-place relations. Of particular interest is the extent to which the determination of an \(n\!\)-place relation is constrained by a particular type of data, namely, by the specification of lower arity relations that occur as its projections. This topic is often treated as a question about a relation's reducibility or irreducibility with respect to its projections. For instance, if the identity of an \(n\!\)-place relation \(L\!\) is completely determined by the data of its \(k\!\)-place projections, then \(L\!\) is said to be identifiable by, reducible to, or reconstructible from its \(k\!\)-place components, otherwise \(L\!\) is said to be irreducible with respect to its \(k\!\)-place projections.

§ 34.   First, I consider a number of set-theoretic operations that can be utilized in discussing these identification, reducibility, or reconstruction questions. Once a level of general discussion has been surveyed enough to make a start, these tools can be specialized and applied to concrete examples in the realm of sign relations and also applied in the neighborhood of closely associated triadic relations.

§ 35.   This section considers the positive case of reducibility, presenting examples of triadic relations that can be reconstructed from their dyadic projections. In fact, it happens that the sign relations \(L(\text{A})\!\) and \(L(\text{B})\!\) fall into this category of dyadically reducible triadic relations.

§ 36.   This section considers the negative case of reducibility, presenting examples of irreducibly triadic relations, or triadic relations that cannot be reconstructed from their lower dimensional projections or faces.
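
Before moving on, a small sketch may help make the reducibility question of the last few sections computable for small examples. It assumes only the definitions above: the dyadic projections of a triadic relation, and a test of whether the largest relation compatible with those projections coincides with the original. The particular reconstruction operator used here is an illustrative choice, not necessarily the one developed in the sections that follow.

<pre>
# A small computable rendering of the reducibility question for a triadic
# relation L contained in O x S x I, using its three dyadic projections.

def projections(L):
    p_os = {(o, s) for (o, s, i) in L}
    p_oi = {(o, i) for (o, s, i) in L}
    p_si = {(s, i) for (o, s, i) in L}
    return p_os, p_oi, p_si

def reconstruct(p_os, p_oi, p_si):
    """The largest triadic relation compatible with the given projections:
    all triples each of whose dyadic faces appears in the projection data."""
    return {(o, s, i)
            for (o, s) in p_os
            for (o2, i) in p_oi if o2 == o
            if (s, i) in p_si}

def dyadically_reducible(L):
    return reconstruct(*projections(L)) == L

# The sign relation L(A) from the text falls in the reducible category.
L_A = {
    ("A", '"A"', '"A"'), ("A", '"A"', '"i"'),
    ("A", '"i"', '"A"'), ("A", '"i"', '"i"'),
    ("B", '"B"', '"B"'), ("B", '"B"', '"u"'),
    ("B", '"u"', '"B"'), ("B", '"u"', '"u"'),
}
print(dyadically_reducible(L_A))   # True
</pre>

An irreducible example is one for which the reconstructed relation strictly contains the original, so that the dyadic data underdetermine it.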

§ 37.   Finally, the discussion culminates in an exposition of the so-called propositions as types (PAT) analogy, outlining a formal system of type expressions or type formulas that bears a strong resemblance to propositional calculus. Properly interpreted, the resulting calculus of propositional types (COPT) can be used as a language for talking about well-formed types of \(k\!\)-place relations.
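
For readers who want a concrete handle on the analogy before the formal exposition arrives, here is a rough rendering in Python's typing notation. It is only a gesture at the general propositions-as-types idea; the calculus of propositional types described in the text is its own system, and nothing below is drawn from it.

<pre>
# Illustrative only: conjunction read as a product type, implication read as a
# function type, and a term of a type read as a proof of the corresponding
# proposition.
from typing import Callable, Tuple, TypeVar

A = TypeVar("A")
B = TypeVar("B")

def modus_ponens(premise: Tuple[Callable[[A], B], A]) -> B:
    """A term of type ((A -> B) and A) -> B, hence a 'proof' of that proposition."""
    f, a = premise
    return f(a)

def commute(pair: Tuple[A, B]) -> Tuple[B, A]:
    """A term witnessing (A and B) -> (B and A)."""
    a, b = pair
    return (b, a)
</pre>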

§ 38.   Considering the Source

§ 39.   Prospective Indices : Pointers to Future Work

§ 40.   Interlaced with the structural and reflective developments that go into the OF and the IF is a conceptual arrangement called the dynamic evaluative framework (DEF). This utility works to isolate the aspects of process and purpose that are observable on either side of the objective interpretive divide and helps to organize the graded notions of directed change that can be actualized in the RIF.

§ 41.   Elective and Motive Forces

§ 42.   Sign Processes : A Start

§ 43.   Reflective Extensions

§ 44.   Reflections on Closure

§ 45.   Intelligence \(\Rightarrow\) Critical Reflection

§ 46.   Looking Ahead : The “Meta” Issue

§ 47.   Mutually Intelligible Codes

§ 48.   Discourse Analysis : Ways and Means

§ 49.   Combinations of Sign Relations

§ 50.   Revisiting the Source

6.1. The Phenomenology of Reflection

This part of the discussion can fairly be cast as the phenomenology of reflection. It aims to amass the kinds of observations that extremely simple reflective agents, as a matter of principle and with a minimum of preparation, can make on the ebb and flow of their own reflective acts. But this is not the kind of phenomenology that pretends it can bracket every assumption of a sophisticated or a theoretical nature off to one side of the observational picture, or thinks it can frame the description of reflection without the use of formal concepts, such as depend on the bracing and support of a technical language.

On the contrary, the brand of phenomenology being wielded here makes the explicit assumption that there are likely to be an untold number of implicit assumptions that contribute to and conspire in the framing of the picture to be observed, while it is precisely the job of reflective observation to detect the influence of these covertly acting assumptions. Further, this style of phenomenology is deliberately set free of prior constraints on the choice of descriptive devices, since it can appeal to any formal means or any technical language that serves to articulate the description of its subject.

Certain things need to be understood about the aims, the scope, and the self-imposed limits of this phenomenology, especially when it comes to the question of what it hopes to explain. It is not the task of this phenomenology to explain consciousness but only to describe its course. This it does by making an inventory of the “contents” that appear in consciousness and by delineating the relationships that appear among these contents. Along the way, it must take into account, of course, that each moment of taking stock and each moment of charting relations needs to have its resulting list or map, respectively, realized as the content of a particular moment of consciousness.

Already, this lone requirement of the descriptive task raises a host of questions about what it means for something to be counted as a content of consciousness, and it leads, according to my present lights and aims, to a closer examination of a critical relationship, the logical relation “content of”, taken abstractly and in general. Since it does not appear that very extensive lists or very detailed maps can be “wholly realized” as contents within a limited field of consciousness, it is necessary to recognize an extended sense of “realization”, where a list or a map can be “partially” or “effectively” realized in a content of consciousness if and when an indication, pointer, or sign of it is present in awareness.

In particular, this tack suggests that some things, that otherwise loom too large to fit within the frame of immediate awareness, can be treated as contents of consciousness, in the extended sense, if only an effective indication of them is present in awareness. For instance, an effective indication of a larger text is a sign that can be followed to the next, and this to the next, and so on, in a way that incrementally leads to a traversal of the whole. By extension, a list of contents of consciousness or a map of relations among these contents is “effectively realized” in a single content of consciousness if that content effectively points to it, and if the object to which it points has the structure of an object that pointedly reveals itself in time. Given the evidence of the sign and the effective analysis of its object, a manifest of contents can be prized for the sake of the items it enumerates or the estates it maps, with each in due proportion to their values. Both parts of this condition are needed, though, since knowing the name alone of a thing, even if it lends itself to knowing the thing, does not by itself amount to knowing the thing itself.

The concepts with which a theory operates are not all objectivized in the field which that theory thematizes.

In short, my philosophical working hypothesis is concrete reflection, i.e., the cogito as mediated by the whole universe of signs.

Paul Ricoeur, The Conflict of Interpretations, [Ric, 166, 170]

This understanding of the task of phenomenology bears on three features of the approach to consciousness that I am charting here.

  1. It is under the heading of description, especially as qualified by the adjective effective, that the rationale of using mathematical models and the strategy of seeking computational implementations of these models can be found to successively fall.
  2. As a rule, I find it helps to avoid hypostatizing consciousness or self-awareness as statically constituted entities, and to use instead the systematic notions of dynamic agency and developing organization. However, in order to make connections with other approaches to phenomenology I need occasionally to mention concepts and even to make use of language that I would otherwise prefer to avoid.
  3. Finally, it is under the cumulative aims of effective description and systematic dynamics that the utility of sign relations is key. Sign relations are the minimal forms of models that are capable of compassing all that goes on in thinking along with whatever it is that thinking relates to in all the domains that it orients toward. The use of sign relations as models, as mathematical descriptions, and as computational simulations of what appears in reflecting on conduct is especially well suited to including in these models a description of what transpires in the conduct of reflection itself.

The type of phenomenology that is being envisioned here depends on no assured power of introspection but only on a modest power to reflect on conduct and thereby to give it a description. These descriptions, all the better if they are inscribed in external media, can be examined with increasing degrees of detachment and have their consequences projected by deductive means. In time, the mass of descriptions that accumulates with continuing experience and persistent reflection on conduct begins to constitute a de facto model of behavior (MOB). In common regard this prescribed code or catalog of procedure (COP) can range from an empirical standard of comparison, through a provisional regulation, to a tentative ideal for future conduct. However, the status that a MOB or a COP has when it starts out is not as important as its ability to test its prescriptions, along with their deductive and pragmatic implications, against the corpus of continuing observation, reflection, and description.

Reflection and consciousness no longer coincide. …

What emerges from this reflection is a wounded cogito, which posits but does not possess itself, which understands its originary truth only in and by the confession of the inadequation, the illusion, and the lie of existing consciousness.

Paul Ricoeur, The Conflict of Interpretations, [Ric, 172, 173]

It is pertinent at this point to draw a distinction between the power of reflection, that is claimed as a capacity crucial to inquiry, and what is likely to be confused with it, the presumptive power of introspection. “Introspection”, in the sole part of its technical meaning that leads to its being excluded from empirical inquiry, refers to an infallible, and thus incorrigible, power of observation that one is supposed to possess with respect to one's private experiences, matters over which there is imagined to be no higher court of appeal than one's own particular and immediate awareness. But the horizon of experience that is plotted with regard to this static standpoint fails to reckon with the dynamic nature of an ongoing circumstance, that subsequent experience continually rides a circuit around its antecedents and ever constitutes a higher court for every proceeding and every precedent that falls within its jurisdiction.

The distinction that marks reflection and sets it apart from introspection is its own acknowledged fallibility, which involves its ability to be seen as false in subsequent reflections. Naturally, this has an import for the status of reflection in empirical inquiry. Paradoxically, its admission of fallibility is actually a virtue from the standpoint of making reflection useful in science. If reflection on conduct leads to a description that cannot be falsified by any contingency of conduct, then that description is insufficient to specify any particular conduct at all. This means one of several things about the description, either (1) it remains a condition of conduct in general, or (2) it resides as a part of a necessary logic at the bounds of all experience, or (3) it rests in a realm of metaphysics that abides, if anywhere, beyond the bounds of purely human experience and thus absconds altogether from the sphere of empirical inquiry.

In this way the psyche is itself a technique practiced on itself, a technique of disguise and misunderstanding. The soul of this technique is the pursuit of the lost archaic object which is constantly displaced and replaced by substitute, fantastic, illusory, delirious, and idealized objects.

Paul Ricoeur, The Conflict of Interpretations, [Ric, 185]

One of the most difficult problems that arises for the phenomenology of reflection, and one that falls under the heading of “fallibility” in a markedly strong way, is the issue of systematic distortion. Aside from the false idols that are deliberately constructed, there is another host of false images whose generation is so thoroughly systematic that only their lack of consciousness prevents them from being called “deliberate”. All the more naive projects of enlightenment, capitalized or not, are brought down by a failure to recognize this category of human frailty.

If the phenomenology of reflection that is developed and justified from this point on is not to be naive about this brand of fallibility, then it needs to constitute safeguards, a system of checks and balances, if you will, against it. If no method of remediation can permanently arrest the perpetrator of these schemes from generating distractions in perpetuity, then at least one can hope for ways to arraign the forms of fallibility under various recognizable themes, so that their dangers can be avoided in the future. In this vein, it is necessary to institute the study of those more opaque obstructions that limit the medium of investigation and to facilitate the analysis of those more refractory resistances to clear reflection, whose names are legion, but whose characters can be diversely noted under the themes of obstruction, resistance, the shadow, the unconscious, the “dark side of the enlightenment”, or even better yet, the “underbrush of the clearing”.

In the general scheme of things, the forms of distortion that remain peculiar to particular agents of reflection need garner to themselves nothing outside the incidental degrees of interest. The best check to counter this species of distortion, to which the isolated individual is likely to fall prey, is the balance of cultural wisdom that is commonly stored up and invested in the living praxis of a reflective community.

It is only when the incidence of singular distortions is not damped out by the collective incitement of countermeasures, when the aggregation of local distortions is overlooked by the powers of a general reflection, when the flaws in the individual lights and mirrors of the scientific organon are not taken into account and duly compensated in the shape of the social “panopticon”, or when the grinding accumulation and the precipitous mounting up of infinitesimal but significant deviations from accurate reflection are not met with an adequate power of oversight, one that can keep solely the interests of community integrity at heart, that a truly false ideal begins to hold sway over the very perceptions of every specialized agent of reflection.

When these aberrations and astigmatisms develop unchecked, and when the strain to see things clearly reaches the point of breaking all the instruments thereof, then the most circumscribed faults, the distorted reflections of individual hypocrisy, the strange lack of insight and the missing sense of mutual reciprocity that manifest themselves in the most parochial forms of self interest, then all of these defects, and ills, and shocks begin to “pass through” to the collective strata, to be inherited and propagated by the highest levels of social organization, and then a systematic and widespread falsification of the whole conduct of society begins to pervade its view of itself.

On macroscopic scales of organization, with medium sized bodies and bodies of media that extend over considerable distances, with masses of activity that successfully propagate their own forms through vastening expanses of time, the general condition of thoughtfulness cancels out and compensates for all but the most singular of disturbances, namely, those that are peculiar to the microscopic realm of observation. If the matter is regarded on this grander scale, then it is not hard to find a sufficient reason for the stubborn persistence of the cosmic order, and thus the desirable necessity of doing just this is never far from mind. In the case of whole societies, a like reason is often enough to explain their inertia, their resistance, and their overall slowness to change.

If there is felt a need to devise an object explanation, a presumptive source of troubles that is already compact, concrete, and thus confined enough to accuse, apprehend, and hopefully imprison on account of the mass's retarded potential, then resorting to a hypostasis posed in the form of an “archaic object” is a prototypical way of controlling anxiety, and it frequently, if not infallibly, can serve just as well as any other device on which to pin the common blame. This highlights the question: What sort of archaic object would account for the general malaise in a community whose dedication to inquiry has become root-bound?

I wish to apply a determinate philosophical method to a determinate problem, that of the constitution of the symbol, which I described as an expression with a double meaning. I had already applied this method to the symbols of art and the ethics of religion. But the reason behind it is neither in the domains considered nor in the objects which are proper to them. It resides in the overdetermination of the symbol, which cannot be understood outside the dialecticity of the reflection which I propose.

Paul Ricoeur, The Conflict of Interpretations, [Ric, 175]

The archaic object of this global aimlessness, that informs the course of the general drift, that the total condition and the specific culture of inquiry revolve about in their orbits, as if they aim to be constantly accelerated toward it, but never quite manage to resolve their situation toward it, as if they fear to dissolve into it, is very likely nothing more than the whole community of interpretation itself, effectively realized as an object of its own devising.

The community of interpretation, whose currency funds the community of inquiry as a going enterprise within its fold, has sufficient reason to preserve itself in its present form as a valuable object, commodity, or resource. But the dialectical nature of the process that is currently conducted between them keeps the exchange from settling, due in part to the dialectrical charges of the “-ionized” terms that pass for information between them. A term of this charge splits the action from the end and shares it between the parties to an ambiguity, the active and passive objects that together comprise its full denomination. This division of denotation forces interpretation to vacillate between the two extremes of meaning in a vain and eternal effort to rejoin their senses of value to the realm of the rendered and misspent coin, in hopes of regaining the meaning that was mint in their original condition. The stowing away of one portion or the other drives the potential that drives both themselves and all the actions that they are meant to convey toward their designate and their destinate ends, but the unstable equilibrium that is their due, especially when it is permitted to be waged by uncontrolled forms of oppositional attraction, does not permit the dialogue to rest. It continues to remain in doubt and does not fail to renew its ambivalence regarding the maintenance of any fixed form it happens to take, always wondering whether its present form is literally necessary, precisely sufficient, or whether it is but transiently and contingently convenient. Accordingly and otherwise the whirl of dialogue, for all its own reasons, is always in imminent danger of wasting away into the echo of its own narcissism.

The problem arises of how to bring these systematic distortions under systematic control. It helps to stand back a bit from the problem and to cast a somewhat wider net. Accordingly, let the whole category of phenomena that are gathered around this issue be thematized under the family name of an obstruction to inquiry (OTI). This includes as a subordinate genus the panoply of systematic distortions, generated by disingenuous reflections, that can be hypothesized to have their source in protecting the favored assumptions and defending the implicit claims of a particular status quo, no matter whether the implicated propositions are held to be the prerogatives of a privileged POV or whether they are delivered up to indictment as the prejudices of a more widely sanctioned world view. The archetype of this behavior is appropriately addressed under the mythological or the psychological category of narcissism.

It is important to note that the family OTI and the genus narcissus differ in the levels of hypothesis that are involved in their concepts, both in their speculative formation and in their provisional attribution. The presence of an OTI is fairly easy to surmise from its distinguishing traits: the dissipative conduct and the rambling course that affect the inquiry in question. To the degree that the suspicion of its effect and the verification of its force can be assembled from superficial traces, this makes its maintenance supportable on circumstantial evidence alone. In a phrase, one says that the wider hypothesis lies “nearer to nature” than the narrower construction, or that it makes its appearance closer to the purely phenomenal sphere. In contrast, unraveling the precise nature of the obstruction requires a deeper investigation. There is an additional hypothesis involved in guessing the source of the resistance, no matter how prevalent a particular genus of distortion is found and no matter how likely an individual species of explanation is in fact.

Within this wider setting it may be possible to focus more clearly on the species of threats to accurate reflection that need to be clarified here. Already, besides the stigma of stubborn error that hangs over the whole refractory horde, there is a germ of paradox that hides within the very folds of this classification. Namely, it is that the first obstacle one finds to reflection, and hence to every form of reflective inquiry, is a kind of narcissism or self love. It begins naturally enough, ensconced in the not unnatural desire of every form of life to preserve itself in its present form. But the simple desire to remain as is can be diverted into a blinded esteem of the self, one that admires its present condition only as reflected in the array of disingenuous reflections and contrived presentations that make up a fixed, idealized, and very selective image. Finally and strangely enough, this unreflective form of narcissism even comes to prefer the simplistic and beautiful lies to the realistic forms that a veritable mirror would show.

The danger of narcissism, with respect to the prospects of a reflective inquiry, is not in the dynamic attractions and the realistic affections that a person or a society bears toward its truer self, and that in turn inform their respective bearings toward the selves they are meant to be, but in the static character of its attachment to a fixed, idealized, and partial image of that self.

Once again, the quality that distinguishes reflection from introspection, its fallibility, is a trait that sufficiently reflective agents can find reflected in their own conduct of reflection, and needless to say, their conduct in general. This quality of fallibility, thus cognized and thus converted, that is, once its application to oneself is acknowledged and its consequences for one's experience are recognized, becomes a type of self-recognizant character, an internalized trait that leads reflective agents to become more corrigible, more docile, and thus more educatable. This makes it possible for reflective agents to build up their images of reality from scratch materials, to proceed through steps that are always revisable and edifiable, and to leave the finishing of their forms to the work of future editions. In the final analysis, while this mannerism of aesthetic distance and tempered discretion prevents any affection or any impression from becoming too “immediate”, in the strictest sense of that word, it is just this mode of detachment that assures the sensible image of its eventual remediation.

The nature and use of reflection in inquiry, as it currently appears, can be described as follows. Reflection on conduct leads to a description of that conduct, posed in terms of a reflective image. Over an interval of time or an extended period of investigation, these descriptive images are accumulated into exhaustive theories and compiled into compact models of the conduct in question. To be useful in science, or empirical inquiry, these theories and models must be capable of being false with respect to their intentions, amenable to being tested in further experience, and subject to being amended on subsequent reflection.

In sum, the very feature of reflection that seems to be its chief defect, the fact that it can generate false images, casting reflections that are false to the actions they intend to represent and even leading to wholly distorted perspectives on the objectified scene of activity, is the very characteristic that saves its appearance in experience and the very trait that permits it to show its face at the court of inquiry, which all along admits that distortions acknowledged to be imperfect images can still be disclosed to subsequent experience and remedied in future reflections.

6.2. A Candid Point of View

This section discusses, in a general and informal way, the objectives inspiring and the requirements surrounding the elaboration of a RIF. This is approached, in part, by taking up the intuitive notions of a point of view (POV) and a point of development (POD), as they stake out, respectively, the intellectual repertoire and history of a typical agent of inquiry. Initially, these ideas serve in a familiar manner to characterize the intellectual skills and growth of agents, in particular, as they bear on the cultivation of the agents' reflective resources. Increasingly, these concepts are subjected to formalization, partly by analyzing their relations to each other and gradually by relating their inherent structures and referent involvements to the already formalized concepts of objective frameworks, genres, and motifs.

As I reflect on signs and texts, I am led to enumerate more and more phenomena associated with the process of interpretation and with the models of it that I find in sign relations. Some of the deepest and subtlest of these phenomena, at least, that I am able to observe and recount, take their theme from a certain “intermingling of categories” that is found at the basis of every real phenomenon. This issue comes to prominence and makes itself evident as a topic of inquiry whenever one tries to organize the original chaos of phenomena through the imposition of a suitable scheme of categories.

What is the typical outcome of setting out such a scheme for nature? No sooner does one institute a provisional scheme of categories for organizing phenomena than one discovers every system with a stamp of reality to it steadfastly ignoring the lines of one's naive imagination. And yet it soon becomes clear that this seeming “perversity of nature” arises from an error of attribution on the part of the mind that casts the aspersion. Ultimately, it stems from the fact that every scheme of categories that the mind can forge and foist on nature, for instance, sign and object, self and other, remains, after all, the scene of a mere abstraction, implicating the pallid and the shadowy sides of the same dissension, but all the while circling about and turning on the complex but unitary reality that underlies the phenomenon in question.

In view of these complexities, that interfere with applying even the simplest of organizational paradigms to the material of signs and texts, it is necessary for me to pause a while and carefully contemplate how I can rehabilitate their use, at least, for the ends of this investigation. First, I examine the distinction between sign and object. Then, I consider the duality between self and other, or what amounts to the same thing, the relation between a first person and a second person POV. In each case, the task is to discover how a distinction that seems so easy to subvert can ultimately be developed into a useful instrument of analysis and articulation.

There's nought but care on ev'ry han',
  In every hour that passes, O;
What signifies the life o' man,
  An' 'twere na for the lasses, O.
— Robert Burns, Green Grow the Rashes, O

Any object, anything grasped as a whole, can be a sign. Indeed, the entire life of a person or a people can serve as sign unto itself or others and take on a significance all its own. In converse fashion, every sign token is an object in the world. In this role, a sign is forced to obey the ruling and relevant natural laws and empowered to take on a dynamics all its own.

In the contention between signs and objects, the answer initially given by the pragmatic theory of signs is that anything can potentially serve in any role of a sign relation. In particular, the distinction between sign and object is a pragmatic distinction, a mark of use, not an essential distinction, a mark of substance. This is the right answer as far as the beginning of the question goes, where it is the possible character of everything that is at issue. The pragmatic approach makes it possible to begin an investigation that would otherwise be obstructed by a futile search for non-existent essentials, as if it were necessary to divine them from prior considerations before any experience has been ventured and before a bit of empirical evidence has been collected.

Reason alone teaches us to know good and evil. Therefore conscience, which makes us love the one and hate the other, though it is independent of reason, cannot develop without it.

Rousseau, Emile, or On Education

But the form of answer that is sufficient to begin a study is not the form of answer that is necessary to end it. Even though it is useful for a general theory of signs to provide a patently indifferent form of answer at the preliminary phases of its investigation, this style of response is ultimately judged to be facile when it comes to questions about the good of a sign, the end of an inquiry, or the suitability of each thing to the role it is assigned. In the end, an all purpose brand of conceptual scheme, allowing for the equipotential coverage of every conceivable option, however useful or necessary to the task, is likely to be found insufficient for wrapping up these goods and delivering them into the service of the mind. Thus, by the round about way of this objection, one brings to mind the other meaning, the underlying nuance and the ultimate sense, of the word object, which suggests the end, the goal, or the good of something.

Questions about the good of something, and what must be done to get it, and what shows the way to do it, belong to the normative sciences of aesthetics, ethics, and logic, respectively. Aesthetic knowledge is a creature's most basic sense of what is good or bad for it, as signaled by the experiential features of pleasure or pain, respectively. Ethical knowledge deals with the courses of action and patterns of conduct that lead to these ends. Logical knowledge begins from the remoter signs of what actions are true and false to their ends, and derives the necessary consequences indicated by combinations of signs.

In pragmatic thought, the normative disciplines can be imagined as three concentric cylinders resting on their bases, increasing in height as they narrow, from aesthetics to ethics to logic, in that order. Considered with regard to the plane of their experiential bases, logic is subsumed by ethics, which is subsumed by aesthetics. And yet, in another sense, logic affords a perspective on ethics, while ethics affords a perspective on aesthetics.

That is about all I can say about normative considerations at this point. Further discussion is put off until this text has developed either the intuitive insight or the theoretical power to say something more definite.

Because a sign, so far as it can tell in the time it passes, addresses an unknown future interpretant, that is, an indefinite futurity of potential responses, there is always an aspect of dialogue about the sign relation, especially insofar as it is subject to extension. This is true no matter who, whether self or other, is ostensibly addressed by the sign or text at issue, and never mind what the chances are of a literal return in the communication. In this regard, it is recognizance enough for a sign to be issued or a text to be written in anticipation of its future result. And though it is never certain, it is always possible that the author of a text partially anticipates the use that others make of what is signed.

It is one of the rules of my system of general harmony, that the present is big with the future, and that he who sees all sees in that which is that which shall be.

G.W. Leibniz, Theodicy, paragraph 360
  When these prodigies
Do so conjointly meet, let not men say
“These are their reasons”, “they are natural”,
For I believe they are portentous things
Unto the climate that they point upon.
Julius Caesar: Casca—1.3.28–32
Indeed it is a strange-disposed time;
But men may construe things after their fashion,
Clean from the purpose of the things themselves.
Julius Caesar: Cicero—1.3.33–35

In order to recover the faculties supported by one's favorite categories and to maintain the proper use of their organizational schemes, it is incumbent on the part of the wary, conscientious, and duly circumspect schemer to recognize in every case how each part of the contention is implicated in the action of the other. In this connection, a triumvirate of closely related aspects of sign relations comes to the fore:

  1. There is an aspect of futurity, marking the openness of signs to interpretation and the extensibility of sign relations in multitudes of novel but meaningful ways. This dimension of regard is staked out in anticipation of the possibility that perfectly fitting but previously unsuspected interpretants can be discovered within or added to any given sign relation, whether past or present.
  2. There is a factor that contemporary theorists call alterity, noting the quality of radical and reciprocal otherness that is involved in the dialogue of one self with another. Besides its invocation of the wholly other, this term subsumes all the ways that one being can be alien and unknown to itself, and it even suggests the host of alterations, deviations, distortions, errors, and transmutations that accompany all acts of record-keeping and interpretation.
  3. There is a feature that C.S. Peirce called tuity, acknowledging the aspect of thouness or the prospect of a second person POV that is brought into play whenever one self addresses another. Along with the perspective of a genuine other, this recognizes all the referrals and deferrals that an interpretive agent can make to a past, present, or potential self.

All of these dimensions of concern focus on the circumstance that signs, especially written or recorded signs, moderate a complexly integrated sort of relationship between self and other, or between first person and second person POVs, in such a way that they render the paired categories of each scheme inextricably involved in one another.

There are well-known dangers of paradox, but not so well acknowledged risks of distortion, that arise in the interrogation of any reflection. Although its outward signs are obvious, the source of the difficulty is remarkably difficult to trace. Perhaps it can be approached as follows. Without trying to say what consciousness is, I can still speak sensibly of its contents, and talk of their structures in relation to each other. These contents, whether percepts or concepts or whatever, are all signs. And so I can study the effects of reflection in the medium of its texts and develop a model of reflection as a process that evolves these texts.

What generally happens when one tries to model reflective consciousness and to formalize the reflective discourses that signify its public life? In reaching for the available languages of logic and set theory, one is likely to use them as reductively as possible on the first attempt, and thus to state the relation of anything to awareness directly in terms of membership, in sum, by means of a globally overarching dyadic relation. What does this picture of reflection pretend about the relation of the world to the mind, or conversely, the relation of awareness to anything? Although it confuses the relation of content to consciousness with the relation of object to concept, this degree of play in the imagery is a forgivable, occasionally useful, and a probably inescapable analogy. In any case, it does not amount to the most serious distortion in the picture as a whole.

What is really wrong with the dyadic picture of reflection is the fact that it treats both of the relations it surveys, of minds to ideas and of ideas to things, on the model of a consummation and a containment, as if to place everything being related in an all-embracing hierarchy and in opposition to all forms of reciprocal participation among its entities. This image renders a consciousness of contents and a concept of objects each in the likeness of a set and its elements, rather than presenting them as they most likely are, a relationship of systems or agents and of texts or signs to the ideals or objects that motivate them, constituting mutually embracing forms of participation in a unified textual activity. In all, the initial attempt at explaining reflection lays it out according to a conception that grasps its prey, and loses the creature in the process, rather than a conception that releases the potential of what it imprisons.

One of the reasons for bringing the pragmatic theory of signs to bear on this discussion is to deal with just these problems, constellated by the need for reflection and made acute by the defects of the dyadic picture. By means of triadic sign relations, and given a capacity to create and modify the interpretant signs that fill out its original set of semantic equivalence classes, an interpretive agent has the “elbow room” needed to stand aside from the ongoing process of interpretation, to reflect on its present determinants, and to consider its possible developments.

An inquiry that cannot clearly and completely comprehend itself as an object can at least inquire into the succession of signs that record its progress. The writer of a text can use that text to describe, at least partially, the process of writing and using it so. The reader of a text can understand that text to describe, at least partially, the process of reading and understanding it so. Further, a discussion can generate a record that describes, more than just the transient proceedings of that discussion, the principles and parameters that determine its creation. In each of these ways, a text can address the qualities that determine its intended character, comment on the context in which it takes a part, and act on behalf of its pretended objectives.

The procedural distinction just recognized, between the passing traces of a process and the permanent determinants of its generic character, informs a significant issue, on which is staked nothing less than the empirical feasibility of an inquiry into inquiry. From this point on, a certain figure of speech can be used to mark this distinction, when it is relevant to the course of discussion, and to signal a deliberate turn in the direction of consideration, when the corresponding exchange of its dialectical roles is intended. According to the nuances of this paradigm, one can distinguish a process intended in the substantive generative sense from a process intended in the genitive gerundive sense, and address oneself selectively, at turns, to the process that achieves versus the process of achieving any contemplated activity or result.

An inquiry at such a point of development that it cannot entirely grasp its ongoing process of inquiry as an object of thought, namely, as the process that inquires, can at least try to capture a representative sample of the signs that record its process of inquiring. Speaking metaphorically and with the proper apology, every thus generated and thus collected text of inquiry (TOI) can be addressed as a partial reflection of the generative process of inquiry. Moreover, it is not irredeemably illegitimate to say that a TOI can partly describe itself, since this merely personifies the circumstance that a process of inquiry can describe itself partly in the form of a TOI.

O jest unseen, inscrutable, invisible
As a nose on a man's face or a weathercock on a steeple.
My master sues to her, and she hath taught her suitor,
He being her pupil, to become her tutor.
O excellent device! Was there ever heard a better? —
That my master, being scribe, to himself should write the letter.
Two Gentlemen of Verona: Speed—2.1.127–132

When I write out my thinking in the form of a text, a critical thing happens: It faces me as the thought of another, and I start to think of what it says as though another person had said it. Almost unwittingly, a critical process comes into play. In regarding the text as expressing the thought of another, I begin to see it from different POVs than the one that led to its writing. As I find my own inquiry reflected in one or another TOI, it addresses me afresh as the question of another and I encounter it again as a novel line of investigation. This time around, though, the topic of concern and the style of expression become subject to directions of criticism that would probably not occur to me otherwise, since the angles of attack permitting them do not open up on their own, neither on first thinking nor ever, most likely, while merely speaking. This can be the beginning of critical reflection, but it can also stir up destructive forms of interference that inhibit and obstruct the very flow of thought itself.

If I can be granted the license to continue saying that a text says this or that about itself when what I really mean is that a person or process employs its text to say the corresponding thing about itself or its text, then I can begin to introduce a variety of descriptive terms and logical tools into this text that can be used to talk about what this or another TOI “thinks” or “believes” at various points in its development, that is, in order to detail what I or its proper author thinks or believes at the corresponding points of discussion.

Fourteen, a sonneteer thy praises sings;
What magic myst'ries in that number lie!
Your hen hath fourteen eggs beneath her wings
That fourteen chickens to the roost may fly.
Fourteen full pounds the jockey's stone must be;
His age fourteen — a horse's prime is past.
Fourteen long hours too oft the Bard must fast;
Fourteen bright bumpers — bliss he ne'er must see!
Before fourteen, a dozen yields the strife;
Before fourteen — e'en thirteen's strength is vain.
Fourteen good years — a woman gives us life;
Fourteen good men — we lose that life again.
What lucubrations can be more upon it?
Fourteen good measur'd verses make a sonnet.
Robert Burns, A Sonnet Upon Sonnets

One of the main problems that the present TOI has to address is how a TOI can address the problems of self-reference that an inquiry into inquiry involves. If a sonnet can say something true about sonnets, then a TOI, far less limited in the number and measure of its lines, ought to be able to say something true about TOIs in general, unless the removal of these limitations takes away the only things whereof and whereby it has to speak, the ends and means of its own form of speech.

Using the pragmatic theory of signs, the forms of self-reference that have to be addressed in this project can be divided into two kinds, or classified in accord with two dimensions of referential involvement. Roughly speaking, reference in the broader sense can suggest either a denotative reference to an object or a connotative reference to a sense. Therefore, a projected self-reference can be classified according to the ways that its components of reference propose to recur on themselves: how much pretends to be a self description along denotative lines and how much purports to be a self address in the connotative direction.

Under suitably liberalized conditions of interpretation, then, what is meant by “a self-referent text”, whether one that denotatively describes or connotatively addresses itself? Apparently, it can mean a text that addresses, describes, refers to, or speaks to either one of two issues: (1) the outwardly passing features of its own succession of signs, or (2) the inwardly relied on properties of its own regenerative sources.

It is one thing for a text to be generated according to the laws laid down in another. This takes place, for example, in devising or following a proof according to the axioms and rules of inference that are recorded in a proof system. It is another thing entirely for a text or a corpus of texts to derive or induce the very principles of their own generation and then return to disclose the process of derivation or induction itself according to which the whole text or corpus is divined or drafted.

What the discussion of reflection has so far been leading up to, if I stop to reflect on what might be the implicit project behind its scheme of development, is tantamount to a monadology, a project of a complete and total provision for a system of perfect but virtual self-reflections. But I suspect that such a project is unsupportable in reality outside the realm of infinite resources and pre-established harmonies, while my present aim is to see what can be done with finite and empirical means. A monadology, if it entertains itself with any form of investigation at all, addresses the task as a sheer masquerade, styling its inquiry after the fashion of a complete logical analysis (CLA).

In principle, there is nothing inherently the matter with the form of the CLA itself, but it does not embody all by itself the spirit of suspense that accompanies a genuine human inquiry. A real inquiry cannot know before it starts what the answer is and how the end will be achieved, nor can it, even if it wishes, merely trick out the foils of an already completed and pre-arranged survey, parading them as a passing series of plotted and transient complications in the guise of an honest quest. Some types of completeness are far more complete than others, however. Taken with respect to a properly limited and workably modest context, and treated as relatively complete rather than absolutely complete, the ideal substrate of the CLA forms a suitably plastic material for modeling many forms of concretely reasonable inquiry.

Invoked with a spirit of moderation, the idealized model envisioned in the CLA can nevertheless serve as a virtual guide for practical inquiries, highlighting the space of conceivable models and projecting a standard against which to measure every approximate, likely, and partial result. An inquiry of this self-controlled kind, one that considers in addition the logical alternative to every hypothesis it finds itself making, if it is addressed appropriately to the conditions of its constraining resources, can achieve complete success only within a tightly circumscribed sphere of action. Thus, the ideal of CLA informs a workable genre of inquiry, but the experimental variations that it enables and permits an agent to contemplate are bound up with the experiences that can be expressed in a language of finite and discrete signs, and exactly to the extent that they are in fact expressible.

In principle and in effect, an inquiry pushing the envelope of CLA is restricted to a “universe of limited marks”. For all practical purposes, it must keep its remarks to a finite universe of discourse, and a small one at that. Beyond these bounds, every inquiry is forced to take its chances on a pure hypothesis, unmitigated by any consideration of the opposite case. Communities of inquiry, however, are likely to embody a distribution of individual inquiries that have placed their bets on opposing options. Diversity of interpretation leads to disjunctions of opinion that can render many heads much smarter than one, but it also engenders forms of disagreement, discord, and duplicity that, for all their practical inevitability, are not essentially necessary.

Engaging in practical inquiry in a community of partially informed and presumptively constrained reasoners, then, is a task that leads to the recognition of several critical needs, not only for ways of synthesizing fragmentary interpretations of the presumptive truth and for reconciling divergent accounts of the objective world, but also for strategies that make these methods of negotiating differences and resolving conflicts more commonly available to all the inquirers in a putative community. Finally, an agent attempting to be reasonable under these conditions needs to be permitted to exercise a number of editorial prerogatives. For example, there needs to be a way to retract projections, that is, to recognize the alienated aspects of oneself that appear to crop up in others and to reconsider the rejected options for thought and action that nevertheless are capable of leading to bona fide values.

In a striking analogy with visual perception, it is the reflections in the ambient flow of energy that make it possible for one complication in the medium, a living being, to see another variation in the density of the medium, animate or otherwise, as an object. Reflection permits one to render an experience as due to a separate entity, to regard its occasion as the appearance of an object, and to respond to its cause as a reality. The analogy is broken at the junctures where an agent attributes these reflections to the passive “reflectances” of the object itself rather than perceiving them as the active responsibility of every participant in the process as a whole.

In accord with this visual analogy, two factors frustrate the prospects of indefinitely extending and smoothly finishing any project of inquiry that works in a medium of CLA:

  1. The transparent obstruction (TO), or obstacle of transparency, is due to an initial inability to discover and to render visible every assumption, category, or distinction that one automatically and implicitly acts according to.
  2. The opaque obstruction (OO), whether it presents itself in the guise of an obvious or an obscure obstacle, arises on account of a final incapacity to consider both sides of every question posed. This can amount to either one of two shortcomings: (a) failing to identify a logical alternative to every presumption or thesis that one identifies with, or (b) failing to evaluate a logical alternative to every assumption or hypothesis that one does in fact identify.

In short, a finite information creature (FIC) is required to keep the contents of its forms within the range of a definite set of figures and to rest the forms of its contents within the scope of a certain cast of characters. To be sure, these are precisely the characters that can be modeled and the figures that can be cut within a circumscribed theater of operations that everyone calls a partial logical analysis (PLA).

6.3. A Projective Point of View

A necessary connection between signs and reflection gives the TOI its critical function as a transitional object in the development of inquiry. In the form of a TOI, I address my reflection as if it were the reflection of another. On the off chance that it renders me a bit more critical, as I eye both its sources of authority and its styles of presentation, I can regard the record of this reflection as a partially alienated object, an artifact of unknown origin, or a work of uncertain provenance. And so the very existence of a sign, that takes after another in a search for its meaning and ultimately takes its place in tracing the traces of that process of inquiry, is intimately bound up with the act of reflection.

There is, moreover, a connection between the act of reflection and the psychological mechanism called projection that is useful to notice here. As it happens in practice, the effect of reflection is frequently achieved, not directly, by means of a deliberate effort to observe and to evaluate one's own conduct, but more indirectly, through the initial observation and the subsequent criticism of another's behavior, finally followed up by the often delayed afterthought and usually reluctant insight that the properties ascribed to the other's behavior can also apply to one's own.

The relationship between the isolated components of behavior in this sort of projective situation amounts to a familiar kind of sign relation. In regard to the properties possessed in common, the other's pattern of behavior is an icon, at first unrecognized, of one's own form of conduct. The introspective act of recognizing and assimilating the significance of such a relationship is referred to as retracting or re-owning the projected attributes and descriptions. To sum things up in these terms, the retraction of a projection can bring about, in its composite fashion, the ultimate effect of a critical reflection, namely, the elicitation and application of a valid description to one's own conduct.

Before the usefulness of this insight can be appreciated, it is necessary to resolve an interdisciplinary conflict over the use of the term projection and to sort out the relationship between the psychological and the mathematical concepts of projection.

O time, thou must untangle this, not I.
It is too hard a knot for me t'untie.
Twelfth Night: Viola—2.3.39–40

There are a couple of contingencies surrounding the trials of learning from one's own experience, issuing from and bearing on the complexity of that very experience, that appear to be tangled up with each other. Echoing the mythology of the Gordian knot, the Herculean Hydra, the Laocoonian serpent, and the Persean Medusa, each of which accounts of perverse polymorphism seems to reflect a variant aspect but to capture a sheer fragment of the underlying archetype, these two factors can be addressed by means of the following allegory:

  1. The Knot. It is frequently difficult to learn anything at all from the encounter with one's own experience, especially while one is still faced with the full complexity of that experience.
  2. The Knife. One tends to establish a personal array of mental or conceptual frames, planes, or sections on which one can reliably and reductively chart, map, or project one's experience.

The relationship between these two factors is such that the Knot leads to the Knife as its adaptive or expedient remedy, but that the Knife affords only a transitory relief for the problems bound up in the Knot, and further, an excessive reliance on any fixed array of armaments and stratagems under the emblem of the Knife has the contrary tendency to worsen the troubles experienced under the category of the Knot.

Thus, it is fair to say that the difficulty of learning from the full complexity of one's own experience is a problem condition that partly leads to and partly arises from the very configurations of artificial sections and arbitrary coordinates that one contrives to project one's experience on and to judge one's experience by, respectively. Although one's idealizations, simplifications, and other pet schemes of reductive representation can serve to render one's experience initially manageable, they can ultimately and adversely interfere with seeing the obvious.

In this setting, it is possible to bring about an accommodation between the mathematical and the psychological concepts of projection and to reconcile their discordant uses of the term within a concerted paradigm. For example, in dealing with the joint configuration space of a multiple agent system, one considers this yoked extension space (YES) to fall within a common extension (CE) of all the single agent state spaces. Each agent involved in such a system projects, in a geometric sense, the total action of the system on its own section of the whole CE, its local outlook, mental plane, personal frame of reference (FOR), or point of view (POV).
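
To make the geometric sense of projection concrete, here is a minimal sketch, in Python, of projecting a joint trajectory onto one agent's section of a common extension. It is purely illustrative: the coordinate names and the function project are assumptions of the example, not constructs taken from this text.

  # Illustrative sketch only: a joint state of a multi-agent system is modeled
  # as a mapping from named coordinates to values, and an agent's POV is taken,
  # in the geometric sense, as the projection onto the coordinates it owns.

  from typing import Dict, List, Tuple

  def project(joint_state: Dict[str, int], agent_coords: List[str]) -> Tuple[int, ...]:
      """Project one joint configuration onto an agent's section of the common extension."""
      return tuple(joint_state[c] for c in agent_coords)

  # A joint trajectory projects coordinate-wise onto an agent's local trajectory.
  trajectory = [
      {"a_pos": 0, "b_pos": 5, "signal": 1},
      {"a_pos": 1, "b_pos": 4, "signal": 0},
  ]
  local_view_of_A = [project(s, ["a_pos", "signal"]) for s in trajectory]
  print(local_view_of_A)   # [(0, 1), (1, 0)]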

What does the POV of an agent consist in? Generally speaking, agents are not dumb. They are not limited to a single view of their situation, nor are they restricted to a single scenario for its ongoing development. They can entertain many different possibilities as candidates for the so-called and partly self-describing “objective situation” and they can envision many different ways that these potential situations might be developing, both before and after their passage through the moment in question. Furthermore, under circumstances favorable to reflection, agents can invoke POVs that help them to contemplate many different possible developments in the constitution of these very same POVs.

Now, it is conceivable that all the POVs entertained by a single agent are predetermined as having the same collection of generic characters, and thus that this invariant constitution is what really limits the range of all possible POVs for the agent in question. If so, it leads to the idea that this invariant constitution defines a uniquely general POV, a highest order meta-POV, or a consummate POV of the agent involved. Still, the only points of access and the only paths of approach that an agent can have to its own consummate POV, if indeed such a goal does make sense, are through the agency and the medium of whatever POVs it happens to have at each passing moment in its developmental history. Consequently, a persistent enough search for a good POV opens up the investigation of each agent's prevailing point of development (POD).

In the best of all possible worlds, then, being under the influence of one POV does not render an agent incapacitated for considering others. Of course, there are practical limitations that affect both the capacity and the flexibility of a particular POV, and there can be found in force both logical constraints and resource constraints that leave a POV with a narrowly fixed and impoverished character, one that the agent opting for it can fail to represent reflectively enough within the scope of this POV itself. In particular, the finite information creatures (FICs) that are accessible from a computational standpoint are especially limited in the kinds of POVs they are able to attain.

This means that POVs and PODs have recursive constitutions and recursive involvements with one another, calling on and referring to other POVs and PODs, both for the exact definitions that are needed and also for the more illuminating elaborations that might be possible, both those belonging to the same agent, reflexively, and those possessed by other agents, vicariously. A large part of the task of building a RIF is taken up with formalizing POVs and PODs, in part by analyzing their intuitive notions in terms of their implicit recursive structures and their referential involvements with each other, and in part by exploring their potential relationships with the previously formalized concepts of objective concerns (OCs).

In settings where recursion is contemplated, it is possible to conceive of a distinction between well-founded recursions, that lead to determinate definitions of the entities in question, and buck-passing recursions, that lead one down the “garden path” to an interminable “run-around”. The catch, of course, is that it is not always possible to implement an effective procedure that can accomplish what it is possible to conceive. Thus, there are cases where the imagined distinction does not apply and times when the putative difference is not detectable in practice.

In this connection, there are two or three fundamental questions that need to be addressed by this project:

  1. What makes a POV or a POD well-founded?
  2. Can buck-passing POVs and PODs be tolerated?
  3. How should they be treated and regulated, if tolerated?

A tentative approach to these questions is tendered by the pragmatic theory of sign relations, where the definitive and the elaborative aspects of recursion correspond to the denotative and the connotative components of reference, respectively. Although it is always useful to organize the connotative realm in the species of a determinate ordering or a well-founded hierarchy, there is found in these parts generally a greater tolerance for the baroque proliferation of circuitous references and a broader acceptance of provincial, dialectic, and private coinages.

If all thought takes place in signs, as a tenet of pragmatism holds, then mental space is a space of signs and their interpretants, in other words, it is a connotative realm.

In this perspective, that is to say, in the POV of the present project and in the current opinion of its author, a POV is associated with an abstractly defined, but concretely embodied and frequently distributed, section of memory (SOM), where the signs constituting it are stored. In this rendition, a SOM is a curve, surface, volume, or more general subspace of the total memory space, in other words, a subset of memory that can be treated, under the appropriate change of coordinates, as being swept out by a set of variables, and ultimately addressed as being generated by a list of binary variables or bits. Working under the assumption that agents can engage in non-trivial developments, it must be granted that they have the ability to change their POVs in significant ways between the successive PODs in their progress, and thus to move or jump from one SOM to another, as dictated by will or as constrained by habit.

In this comparison, what is visualized as the geometric structure of a cone is commonly implemented through the data structure of a tree, that is, a set of memory addresses (along with their associated contents) that are accessible from a single location, namely, the root of the tree, or the literal point of the POV.
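
By way of a hedged illustration of the tree just described, the following sketch treats a section of memory as the set of addresses reachable from a single root. The dictionary memory and the function name section_of_memory are hypothetical conveniences, not structures prescribed by this text.

  # Illustrative sketch only: the "point" of a POV is the root of a tree, and
  # its SOM is whatever memory is accessible from that root.

  from typing import Dict, List, Set

  memory: Dict[str, List[str]] = {    # address -> addresses it links to
      "root": ["n1", "n2"],
      "n1": ["n3"],
      "n2": [],
      "n3": [],
      "stray": [],                    # unreachable, hence outside this SOM
  }

  def section_of_memory(root: str) -> Set[str]:
      """Collect every address accessible from the root of the POV."""
      seen: Set[str] = set()
      stack = [root]
      while stack:
          addr = stack.pop()
          if addr not in seen:
              seen.add(addr)
              stack.extend(memory.get(addr, []))
      return seen

  print(section_of_memory("root"))    # {'root', 'n1', 'n2', 'n3'}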

Typically, but not infallibly, an agent can reduce the complexity of what is projected on its personal POV by employing a reductive hypothesis or a simplifying assumption. Often, but not always, this idealization is arrived at by picking one agent to treat as nominal, in other words, one whose actions and perceptions are regarded as natural, normal, or otherwise unproblematic. Usually, especially if one is a mature agent, this nominal agent is just oneself, but a novice agent, unsure of what to do in a novel situation, can choose another agent to fill the role of a nominal guide and to serve as a reference point.

It would be nice if one could ignore the sharper edge of knowledge that is brought to light at this point, and fret but lightly over the smooth and middling courses that gloss the conformal plateaus of established knowledge. However, it is the nature of the inquiry into inquiry that one cannot forever restrict one's attention to the generic, nominal, or unexceptional case, well away from the initial conditions of learning and the boundary conditions of reasoning. Still, for the purposes of a first discussion of POVs and PODs, I limit my concern to the nominal case, where the reductive strategy indicated is useful to some degree and where the nominal agent of choice is none other than oneself.

Under default conditions of operation, then, each one's POV embodies the reductive assumption that one's own particular actions and perceptions are nominal, that is, natural, normal, or otherwise “not a problem”. Relative to this ordinary setting, each one's POV is normally configured for tracking the more problematic courses of other agents and the drift of the residual system as a whole. Therefore, the natural setting of a POV can be pictured in terms of the perceptual gestalt it facilitates.

In unexceptional circumstances one always takes one's own agency and one's own experience for granted. This is tantamount to assuming that a synthetic balance is already in effect between the claims of conduct and the trials of bearing. Given this much free rein in arranging the play of forces, the artificial state of accord that results can present itself to be a neutral context of interpretation and the superimposed scene of rapport that prevails can pretend itself to be the unquestioned background of instrumental activity that is implicated in every notable objective contemplated or observation performed. Cast in the role of a stationary stage for the action, there is a whole body of interactions that reside in dynamic equilibrium with each other and that make this proving ground appear to be at rest, but the whole contrivance merely acts to place in relief and to render more obvious whatever else in the way of phenomenal experience is thereby permitted to figure against it as representing an object worthy of inquiry.

Loosely speaking, and operating under the usual anthropomorphism, one can say that an agent projects the joint state trajectory, the course that the whole system takes through a sufficiently well defined CE, onto a trajectory through its own proper space, the residual state space that is encompassed by its chosen POV. Strictly speaking, in another sense, all that is known of an agent is just what is projected on its space, and thus one can say that an agent is wholly constituted by this projection.

The difference between the two senses of projection can now be rationalized as follows. A psychological projection begins when a mathematical projection is employed to deal with a complex experience, that is, an overwhelmingly complicated trajectory of the total system. But the default assumption that one's own actions are not significantly implicated in what happens can occasionally turn out to be unjustified.

In a case of psychological or transverse projection, the significant aspects or motivating features of a problematic situation are attributed to the other actors, while one's own collusion in the relevant character of the total situation is ignored, denied, or otherwise relegated to the peripheral background of the configuration kept in focal awareness, the figure that is currently being attended as a content of consciousness. This form of strategic reorganization usually occurs reflexively among the automatic processes of perception, in spite of one's full knowledge or token recognition of the times when it is just as likely that the salient quality of the situation is due to one's own conduct, and even when it is equally possible that the complexion of the moment cannot be resolved into separate components and rendered accountable to individuals at all.

6.4. A Formal Point of View

In this section the concept of a point of view (POV) is taken up in greater detail and subjected to the first few steps of a formalization process. This makes it possible to explore the wider implications of the idea, to pursue the lines of inquiry it suggests to greater lengths, and to apply the tentative formalism to an issue of pressing concern, namely, the question of what kind of distinction ought to be posed between the dynamic and the symbolic aspects of intelligent systems.

If there were nothing but a single POV to entertain, a diversion of attention to matters of perspective would hardly be worth the candle. Accordingly, the dimensions of change and diversity are intrinsic to the worth of the whole idea.

One of the reasons for trying to formalize the concept of a POV is so that this TOI, along with others on its model, can reflectively comment on its own POV, as it progresses from moment to moment, and critically examine it as it develops.

When it comes to the subject of systems theory, a particular POV is so widely propagated that it might as well be regarded as the established, received, or traditional POV. The POV in question says that there are dynamic systems and symbolic systems, and never the twain shall meet. I naturally intend to challenge this assumption, preferring to suggest that dynamic and symbolic attributes are better regarded as different aspects of a single underlying system, as “two sides of the same coin”. But first I have to express the assumption well enough to question it.

Beyond the dim inkling of an underlying influence, a sufficiently critical level of reflection on a POV requires a language that is articulate and analytic enough to transform each thesis posed in it into the form of a question. A deliberately reflective technology is needed to bring the prevailing, prejudicial, and hypocritical underpinnings of a POV to light, since biases due to assumptions obscurely held are seldom automatically revealed. This highlights the need for a critical apparatus that can be applied to the typical TOI, supplying its interpreter with the technical means to take up a critical POV with respect to it.

A logical calculus cannot initiate reflection on a text, but it can help to support and maintain it. The raw ability to perceive selected features of an ongoing text and the basic language of primitive terms, that allow one to mark the presence and note the passing of these features, have to be supplied from outside the calculus at the outset of its calculations. In the present text, the means to support critical reflection on its own POV and others are implemented in the form of a propositional calculus. Given the raw ability of a perceptive interpreter to form glosses on the text and to reflect on the contents of its current POV, a logical calculus can serve to augment the text and assist its critique by catalyzing the consideration of alternative POVs and facilitating reasoning about the wider implications of any particular POV.

The discussion so far has dwelt at length on a particular scene, returning periodically to the fragmentary but concrete situation of a dialogue between \(\text{A}\) and \(\text{B},\) poring over the formal setting and teasing out the casual surroundings of a circumscribed pair of sign relations. If the larger inquiry into inquiry is ever to lift itself off from these concrete and isolated grounds, then there is need for a way to extract the lessons of this exercise for reuse on other occasions. If items of knowledge with enduring value are to be found in this arena, then they ought to be capable of application to broader areas of interest and to richer domains of inquiry, and this demands ways to test their tentative findings in analogous and alternative situations of a more significant stripe. One way to do this is to identify properties and details of the selected examples that can be varied within the bounds of a common theme and treated as parameters whose momentary values convey the appearance of complete individuality to each particular case.

Typically, a movement from reduced examples to realistic exercises takes a definite but gradual progression of steps, moving forward through the paces of abstraction, generalization, transformation, and re-application. The prospects of success in these stages of development are associated with the introduction of certain formal devices. Principal among these are the explicit recognition of sets of parameters and their expression in terms of lists of variables.

As I understand them, variables are a class of beneficially ambiguous or usefully equivocal signs. In effect, variables are just signs, but signs possessed of a more adaptive constitution or affected by a more flexible interpretation than signs of the usual, more constant variety. These forms of employment turn variables into a class of reusable signs, converting them into sustainable resources for meaning that can be used in a plurality of ways and deployed to articulate different choices at different times from among the available points of thematic variation.

The next major task of this discussion, while continuing to take its bearings from examples as concrete as \(L(\text{A})\!\) and \(L(\text{B}),\!\) is to develop systematic methods for divining the bearing of such isolated examples on issues of real concern. This involves two stages:

  1. One needs to detect the invariant features of the currently known examples, in other words, the dimensions along which their values are, knowingly or unknowingly, held to be constant.
  2. One needs to try varying the features that are presently held to be constant by imagining new examples that are able to realize alternative features.

The larger issue at stake throughout these stages is how the agent of inquiry can find ways to express the lessons of individual exercises in ways that persist through and rise above their individual attachments to experience, thereby living through detailed experiences while remaining undiverted by their peculiar distractions.

There appears to be a practical necessity in drawing at least a tentative distinction between the role of an object and the role of an interpreter, even if a moment of reflection occasionally requires a single entity to fill both roles, and even though a mass of experience with systems that try to draw hard and fast distinctions between things, once and for all, leads one to see that a need exists for ways to withdraw every pretense of any distinction, redrawing it anew if possible, and drawing on new grounds if necessary. There is never anything initially or immediately obvious about a sign itself that says it is destined to represent an object of a particular type, and this makes it necessary to infer the type that ought to be specified from the pattern of references in which the sign is actually observed to be engaged.

A distinction that one is initially tempted to treat as substantial but is later bound to discover as purely interpretive, like that between objects and signs, subjects and predicates, particles and waves, or dynamic and symbolic aspects of systems, can frequently bedevil sensible inquiry for quite some period of time. To deal with this problem, there needs to be a standardly available mechanism for introducing these staple but still provisional distinctions, accepting them on a par with axioms at first, but without precluding the opportunities to later revise the substantive imports of their interpretations.

On the way to integrating dynamic and symbolic approaches to systems there are several different sorts of things that can happen. It can happen that a certain distinction, a natural or artificial feature that separates the outlooks of the dynamic and symbolic perspectives, or the sheer appearance of a distinction, a suggestion of a line that leads an observer to see a difference between the two views in the first place, merely gets erased. Or it can happen that the ostensible distinction between the two standpoints marks in reality a naturally useful border, one that is well worth preserving, and yet a wealth of connections that constitutes the true relationship between the two realms can be marked and remarked with increasing visibility in the meantime. In any case, there are lines of pretended distinction and potential difference that must be crossed, and then recrossed, time after time, until their exact form and precise nature have become marked in their clarity or else transparent in their obliteration.

I would like to detach, for a moment, from the particular contrast of interest here, the one posed between dynamic and symbolic orientations, to examine the general question of relating contrasting aspects or views. In this connection, two distinct but correlated efforts at classification and organization arise in tandem with each other. One concern seeks to classify the attributes, categories, features, properties, or qualities that are used to describe the object observed, while the other project tries to organize the approaches, instruments, methods, perspectives, or views that are used to observe the object described.

To invoke the traditional terminology, natural classes of predicates are referred to as categories or predicaments, making it natural to call the classification and study of predicates by the name of categoric, while the classification and study of methods is classically referred to as heuristic or methodeutic (Peirce, CP 2.105–110 and 2.207).

Now the discovery of ideas as general as these is chiefly the willingness to make a brash or speculative abstraction, in this case supported by the pleasure of purloining words from the philosophers: “Category” from Aristotle and Kant, “Functor” from Carnap …, and “natural transformation” from then current informal parlance.

Saunders Mac Lane, Categories for the Working Mathematician, 29–30

Categoric. Although this subject is historically referred to as the “theory of categories”, in modern times it is necessary to distinguish it from the mathematical subject of category theory, whose claim to the title is confessedly derived by stealth. By way of suffering unto the older discipline the freshness of the younger subject, the original study and more general classification of predicates can be referred to as the doctrine of categories (DOC). This is a fair description, given that optional schemes of basic categories are commonly taken up, maintained, and transmitted in decidedly catechismic and rigidly dogmatic fashions.

Perhaps it is the mind's reluctance to revive the uncertainties and to relive the struggles that these schemes were made to resolve, but once the fundamental categories are settled it is nearly impossible to revise them, however poorly they come to fit the current circumstances of life. No matter how original the thinking that leads up to a site where a stable foundation can be poured, the foundation itself is typically laid down as if it were cut from unalterable stone.

I even hope that what I have done may prove a first step toward the resolution of one of the main problems of logic, that of producing a method for the discovery of methods in mathematics.

Charles S. Peirce, CP 3.364, “On the Algebra of Logic : A Contribution to the Philosophy of Notation”,
American Journal of Mathematics, 7(2), 180–202, (1885).

Methodeutic. This subject, that C.S. Peirce gave the alternate titles of “speculative rhetoric” or “formal rhetoric”, because it is a science that “would treat of the formal conditions of the force of symbols, or their power of appealing to a mind, that is, of their reference in general to interpretants” (CP 1.444 and 1.559), and that he assigned the task to find “a method of discovering methods” (CP 2.108 and 3.364), is one that clearly has a special relevance to the pursuit of an inquiry into inquiry.

In an effort to gradually begin formalizing these issues, I introduce the concept of a point of development (POD). This notion is intended to capture a particular moment in the history of a system or its agent, as it is reflected in the systems of propositions associated with each POD. Relative to a particular POD there can be distinguished, though neither exclusively nor exhaustively, two types of propositions that are said to be “associated” with it. Roughly speaking, these types of propositions reflect the thoughts that are applied to a POD and the thoughts that are attached to a POD, respectively.

  1. A proposition that applies to a POD can be formulated in more detail as a proposition about or on a POD. This describes the corresponding POD as though observed from an outside perspective, stating features that locate it within a space of dynamic configurations or that place it in relation to some other medium of common description. This manner of associating propositions with PODs is tantamount to adopting a third person POV on the system or its agent, and it is commonly used to convey an impression of objectivity, no matter whether this standpoint is well taken or not.
  2. A proposition that attaches to a POD can be formalized in more detail as a proposition at or in a POD. This represents what an agent thinks or believes, entertains or maintains, in sum, what an agent is aware of or willing to assert at a particular POD. By way of filling out the formula, this type of proposition expresses thoughts and is expressed in signs that are likewise regarded as attached to the POD in question. In general, propositions at a POD can be formed to express every conceivable modality. Collectively, they can state anything that an agent notes or thinks, observes or imagines at a given moment of its developmental history. They can reflect any aspect of an agent's awareness, belief, conjecture, doubt, expectation, intention, observation, or any other latitude of thought that is actively considered or faithfully preserved throughout the moment in question, and in this sense they are considered to be attached to, bound to, contained in, or localized at a particular POD.

In one sense, propositions about a POD are potentially the general case, since propositions at a POD can be incorporated within their formulation. That is, a proposition about a POD is allowed to make assertions about the propositions at that POD, plus assertions about their relation to propositions at other PODs. But propositions whose references are this involved, articulated as propositions about propositions at a POD, for instance, are classed as higher order propositions and need to be inferred through processes of hypothesis and experiment, conjecture and confirmation, instead of being observed outright. In another sense, propositions at a POD are intrinsically the prototype, since it is from their data that every other type must be constructed.
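
As a rough aid to keeping the two modes of association straight, here is a minimal sketch of a POD carrying both bundles of propositions. The class and field names are assumptions made for the example only, not a construction proposed in this text.

  # Illustrative sketch only: propositions *at* a POD are what the agent itself
  # asserts there; propositions *about* a POD describe it from outside.

  from dataclasses import dataclass, field
  from typing import List

  @dataclass
  class POD:
      moment: int
      at: List[str] = field(default_factory=list)      # first person, attached
      about: List[str] = field(default_factory=list)   # third person, applied

  pod_7 = POD(moment=7)
  pod_7.at.append("the door is locked")
  pod_7.about.append("at moment 7 the agent believes the door is locked")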

Propositions about PODs naturally collect into theories about PODs, and at the next level of aggregation these constitute the familiar sorts of dynamic theories that are used to describe the state spaces of systems and the trajectories of agents through them. Concentrating on these types of propositions leads to the kinds of theories about systems where a “neutral observer”, not involved in the system itself, is postulated or fancied to stand outside the dynamics of the “observable object system”: where this “objective reasoner” is supposedly able to theorize about the observable system without essentially becoming a part of its operations or necessarily being involved as a participant in its actual workings, and where the same “passive agent” never finds itself forced to interact in an irreversible or irrevocable manner with the autonomous course of the object system's action.

The thoughts attached to a POD, the things an agent thinks or believes, entertains or maintains at one POD, taken in relation to what the agent thinks or believes, is aware of or willing to assert at another POD, constitute the very form of subject matter that is bound to come to light and bound to fall into play whenever one studies the development of a reflective system, whether the focus of interest is the course of a particular inquiry or the emergence of a generic intelligence.

From a pragmatic point of view, a belief is a proposition that an agent is prepared to act on. In practice, this means that information about beliefs can be obtained from observations of action, as long as one remembers that this information is almost always partial information, contingent on the sample of actions that are actually observed and limited by the circumstance that not all preparations result in action.

It may be thought that there is an important distinction between belief and knowledge that ought to be recognized in the modes of maintaining propositions at or in a POD. Given the pragmatic definition of belief, however, there is no local mark that can tell belief and knowledge apart. That is, there is no practical difference that can be sustained, in the propositions attached to a single POD, between those that reflect items of contingent belief and those that reflect items of certain knowledge. Even if the propositions at or in a POD are artificially marked in ways that can later be reliably detected, the problem of constantly updating so fleeting a form of distinction makes the accumulating profusion of ephemeral distinctions as immaterial and unenlightening as every other genre of eracist obliterature.

A distinction between belief and knowledge appears to arise only in the interactions and comparisons that can be made between different PODs, either those enjoyed by a single agent in the history of a single system or those passed through by ostensibly different agents and systems. The sense of the distinction can be sustained only if the order of its relational context continues to be recognized, which means that the mark of the distinction cannot be strained to the point of being an absolute. In this context, different systems and their agents are said to be at comparable PODs precisely to the degree and exactly to the extent that the propositions at and about them, respectively, can be compared. In many respects, the comparison of propositions at different PODs is equally complex and problematic whether it is one agent or several that is being considered.

With all this in mind, I can give a formulation of what the practical difference between belief and knowledge consists in. Roughly speaking, an agent says that an agent knows something if and only if the one believes what the other believes. More precisely, an agent at one POD has reason to say that an agent at another POD (possibly a former self) knows something about something (or knew something about something) if and only if the one believes what the other believes about it, all things being relative to the PODs that the agents are at.
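
A schematic rendering of this formulation, with hypothetical names and with beliefs reduced to a topic-to-proposition table, might look as follows; it is a sketch of the comparison just described, not a proposed implementation.

  # Illustrative sketch only: an agent at one POD credits an agent at another POD
  # with knowledge about a topic just in case the two beliefs about it agree.

  from typing import Dict, Optional

  Beliefs = Dict[str, str]   # topic -> proposition believed about that topic

  def says_knows(beliefs_here: Beliefs, beliefs_there: Beliefs, topic: str) -> bool:
      """True when the agent here believes what the agent there believes about the topic."""
      b1: Optional[str] = beliefs_here.get(topic)
      b2: Optional[str] = beliefs_there.get(topic)
      return b1 is not None and b1 == b2

  me_now  = {"door": "locked", "light": "on"}
  me_then = {"door": "locked", "light": "off"}
  print(says_knows(me_now, me_then, "door"))    # True: the earlier self is credited with knowledge
  print(says_knows(me_now, me_then, "light"))   # False: merely a divergent belief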

Propositions associated with a POD are often found in organized bodies, forming more or less logical systems of more or less logical statements. Whatever their type or modality with respect to a POD, “propositions of a feather gather together”. That is, they tend to collect into organized bodies of propositions that share compatible types of association and comparable modes of assertion. In logic, an arbitrary collection of propositions is called a theory, no matter how coherent, complete, or consistent it turns out to be when subjected in time to critical review. Taking up this liberalized notion of a theory in the present setting, a bunch of propositions at or in a POD forms a theory at or in a POD, while a bunch of propositions about or on a POD forms a theory about or on a POD.

A reasonably organized system is amenable to having its propositions sorted further, forming collections of propositions that are intended to be interpreted in the same light, and constellating theories that bear on single modes of contemplation or declaration among their propositions.

With respect to the propositions at a POD, the present inquiry into inquiry is mainly concerned with the modalities of expectation, intention, and observation. This is due to a couple of differential modalities, derived in pairs from among these three, that appear to drive every form of inquiry, at least, to some degree.

  1. There is the moment of doubt or uncertainty that is encountered in a surprising phenomenon, providing an impulse for the component of inquiry that seeks an explanation to relieve the shock. This factor driving inquiry can be analyzed as deriving from the differences that occur between one's expectations and one's observations.
  2. There is the moment of desire or difficulty that is countenanced in a problematic situation, providing an impulse for the component of inquiry that seeks a plan of action to resolve the trouble. This factor driving inquiry can be analyzed as deriving from the differences that occur between one's intentions and one's observations.
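
By way of a hedged illustration of the two differential factors just itemized, the following sketch reads surprise and trouble as set differences over propositions held at a single POD. The function names and the reduction of the three modalities to plain sets are assumptions of the example.

  # Illustrative sketch only: surprise as the gap between observation and
  # expectation, trouble as the gap between intention and observation.

  from typing import Set

  def surprises(expected: Set[str], observed: Set[str]) -> Set[str]:
      """Observations not covered by expectations: the impulse toward explanation."""
      return observed - expected

  def troubles(intended: Set[str], observed: Set[str]) -> Set[str]:
      """Intentions not realized in observation: the impulse toward a plan of action."""
      return intended - observed

  expected = {"door locked", "light on"}
  intended = {"door locked", "window shut"}
  observed = {"door open", "light on"}

  print(surprises(expected, observed))   # {'door open'} -> calls for an explanation
  print(troubles(intended, observed))    # {'door locked', 'window shut'} -> calls for a plan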

It should be obvious that these conceptions represent another attempt to formalize the relationship between dynamic and symbolic approaches to intelligent systems. Once again, the paradigms that are established for dealing with propositions at or about PODs are typically specialized to consider one or the other but seldom both. This leads to the familiar sorts of dichotomies being imposed on a subject matter where the types are more complementary and generative than exclusive and exhaustive. Thus, one finds methodologies in the field that can work well either from an “external” (dynamic, model-theoretic, empirical) perspective or from an “internal” (symbolic, proof-theoretic, rational) perspective, but that are seldom able to incorporate both technologies into an integrated methodology.

The concept of a POD in the history of a system, with its associated division of propositions into those that apply exterior to it and those that attach interior to it, is yet another way of approaching a recurring subject, “the being and the role of the interpreter”, that the general concept of an objective concern (OC), broached at an earlier point of development in this text, is also intended to capture. Advancing as if from a pair of complementary and convergent directions, the notion of a POD, in the way it supplies a footing to the propositions about or on it and serves to encapsulate the propositions at or in it, equips a growing SOI with all the pivotal, trophic, and vital functions that the notion of an objective motif (OM) realized in an interpretive moment (IM) is likewise meant to provide.

The relationship between a POD and an OM at an IM can be understood as follows. …

In order to continue formalizing the discussion of POVs and PODs within the text that uses them, I introduce the following notations:

\(\begin{array}{lcccr} j : x \Delta y & , & x \Delta_j y & , & x \Delta y : j \\ j : x \ne y & , & x \ne_j y & , & x \ne y : j \\ j : (x ~,~ y) & , & (x ~,~ y)_j & , & (x ~,~ y) : j \end{array}\)

All these expressions are intended to indicate a set of circumstances that could otherwise be described as follows:

\(j ~\text{appears to see a distinction between}~ x ~\text{and}~ y.\)
\(j ~\text{partitions a dimension of discourse between}~ x ~\text{and}~ y.\)
\(j ~\text{sees}~ x ~\text{and}~ y ~\text{as mutually exclusive and exhaustive possibilities.}\)

In this scheme, \({}^{\backprime\backprime} x {}^{\prime\prime}\) and \({}^{\backprime\backprime} y {}^{\prime\prime}\) indicate logical dimensions of variation or propositional features of description that govern an agent's possibilities of action and perception. Used as primitive logical terms they denote the distinctive features that determine an agent's spaces of performance and experience. In combination with logical operators they generate a descriptive framework that encompasses both: (1) the methodological approaches or perspectives toward objects that an agent can adopt, and (2) the categorical aspects of objects, the independently coherent systems of properties and qualities that characterize the hypothetically unified object system.

In practice, it does not matter whether one regards \(x\!\) and \(y\!\) as logical features or as boolean variables, so long as the full set of positive and negative features \(\{ x, (x), y, (y) \}\!\) is initially available to classify the relevant space of object perceptions or interpretive actions. Analogous to its role in the staging relations \(\{ \lessdot ~,~ \gtrdot \},\) the label \({}^{\backprime\backprime} j {}^{\prime\prime}\) indicates the active interpreter, that is, the system and moment of interpretation or the state of the interpretive system that is held to be responsible for finding, making, testing, or following through the consequences of posing the contemplated distinctions.

Dual to the statements of momentary interpretive distinctions (MIDs) are the respective statements of momentary interpretive coincidences (MICs):

\(\begin{array}{lcccr} j : x = y & , & x =_j y & , & x = y : j \\ j : x \Leftrightarrow y & , & x \Leftrightarrow_j y & , & x \Leftrightarrow y : j \\ j : ((x ~,~ y)) & , & ((x ~,~ y))_j & , & ((x ~,~ y)) : j \end{array}\)

Each of these expressions is intended to indicate a set of circumstances that could otherwise be rendered by any one of the following, logically equivalent statements:

\(j ~\text{appears to see a coincidence between}~ x ~\text{and}~ y.\)
\(j ~\text{draws no distinction between the dimensions}~ x ~\text{and}~ y.\)
\(j ~\text{sees}~ x ~\text{and}~ y ~\text{as manifestly equivalent ranges of possibilities.}\)
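
For purposes of discussion, a MID or MIC can be recorded as a small data structure that attributes the judgment to its interpreter. The class below is a hypothetical convenience, chosen only to mirror the three equivalent notations given above, and is not a formalism proposed in the text itself.

  # Illustrative sketch only: a momentary interpretive distinction (MID) or
  # coincidence (MIC) as a record binding an interpreter j to a pair (x, y).

  from dataclasses import dataclass

  @dataclass(frozen=True)
  class InterpretiveJudgment:
      j: str            # the interpreter: the system and moment of interpretation
      x: str            # first feature or dimension of variation
      y: str            # second feature or dimension of variation
      distinct: bool    # True for a MID (j : x != y), False for a MIC (j : x = y)

      def __str__(self) -> str:
          op = "!=" if self.distinct else "="
          return f"{self.j} : {self.x} {op} {self.y}"

  print(InterpretiveJudgment("A", "dynamic", "symbolic", True))    # A : dynamic != symbolic
  print(InterpretiveJudgment("B", "dynamic", "symbolic", False))   # B : dynamic = symbolic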

The introduction of explicit names for systems of interpretation, as well as for their interpretive moments, models of interpretation, objective concerns, points of development, and situations of use, is intended to flesh out the lifeless idiom or insipid brand of assignment statements that are currently found in CL settings, which are typically rendered so abstractly as to constitute an entire style of anonymous, passive, or unattributed excuses for fully executable commands.

In a related usage, one is permitted to reparse the anonymous or passive form of assignment statement,

\({}^{\backprime\backprime} x := y {}^{\prime\prime},\) read as \({}^{\backprime\backprime} x ~\text{is set equal to}~ y {}^{\prime\prime},\)

converting it into the corresponding attributive or active form of assignment statement,

\({}^{\backprime\backprime} j : x = y {}^{\prime\prime},\) read as \({}^{\backprime\backprime} j ~\text{sets}~ x ~\text{equal to}~ y {}^{\prime\prime}.\)
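
A minimal sketch of the attributed form, assuming nothing more than a store whose bindings remember who set them, could look like this; the Environment class and its method names are invented for the example and carry no authority from the text.

  # Illustrative sketch only: the anonymous assignment  x := y  recast in the
  # attributed form  j : x = y,  recording which interpreter does the setting.

  from typing import Any, Dict, Tuple

  class Environment:
      """A store whose bindings remember the interpreter responsible for them."""

      def __init__(self) -> None:
          self.bindings: Dict[str, Tuple[str, Any]] = {}   # name -> (interpreter, value)

      def set(self, j: str, name: str, value: Any) -> None:
          """j sets name equal to value."""
          self.bindings[name] = (j, value)

  env = Environment()
  env.set("A", "x", 42)        # attributed form of  x := 42
  print(env.bindings["x"])     # ('A', 42): the value together with who set it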

Returning to the present application, the categorical project leads one to seek something in the object itself, some factor that divides up its dynamic and symbolic aspects, some plane of cleavage that explains the natural divisions between different types of object system, while the methodeutic outlook leads one to wonder whether the specialized mode of being that is beheld in the object is not in fact due to something in the style and direction of approach, some artifact of method that is being cast on the object system from the eye of the beholder.

I would like to articulate a systematic hypothesis that prevails over the scene of this work, tacitly imposing the deliberately hopeful assumption that there is always some sort of hypostatic unity to be found beneath the manifold diversity of phenomena. It is not just my own presumption or personal preference to say this. I find it to be a likely and common assumption, constantly being used to address all sorts of interesting phenomena and almost every process of note, whether or not it is ever expressly enunciated.

This hypothesis is probably implicit in the very idea of a system, that is to say, in the notion of things standing together, and it is central to the very conception of a systematic universe or a universal system. Nevertheless, I will have to take responsibility for the particular way that this premiss is expressed and developed in this text. Because it amounts to the underlying hope that there is always a unified system, some one thing that subsists beneath every form of phenomenal process and that remains available to substantiate and explain whatever manner of diversity in appearances is encountered, something or other that is always ready to be explicated but seldom necessary to declare, I call this assumption the hypostatically unified system hypothesis (HUSH).

In accord with this tacit assumption, that rules the entire realm of systems theory, it can be presumed that there is an integral system, prior in its real status to the manifold of observable appearances, that is somehow able to manifest itself in the severally projected roles of a dynamic process and a symbolic purpose. But to harvest any practical consequences from the employment of this inchoative precept, the twin yoke of questions, categorical and methodeutic, must now be taken up:

  1. What constitutes the differences between the dynamic and symbolic aspects of the hypothetically unified intelligent system?
  2. What features divide the two perspectives that find these aspects respectively salient?

The integration of symbolic and dynamic approaches to systems thinking requires a significant level of reconstructive effort, one that is capable of extending its energies in both the analytic and synthetic directions. It may be nothing more than a metaphor to describe it this way, but there is something like a dynamic economy of energy exchanges that goes on in facilitating the required “metaboly of symbols” (Peirce).

In this vein, there seem to be laws analogous to conservation principles that govern the transactions between subordinate processes, determining the interactions that are most likely to occur between the breaking down of old conceptual bonds and the creation of new configurations of ideas at higher and lower levels of conceptual equilibrium. Brought to bear on the present task, the specific manifestations of “mental energy” that are called on to accomplish the current work of integration have a potential for raising questions about the relation of “logic” to “time”, and thus revive an issue that goes back to the very birth of thought.

The relation of logical and temporal realms, of rational ideas to real experiences, is an ancient and fundamental question, one whose initial answers were laid down in their present form at the very beginnings of reflective inquiry and whose sedimented contents now lie metamorphosed into the deepest bedrocks of our native and systematic philosophies. The distribution of current opinion on the matter regards the question as being (1) “previously settled” or (2) “incapable of solution”, with little thought given to a tertium quid, or a more fluid medium that could moderate between the extremes of these fixed alternatives.

Unfortunately, the customary and habitual classification of a problem as “insoluble”, even when justified, can work against the recognition of methods that are available to ameliorate its more objectionable impacts. When it comes to the relationship of logic to time, I believe that resources are currently available that could advance our understanding of this issue in new directions. All it would take is the will to reconfigure those resources in the appropriate ways.

To expand the formula: The realm of logic is typified by rational concepts regarding invariant patterns, virtually, by ideas about forms, while the rule of time is filled out by realistic experiences with changing qualities, ultimately, by feelings of content and discontent. The application of the integrative effort to intelligent systems in general and to inquiry driven systems (IDSs) in particular only sharpens the question of logic and time to the point of self-application.

Considerations like these, as old and as constant as the hills, and as much over our heads as the eternally renewed and inconstant weather, are deserving of occasional notice, yet their relevance to the work of the moment is doomed by their very quality of necessity to fade into the background of present concerns, and their saliency as problematic phenomena quickly recedes from the scope of any perspective so bent on immediate application as that falling within my present focus.

6.5. Three Styles of Linguistic Usage

The theory of sign relations, in general, and the construction of a RIF, in particular, demand that this discussion strike a compromise among several styles of usage that are not normally brought together in the same forum or comprehended in the same frame. Under the rubric of a notion of style or a norm of significance (NOS), this text recognizes a collective need for three distinctive styles of linguistic usage, or three different attitudes toward the intentions of language.

These styles of usage, along with their correlated perspectives on usage and their appropriate contexts of usage, can be put into a graded series by noticing how the more finely grained perspectives on the matter of language use correspond to the more narrowly scoped areas of content that are swept out by their roughly concentric contexts of discussion. Accordingly, the styles, perspectives, and contexts of usage that I need to relate can be distinguished as follows, proceeding in order of their increasing formality:

  1. Broadest of all is the informal language (IL) context, which incorporates the ordinary mathematical context within its compass. Relative to the aims of the present work, which are largely mathematical, these two contexts are roughly coextensive and can be treated as one. All of the more usual contexts are marked by the operation of a working assumption about the interpretation of formal symbols that I call the object convention. Loosely speaking, this takes it for granted that signs always refer to objects, not because of any credible guarantees that they do, but mostly due to a lack of interest in the cases where they do not. Failures of meaning, logical inconsistencies, and doubts about the foundations of the whole enterprise are treated as incidental problems to be discussed and corrected off line.
  2. Next in order is the formal language (FL) context, where the syntax of expressions needs to be specified explicitly and where the semantics of expressions does not usually permit every combination of signs to have a meaning. All of the more formal contexts are marked by the operation of a working assumption about the interpretation of formal symbols that I call the sign convention. Roughly speaking, this views a sign primarily as a mere sign, putting it in question whether any sign has an object. In styles of usage at this or greater degrees of formality, the reception of signs is marked by a heightened suspicion, where the benefit of the doubt and the burden of proof in the matter of signs having meaning are critically reversed from their natural defaults. Signs are assumed to be innocent of meaning until shown otherwise.
  3. Most constrained of all is the computational language (CL) context, which incorporates the interests of computational linguistics along with the aims of implementing and using programming languages. There are many styles of programming languages and many more styles of putting them to use. I concentrate here on a particular version of the Pascal language and describe the particular ways I have chosen to implement the concepts I need with the constructs it makes available.

Next I need to consider the complex of relationships that exists among these three styles of usage, along with the corresponding relationships that exist among their associated perspectives and contexts. In regard to the questions raised by these three norms of significance, the pragmatic theory of sign relations is intended to help reflective interpreters, and other students of language, maintain all the advantages of taking up abstract and isolated perspectives on language use, but to achieve this without losing a sense of the connection that each peculiar outlook has to the richly interwoven pattern of a larger unity.

In many places these variegated styles of usage express themselves not so much in isolated domains of influence or distinctive layers of context as in different perspectives on the same text. But different lights on a developing picture can cause different figures and patterns to emerge, and different ways of treating a developing text can lead it to grow in different directions. Thus, discrepant points of view on the emergence of a literature can stimulate different works to vie for its canon, and discriminating angles of approach to what seems like a level plain and a unified field of language can harvest a wealth of alternate appreciations. And so different styles of writing arise in correspondence with different styles of reading, and each rising style of readership engenders a new style of authorship in its wake.

At other times these degrees of formality play themselves out in a temporal process. Consider a typical scenario for solving problems through formalization:

  1. One begins by approaching the problem informally, in other words, in IL-posed terms, drawing on the common resources of technical notions and mathematical methods that are available, familiar, intuitively understood, and that suggest themselves as possibly being relevant to the problem.
  2. Next, the problem of interest and the array of methods selected for addressing it are both reformulated in FL terms, a process that requires many obscurities and omissions of the original problem statement to be weeded out and filled in, respectively.
  3. Finally, the formalized version of the problem-and-method constellation is reconstructed, to the extent possible, in a CL framework.

At any stage of this procedure one may discover, or begin to suspect, that the current representation of the problem or the present selection of methods is inadequate to the task or unlikely to lead to a solution. In this event one is forced to backtrack to an earlier stage of the problem's formulation and to look for ways of changing one's grasp of the situation.

Even though the styles of usage at the three degrees of formalization use overlapping vocabularies of technical terms, the interpretations that they put on some of these terms, together with the working attitudes that they promote toward the corresponding concepts, are tantamount in practice to the possession of distinct concepts for the very same terms.

Three issues of linguistic usage on which the three norms of significance get most out of joint are the questions of (1) signs and their significance, (2) the utilization of set theory and set-theoretic constructions, and (3) the ontological or pragmatic status of variables. The rest of this section makes a cursory survey of the bearings that the three norms take toward these issues, in preparation for more detailed treatments in later sections.

In each perspective that an observer takes up, the natural attitude is to focus on a particular class of objects, to remain less aware of the signs being used to denote them, and to remain even less aware that these objects and signs can take up other roles in the same or other sign relations. In constantly shifting from one perspective to another, however, the transparent uses of signs and the ulterior circumstances that determine how objects and signs are cast start to become visible. Altogether, the interaction between casual and formal styles of usage is like an exchange carried on between radically different economies, where commodities and utilities that are freely traded in one kind of market are severely taxed in the other.

The IL perspective, along with its specialization to ordinary mathematical discourse, thinks itself to have a grasp of the unitary object itself, conveniently forgetting the multiplicity of abstract, arbitrary, and artificial constructions that are needed to make this impression possible. In particular, the ordinary mathematical attitude thinks itself to have a grasp of the one idea while it puts the many appearances out of mind, and it constantly exerts itself to neglect all the labor that goes into taking up this stance. It ignores the circumstance that numbers, however intuited, can only be indicated and rationalized to others as equivalence classes of constructions formed on the matter of numerals.

The FL perspective, along with its implementations in CL contexts, allows one to treat signs as objects, and thus to study syntactic domains as objective languages. This creates what seems like a higher order of discussion, but the designation of these objects as signs is purely token if their use as signs is forgotten in the process. Consequently, the FL perspective, together with the CL attitude that realizes it, has the job of recovering and reconstructing exactly what has been taken out of consideration in the IL context, namely, the details of actual usage that are taken for granted, abstracted away, and conveniently ignored.

Although the mathematical structures developed under the informal norm of significance can become incredibly sophisticated in their orders of complexity and degrees of formalization, from a pragmatic standpoint they are still construed under naive assumptions about language use. This is because discussions carried out under the IL perspective do not make it their business to reflect on the relations between objects and signs, but presume that these matters can be separated from their subject proper and relegated to preliminary stages of the ultimately refined treatment.

In order to make this discussion of styles and issues more concrete, the next several sections examine the practical bearings of the three styles of usage as they work out with regard to each of the identified issues of usage. This will be done by choosing a theoretical subject to illustrate the ideals of each style of usage, and then by developing the bearing of this subject on each of the three issues mentioned.

In accord with this plan, the next three sections present the basic ideas of three subjects: group theory, formal language theory, and computation theory. The presentation of these subjects is intended to serve both illustrative and instrumental purposes, exemplifying the ideals of the IL, FL, and CL styles of usage, respectively, but also equipping subsequent discussion with a supply of ready tools that can be used in its further development. After the treatment of these three subjects, and following the introduction of higher order sign relations, the next three sections after that are finally able to take up the three issues mentioned above, concerning the theoretical standings of signs, sets, and variables, respectively, and to consider how each of these issues appears in the light of each style of usage.

6.6. Basic Notions of Group Theory

Many of the most salient themes that have a call to be played out in this work — the application of generic forms of operation to themselves and to each other, the relationship of invariant forms to their variant presentations, and the relationship of abstract forms to their concrete representations — all of these topics arise in a very instructive way within the mathematical subject of group theory. This is most likely due to the fact that group theory, as a mathematical tool, got its start and much of its later sharpening in the process of trying to clarify the physical and formal phenomena that involve these very same issues.

In group theory, fortunately, these themes arise in a slightly plainer fashion, and the otherwise mystifying questions they involve have been studied to the point that their original mysteries are barely observed. Thus, a good way to approach the construction of a RIF is to study the well understood versions of self-application and self-explanation that turn up in group theory. Given the simpler character and the familiar condition of these topics in that area, they supply a convenient basis for subsequent extensions and help to arrange a staging ground for the types of sign theoretic generalizations that are ultimately desired.

This section develops the aspects of group theory that are needed in this work, bringing together a fundamental selection of abstract ideas and concrete examples that are used repeatedly throughout the rest of the project. To start, I present an abstract formulation of the basic concepts of group theory, beginning from a very general setting in the theory of relations and proceeding in quick order to the definitions of groups and their representations. After that, I describe a couple of concrete examples that are designed mainly to illustrate the abstract features of groups, but that also appear in different guises at later stages of this discussion.

A sequence of domains (SOD) is a nonempty sequence of nonempty sets. A declarative indication of a sequence of sets, typically offered in staking out the grounds of a discussion, is taken for granted as a SOD. Thus, the notation \({}^{\backprime\backprime}(X_i){}^{\prime\prime}\) is assumed by default to refer to a SOD \((X_i)_{i \in I},\!\) where each \(X_i\!\) is assumed to be a nonempty set.

Given a SOD \((X_i),\!\) its cartesian product, notated as \(\textstyle\prod_i (X_i)\) or \(\textstyle\prod_i X_i,\) is defined as follows:

\(\prod_i (X_i) = \prod_i X_i = \{ (x_i) : x_i \in X_i \}.\)

A relation is defined on a SOD as a subset of its cartesian product. In symbols, \(L\!\) is a relation on \((X_i),\!\) if and only if \(L \subseteq \textstyle\prod_i X_i.\)

A \(k\!\)-ary relation or a \(k\!\)-place relation is a relation on an ordered \(k\!\)-tuple of nonempty sets. Thus, \(L\!\) is a \(k\!\)-place relation on the SOD \((X_1, \ldots, X_k)\!\) if and only if \(L \subseteq X_1 \times \ldots \times X_k.\!\) In various applications, the \(k\!\)-tuple elements \((x_1, \ldots, x_k)\!\) of \(L\!\) are called its elementary relations, individual transactions, ingredients, or effects.

Before continuing with the chain of definitions, a slight digression is needed at this point to loosen up the interpretation of relation symbols in what follows. Exercising a certain amount of flexibility with notation, and relying on a discerning interpretation of equivocal expressions, one can use the name \({}^{\backprime\backprime} L {}^{\prime\prime}\) or any other indication of a \(k\!\)-place relation \(L\!\) in a wide variety of different fashions, both logical and operational.

First, \(L\!\) can be associated with a logical predicate or a proposition that says something about the space of effects, being true of certain effects and false of all others. In this way, \({}^{\backprime\backprime} L {}^{\prime\prime}\) can be interpreted as naming a function from \(\textstyle\prod_i X_i\) to the domain of truth values \(\mathbb{B} = \{ 0, 1 \}.\) With the appropriate understanding, it is permissible to let the notation \({}^{\backprime\backprime} L : X_1 \times \ldots \times X_k \to \mathbb{B} {}^{\prime\prime}\) indicate this interpretation.

Second, \(L\!\) can be associated with a piece of information that allows one to complete various sorts of partial data sets in the space of effects. In particular, if one is given a partial effect or an incomplete \(k\!\)-tuple, say, one that is missing a value in the \(j^\text{th}\!\) place, as indicated by the notation \({}^{\backprime\backprime} (x_1, \ldots, \hat{j}, \ldots, x_k) {}^{\prime\prime},\) then \({}^{\backprime\backprime} L {}^{\prime\prime}\) can be interpreted as naming a function from the cartesian product of the domains at the filled places to the power set of the domain at the missing place. With this in mind, it is permissible to let \({}^{\backprime\backprime} L : X_1 \times \ldots \times \hat{j} \times \ldots \times X_k \to \mathrm{Pow}(X_j) {}^{\prime\prime}\) indicate this use of \({}^{\backprime\backprime} L {}^{\prime\prime}.\) If the sets in the range of this function are all singletons, then it is permissible to let \({}^{\backprime\backprime} L : X_1 \times \ldots \times \hat{j} \times \ldots \times X_k \to X_j {}^{\prime\prime}\) specify the corresponding use of \({}^{\backprime\backprime} L {}^{\prime\prime}.\)

In general, the indicated degrees of freedom in the interpretation of relation symbols can be exploited properly only if one understands the consequences of this interpretive liberality and is prepared to deal with the problems that arise from its “polymorphic” practices — from using the same sign in different contexts to refer to different types of objects. For example, one should consider what happens, and what sort of IF is demanded to deal with it, when the name \({}^{\backprime\backprime} L {}^{\prime\prime}\) is used equivocally in a statement like \(L = L^{-1}(1),\!\) where a sensible reading requires it to denote the relational set \(L \subseteq \textstyle\prod_i X_i\) on the first appearance and the propositional function \(L : \textstyle\prod_i X_i \to \mathbb{B}\) on the second appearance.
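
By way of illustration only, the following Python sketch stores a finite 2-place relation extensionally and derives both readings from the same underlying set: the propositional reading into \(\mathbb{B}\) and the completion reading into a power set. The names as_predicate and complete_second_place, and the sample relation, are hypothetical choices of mine, not anything fixed by the text.

```python
from itertools import product

# A finite 2-place relation, stored extensionally as a set of tuples.
X1 = {1, 2, 3}
X2 = {1, 2, 3}
L = {(x, y) for (x, y) in product(X1, X2) if x < y}   # "x is less than y"

# Reading 1: L as a proposition L : X1 x X2 -> B = {0, 1}.
def as_predicate(relation):
    return lambda x, y: 1 if (x, y) in relation else 0

# Reading 2: L as a completion map L : X1 -> Pow(X2), returning the set
# of values that can fill the missing second place.
def complete_second_place(relation, x):
    return {y for (u, y) in relation if u == x}

pred = as_predicate(L)
print(pred(1, 3), pred(3, 1))          # 1 0
print(complete_second_place(L, 1))     # the set {2, 3}
```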

A triadic relation is a relation on an ordered triple of nonempty sets. Thus, \(L\!\) is a triadic relation on \((X, Y, Z)\!\) if and only if \(L \subseteq X \times Y \times Z.\!\) Exercising a proper degree of flexibility with notation, one can use the name of a triadic relation \(L \subseteq X \times Y \times Z\!\) to refer to a logical predicate or a propositional function, of the type \(X \times Y \times Z \to \mathbb{B},\!\) or any one of the derived binary operations, of the three types \(X \times Y \to \mathrm{Pow}(Z),\!\) \(X \times Z \to \mathrm{Pow}(Y),\!\) and \(Y \times Z \to \mathrm{Pow}(X).\!\)

A binary operation or law of composition (LOC) on a nonempty set \(X\!\) is a triadic relation \(* \subseteq X \times X \times X\!\) that is also a function \(* : X \times X \to X.\!\) The notation \({}^{\backprime\backprime} x * y {}^{\prime\prime}\!\) is used to indicate the functional value \(*(x, y) \in X,~\!\) which is also referred to as the product of \(x\!\) and \(y\!\) under \(*.\!\)

A binary operation or LOC \(*\!\) on \(X\!\) is associative if and only if \((x*y)*z = x*(y*z)\!\) for every \(x, y, z \in X.\!\)

A binary operation or LOC \(*\!\) on \(X\!\) is commutative if and only if \(x*y = y*x\!\) for every \(x, y \in X.\!\)

A semigroup consists of a nonempty set with an associative LOC on it. On formal occasions, a semigroup is introduced by means of a formula like \(\underline{X} = (X, *),\!\) read to say that \(\underline{X}\!\) is the ordered pair \((X, *).\!\) This form specifies \(X\!\) as the nonempty set and \(*\!\) as the associative LOC. By way of recalling the extra structure, this specification underscores the name of the set \(X\!\) to form the name of the semigroup \(\underline{X}.\!\) In contexts where there is only one semigroup being discussed, or where the additional structure is otherwise understood, it is common practice to call the semigroup by the same name as the underlying set. In contexts where more than one semigroup is formed on the same set, indexed notations like \(\underline{X}_i = (X, *_i)\!\) may be used to distinguish them.

A unit element in a semigroup \(\underline{X} = (X, *)\!\) is an element \(e\!\) in \(X\!\) such that \(x*e = x = e*x\!\) for all \(x \in X.\!\) In other words, a unit element is a two-sided identity element. If a semigroup \(\underline{X}\!\) has a unit element, then it is unique, since if \(e'\!\) is also a unit element, then \(e' = e'*e = e.\!\)

A monoid is a semigroup with a unit element. Formally, a monoid \(\underline{X}\!\) is an ordered triple \((X, *, e),\!\) where \(X\!\) is a set, \(*\!\) is an associative LOC on the set \(X,\!\) and \(e\!\) is the unit element in the semigroup \((X, *).\!\)

An inverse of an element \(x\!\) in a monoid \(\underline{X} = (X, *, e)\!\) is an element \(y \in X\!\) such that \(x*y = e = y*x.\!\) An element that has an inverse in \(\underline{X}\!\) is said to be invertible (relative to \(*\!\) and \(e\!\)). If \(x\!\) has an inverse in \({\underline{X}},\!\) then it is unique to \(x.\!\) To see this, suppose that \(y'\!\) is also an inverse of \(x.\!\) Then it follows that:

\(y' ~=~ y'*e ~=~ y'*(x*y) ~=~ (y'*x)*y ~=~ e*y ~=~ y.\!\)

A group is a monoid all of whose elements are invertible. That is, a group is a semigroup with a unit element in which every element has an inverse. Putting all the pieces together, then, a group \(\underline{X} = (X, *, e)\!\) is a set \(X\!\) with a binary operation \(* : X \times X \to X\!\) and a designated element \(e\!\) that is subject to the following three axioms:

G1. (associative) \(x*(y*z) ~=~ (x*y)*z,\!\) for all \(x, y, z \in X.\!\)
G2. (identity) \(e*x ~=~ x ~=~ x*e,\!\) for all \(x \in X.\!\)
G3. (inverses) for each \(x \in X\!\) there is some \(y \in X\!\) such that \(x*y ~=~ e ~=~ y*x.\!\)
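
Read as brute-force checks over a finite operation table, these axioms translate directly into code. The following Python sketch is illustrative only; the nested-dictionary table format and the name is_group are my own choices, not anything fixed by the text. The sample data is the Klein four-group, whose multiplication table appears below as Table 33.1.

```python
from itertools import product

def is_group(elements, op, e):
    """Check axioms G1-G3 for a finite operation table op[x][y]."""
    # Closure (implicit in the definition of a LOC * : X x X -> X).
    if any(op[x][y] not in elements for x, y in product(elements, repeat=2)):
        return False
    # G1. Associativity.
    if any(op[op[x][y]][z] != op[x][op[y][z]]
           for x, y, z in product(elements, repeat=3)):
        return False
    # G2. Identity: the designated element e is a two-sided unit.
    if any(op[e][x] != x or op[x][e] != x for x in elements):
        return False
    # G3. Inverses: every element has a two-sided inverse.
    return all(any(op[x][y] == e and op[y][x] == e for y in elements)
               for x in elements)

# The Klein four-group V4 (cf. Table 33.1 below).
V4 = {x: dict(zip("efgh", row)) for x, row in
      zip("efgh", ["efgh", "fehg", "ghef", "hgfe"])}
print(is_group(set("efgh"), V4, "e"))    # True
```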

It is customary to use a number of abbreviations and conventions in discussing semigroups, monoids, and groups. A system \(\underline{X} = (X, *)\!\) is given the adjective commutative if and only if \(*\!\) is commutative. Commutative groups, however, are traditionally called abelian groups. By way of making comparisons with familiar systems and operations, the following usages are also common.

One says that \(\underline{X}\!\) is written multiplicatively to mean that a raised dot \({(\cdot)}\!\) or concatenation is used instead of a star for the LOC. In this case, the unit element is commonly written as an ordinary algebraic one, \(1,\!\) while the inverse of an element \(x\!\) is written as \(x^{-1}.\!\) The multiplicative manner of presentation is the one that is usually taken by default in the most general types of situations. In the multiplicative idiom, the following definitions of powers, cyclic groups, and generators are also common.

In a semigroup, the \(n^\text{th}\!\) power of an element \(x\!\) is notated as \(x^n\!\) and defined for every positive integer \(n\!\) in the following manner. Proceeding recursively, let \(x^1 = x\!\) and let \(x^n = x^{n-1} \cdot x\!\) for all \(n > 1.\!\)
In a monoid, \(x^n\!\) is defined for every non-negative integer \(n\!\) by letting \(x^0 = 1\!\) and proceeding the same way for \(n > 0.\!\)
In a group, \(x^n\!\) is defined for every integer \(n\!\) by letting \(x^n = (x^{-1})^{-n}\!\) for \(n < 0\!\) and proceeding the same way for \(n \ge 0.\!\)
A group \(\underline{X}\!\) is cyclic if and only if there is an element \(g \in X\!\) such that every \(x \in X\!\) can be written as \(x = g^n\!\) for some \(n \in \mathbb{Z}.\!\) In this case, an element such as \(g\!\) is called a generator of the group.
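
A hedged Python sketch of these definitions follows; the operation table and inverse table are illustrative data structures of my own choosing. Powers are computed by the recursion above, and cyclicity is tested by exhausting the powers of each candidate generator. The sample data is the group of order 4 in its multiplicative guise, whose table appears below as Table 34.1.

```python
def power(op, e, inverse, x, n):
    """x**n in a finite group, following the recursive definitions above."""
    if n < 0:
        return power(op, e, inverse, inverse[x], -n)   # x**n = (x**-1)**(-n)
    result = e                      # x**0 = 1, the unit element
    for _ in range(n):
        result = op[result][x]      # x**n = x**(n-1) * x
    return result

def is_cyclic(elements, op, e, inverse):
    """A group is cyclic iff some g generates every element as a power."""
    return any({power(op, e, inverse, g, n) for n in range(len(elements))}
               == set(elements) for g in elements)

# Z4 in multiplicative form (cf. Table 34.1 below): a generates the group.
Z4 = {x: dict(zip("1abc", row)) for x, row in
      zip("1abc", ["1abc", "abc1", "bc1a", "c1ab"])}
inv = {"1": "1", "a": "c", "b": "b", "c": "a"}
print(is_cyclic(set("1abc"), Z4, "1", inv))     # True
```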

One says that \(\underline{X}\!\) is written additively to mean that a plus sign \((+)\!\) is used instead of a star for the LOC. In this case, the notation \(x + y\!\) indicates a value in \(X\!\) called the sum of \(x\!\) and \(y.\!\) This involves the further conventions that the unit element is written as a zero, \(0,\!\) and may be called the zero element, while the inverse of an element \(x\!\) is written as \(-x,\!\) and may be called the negative of \(x.\!\) Usually, but not always, this manner of presentation is reserved for commutative systems and abelian groups. In the additive idiom, the following definitions of multiples, cyclic groups, and generators are also common.

In a semigroup written additively, the \(n^\text{th}\!\) multiple of an element \(x\!\) is notated as \(nx\!\) and defined for every positive integer \(n\!\) in the following manner. Proceeding recursively, let \(1x = x\!\) and let \(nx = (n-1)x + x\!\) for all \(n > 1.\!\)
In a monoid written additively, the multiple \(nx\!\) is defined for every non-negative integer \(n\!\) by letting \(0x = 0\!\) and proceeding the same way for \(n > 0.\!\)
In a group written additively, the multiple \(nx\!\) is defined for every integer \(n\!\) by letting \(nx = (-n)(-x)\!\) for \(n < 0\!\) and proceeding the same way for \(n \ge 0.\!\)
A group \(\underline{X} = (X, +, 0)\!\) is cyclic if and only if there is an element \(g \in X\!\) such that every \(x \in X\!\) can be written as \(x = ng\!\) for some \(n \in \mathbb{Z}.\!\) In this case, an element such as \(g\!\) is called a generator of the group.

Mathematical systems, like the relations \(L\!\) and operational structures \(\underline{X}\!\) encountered above, are seldom comprehended in perfect isolation, but need to be viewed in relation to each other, as belonging to families of comparable systems. Systems are compared by finding or making correspondences between them, and this can be formalized as a task of setting up and probing various types of mappings between the sundry appearances of their objective structures. This requires techniques for exploring the spaces of mappings that exist between families of systems, for inquiring into and demonstrating the existence of specified types of functions between them, plus technical concepts for classifying and comparing their diverse representations. Therefore, in order to compare the structures of different objective systems and to recognize the same objective structure when it appears in different phenomenal or syntactic disguises, it helps to develop general forms of comparison that can organize the welter of possible associations between systems and single out those that represent a preservation of the designated forms.

The next series of definitions develops the mathematical concepts of homomorphism and isomorphism, special types of mappings between systems that serve to formalize the intuitive notions of structural analogy and abstract identity, respectively. In very rough terms, a homomorphism is a structure-preserving mapping between systems, but only in the sense that it preserves some part or some aspect of the structure mapped, whereas an isomorphism is a correspondence that preserves all of the relevant structure.

The induced action of a function \(f : X\to Y\!\) on the cartesian power \(X^k\!\) is the function \(f' : X^k \to Y^k\!\) defined by:

\(f'((x_1, \ldots, x_k)) ~=~ (f(x_1), \ldots, f(x_k)).\!\)

Usually, \(f'\!\) is regarded as the natural, obvious, tacit, or trivial extension that \(f : X \to Y\!\) possesses in the space of functions \(X^k \to Y^k,\!\) and is thus allowed to go by the same name as \(f.\!\) This convention, assumed by default, is expressed by the formula:

\(f((x_1, \ldots, x_k)) ~=~ (f(x_1), \ldots, f(x_k)).\!\)
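
In programming terms this convention is a one-liner. The following Python fragment, with the hypothetical name induced, simply maps \(f\!\) over each coordinate of a tuple.

```python
def induced(f):
    """Extend f : X -> Y coordinatewise to tuples, giving f' : X**k -> Y**k."""
    return lambda xs: tuple(f(x) for x in xs)

double = induced(lambda x: 2 * x)
print(double((1, 2, 3)))    # (2, 4, 6)
```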

A relation homomorphism from a \(k\!\)-place relation \(P \subseteq X^k\!\) to a \(k\!\)-place relation \(Q \subseteq Y^k\!\) is a mapping between the underlying sets, \(h : X \to Y,\!\) whose induced action \(h : X^k \to Y^k\!\) preserves the indicated relations, taking every element of \(P\!\) to an element of \(Q.\!\) In other words:

\((x_1, \ldots, x_k) \in P ~\Rightarrow~ h((x_1, \ldots, x_k)) \in Q.\!\)

Applying this definition to the case of two binary operations, say \(*_1\!\) on \(X_1\!\) and \(*_2\!\) on \(X_2,\!\) which are special kinds of triadic relations, say \(*_1 \subseteq X_1^3\!\) and \(*_2 \subseteq X_2^3,\!\) one obtains:

\((x, y, z) \in *_1 ~\Rightarrow~ h((x, y, z)) \in *_2.\!\)

Under the induced action of \(h : X_1 \to X_2,\!\) or its tacit extension as a mapping \(h : X_1^3 \to X_2^3,\!\) this implication yields the following:

\((x, y, z) \in *_1 ~\Rightarrow~ (h(x), h(y), h(z)) \in *_2.\!\)

The left hand side of this implication is expressed more commonly as:

\(x *_1 y = z.\!\)

The right hand side of the implication is expressed more commonly as:

\(h(x) *_2 h(y) = h(z).\!\)

From these two equations one derives, by substituting \(x *_1 y\!\) for \(z\!\) in \(h(z),\!\) a succinct formulation of the condition for a mapping \(h : X_1 \to X_2\!\) to be a relation homomorphism from a system \((X_1, *_1)\!\) to a system \((X_2, *_2),\!\) expressed in the form of a distributive law or linearity condition:

\(h(x *_1 y) ~=~ h(x) *_2 h(y).\!\)

To sum up the development so far in a general way: A homomorphism is a mapping from a system to a system that preserves an aspect of systematic structure, usually one that is relevant to an understood purpose or context. When the pertinent aspect of structure for both the source and the target system is a binary operation or a LOC, then the condition that the LOCs be preserved in passing from the pre-image to the image of the mapping is frequently expressed by stating that the image of the product is the product of the images. That is, if \(h : X_1 \to X_2\!\) is a homomorphism from \({\underline{X}_1 = (X_1, *_1)}\!\) to \({\underline{X}_2 = (X_2, *_2)},\!\) then for every \(x, y \in X_1\!\) the following condition holds:

\(h(x *_1 y) ~=~ h(x) *_2 h(y).\!\)
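
As a sketch of how this condition can be checked mechanically for finite systems, the following Python fragment tests the linearity condition for a given mapping between two operation tables. The table format and the name is_homomorphism are illustrative assumptions; the worked case is the parity map from the integers modulo 6 to the integers modulo 2, both under addition.

```python
from itertools import product

def is_homomorphism(h, X1, op1, op2):
    """Check h(x *1 y) == h(x) *2 h(y) for all x, y in X1."""
    return all(h[op1[x][y]] == op2[h[x]][h[y]]
               for x, y in product(X1, repeat=2))

# A toy example: the parity map from (Z6, +) onto (Z2, +).
Z6 = {x: {y: (x + y) % 6 for y in range(6)} for x in range(6)}
Z2 = {x: {y: (x + y) % 2 for y in range(2)} for x in range(2)}
parity = {x: x % 2 for x in range(6)}
print(is_homomorphism(parity, range(6), Z6, Z2))    # True
```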

Next, the concept of a homomorphism or structure-preserving map is specialized to the different kinds of structure of interest here.

A semigroup homomorphism from a semigroup \({\underline{X}_1 = (X_1, *_1)}\!\) to a semigroup \({\underline{X}_2 = (X_2, *_2)}\!\) is a mapping between the underlying sets that preserves the structure appropriate to semigroups, namely, the LOCs. This makes it a map \(h : X_1 \to X_2\!\) whose induced action on the LOCs is such that it takes every element of \(*_1\!\) to an element of \(*_2.\!\) That is:

\((x, y, z) \in *_1 ~\Rightarrow~ h((x, y, z)) = (h(x), h(y), h(z)) \in *_2.\!\)

A monoid homomorphism from a monoid \(\underline{X}_1 = (X_1, *_1, e_1)\!\) to a monoid \(\underline{X}_2 = (X_2, *_2, e_2)\!\) is a mapping between the underlying sets, \(h : X_1 \to X_2,\!\) that preserves the structure appropriate to monoids, namely, the LOCs and the identity elements. This means that the map \(h\!\) is a semigroup homomorphism from \(\underline{X}_1\!\) to \(\underline{X}_2,\!\) where these are considered as semigroups, but with the extra condition that \(h\!\) takes \(e_1\!\) to \(e_2.\!\)

A group homomorphism from a group \(\underline{X}_1 = (X_1, *_1, e_1)\!\) to a group \(\underline{X}_2 = (X_2, *_2, e_2)\!\) is a mapping between the underlying sets, \(h : X_1 \to X_2,\!\) that preserves the structure appropriate to groups, namely, the LOCs, the identity elements, and the inverse elements. This means that the map \(h\!\) is a monoid homomorphism from \(X_1\!\) to \(X_2,\!\) where these are viewed as monoids, with the extra condition that \(h(x^{-1}) = h(x)^{-1}\!\) for all \(x \in X_1.\!\) As it happens, the inverse elements are automatically preserved if the LOCs and the identity elements are, so a monoid homomorphism suffices to constitute a group homomorphism for a monoid that is also a group. To see why this is so, consider the following chain of equalities:

\(h(x) *_2 h(x^{-1}) ~=~ h(x *_1 x^{-1}) ~=~ h(e_1) ~=~ e_2.\!\)

An isomorphism is a homomorphism that is one to one and onto, or bijective. Systems that have an isomorphism between them are called isomorphic to each other and belong to the same isomorphism class. From an abstract point of view, isomorphic systems are tantamount to the same mathematical object, differing at most in their manner of presentation and the details of their representation. Usually these differences are regarded as purely notational, a mere change of names. Thus, they are seen as accidental or accessory features of the object, corresponding to different ways of grasping the objective structure that is the main interest of the study but not considered as essential parts of its ultimate constitution or even necessary to its final comprehension.

Finally, to introduce two pieces of language that are often useful: an endomorphism is a homomorphism from a system into itself, while an automorphism is an isomorphism from a system onto itself.

If nothing more succinct is available, a group can be specified by means of its operation table, usually styled either as a multiplication table or an addition table. Table 32.1 illustrates the general scheme of a group operation table. In this case the group operation, treated as a “multiplication”, is formally symbolized by a star \((*),\!\) as in \(x * y = z.\!\) In contexts where only algebraic operations are formalized it is common practice to omit the star, but when logical conjunctions (symbolized by a raised dot \({(\cdot)}\!\) or by concatenation) appear in the same context, then the star is retained for the group operation.

Another way of approaching the study or presenting the structure of a group is by means of a group representation, in particular, one that represents the group in the special form of a transformation group. This is a set of transformations acting on a concrete space of “points” or a designated set of “objects”. In providing an abstractly given group with a representation as a transformation group, one is seeking to know the group by its effects, that is, in terms of the action it induces, through the representation, on a concrete domain of objects. In the type of representation known as a regular representation, one is seeking to know the group by its effects on itself.

Tables 32.2 and 32.3 illustrate the two conceivable ways of forming a regular representation of a group \(G.\!\)

The ante-representation of \(x_i\!\) in \(G\!\) is a function from \(G\!\) to \(G\!\) that is formed by considering the effects of \(x_i\!\) on the elements of \(G\!\) when \(x_i\!\) acts in the role of the first operand of the group operation. Notating this function as \(h_1(x_i) : G \to G,\!\) the regular ante-representation of \(G\!\) is a map \(h_1 : G \to (G \to G)\!\) that is schematized in Table 32.2. Here, each of the functions \(h_1(x_i) : G \to G\!\) is represented as a set of ordered pairs of the form \((x_j ~,~ x_i * x_j).\!\)

The post-representation of \(x_i\!\) in \(G\!\) is a function from \(G\!\) to \(G\!\) that is formed by considering the effects of \(x_i\!\) on the elements of \(G\!\) when \(x_i\!\) acts in the role of the second operand of the group operation. Notating this function as \(h_2(x_i) : G \to G,\!\) the regular post-representation of \(G\!\) is a map \(h_2 : G \to (G \to G)\!\) that is schematized in Table 32.3. Here, each of the functions \(h_2(x_i) : G \to G\!\) is represented as a set of ordered pairs of the form \((x_j ~,~ x_j * x_i).\!\)


\(\text{Table 32.1} ~~ \text{Scheme of a Group Operation Table}\!\)
\(*\!\) \(x_0\!\) \(\cdots\!\) \(x_j\!\) \(\cdots\!\)
\(x_0\!\) \(x_0 * x_0\!\) \(\cdots\!\) \(x_0 * x_j\!\) \(\cdots\!\)
\(\cdots\!\) \(\cdots\!\) \(\cdots\!\) \(\cdots\!\) \(\cdots\!\)
\(x_i\!\) \(x_i * x_0\!\) \(\cdots\!\) \(x_i * x_j\!\) \(\cdots\!\)
\(\cdots\!\) \(\cdots\!\) \(\cdots\!\) \(\cdots\!\) \(\cdots\!\)


\(\text{Table 32.2} ~~ \text{Scheme of the Regular Ante-Representation}\!\)
\(\text{Element}\!\) \(\text{Function as Set of Ordered Pairs of Elements}\!\)
\(x_0\!\) \(\{\!\) \((x_0 ~,~ x_0 * x_0),\!\) \(\cdots\!\) \((x_j ~,~ x_0 * x_j),\!\) \(\cdots\!\) \(\}\!\)
\(\cdots\!\) \(\{\!\) \(\cdots\!\) \(\cdots\!\) \(\cdots\!\) \(\cdots\!\) \(\}\!\)
\(x_i\!\) \(\{\!\) \((x_0 ~,~ x_i * x_0),\!\) \(\cdots\!\) \((x_j ~,~ x_i * x_j),\!\) \(\cdots\!\) \(\}\!\)
\(\cdots\!\) \(\{\!\) \(\cdots\!\) \(\cdots\!\) \(\cdots\!\) \(\cdots\!\) \(\}\!\)


\(\text{Table 32.3} ~~ \text{Scheme of the Regular Post-Representation}\!\)
\(\text{Element}\!\) \(\text{Function as Set of Ordered Pairs of Elements}\!\)
\(x_0\!\) \(\{\!\) \((x_0 ~,~ x_0 * x_0),\!\) \(\cdots\!\) \((x_j ~,~ x_j * x_0),\!\) \(\cdots\!\) \(\}\!\)
\(\cdots\!\) \(\{\!\) \(\cdots\!\) \(\cdots\!\) \(\cdots\!\) \(\cdots\!\) \(\}\!\)
\(x_i\!\) \(\{\!\) \((x_0 ~,~ x_0 * x_i),\!\) \(\cdots\!\) \((x_j ~,~ x_j * x_i),\!\) \(\cdots\!\) \(\}\!\)
\(\cdots\!\) \(\{\!\) \(\cdots\!\) \(\cdots\!\) \(\cdots\!\) \(\cdots\!\) \(\}\!\)
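
The scheme of Tables 32.2 and 32.3 can be rendered executable in a few lines. The following Python sketch is illustrative only; the function name and the nested-dictionary table format are my own assumptions. It computes both regular representations of a finite group from its operation table, returning each representative as a set of ordered pairs, with the integers modulo 3 under addition as sample data.

```python
def regular_representations(elements, op):
    """Return (h1, h2): the ante- and post-representations, with each
    representative given as a set of ordered pairs, as in Tables 32.2 and 32.3."""
    h1 = {x: {(y, op[x][y]) for y in elements} for x in elements}  # x as first operand
    h2 = {x: {(y, op[y][x]) for y in elements} for x in elements}  # x as second operand
    return h1, h2

# A small example: the integers modulo 3 under addition.
Z3 = {x: {y: (x + y) % 3 for y in range(3)} for x in range(3)}
h1, h2 = regular_representations(range(3), Z3)
print(sorted(h1[1]))    # [(0, 1), (1, 2), (2, 0)]
print(h1 == h2)         # True, since addition mod 3 is commutative
```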


In following these maps, notice how closely one is treading in these representations to defining each element in terms of itself, but without quite going that far. There are a couple of catches that save this form of representation from falling into a “vicious circle”, that is, into a pattern of self-reference that would beg the question of a definition and vitiate its usefulness as an explanation of each group element's action. First, the regular representations do not represent that a group element is literally equal to a set of ordered pairs involving that very same group element, but only that it is mapped to something like this set. Second, careful usage would dictate that the something like that one finds in the image of a representation, being something that is specified only up to its isomorphism class, is a transformation that really acts, not on the group elements \(x_j\!\) themselves, but only on their inert tokens, inactive images, partial symbols, passing names, or transitory signs of the form \({}^{\backprime\backprime} x_j {}^{\prime\prime}.\!\)

These reservations are crucial to understanding the form of explanation that a regular representation provides, that is, what it explains and what it does not. If one is seeking an ontological explanation of what a group and its elements are, then one would have reason to object that it does no good to represent a group and its elements in terms of their actions on the group elements themselves, since one still does not know what the latter entities are. Notice that the form of this objection is reminiscent of a dilemma that is often thought to obstruct the beginning of an inquiry into inquiry. A similar pattern of knots occurs when one tries to explain the process of formalization in terms of its effects on the term formalization. In each case, the resolution of the difficulty turns on recognizing a distinction between the active and passive modes of existence that go with each nameable objective.

In order to have concrete materials available for future discussions of group theoretic issues, the remainder of this section takes up a pair of small examples, the groups of order \(4,\!\) and uses them to illustrate the chain of definitions and the forms of representation given above.

There are just two groups of order \(4,\!\) up to isomorphism. Both are abelian (commutative), but one is cyclic and the other is not. The cyclic group on \(4\!\) elements is commonly referred to as \(Z_4.\!\) (The German words Zahl for “number” and Zyklus for “cycle” together make the notation \(Z_n\!\) suggestive of the integers modulo \(n,\!\) which form a cyclic group of order \(n\!\) under the addition operation.) The non-cyclic group on \(4\!\) elements is usually called the Klein four-group and notated as \(V_4.\!\) (The \(V\!\) comes from Vierergruppe, Klein's own name for it, the German for “four-group”.)

For the sake of comparison, I give a discussion of both these groups.

The next series of Tables presents the group operations and regular representations for the groups \(V_4\!\) and \(Z_4.\!\) If a group is abelian, as both of these groups are, then its \(h_1\!\) and \(h_2\!\) representations are indistinguishable, and a single form of regular representation \({h : G \to (G \to G)}\!\) will do for both.

Table 33.1 shows the multiplication table of the group \(V_4,\!\) while Tables 33.2 and 33.3 present two versions of its regular representation. The first version, somewhat hastily, gives the functional representation of each group element as a set of ordered pairs of group elements. The second version, more circumspectly, gives the functional representative of each group element as a set of ordered pairs of element names, also referred to as objects, points, letters, or symbols.


\(\text{Table 33.1} ~~ \text{Multiplication Operation of the Group} ~ V_4\!\)
\(\cdot\!\) \(\mathrm{e}\!\) \(\mathrm{f}\!\) \(\mathrm{g}\!\) \(\mathrm{h}\!\)
\(\mathrm{e}\!\) \(\mathrm{e}\!\) \(\mathrm{f}\!\) \(\mathrm{g}\!\) \(\mathrm{h}\!\)
\(\mathrm{f}\!\) \(\mathrm{f}\!\) \(\mathrm{e}\!\) \(\mathrm{h}\!\) \(\mathrm{g}\!\)
\(\mathrm{g}\!\) \(\mathrm{g}\!\) \(\mathrm{h}\!\) \(\mathrm{e}\!\) \(\mathrm{f}\!\)
\(\mathrm{h}\!\) \(\mathrm{h}\!\) \(\mathrm{g}\!\) \(\mathrm{f}\!\) \(\mathrm{e}\!\)


\(\text{Table 33.2} ~~ \text{Regular Representation of the Group} ~ V_4\!\)
\(\text{Element}\!\) \(\text{Function as Set of Ordered Pairs of Elements}\!\)
\(\mathrm{e}\!\) \(\{\!\) \((\mathrm{e}, \mathrm{e}),\!\) \((\mathrm{f}, \mathrm{f}),\!\) \((\mathrm{g}, \mathrm{g}),\!\) \((\mathrm{h}, \mathrm{h})\!\) \(\}\!\)
\(\mathrm{f}\!\) \(\{\!\) \((\mathrm{e}, \mathrm{f}),\!\) \((\mathrm{f}, \mathrm{e}),\!\) \((\mathrm{g}, \mathrm{h}),\!\) \((\mathrm{h}, \mathrm{g})\!\) \(\}\!\)
\(\mathrm{g}\!\) \(\{\!\) \((\mathrm{e}, \mathrm{g}),\!\) \((\mathrm{f}, \mathrm{h}),\!\) \((\mathrm{g}, \mathrm{e}),\!\) \((\mathrm{h}, \mathrm{f})\!\) \(\}\!\)
\(\mathrm{h}\!\) \(\{\!\) \((\mathrm{e}, \mathrm{h}),\!\) \((\mathrm{f}, \mathrm{g}),\!\) \((\mathrm{g}, \mathrm{f}),\!\) \((\mathrm{h}, \mathrm{e})\!\) \(\}\!\)


\(\text{Table 33.3} ~~ \text{Regular Representation of the Group} ~ V_4\!\)
\(\text{Element}\!\) \(\text{Function as Set of Ordered Pairs of Symbols}\!\)
\(\mathrm{e}\!\) \(\{\!\) \(({}^{\backprime\backprime}\text{e}{}^{\prime\prime}, {}^{\backprime\backprime}\text{e}{}^{\prime\prime}),\!\) \(({}^{\backprime\backprime}\text{f}{}^{\prime\prime}, {}^{\backprime\backprime}\text{f}{}^{\prime\prime}),\!\) \(({}^{\backprime\backprime}\text{g}{}^{\prime\prime}, {}^{\backprime\backprime}\text{g}{}^{\prime\prime}),\!\) \(({}^{\backprime\backprime}\text{h}{}^{\prime\prime}, {}^{\backprime\backprime}\text{h}{}^{\prime\prime})\!\) \(\}\!\)
\(\mathrm{f}\!\) \(\{\!\) \(({}^{\backprime\backprime}\text{e}{}^{\prime\prime}, {}^{\backprime\backprime}\text{f}{}^{\prime\prime}),\!\) \(({}^{\backprime\backprime}\text{f}{}^{\prime\prime}, {}^{\backprime\backprime}\text{e}{}^{\prime\prime}),\!\) \(({}^{\backprime\backprime}\text{g}{}^{\prime\prime}, {}^{\backprime\backprime}\text{h}{}^{\prime\prime}),\!\) \(({}^{\backprime\backprime}\text{h}{}^{\prime\prime}, {}^{\backprime\backprime}\text{g}{}^{\prime\prime})\!\) \(\}\!\)
\(\mathrm{g}\!\) \(\{\!\) \(({}^{\backprime\backprime}\text{e}{}^{\prime\prime}, {}^{\backprime\backprime}\text{g}{}^{\prime\prime}),\!\) \(({}^{\backprime\backprime}\text{f}{}^{\prime\prime}, {}^{\backprime\backprime}\text{h}{}^{\prime\prime}),\!\) \(({}^{\backprime\backprime}\text{g}{}^{\prime\prime}, {}^{\backprime\backprime}\text{e}{}^{\prime\prime}),\!\) \(({}^{\backprime\backprime}\text{h}{}^{\prime\prime}, {}^{\backprime\backprime}\text{f}{}^{\prime\prime})\!\) \(\}\!\)
\(\mathrm{h}\!\) \(\{\!\) \(({}^{\backprime\backprime}\text{e}{}^{\prime\prime}, {}^{\backprime\backprime}\text{h}{}^{\prime\prime}),\!\) \(({}^{\backprime\backprime}\text{f}{}^{\prime\prime}, {}^{\backprime\backprime}\text{g}{}^{\prime\prime}),\!\) \(({}^{\backprime\backprime}\text{g}{}^{\prime\prime}, {}^{\backprime\backprime}\text{f}{}^{\prime\prime}),\!\) \(({}^{\backprime\backprime}\text{h}{}^{\prime\prime}, {}^{\backprime\backprime}\text{e}{}^{\prime\prime})\!\) \(\}\!\)
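
By way of a concrete check, and as a self-contained Python sketch under the same illustrative assumptions as before, one can transcribe Table 33.1 and recover the rows of Table 33.2 programmatically, confirming along the way that every representative permutes the four element names.

```python
# The multiplication table of V4, transcribed from Table 33.1.
V4 = {x: dict(zip("efgh", row)) for x, row in
      zip("efgh", ["efgh", "fehg", "ghef", "hgfe"])}

# Regular representation: each element as a set of ordered pairs (Table 33.2).
rep = {x: {(y, V4[x][y]) for y in "efgh"} for x in "efgh"}
print(sorted(rep["g"]))   # [('e', 'g'), ('f', 'h'), ('g', 'e'), ('h', 'f')]

# Each representative is a bijection of {e, f, g, h} onto itself.
print(all(set(V4[x].values()) == set("efgh") for x in "efgh"))   # True
```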


Tables 34.1 and 35.1 show two forms of operation table for the group \(Z_4,\!\) presenting the group, for the sake of contrast, in multiplicative and additive forms, respectively. Tables 34.2 and 35.2 give the corresponding forms of the regular representation.

The multiplicative and additive versions of what is abstractly the same group, \(Z_4,\!\) can be used to illustrate the concept of a group isomorphism.

Let the multiplicative version of \(Z_4\!\) be formalized as follows:

\(Z_4(\cdot) ~=~ \underline{X}_1 ~=~ (X_1, *_1, e_1) ~=~ ( \{1, a, b, c \}, \cdot, 1),\!\)

where \({}^{\backprime\backprime} \cdot {}^{\prime\prime}\!\) denotes the operation in Table 34.1.

Let the additive version of \(Z_4\!\) be formalized as follows:

\(Z_4(+) ~=~ \underline{X}_2 ~=~ (X_2, *_2, e_2) ~=~ ( \{0, 1, 2, 3 \}, +, 0),\!\)

where \({}^{\backprime\backprime} + {}^{\prime\prime}\!\) denotes the operation in Table 35.1.

Then the mapping \(h : X_1 \to X_2\!\) whose ordered pairs are given by:

\(h ~=~ \{ (1, 0), (a, 1), (b, 2), (c, 3) \}\!\)

constitutes an isomorphism from \(Z_4(\cdot)\!\) to \(Z_4(+).\!\)

This fact can be verified in several ways: (1) by checking that the map \(h\!\) is bijective and that \(h(x \cdot y) = h(x) + h(y)\!\) for every \(x\!\) and \(y\!\) in \(Z_4(\cdot),\!\) (2) by noting that \(h\!\) transforms the whole multiplication table for \(Z_4(\cdot)\!\) into the whole addition table for \(Z_4(+)\!\) in a one-to-one and onto fashion, or (3) by finding that both systems share some collection of properties that are definitive of the abstract group, for example, being cyclic of order \(4.\!\)
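
Method (1) lends itself to a mechanical check. The following Python sketch is offered only as an illustration: it transcribes the operation tables given below as Tables 34.1 and 35.1, encodes the mapping \(h\!\) as a dictionary, and verifies that \(h\!\) is a bijection satisfying \(h(x \cdot y) = h(x) + h(y).\!\)

```python
from itertools import product

# Z4 in multiplicative form (Table 34.1) and additive form (Table 35.1).
mult = {x: dict(zip("1abc", row)) for x, row in
        zip("1abc", ["1abc", "abc1", "bc1a", "c1ab"])}
add = {x: {y: (x + y) % 4 for y in range(4)} for x in range(4)}

h = {"1": 0, "a": 1, "b": 2, "c": 3}

bijective = len(set(h.values())) == len(h) == 4
preserves = all(h[mult[x][y]] == add[h[x]][h[y]]
                for x, y in product("1abc", repeat=2))
print(bijective and preserves)    # True: h is an isomorphism
```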


\(\text{Table 34.1} ~~ \text{Multiplicative Presentation of the Group} ~ Z_4(\cdot)~\!\)
\(\cdot\!\) \(\mathrm{1}\) \(\mathrm{a}\) \(\mathrm{b}\) \(\mathrm{c}\)
\(\mathrm{1}\) \(\mathrm{1}\) \(\mathrm{a}\) \(\mathrm{b}\) \(\mathrm{c}\)
\(\mathrm{a}\) \(\mathrm{a}\) \(\mathrm{b}\) \(\mathrm{c}\) \(\mathrm{1}\)
\(\mathrm{b}\) \(\mathrm{b}\) \(\mathrm{c}\) \(\mathrm{1}\) \(\mathrm{a}\)
\(\mathrm{c}\) \(\mathrm{c}\) \(\mathrm{1}\) \(\mathrm{a}\) \(\mathrm{b}\)


\(\text{Table 34.2} ~~ \text{Regular Representation of the Group} ~ Z_4(\cdot)\!\)
\(\text{Element}\!\) \(\text{Function as Set of Ordered Pairs of Elements}\!\)
\(\mathrm{1}\!\) \(\{\!\) \((\mathrm{1}, \mathrm{1}),\!\) \((\mathrm{a}, \mathrm{a}),\!\) \((\mathrm{b}, \mathrm{b}),\!\) \((\mathrm{c}, \mathrm{c})\!\) \(\}\!\)
\(\mathrm{a}\!\) \(\{\!\) \((\mathrm{1}, \mathrm{a}),\!\) \((\mathrm{a}, \mathrm{b}),\!\) \((\mathrm{b}, \mathrm{c}),\!\) \((\mathrm{c}, \mathrm{1})\!\) \(\}\!\)
\(\mathrm{b}\!\) \(\{\!\) \((\mathrm{1}, \mathrm{b}),\!\) \((\mathrm{a}, \mathrm{c}),\!\) \((\mathrm{b}, \mathrm{1}),\!\) \((\mathrm{c}, \mathrm{a})\!\) \(\}\!\)
\(\mathrm{c}\!\) \(\{\!\) \((\mathrm{1}, \mathrm{c}),\!\) \((\mathrm{a}, \mathrm{1}),\!\) \((\mathrm{b}, \mathrm{a}),\!\) \((\mathrm{c}, \mathrm{b})\!\) \(\}\!\)


\(\text{Table 35.1} ~~ \text{Additive Presentation of the Group} ~ Z_4(+)\!\)
\(+\!\) \(\mathrm{0}\!\) \(\mathrm{1}\!\) \(\mathrm{2}\!\) \(\mathrm{3}\!\)
\(\mathrm{0}\!\) \(\mathrm{0}\!\) \(\mathrm{1}\!\) \(\mathrm{2}\!\) \(\mathrm{3}\!\)
\(\mathrm{1}\!\) \(\mathrm{1}\!\) \(\mathrm{2}\!\) \(\mathrm{3}\!\) \(\mathrm{0}\!\)
\(\mathrm{2}\!\) \(\mathrm{2}\!\) \(\mathrm{3}\!\) \(\mathrm{0}\!\) \(\mathrm{1}\!\)
\(\mathrm{3}\!\) \(\mathrm{3}\!\) \(\mathrm{0}\!\) \(\mathrm{1}\!\) \(\mathrm{2}\!\)


\(\text{Table 35.2} ~~ \text{Regular Representation of the Group} ~ Z_4(+)\!\)
\(\text{Element}\!\) \(\text{Function as Set of Ordered Pairs of Elements}\!\)
\(\mathrm{0}\!\) \(\{\!\) \((\mathrm{0}, \mathrm{0}),\!\) \((\mathrm{1}, \mathrm{1}),\!\) \((\mathrm{2}, \mathrm{2}),\!\) \((\mathrm{3}, \mathrm{3})~\!\) \(\}\!\)
\(\mathrm{1}\!\) \(\{\!\) \((\mathrm{0}, \mathrm{1}),\!\) \((\mathrm{1}, \mathrm{2}),\!\) \((\mathrm{2}, \mathrm{3}),\!\) \((\mathrm{3}, \mathrm{0})\!\) \(\}\!\)
\(\mathrm{2}\!\) \(\{\!\) \((\mathrm{0}, \mathrm{2}),\!\) \((\mathrm{1}, \mathrm{3}),\!\) \((\mathrm{2}, \mathrm{0}),\!\) \((\mathrm{3}, \mathrm{1})\!\) \(\}\!\)
\(\mathrm{3}\!\) \(\{\!\) \((\mathrm{0}, \mathrm{3}),\!\) \((\mathrm{1}, \mathrm{0}),\!\) \((\mathrm{2}, \mathrm{1}),\!\) \((\mathrm{3}, \mathrm{2})\!\) \(\}\!\)


Standard references for the above material are:

  1. Jacobson, N., Basic Algebra I, W.H. Freeman, San Francisco, CA, 1974.
  2. Lang, S., Algebra, 2nd ed., Addison Wesley, Menlo Park, CA, 1984.
  3. Rotman, J.J., An Introduction to the Theory of Groups, 3rd ed., Allyn & Bacon, Boston, MA, 1984.

6.7. Basic Notions of Formal Language Theory

This section collects the material on formal language theory that is needed for the rest of this work.

A formal language is a countable set of expressions, each of which is a finite sequence of elements taken from a finite set of symbols. The primitive symbols that are used to generate the expressions of a formal language are collectively called its alphabet or its lexicon, depending on whether the expressions of the language are regarded on analogy with words or sentences, respectively.

So long as one considers only words or only sentences, that is, only one level of finite sequences of symbols, it does not matter essentially what the sequences are called. Unless otherwise specified, a formal language is taken by default to be a one-level formal language, containing only a single level of sequences. If one wants to consider both words and sentences, that is, finite sequences of symbols and then finite sequences of these lower level sequences, all in the same context of discussion, then one has to move up to an essentially more powerful concept, that of a two-level formal language.

Until further notice, the next part of this discussion applies only to one-level formal languages. When this project reaches the stage of dealing with higher-level formal languages, a few of the following definitions and default assumptions will need to be adjusted slightly.

It is convenient to have a general term for referring to alphabets and lexicons, indifferently, without concern for their level of construction. Therefore, any finite set \(\underline{\underline{X}}\) is described as a syntactic resource for the syntactic domain \(\underline{X},\) provided its elements can be used as syntactic primitives to construct the signs and expressions in \(\underline{X}.\) If the primitive signs in a syntactic resource are interpreted to denote primitive objects or primitive operations, then a collection of such objects or operations is described as an objective or an operational resource, as the case may be.

It is always tempting to seek analogies between formal languages and algebraic structures, and it is often very useful to do so. But if one tries to forge an analogy between the relation \(\underline{\underline{X}} ~\text{is a resource for}~ \underline{X},\) in the formal language sense, and the relation \(\underline{\underline{X}} ~\text{is a basis for}~ \underline{X},\) in the algebraic sense, then it becomes necessary to observe important differences between the two perspectives, as they are currently applied.

In formal language theory one typically fixes the syntactic resource \(\underline{\underline{X}}\) as the primary reality, that is, as the ruling parameter of discussion, and then considers each formal language \(\underline{X}\) that can be generated on \(\underline{\underline{X}}\) as a particular subset of the maximal language that is possible on \(\underline{\underline{X}}.\) This direction of approach can be contrasted with what is more usual in algebraic studies, where the generated object \(\underline{X}\) is taken as the primary reality, and a basis \(\underline{\underline{X}}\) is defined secondarily as a minimal or independent spanning set, but generally serves as only one of many possible bases.

The linguistic relation \(\underline{\underline{X}} ~\text{is a resource for}~ \underline{X}\) is thus exploited in the opposite direction from the algebraic relation \(\underline{\underline{X}} ~\text{is a basis for}~ \underline{X}.\) There does not appear to be any reason in principle why either study cannot be cast the other way around, but it has to be noted that the current practices, and the preferences that support them, dictate otherwise.

By way of a general notation, I use doubly underlined capital letters to denote finite sets taken as the syntactic resources of formal languages, and I use doubly underlined lower case letters to denote their symbols. Schematically, this appears as follows:

\(\underline{\underline{X}} ~=~ \{ \underline{\underline{x}}_1, \ldots, \underline{\underline{x}}_k \}.\)

In a formal language context, I use singly underlined capital letters to indicate the various formal languages being considered, that is, the countable sets of sequences over a given syntactic resource that are being singled out for attention, and I use singly underlined lower case letters to indicate various individual sequences in these languages. Schematically, this appears as follows:

\(\underline{X} ~=~ \{ \underline{x}_1, \ldots, \underline{x}_\ell, \ldots \}.\)

Usually, one compares different formal languages over a fixed resource, but since resources are finite it is no trouble to unite a finite number of them into a common resource. Without loss of generality, then, one typically has a fixed set \(\underline{\underline{X}}\) in mind throughout a given discussion and has to consider a variety of different formal languages that can be generated from the symbols of \(\underline{\underline{X}}.\) These sorts of considerations are aided by defining a number of formal operations on the resources \(\underline{\underline{X}}\) and the languages \(\underline{X}.\)

The \(k^\text{th}\!\) power of \(\underline{\underline{X}},\) written as \(\underline{\underline{X}}^k,\) is defined as the set of all sequences of length \(k\!\) over \(\underline{\underline{X}}.\)

\(\underline{\underline{X}}^k ~=~ \{ (u_1, \ldots, u_k) : u_i \in \underline{\underline{X}}, i = 1 ~\text{to}~ k \}.\)

By convention for the case where \(k = 0,\!\) this gives \(\underline{\underline{X}}^0 = \{ () \},\) that is, the singleton set consisting of the empty sequence. Depending on the setting, the empty sequence is referred to as the empty word or the empty sentence, and is commonly denoted by an epsilon \({}^{\backprime\backprime} \varepsilon {}^{\prime\prime}\) or a lambda \({}^{\backprime\backprime} \lambda {}^{\prime\prime}.\) In this text a variant epsilon symbol will be used for the empty sequence, \({\varepsilon = ()}.\!\) In addition, a singly underlined epsilon will be used for the language that consists of a single empty sequence, \(\underline\varepsilon = \{ \varepsilon \} = \{ () \}.\)

It is probably worth remarking at this point that all empty sequences are indistinguishable (in a one-level formal language, that is), and thus all sets that consist of a single empty sequence are identical. Consequently, \(\underline{\underline{X}}^0 = \{ () \} = \underline{\varepsilon} = \underline{\underline{Y}}^0,\) for all resources \(\underline{\underline{X}}\) and \(\underline{\underline{Y}}.\) However, the empty language \(\varnothing = \{ \}\) and the language that consists of a single empty sequence \(\underline\varepsilon = \{ \varepsilon \} = \{ () \}\) need to be distinguished from each other.

The surplus of \(\underline{\underline{X}},\) written as \(\underline{\underline{X}}^+\) and more commonly known as its positive closure, is defined as the set of all sequences of positive length over \(\underline{\underline{X}}.\)

\(\underline{\underline{X}}^+ ~=~ \bigcup_{j = 1}^\infty \underline{\underline{X}}^j ~=~ \underline{\underline{X}}^1 \cup \ldots \cup \underline{\underline{X}}^k \cup \ldots\)

The Kleene star of \(\underline{\underline{X}},\) written as \(\underline{\underline{X}}^*,\) is defined as the set of all finite sequences over \(\underline{\underline{X}}.\)

\(\underline{\underline{X}}^* ~=~ \bigcup_{j = 0}^\infty \underline{\underline{X}}^j ~=~ \underline{\underline{X}}^0 \cup \underline{\underline{X}}^+ ~=~ \underline{\underline{X}}^0 \cup \ldots \cup \underline{\underline{X}}^k \cup \ldots\)
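
Since \(\underline{\underline{X}}^+\) and \(\underline{\underline{X}}^*\) are infinite whenever the resource is nonempty, an executable illustration has to truncate them at some finite length. The following Python sketch, with function names chosen only for illustration, generates the \(k^\text{th}\!\) power exactly and the Kleene star up to a bound on sequence length.

```python
from itertools import product

def kth_power(resource, k):
    """All sequences of length k over the resource, as tuples."""
    return set(product(resource, repeat=k))

def kleene_star(resource, max_len):
    """All sequences of length 0..max_len; the full star is infinite."""
    return set().union(*(kth_power(resource, k) for k in range(max_len + 1)))

X = {"a", "b"}
print(kth_power(X, 0))            # {()} : the singleton of the empty sequence
print(len(kth_power(X, 2)))       # 4
print(len(kleene_star(X, 3)))     # 1 + 2 + 4 + 8 = 15
```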

A standard reference for the above material is:

  • Denning, P.J., Dennis, J.B., and Qualitz, J.E., Machines, Languages, and Computation, Prentice-Hall, Englewood Cliffs, NJ, 1978.

6.8. A Perspective on Computation

In this section, instead of presenting a standard foundation for computation theory, I focus on a single idea that captures the essence of the computational approach, given that the background assumptions of a formal approach are already in place, in other words, amounting to the specific difference that the CL style adds to the FL perspective.

The notion of computation that makes sense in this setting conceives it as a process that replaces signs with better signs of the same objects. For instance, a computation replaces arbitrary indications of numerical values and other formal entities with clearer and more concise signs of the same objects, ultimately resulting in the clearest and most concise signs of them, called their canonical interpretants or normal forms.

Viewed from a standpoint in the pragmatic theory of signs, computation is a process that trades a sign for a better sign of the same object. Thus, a computation is an interpretive process whose passage from sign to interpretant sign improves the indication of the object in some way. The dimensions along which signs can be compared are various, usually being described as measures of clarity, distinctness, or usability of the information conveyed, but all such measures are interpretive in character. That is, the sense in which a computation improves its signs is relative to the purpose actualized in a given moment of interpretation.

It is probably worth emphasizing this point. There need be nothing intrinsic to a sign itself that makes it better or worse than another. This is apparent from examples as simple as the sign relations \(L(\text{A})\!\) and \(L(\text{B}),\!\) where nothing intrinsic to the grammatical categories of signs makes either the nouns or the pronouns essentially better than the others in every situation. In general, a preference defined on signs need reflect nothing more than the purpose or caprice of a particular interpreter at a given moment of interpretation. Of course, one is usually interested in cases where a measure of aptness, quality, or utility can be justified on more stable and substantial grounds.

Computation adds to the bare conception of a sign relation a notion of progress, which implies in turn: (1) the dynamic notion of a temporal process taking place between signs, and (2) the evaluative notion of a utility measure rating each sign's relative virtue as a sign.

A sign process or interpretive process is hypothesized to take place in the connotative plane of a particular sign relation, constituting a temporal process or a dynamic system that is responsible for changing signs into their interpretant signs. A sign utility is a comparative measure of sign quality, rating each sign's relative virtues as a sign of a given object. Progress in a sign process means that a change taking place between signs is one that acts in concert with increasing the sign's quality of indication.

6.9. Higher Order Sign Relations : Introduction

When interpreters reflect on their own use of signs they require an appropriate technical language in which to pursue these reflections. For this they need signs that refer to sign relations, signs that refer to elements and components of sign relations, and signs that refer to properties and classes of sign relations. The orders of signs that develop as reflection evolves can be placed under the description of higher order signs, and the extended sign relations that involve them can be referred to as higher order sign relations.

Whether any forms of observation and reflection can be conducted outside the medium of language is not a question I can address here. It is apparent as a practical matter, however, that stable and sharable forms of knowledge depend on the availability of an adequate language. Accordingly, there is a relationship of practical necessity that binds the conditions for reflective interpretation to the possibility of extending sign relations through higher orders. At minimum, in addition to the signs of objects originally given, there must be signs of signs and signs of their interpretants, and each of these higher order signs requires a further occurrence of higher order interpretants to continue and complete its meaning within a higher order sign relation. In general, higher order signs can arise in a number of independent fashions, but one of the most common derivations is through the specialized devices of quotation. This establishes a contingent relation between reflection and quotation.

This entire topic, involving the relationship of reflective interpreters to the realm of higher order sign relations and the available operators for quotation, forms the subject of a recurring investigation that extends throughout the rest of this work. This section introduces only enough of the basic concepts, terminology, and technical machinery that is necessary to get the theory of higher order signs off the ground.

By way of a first definition, a higher order sign relation is a sign relation, some of whose signs are higher order signs. If an extra degree of precision is needed, higher order signs can be distinguished in a variety of different species or types, to be taken up next.

In devising a nomenclature for the required species of higher order signs, it is a good idea to generalize slightly, designing an analytic terminology that can be adapted to classify the higher order signs of arbitrary relations, not just the higher order signs of sign relations. The work of developing a more powerful vocabulary can be put to good account at a later stage of this project, when it is necessary to discuss the structural constituents of arbitrary relations and to reflect on the language that is used to discuss them. However, by way of making a gradual approach, it nonetheless helps to take up the classification of higher order signs in a couple of passes, first considering the categories of higher order signs as they apply to sign relations and then discussing how the same ideas are relevant to arbitrary relations.

Here are the species of higher order signs that can be used to discuss the structural constituents and intensional genera of sign relations:

  1. Signs that denote signs, that is, signs whose objects are signs in the same sign relation, are called higher ascent (HA) signs.
  2. Signs that denote dyadic components of elementary sign relations, that is, signs whose objects are elemental pairs or dyadic actions having any one of the forms \((o, s),\!\) \((o, i),\!\) or \((s, i),\!\) are called higher employ (HE) signs.
  3. Signs that denote elementary sign relations, that is, signs whose objects are elemental triples or triadic transactions having the form \((o, s, i),\!\) are called higher import (HI) signs.
  4. Signs that denote sign relations, that is, signs whose objects are themselves sign relations, are called higher upshot (HU) signs.
  5. Signs that denote intensional genera of sign relations, that is, signs whose objects are properties or classes of sign relations, are called higher yclept (HY) signs.

Analogous species of higher order signs can be used to discuss the structural constituents and intensional genera of arbitrary relations. In order to describe them, it is necessary to introduce a few extra notions from the theory of relations. This, in turn, occasions a recurring difficulty with the exposition that needs to be noted at this point.

The subject matters of relations, types, and functions enjoy a form of recursive involvement with one another that makes it difficult to know where to get on and where to get off the circle of explanation. As I currently understand their relationship, it can be approached in the following order:

Relations have types.
Types are functions.
Functions are relations.

In this setting, a type is a function from the places of a relation, that is, from the index set of its components, to a collection of sets that are called the domains of the relation.

When a relation is given an extensional representation as a collection of elements, these elements are called its elementary relations or its individual transactions. The type of an elementary relation is a function from an index set whose elements are called the places of the relation to a set of sets whose elements are called the domains of the relation. The arity or adicity of an elementary relation is the cardinality of this index set. In general, these cardinalities can be ranked as finite, denumerably infinite, or non-denumerable.
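
To fix the terminology, here is a minimal sketch in Python, using hypothetical names, that represents a type as a function (here, a dictionary) from the places of a relation to its domains, with the arity given by the cardinality of the index set.

```python
# A type is a function from the places of a relation (an index set)
# to a collection of sets called the domains of the relation.
O = {'A', 'B'}                           # hypothetical object domain
S = I = {'⟨A⟩', '⟨B⟩', '⟨i⟩', '⟨u⟩'}     # hypothetical sign and interpretant domains

sign_relation_type = {0: O, 1: S, 2: I}  # place -> domain

arity = len(sign_relation_type)          # the arity is the cardinality of the index set
assert arity == 3

def has_type(transaction, typ):
    """An elementary relation conforms to a type when each of its components
    lies in the domain assigned to the corresponding place."""
    return (len(transaction) == len(typ)
            and all(transaction[place] in dom for place, dom in typ.items()))

assert has_type(('A', '⟨A⟩', '⟨i⟩'), sign_relation_type)
```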

Elementary relations are also called the effects of a relation, more specifically, its maximal or total effects, which are the kinds of effects that one usually intends in the absence of further qualification. More generally, a component relation or a partial transaction of a relation is a projection of one of its elementary relations on a subset of its places.

A homogeneous relation is a relation, all of whose elementary relations have the same type. In this case, the type and the arity are properties that are defined for the relation itself. The rest of this discussion is specialized to homogeneous relations.

When the arity of a relation is a finite number \(k,\!\) then the relation is called a \(k\!\)-place relation. In this case, the elementary relations are just the \(k\!\)-tuples belonging to the relation. In the finite case, for example, a non-trivial properly partial transaction is a \(j\!\)-tuple extracted from a \(k\!\)-tuple of the relation, where \(1 < j < k.\!\) The first element of an elementary relation is called its object or relate, while the remaining elements are called its correlates.

  1. Signs that denote single correlates of an object in a relation are called higher ascent (HA) signs.
  2. Signs that denote moderate effects in a relation, that is, signs whose objects are partial transactions or \(k\!\)-tuples involving more than one place but less than the full set of places in a relation, are called higher employ (HE) signs.
  3. Signs that denote elementary relations involving all the places of a relation are called higher import (HI) signs.
  4. Signs that denote relations are called higher upshot (HU) signs.
  5. Signs that denote properties or classes of relations are called higher yclept (HY) signs.

Whenever the sense is clear, it is usually convenient to stick with the more generic terms for higher order signs and higher order sign relations, letting context determine the appropriate meaning. For the rest of this section, it is mainly the categories of higher ascent signs and higher import signs that come into play.

Inquiry into inquiry is necessary because it is an unavoidable part of the inquiry into anything else, since critical reflection on the methods employed is implicit in the task. This means that inquiry into inquiry must be able to formulate and critique alternative descriptions of inquiry in general, including itself. Thus, there are notions of entelechy, of a self-referent objective, a completion in self-description, or an end to self-actualization, that are intrinsic to the conception of inquiry, whether or not its ends-in-view are ever achieved. If inquiry, as a manner of thinking, is carried on in sign relations and is ever to be supported by computational means, then these reflections raise the issue of self-describing sign relations and self-documenting data structures.

This is where higher order sign relations come in, making it possible to formalize sign relations that describe themselves and other sign relations, and thus enabling one to conceive of inquiries that inquire into themselves and other inquiries, at least in part. It is useful to approach these topics in a couple of stages, at first, by describing sign relations that describe other sign relations, and then, by describing sign relations that describe themselves. Although the implicit aim, or naive hope, is always to make these descriptions as complete as possible, it has to be recognized that partial success is all that is likely to be realized in practice. It seems to be something between rare and impossible that a non-trivial sign relation could completely describe itself with respect to every facet of its being and in all the ways that it does in fact exist.

Nevertheless, partially self-describing sign relations and partially self-documenting data structures do arise in practice, and so it is incumbent on this inquiry to look into the question of how they usually develop. That is, how does a sign get itself interpreted in a sign relation in such a way that it acts as a partial self-description of that selfsame sign relation? There appear to be two main ways that this can happen. Occasionally, it develops through the reflective operation or insightful turn of retracting projections, that is, by recognizing that a feature attributed to others is also (or primarily) an aspect of oneself. More commonly, partially self-describing sign relations are encountered already in place, as when a higher order sign relation has signs that describe lower orders, partial aspects, or previous stages of itself.

A further reduction in the number of different kinds of signs to worry about can be achieved by means of a special technique — some may call it an “artful dodge” — for referring indifferently to the elements of a set without referring to the set itself. Under the designation of a plural indefinite reference (PIR) are included all the various ways of dealing with denominations, multiple denotations, collective references, or objective multitudes that avail themselves of this trick.

By way of definition, a sign \(q\!\) in a sign relation \(L \subseteq O \times S \times I\!\) is said to be, to constitute, or to make a plural indefinite reference (PIR) to (every element in) a set of objects, \(X \subseteq O,\!\) if and only if \(q\!\) denotes every element of \(X.\!\) This relationship can be expressed in a succinct formula by making use of one additional definition.

The denotation of \(q\!\) in \(L,\!\) written \(\mathrm{De}(q, L),\!\) is defined as follows:

\(\mathrm{De}(q, L) ~=~ \mathrm{Den}(L) \cdot q ~=~ L_{OS} \cdot q ~=~ \{ o \in O : (o, q, i) \in L, ~\text{for some}~ i \in I \}.\)

Then \(q\!\) makes a PIR to \(X\!\) in \(L\!\) if and only if \(X \subseteq \mathrm{De}(q, L).\!\) Of course, this includes the limiting case where \(X\!\) is a singleton, say \(X = \{ o \}.\!\) In this case the reference is neither plural nor indefinite, properly speaking, but \(q\!\) denotes \(o\!\) uniquely.
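
A minimal computational sketch of these definitions, assuming a sign relation is represented extensionally as a Python set of \((o, s, i)\!\) triples and using hypothetical function names:

```python
def De(q, L):
    """Denotation of the sign q in the sign relation L: the set of objects o
    such that (o, q, i) belongs to L for some interpretant i."""
    return {o for (o, s, i) in L if s == q}

def makes_PIR(q, X, L):
    """q makes a plural indefinite reference to X in L iff X is a subset of De(q, L)."""
    return set(X) <= De(q, L)

# Toy sign relation in which the sign '⟨A⟩' denotes both objects 'A1' and 'A2'.
L = {('A1', '⟨A⟩', '⟨A⟩'), ('A2', '⟨A⟩', '⟨A⟩'), ('B', '⟨B⟩', '⟨B⟩')}
assert De('⟨A⟩', L) == {'A1', 'A2'}
assert makes_PIR('⟨A⟩', {'A1', 'A2'}, L)   # a properly plural indefinite reference
assert makes_PIR('⟨B⟩', {'B'}, L)          # the limiting singleton case
```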

The proper exploitation of PIRs in sign relations makes it possible to finesse the distinction between HI signs and HU signs, in other words, to provide a ready means of translating between the two kinds of signs that preserves all the relevant information, at least, for many purposes. This is accomplished by connecting the sides of the distinction in two directions. First, a HI sign that makes a PIR to many triples of the form \((o, s, i)\!\) can be taken as tantamount to a HU sign that denotes the corresponding sign relation. Second, a HU sign that denotes a singleton sign relation can be taken as tantamount to a HI sign that denotes its single triple. The relation of one sign being “tantamount to” another is not exactly a full-fledged semantic equivalence, but it is a reasonable approximation to it, and one that serves a number of practical purposes.

In particular, it is not absolutely necessary for a sign relation to contain a HU sign in order for it to contain a description of itself or another sign relation. As long as the sign relation is “content” to maintain its reference to the object sign relation in the form of a constant name, then it suffices to use a HI sign that makes a PIR to all of its triples.

In the theory of sign relations, as in formal language theory, one tends to spend a lot of the time talking about signs as objects. Doing this requires one to have signs for denoting signs and ways of telling when a sign is being used as a sign or is just being mentioned as an object. Generally speaking, reflection on the usage of an established order of signs recruits another order of signs to denote them, and then another, and another, until a limit on one's powers of reflection is ultimately reached, and finally one is forced to conduct one's meaning in forms of interpretive practice that fail to be fully reflective in one critical respect or another. In the last resort one resigns oneself to letting the recourse of signs be guided by casually intuited inklings of their potential senses.

In this text a number of linguistic devices are used to assist the faculty of reflection, hopefully forestalling the relegation of its powers to its own natural resources for a long enough spell to observe its action. A discussion of these techniques and strategies follows.

In the declaration of higher order signs and the specification of their uses, one can employ the same terminology and technical distinctions that are found to be effective in describing sign relations. This turns the established terms for significant properties of world elements and the provisional terms for their relationships to each other to the ends of prescribing the relative orders of higher order signs and their objects. In short, the received theory of signs, however transient it may be at any given moment of inquiry, allows one to declare the absolute types and the relative roles that all of these entities are meant to take up.

For example, if I say that \(x ~\text{connotes}~ y\!\) and that \(y ~\text{denotes}~ z,\!\) then it means I imagine myself to have an interpretation or a sign relation in mind where \(x\!\) and \(y\!\) are both signs belonging to a single order of signs and \(y\!\) is a sign belonging to the next higher order of signs up from \(z,\!\) everything being relative to that particular moment of interpretation. Of course, as far as wholly arbitrary sign relations go, there is nothing to guarantee that the interpretation I think myself to have in mind at one moment can be integrated with the interpretation I think myself to have in mind at another moment, or that a just order can be founded in the end by any manner of interpretation that “just follows orders” in this way.

Ordinary quotation marks \(( {}^{\backprime\backprime} ~ {}^{\prime\prime} )\) function as an operator on pieces of text to create names for the signs or expressions enclosed in them. In doing this the quotation marks delay, defer, or interrupt the normal use of their subtended contents, interfering with the referential use of a sign or the evaluation of an expression in order to create a new sign. The use of this constructed sign is to mention the immediate contents of the quotation marks in a way that can serve thereafter to indicate these contents directly or allude to them indirectly.

In the informal context, however, quotation marks are used equivocally for several other purposes. In particular, they are frequently used to call attention to the immediate use of a sign, to stress it or redress it for a definitive, emphatic, or skeptical service, but without necessarily intending to interrupt or seriously alter its ongoing use. Furthermore, ordinary quotation marks are commonly taken so literally that they can inadvertently pose an obstacle to functional abstraction. For instance, if I try to refer to the effect of quotation as a mapping that takes signs to higher order signs, thereby attempting to define its action by means of a lambda abstraction, \(x \mapsto {}^{\backprime\backprime} x {}^{\prime\prime},\) then there are modes of IL interpretation that would read this literally as a constant map, one that sends every element of the functional domain into the single code for the letter \({}^{\backprime\backprime} x {}^{\prime\prime}.\)

For these reasons I introduce the use of raised angle brackets \(( {}^{\langle} ~ {}^{\rangle} ),\) also called “arches” or “supercilia”, to configure a form of quotation marker, but one that is subject to a more definite set of understandings about its interpretation. Namely, the arch marker denotes a function on signs that takes (the name of) a syntactic element located within it as (the name of) a functional argument and returns as its functional value the name of that syntactic element. The parenthetical operators in this statement reflect the optional readings that prevail in some cases, where the simple act of noticing a syntactic element as a functional argument is already tantamount to having a name for it. As a result, a quoting function that is designed to operate on the signs denoting and not on the objects denoted seems to do nothing at all, but merely uses up a moment of time to do it.

In IL contexts the arch quotes are construed together with their syntactic contents as forming a certain kind of term, one that achieves a naming function on syntactic elements by taking the enclosed text as a functional argument and giving a directly embedded indication of it. In this type of setting the name of a string of length \(k\!\) is a string of length \(k + 2.\!\)

In FL contexts the arch marker denotes a function that takes the literal syntactic element bounded by it as its argument and returns the name, code, annotation, Gödel number, or unique numerical identifier of that syntactic element. In this setting there need be no straightforward relationship between the size or complexity of the syntactic element and the magnitude of its numerical code or the form of its symbolic code.

In CL implementations the arch operation is intended to do exactly what the principal uses of ordinary quotes are supposed to do, except that it obeys restrictions that are necessary to make it work as a notation for a computable function on the identified syntactic domain.

One further remark on the uses of quotation marks is pertinent here. When using HA signs with high orders of complexity and depth, it is often convenient to revert to the use of ordinary quotes at the outer boundary of a quotational expression, in this way marking a return to the ordinary context of interpretation. For example, one observes the colloquial equivalence \({}^{\langle\langle\langle} x {}^{\rangle\rangle\rangle} ~=~ {}^{\backprime\backprime\langle\langle} x {}^{\rangle\rangle\prime\prime}.\)

In general, a good way to specify the meaning of a new notation is by means of a semantic equation, or a system of semantic equations, that expresses the function of the new signs in terms of familiar operations. If it is merely a matter of introducing new signs for old meanings, then this method is sufficient. In this vein, the intention and use of the “supercilious notation” for reflecting on signs could have its definition approximated in the following way.

Let \({}^{\langle} x {}^{\rangle} ~=~ {}^{\backprime\backprime} x {}^{\prime\prime}\) as signs for the object \(x,\!\) and let \({}^{\langle\langle} x {}^{\rangle\rangle} ~=~ {}^{\langle\backprime\backprime} x {}^{\prime\prime\rangle} ~=~ {}^{\backprime\backprime\langle} x {}^{\rangle\prime\prime}\) as signs for the object \({}^{\backprime\backprime} x {}^{\prime\prime},\) an object that incidentally happens to be a sign. An alternative way of putting this is to say that the members of the set \(\{ {}^{\langle} x {}^{\rangle}, {}^{\backprime\backprime} x {}^{\prime\prime} \}\) are equivalent as signs for the object \(x,\!\) while the members of the set \(\{ {}^{\langle\langle} x {}^{\rangle\rangle}, {}^{\langle\backprime\backprime} x {}^{\prime\prime\rangle}, {}^{\backprime\backprime\langle} x {}^{\rangle\prime\prime} \}\) are equivalent as signs for the sign \({}^{\backprime\backprime} x {}^{\prime\prime}.\)
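
Read as a function on signs, the arch can be approximated by the following sketch (Python, hypothetical names), in which the IL reading makes the name of a string of length \(k\!\) a string of length \(k + 2.\!\)

```python
def arch(sign: str) -> str:
    """IL-style reading of the arch: take a sign as argument and return a name
    for it by directly embedding it between arch delimiters."""
    return '⟨' + sign + '⟩'

x = 'x'
assert arch(x) == '⟨x⟩'                # a sign for the sign 'x'
assert arch(arch(x)) == '⟨⟨x⟩⟩'        # a sign for the sign '⟨x⟩'
assert len(arch(x)) == len(x) + 2      # the name of a string of length k has length k + 2
```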

6.10. Higher Order Sign Relations : Examples

In considering the higher order sign relations that stem from the examples \(L(\text{A})\!\) and \(L(\text{B}),\!\) it appears that annexing the first level of HA signs is tantamount to adjoining or instituting an auxiliary interpretive framework, one that has the semantic equations shown in Table 36.


\(\text{Table 36.} ~~ \text{Semantics for Higher Order Signs}\!\)
\(\text{Object Denoted}\!\) \(\text{Equivalent Signs}\!\)

\(\begin{matrix} \text{A} \\ \text{B} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{A} {}^{\rangle} & = & {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\ {}^{\langle} \text{B} {}^{\rangle} & = & {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} {}^{\langle\langle} \text{A} {}^{\rangle\rangle} & = & {}^{\langle\backprime\backprime} \text{A} {}^{\prime\prime\rangle} & = & {}^{\backprime\backprime\langle} \text{A} {}^{\rangle\prime\prime} \\ {}^{\langle\langle} \text{B} {}^{\rangle\rangle} & = & {}^{\langle\backprime\backprime} \text{B} {}^{\prime\prime\rangle} & = & {}^{\backprime\backprime\langle} \text{B} {}^{\rangle\prime\prime} \\ {}^{\langle\langle} \text{i} {}^{\rangle\rangle} & = & {}^{\langle\backprime\backprime} \text{i} {}^{\prime\prime\rangle} & = & {}^{\backprime\backprime\langle} \text{i} {}^{\rangle\prime\prime} \\ {}^{\langle\langle} \text{u} {}^{\rangle\rangle} & = & {}^{\langle\backprime\backprime} \text{u} {}^{\prime\prime\rangle} & = & {}^{\backprime\backprime\langle} \text{u} {}^{\rangle\prime\prime} \end{matrix}\)


However, there is an obvious problem with this method of defining new notations. It merely provides alternate signs for the same old uses. But if the original signs are ambiguous, then equating new signs to them cannot remedy the problem. Thus, it is necessary to find ways of selectively reforming the uses of the old notation in the interpretation of the new notation.

The invocation of higher order signs raises an important point, having to do with the typical ways that signs can become the objects of further signs, and the relationship that this type of semantic ascent bears to the interpretive agent's capacity for so-called “reflection”. This is a topic that will recur as the discussion develops, but a speculative foreshadowing of its character will have to serve for now.

Any object of an interpreter's experience and reasoning, no matter how vaguely and casually it initially appears, up to and including the merest appearance of a sign, is already, by virtue of these very circumstances, on its way to becoming the object of a formalized sign, so long as the signs are made available to denote it. The reason for this is rooted in each agent's capacity for reflection on its own experience and reasoning, and the critical question is only whether these transient reflections can come to constitute signs of a more permanent use.

The immediate purpose of the arch operation is to equip the text with a syntactic mechanism for constructing higher order signs, that is, signs denoting signs. But the step of reflection that the arch device marks corresponds to a definite change on the part of the interpreter, affecting the pragmatic stance or the intentional attitude that the interpreter takes up with respect to the affected signs. Accordingly, because of its connection to the interpreter's capacity for critical reflection, the arch operation, whether signified by arches or quotes, opens up a topic of wide importance to the larger question of inquiry. Unfortunately, there is much to do before this issue can be taken up in detail, and immediate concerns make it necessary to break off further discussion for now.

A general understanding of higher order signs would not depend on the special devices that are used to construct them, but would define them as any signs that behave in certain ways under interpretation, that is, as any signs that are interpreted in a particular manner, yet to be specified. A proper definition of higher order signs, including a generic description of the operations that construct them, cannot be achieved at the present stage of discussion. Doing this correctly depends on carrying out further developments in the theories of formal languages and sign relations. Until this discussion reaches that point, much of what it says about higher order signs will have to be regarded as a provisional compromise.

The development of reflection on interpretation leads to the generation of higher order signs that denote lower order signs as their objects. This process is illustrated by the following sequence of progressively higher order signs, all of which stem from a plain precursor and ultimately refer back to their initial ancestor, in this case, \(x.\!\)

\(x, ~ {}^{\langle} x {}^{\rangle}, ~ {}^{\langle\langle} x {}^{\rangle\rangle}, ~ {}^{\langle\langle\langle} x {}^{\rangle\rangle\rangle}, ~ \ldots\)

The intent of this succession, as interpreted in FL environments, is that \({}^{\langle\langle} x {}^{\rangle\rangle}\!\) denotes or refers to \({}^{\langle} x {}^{\rangle},\!\) which denotes or refers to \(x.\!\) Moreover, its computational realization, as implemented in CL environments, is that \({}^{\langle\langle} x {}^{\rangle\rangle}\!\) addresses or evaluates to \({}^{\langle} x {}^{\rangle},\!\) which addresses or evaluates to \(x.\!\)
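
A minimal CL-style sketch of this succession, assuming the evaluation step is recorded in a simple look-up table, might run as follows.

```python
# Hypothetical evaluation table: each higher order sign addresses or evaluates to
# the next lower order sign, bottoming out at the plain precursor x.
evaluates_to = {'⟨⟨⟨x⟩⟩⟩': '⟨⟨x⟩⟩', '⟨⟨x⟩⟩': '⟨x⟩', '⟨x⟩': 'x'}

def evaluate(sign):
    """Follow the chain of evaluations until a sign outside the table is reached,
    here playing the role of the initial ancestor."""
    while sign in evaluates_to:
        sign = evaluates_to[sign]
    return sign

assert evaluate('⟨⟨⟨x⟩⟩⟩') == 'x'
```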

The designations higher order and lower order are attributed to signs in a casual, local, and transitory way. At this point they signify nothing beyond the occurrence in a sign relation of a pair of triples having the form shown in Table 37.


\(\text{Table 37.} ~~ \text{Sign Relation Containing a Higher Order Sign}\!\)
\(\text{Object}\!\) \(\text{Sign}\!\) \(\text{Interpretant}\!\)

\(\begin{matrix} \ldots \\[2pt] \ldots \\[2pt] \text{s} \end{matrix}\)

\(\begin{matrix} \text{s} \\[2pt] \ldots \\[2pt] \text{t} \end{matrix}\)

\(\begin{matrix} \ldots \\[2pt] \ldots \\[2pt] \ldots \end{matrix}\)


This is all it takes to make \(\text{s}\!\) a lower order sign and \(\text{t}\!\) a higher order sign in relation to each other at the moments in question. Whether a global ordering of a more generally justifiable sort can be constructed from an arbitrary series of such purely local impressions is another matter altogether.

Nevertheless, the preceding observations do show a way to give a definition of higher order signs that does not depend on the peculiarities of quotational devices. For example, consider the previously described sequence of increasingly higher order signs stemming from the object \(x.\!\) Table 38 shows how this succession can be transcribed into the form of a sign relation. But this is formally no different from the sign relation suggested in Table 39, one whose individual signs are not constructed in any special way. Both of these representations of sign relations, if continued in a consistent manner, would have the same abstract structure. If one of them is higher order then so is the other, at least, if the attributes of order are meant to have any formally invariant meaning.


\(\text{Table 38.} ~~ \text{Sign Relation for a Succession of Higher Order Signs (1)}\!\)
\(\text{Object}\!\) \(\text{Sign}\!\) \(\text{Interpretant}\!\)

\(\begin{matrix} x \\[2pt] {}^{\langle} x {}^{\rangle} \\[2pt] {}^{\langle\langle} x {}^{\rangle\rangle} \\[2pt] \ldots \end{matrix}\)

\(\begin{matrix} {}^{\langle} x {}^{\rangle} \\[2pt] {}^{\langle\langle} x {}^{\rangle\rangle} \\[2pt] {}^{\langle\langle\langle} x {}^{\rangle\rangle\rangle} \\[2pt] \ldots \end{matrix}\)

\(\begin{matrix} \ldots \\[2pt] \ldots \\[2pt] \ldots \\[2pt] \ldots \end{matrix}\)


\(\text{Table 39.} ~~ \text{Sign Relation for a Succession of Higher Order Signs (2)}\!\)
\(\text{Object}\!\) \(\text{Sign}\!\) \(\text{Interpretant}\!\)

\(\begin{matrix} x \\[2pt] s_1 \\[2pt] s_2 \\[2pt] \ldots \end{matrix}\)

\(\begin{matrix} s_1 \\[2pt] s_2 \\[2pt] s_3 \\[2pt] \ldots \end{matrix}\)

\(\begin{matrix} \ldots \\[2pt] \ldots \\[2pt] \ldots \\[2pt] \ldots \end{matrix}\)


The rest of this section discusses the relationship between higher order signs and a concept called the reflective extension of a sign relation. Reflective extensions will be subjected to a more detailed study in a later part of this work. For now, just to see how the process works, the sign relations \(L(\text{A})\!\) and \(L(\text{B})\!\) are taken as starting points to illustrate the more common forms of reflective development.

In the most typical scenario, higher order sign relations come into being as the reflective extensions of simpler, possibly unreflective sign relations. Conversely, the incorporation of higher order signs within a sign relation leads to a larger sign relation that constitutes one of its reflective extensions. In general, there are many different ways that a reflective extension can get started and many different structures that can result.

In the initial slice of semantics presented for the sign relations \(L(\text{A})\!\) and \(L(\text{B}),\!\) the sign domain \(S\!\) is identical to the interpretant domain \(I,\!\) and this set is disjoint from the object domain \(O.\!\) In order for this discussion to develop more interesting examples of sign relations these constraints will need to be generalized. As a start in this direction, one can preserve the identification of the syntactic domain as \(S = I\!\) and contemplate ways of varying the pattern of intersection between \(S\!\) and \(O.\!\)

One direction of generalization is motivated by the desire to give interpreters a measure of “reflective capacity”. This is a property of sign relations that can be associated with the overlap of \(O\!\) and \(S\!\) and gauged by the extent to which \(S\!\) is contained in \(O.\!\) In intuitive terms, interpreters are said to have a reflective capacity to the extent that they can refer to their own signs independently of their denotations. An interpretive system with a sufficient amount of reflective capacity can support the maintenance and manipulation of textual objects like expressions and programs without necessarily having to evaluate the expressions or execute the programs.

In ordinary discourse HA signs are usually generated by means of linguistic devices for quoting pieces of text. In computational frameworks these quoting mechanisms are implemented as functions that map syntactic arguments into numerical or syntactic values. A quoting function, given a sign or expression as its single argument, needs to accomplish two things: first, to defer the reference of that sign, in other words, to inhibit, delay, or prevent the evaluation of its argument expression, and then, to exhibit or produce another sign whose object is precisely that argument expression.

The rest of this section considers the development of sign relations that have moderate capacities to reference their own signs as objects. In each case, these extensions are assumed to begin with sign relations like \(L(\text{A})\!\) and \(L(\text{B})\!\) that have disjoint sets of objects and signs and thus have no reflective capacity at the outset. The status of \(L(\text{A})\!\) and \(L(\text{B})\!\) as the reflective origins of the associated reflective developments is recalled by saying that \(L(\text{A})\!\) and \(L(\text{B})\!\) themselves are the zeroth order reflective extensions of \(L(\text{A})\!\) and \(L(\text{B}),\!\) in symbols, \(L(\text{A}) = \mathrm{Ref}^0 L(\text{A})\!\) and \(L(\text{B}) = \mathrm{Ref}^0 L(\text{B}).\!\)

The next set of Tables illustrates a few of the most common ways that sign relations can begin to develop reflective extensions. For ease of reference, Tables 40 and 41 repeat the contents of Tables 1 and 2, respectively, merely replacing ordinary quotes with arch quotes.


\(\text{Table 40.} ~~ \text{Reflective Origin} ~ \mathrm{Ref}^0 L(\text{A})\!\)
\(\text{Object}\!\) \(\text{Sign}\!\) \(\text{Interpretant}\!\)

\(\begin{matrix} \text{A} \\ \text{A} \\ \text{A} \\ \text{A} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} \text{B} \\ \text{B} \\ \text{B} \\ \text{B} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \end{matrix}\)


\(\text{Table 41.} ~~ \text{Reflective Origin} ~ \mathrm{Ref}^0 L(\text{B})\!\)
\(\text{Object}\!\) \(\text{Sign}\!\) \(\text{Interpretant}\!\)

\(\begin{matrix} \text{A} \\ \text{A} \\ \text{A} \\ \text{A} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} \text{B} \\ \text{B} \\ \text{B} \\ \text{B} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \end{matrix}\)


Tables 42 and 43 show one way that the sign relations \(L(\text{A})\!\) and \(L(\text{B})\!\) can be extended in a reflective sense through the use of quotational devices, yielding the first order reflective extensions, \(\mathrm{Ref}^1 L(\text{A})\!\) and \(\mathrm{Ref}^1 L(\text{B}).\!\) These extensions add one layer of HA signs and their objects to the sign relations \(L(\text{A})\!\) and \(L(\text{B}),\!\) respectively. The new triples specify that, for each \({}^{\langle} x {}^{\rangle}\!\) in the set \(\{ {}^{\langle} \text{A} {}^{\rangle}, {}^{\langle} \text{B} {}^{\rangle}, {}^{\langle} \text{i} {}^{\rangle}, {}^{\langle} \text{u} {}^{\rangle} \},\!\) the HA sign of the form \({}^{\langle\langle} x {}^{\rangle\rangle}\!\) connotes itself while denoting \({}^{\langle} x {}^{\rangle}.\!\)

Notice that the semantic equivalences of nouns and pronouns referring to each interpreter do not extend to semantic equivalences of their higher order signs, exactly as demanded by the literal character of quotations. Also notice that the reflective extensions of the sign relations \(L(\text{A})\!\) and \(L(\text{B})\!\) coincide in their reflective parts, since exactly the same triples were added to each set.
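
A minimal sketch of this step of reflective extension, assuming the triples of \(\mathrm{Ref}^0 L(\text{A})\!\) are represented as Python tuples and the arch is rendered by wrapping a string in arch delimiters:

```python
def arch(sign: str) -> str:
    # Higher order sign for a given sign, formed by wrapping it in arch delimiters.
    return '⟨' + sign + '⟩'

# Zeroth order reflective extension Ref^0 L(A), transcribed from Table 40.
ref0_LA = {
    ('A', '⟨A⟩', '⟨A⟩'), ('A', '⟨A⟩', '⟨i⟩'), ('A', '⟨i⟩', '⟨A⟩'), ('A', '⟨i⟩', '⟨i⟩'),
    ('B', '⟨B⟩', '⟨B⟩'), ('B', '⟨B⟩', '⟨u⟩'), ('B', '⟨u⟩', '⟨B⟩'), ('B', '⟨u⟩', '⟨u⟩'),
}

# First order reflective extension: for each sign ⟨x⟩ in the syntactic domain,
# add a triple in which the HA sign ⟨⟨x⟩⟩ denotes ⟨x⟩ and connotes itself.
syntactic_domain = {s for (o, s, i) in ref0_LA}
ref1_LA = ref0_LA | {(s, arch(s), arch(s)) for s in syntactic_domain}

assert ('⟨A⟩', '⟨⟨A⟩⟩', '⟨⟨A⟩⟩') in ref1_LA
assert len(ref1_LA) == len(ref0_LA) + 4   # one new triple for each of ⟨A⟩, ⟨B⟩, ⟨i⟩, ⟨u⟩
```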


\(\text{Table 42.} ~~ \text{Higher Ascent Sign Relation} ~ \mathrm{Ref}^1 L(\text{A})\!\)
\(\text{Object}\!\) \(\text{Sign}\!\) \(\text{Interpretant}\!\)

\(\begin{matrix} \text{A} \\ \text{A} \\ \text{A} \\ \text{A} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} \text{B} \\ \text{B} \\ \text{B} \\ \text{B} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} {}^{\langle\langle} \text{A} {}^{\rangle\rangle} \\ {}^{\langle\langle} \text{B} {}^{\rangle\rangle} \\ {}^{\langle\langle} \text{i} {}^{\rangle\rangle} \\ {}^{\langle\langle} \text{u} {}^{\rangle\rangle} \end{matrix}\)

\(\begin{matrix} {}^{\langle\langle} \text{A} {}^{\rangle\rangle} \\ {}^{\langle\langle} \text{B} {}^{\rangle\rangle} \\ {}^{\langle\langle} \text{i} {}^{\rangle\rangle} \\ {}^{\langle\langle} \text{u} {}^{\rangle\rangle} \end{matrix}\)


\(\text{Table 43.} ~~ \text{Higher Ascent Sign Relation} ~ \mathrm{Ref}^1 L(\text{B})\!\)
\(\text{Object}\!\) \(\text{Sign}\!\) \(\text{Interpretant}\!\)

\(\begin{matrix} \text{A} \\ \text{A} \\ \text{A} \\ \text{A} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} \text{B} \\ \text{B} \\ \text{B} \\ \text{B} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} {}^{\langle\langle} \text{A} {}^{\rangle\rangle} \\ {}^{\langle\langle} \text{B} {}^{\rangle\rangle} \\ {}^{\langle\langle} \text{i} {}^{\rangle\rangle} \\ {}^{\langle\langle} \text{u} {}^{\rangle\rangle} \end{matrix}\)

\(\begin{matrix} {}^{\langle\langle} \text{A} {}^{\rangle\rangle} \\ {}^{\langle\langle} \text{B} {}^{\rangle\rangle} \\ {}^{\langle\langle} \text{i} {}^{\rangle\rangle} \\ {}^{\langle\langle} \text{u} {}^{\rangle\rangle} \end{matrix}\)


There are many ways to extend sign relations in an effort to develop their reflective capacities. The implicit goal of a reflective project is to reach a condition of reflective closure, a configuration satisfying the inclusion \(S \subseteq O,\!\) where every sign is an object. It is important to note that not every process of reflective extension can achieve a reflective closure in a finite sign relation. This can only happen if there are additional equivalence relations that keep the effective orders of signs within finite bounds. As long as there are higher order signs that remain distinct from all lower order signs, the sign relation driven by a reflective process is forced to keep expanding. In particular, the process that is freely suggested by the formation of \(\mathrm{Ref}^1 L(\text{A})~\!\) and \(\mathrm{Ref}^1 L(\text{B})~\!\) cannot reach closure if it continues as indicated, without further constraints.

Tables 44 and 45 present higher import extensions of \(L(\text{A})\!\) and \(L(\text{B}),\!\) respectively. These are just higher order sign relations that add selections of higher import signs and their objects to the underlying set of triples in \(L(\text{A})\!\) and \(L(\text{B}).\!\) One way to understand these extensions is as follows. The interpreters \(\text{A}\!\) and \(\text{B}\!\) each use nouns and pronouns just as before, except that the nouns are given additional denotations that refer to the interpretive conduct of the interpreter named. In this form of development, using a noun as a canonical form that refers indifferently to all the \((o, s, i)\!\) triples of a sign relation is a pragmatic way that a sign relation can refer to itself and to other sign relations.
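
To make the construction concrete, here is a minimal sketch (Python, hypothetical names) in which each noun is given the additional denotations that let it make a plural indefinite reference to all the triples of the sign relation it names:

```python
# Sign relations L(A) and L(B), with signs rendered in arch quotes as in Tables 40 and 41.
LA = {('A', '⟨A⟩', '⟨A⟩'), ('A', '⟨A⟩', '⟨i⟩'), ('A', '⟨i⟩', '⟨A⟩'), ('A', '⟨i⟩', '⟨i⟩'),
      ('B', '⟨B⟩', '⟨B⟩'), ('B', '⟨B⟩', '⟨u⟩'), ('B', '⟨u⟩', '⟨B⟩'), ('B', '⟨u⟩', '⟨u⟩')}
LB = {('A', '⟨A⟩', '⟨A⟩'), ('A', '⟨A⟩', '⟨u⟩'), ('A', '⟨u⟩', '⟨A⟩'), ('A', '⟨u⟩', '⟨u⟩'),
      ('B', '⟨B⟩', '⟨B⟩'), ('B', '⟨B⟩', '⟨i⟩'), ('B', '⟨i⟩', '⟨B⟩'), ('B', '⟨i⟩', '⟨i⟩')}

# Higher import extension of L(A): the noun ⟨A⟩ acquires every triple of L(A) as an
# additional denotation, and the noun ⟨B⟩ acquires every triple of L(B), so that each
# noun makes a plural indefinite reference to the conduct of the interpreter it names.
hi1_LA = LA | {(t, '⟨A⟩', '⟨A⟩') for t in LA} | {(t, '⟨B⟩', '⟨B⟩') for t in LB}

denoted_by_noun_A = {o for (o, s, i) in hi1_LA if s == '⟨A⟩' and isinstance(o, tuple)}
assert denoted_by_noun_A == LA   # ⟨A⟩ makes a PIR to all the triples of L(A)
```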


\(\text{Table 44.} ~~ \text{Higher Import Sign Relation} ~ \mathrm{HI}^1 L(\text{A})\!\)
\(\text{Object}\!\) \(\text{Sign}\!\) \(\text{Interpretant}\!\)

\(\begin{matrix} \text{A} \\ \text{A} \\ \text{A} \\ \text{A} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} \text{B} \\ \text{B} \\ \text{B} \\ \text{B} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} ( & \text{A} & , & {}^{\langle} \text{A} {}^{\rangle} & , & {}^{\langle} \text{A} {}^{\rangle} & ) \\ ( & \text{A} & , & {}^{\langle} \text{A} {}^{\rangle} & , & {}^{\langle} \text{i} {}^{\rangle} & ) \\ ( & \text{A} & , & {}^{\langle} \text{i} {}^{\rangle} & , & {}^{\langle} \text{A} {}^{\rangle} & ) \\ ( & \text{A} & , & {}^{\langle} \text{i} {}^{\rangle} & , & {}^{\langle} \text{i} {}^{\rangle} & ) \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} ( & \text{B} & , & {}^{\langle} \text{B} {}^{\rangle} & , & {}^{\langle} \text{B} {}^{\rangle} & ) \\ ( & \text{B} & , & {}^{\langle} \text{B} {}^{\rangle} & , & {}^{\langle} \text{u} {}^{\rangle} & ) \\ ( & \text{B} & , & {}^{\langle} \text{u} {}^{\rangle} & , & {}^{\langle} \text{B} {}^{\rangle} & ) \\ ( & \text{B} & , & {}^{\langle} \text{u} {}^{\rangle} & , & {}^{\langle} \text{u} {}^{\rangle} & ) \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} ( & \text{A} & , & {}^{\langle} \text{A} {}^{\rangle} & , & {}^{\langle} \text{A} {}^{\rangle} & ) \\ ( & \text{A} & , & {}^{\langle} \text{A} {}^{\rangle} & , & {}^{\langle} \text{u} {}^{\rangle} & ) \\ ( & \text{A} & , & {}^{\langle} \text{u} {}^{\rangle} & , & {}^{\langle} \text{A} {}^{\rangle} & ) \\ ( & \text{A} & , & {}^{\langle} \text{u} {}^{\rangle} & , & {}^{\langle} \text{u} {}^{\rangle} & ) \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} ( & \text{B} & , & {}^{\langle} \text{B} {}^{\rangle} & , & {}^{\langle} \text{B} {}^{\rangle} & ) \\ ( & \text{B} & , & {}^{\langle} \text{B} {}^{\rangle} & , & {}^{\langle} \text{i} {}^{\rangle} & ) \\ ( & \text{B} & , & {}^{\langle} \text{i} {}^{\rangle} & , & {}^{\langle} \text{B} {}^{\rangle} & ) \\ ( & \text{B} & , & {}^{\langle} \text{i} {}^{\rangle} & , & {}^{\langle} \text{i} {}^{\rangle} & ) \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \end{matrix}\)


\(\text{Table 45.} ~~ \text{Higher Import Sign Relation} ~ \mathrm{HI}^1 L(\text{B})\!\)
\(\text{Object}\!\) \(\text{Sign}\!\) \(\text{Interpretant}\!\)

\(\begin{matrix} \text{A} \\ \text{A} \\ \text{A} \\ \text{A} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} \text{B} \\ \text{B} \\ \text{B} \\ \text{B} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} ( & \text{A} & , & {}^{\langle} \text{A} {}^{\rangle} & , & {}^{\langle} \text{A} {}^{\rangle} & ) \\ ( & \text{A} & , & {}^{\langle} \text{A} {}^{\rangle} & , & {}^{\langle} \text{i} {}^{\rangle} & ) \\ ( & \text{A} & , & {}^{\langle} \text{i} {}^{\rangle} & , & {}^{\langle} \text{A} {}^{\rangle} & ) \\ ( & \text{A} & , & {}^{\langle} \text{i} {}^{\rangle} & , & {}^{\langle} \text{i} {}^{\rangle} & ) \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} ( & \text{B} & , & {}^{\langle} \text{B} {}^{\rangle} & , & {}^{\langle} \text{B} {}^{\rangle} & ) \\ ( & \text{B} & , & {}^{\langle} \text{B} {}^{\rangle} & , & {}^{\langle} \text{u} {}^{\rangle} & ) \\ ( & \text{B} & , & {}^{\langle} \text{u} {}^{\rangle} & , & {}^{\langle} \text{B} {}^{\rangle} & ) \\ ( & \text{B} & , & {}^{\langle} \text{u} {}^{\rangle} & , & {}^{\langle} \text{u} {}^{\rangle} & ) \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} ( & \text{A} & , & {}^{\langle} \text{A} {}^{\rangle} & , & {}^{\langle} \text{A} {}^{\rangle} & ) \\ ( & \text{A} & , & {}^{\langle} \text{A} {}^{\rangle} & , & {}^{\langle} \text{u} {}^{\rangle} & ) \\ ( & \text{A} & , & {}^{\langle} \text{u} {}^{\rangle} & , & {}^{\langle} \text{A} {}^{\rangle} & ) \\ ( & \text{A} & , & {}^{\langle} \text{u} {}^{\rangle} & , & {}^{\langle} \text{u} {}^{\rangle} & ) \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} ( & \text{B} & , & {}^{\langle} \text{B} {}^{\rangle} & , & {}^{\langle} \text{B} {}^{\rangle} & ) \\ ( & \text{B} & , & {}^{\langle} \text{B} {}^{\rangle} & , & {}^{\langle} \text{i} {}^{\rangle} & ) \\ ( & \text{B} & , & {}^{\langle} \text{i} {}^{\rangle} & , & {}^{\langle} \text{B} {}^{\rangle} & ) \\ ( & \text{B} & , & {}^{\langle} \text{i} {}^{\rangle} & , & {}^{\langle} \text{i} {}^{\rangle} & ) \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \end{matrix}\)


Several important facts about the class of higher order sign relations in general are illustrated by these examples. First, the notations appearing in the object columns of \(\mathrm{HI}^1 L(\text{A})\!\) and \(\mathrm{HI}^1 L(\text{B})\!\) are not the terms that these newly extended interpreters are depicted as using to describe their objects, but the kinds of language that you and I, or other external observers, would typically make available to distinguish them. The sign relations \(L(\text{A})\!\) and \(L(\text{B}),\!\) as extended by the transactions of \(\mathrm{HI}^1 L(\text{A})\!\) and \(\mathrm{HI}^1 L(\text{B}),\!\) respectively, are still restricted to their original syntactic domain \(\{ {}^{\langle} \text{A} {}^{\rangle}, {}^{\langle} \text{B} {}^{\rangle}, {}^{\langle} \text{i} {}^{\rangle}, {}^{\langle} \text{u} {}^{\rangle} \}.\!\) This means that there need be nothing especially articulate about a HI sign relation just because it qualifies as higher order. Indeed, the sign relations \(\mathrm{HI}^1 L(\text{A})\!\) and \(\mathrm{HI}^1 L(\text{B})\!\) are not very discriminating in their descriptions of the sign relations \(L(\text{A})\!\) and \(L(\text{B}),\!\) referring to many different things under the very same signs that you and I and others would explicitly distinguish, especially in marking the distinction between an interpretive agent and any one of its individual transactions.

In practice, it does an interpreter little good to have the higher import signs for referring to triples of objects, signs, and interpretants if it does not also have the higher ascent signs for referring to each triple's syntactic portions. Consequently, the higher order sign relations that one is likely to observe in practice are typically a mixed bag, having both higher ascent and higher import sections. Moreover, the ambiguity involved in having signs that refer equivocally to simple world elements and also to complex structures formed from these ingredients would most likely be resolved by drawing additional information from context and fashioning more distinctive signs.

These reflections raise the issue of how articulate a higher order sign relation is in its depiction of its object signs and its object sign relations. For now, I can do little more than note the dimension of articulation as a feature of interest, contributing to the scale of aesthetic utility that makes some sign relations better than others for a given purpose, and serving as a drive that motivates their continuing development.

The technique illustrated here represents a general strategy, one that can be exploited to derive certain benefits of set theory without having to pay the overhead that is needed to maintain sets as abstract objects. Using an identified type of a sign as a canonical form that can refer indifferently to all the members of a set is a pragmatic way of making plural reference to the members of a set without invoking the set itself as an abstract object. Of course, it is not that one can get something for nothing by these means. One is merely banking on one's recurring investment in the setting of a certain sign relation, a particular set of elementary transactions that is taken for granted as already funded.

As a rule, it is desirable for the grammatical system that one uses to construct and interpret higher order signs, that is, signs for referring to signs as objects, to mesh in a comfortable fashion with the overall pragmatic system that one uses to assign syntactic codes to objects in general. For future reference, I call this requirement the problem of creating a conformally reflective extension (CRE) for a given sign relation. A good way to think about this task is to imagine oneself beginning with a sign relation \(L \subseteq O \times S \times I,\!\) and to consider its denotative component \(\mathrm{Den}_L = L_{OS} \subseteq O \times S.\!\) Typically one has a naming function, say \(\mathrm{Nom},\!\) that maps objects into signs:

\(\mathrm{Nom} \subseteq \mathrm{Den}_L \subseteq O \times S ~\text{such that}~ \mathrm{Nom} : O \to S.\!\)

Part of the task of making a sign relation more reflective is to extend it in ways that turn more of its signs into objects. This is the reason for creating higher order signs, which are just signs for making objects out of signs. One effect of progressive reflection is to extend the initial naming function \(\mathrm{Nom}\!\) through a succession of new naming functions \(\mathrm{Nom}',\!\) \(\mathrm{Nom}'',\!\) and so on, assigning unique names to larger allotments of the original and subsequent signs. With respect to the difficulties of construction, the hard core or adamant part of creating extended naming functions resides in the initial portion \(\mathrm{Nom}\!\) that maps objects of the “external world” to signs in the “internal world”. The subsequent task of assigning conventional names to signs is supposed to be comparatively natural and easy, perhaps on account of the nominal nature of signs themselves.

The effect of reflection on the original sign relation \(L \subseteq O \times S \times I\!\) can be analyzed as follows. Suppose that a step of reflection creates higher order signs for a subset of \(S.\!\) Then this step involves the construction of a newly extended sign relation:

\(L' \subseteq O' \times S' \times I', ~\text{where}~ O' = O \cup O_1 ~\text{and}~ S' = S \cup S_1.\!\)

In this construction \(O_1 \subseteq S\!\) is that portion of the original signs \(S\!\) for which higher order signs are created in the initial step of reflection, thereby being converted into \(O_1 \subseteq O'.\!\) The sign domain \(S\!\) is extended to a new sign domain \(S'\!\) by the addition of these higher order signs, namely, the set \(S_1.\!\) Using arch quotes, the mapping from \(O_1\!\) to \(S_1\!\) can be defined as follows:

\(\mathrm{Nom}_1 : O_1 \to S_1 ~\text{such that}~ \mathrm{Nom}_1 : x \mapsto {}^{\langle} x {}^{\rangle}.\!\)

Finally, the reflectively extended naming function \(\mathrm{Nom}' : O' \to S'\!\) is defined as \(\mathrm{Nom}' = \mathrm{Nom} \cup \mathrm{Nom}_1.\!\)
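
To fix the bookkeeping in concrete terms, the following sketch, written in Python purely for illustration, models a naming function as a finite table from objects to signs and extends it with arch-quoted names for a chosen subset of the original signs, in the manner of \(\mathrm{Nom}' = \mathrm{Nom} \cup \mathrm{Nom}_1.\!\) The toy objects, signs, and the ASCII angle brackets standing in for arch quotes are all invented for the example and are not part of the formal definition above.

# A minimal sketch, assuming a toy universe: objects 'A', 'B' and signs "A", "B", "i", "u".
# ASCII angle brackets <...> stand in for the arch quotes.

def arch(sign):
    """Form a higher order sign (a sign of a sign) by quoting."""
    return '<' + sign + '>'

# Initial naming function Nom : O -> S, a finite table from objects to signs.
Nom = {'A': '"A"', 'B': '"B"'}

# Step of reflection: take O_1 as a subset of the old signs S and give each a new name.
O_1 = ['"A"', '"B"', '"i"', '"u"']
Nom_1 = {s: arch(s) for s in O_1}      # Nom_1 : O_1 -> S_1, x |-> <x>

# Reflectively extended naming function Nom' = Nom union Nom_1.
Nom_prime = {**Nom, **Nom_1}

print(Nom_prime['"i"'])                # -> <"i">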

A few remarks are necessary to see how this way of defining a CRE can be regarded as legitimate.

In the present context an application of the arch notation, for example, \({}^{\langle} x {}^{\rangle},\!\) is read on analogy with the use of any other functional notation, for example, \(f(x),\!\) where \({}^{\backprime\backprime} f {}^{\prime\prime}\!\) is the name of a function \(f,\!\) \({}^{\backprime\backprime} f(~) {}^{\prime\prime}\!\) is the context of its application, \({}^{\backprime\backprime} x {}^{\prime\prime}\!\) is the name of an argument \(x,\!\) and where the functional abstraction \({}^{\backprime\backprime} x \mapsto f(x) {}^{\prime\prime}\!\) is just another name for the function \(f.\!\)

It is clear that some form of functional abstraction is being invoked in the above definition of \(\mathrm{Nom}_1.\!\) Otherwise, the expression \(x \mapsto {}^{\langle} x {}^{\rangle}\!\) would indicate a constant function, one that maps every \(x\!\) in its domain to the same code or sign for the letter \({}^{\backprime\backprime} x {}^{\prime\prime}.\!\) But if this is allowed, then it appears to pose a dilemma, either to invoke a more powerful concept of functional abstraction than the concept being defined, or else to attempt an improper definition of the naming function in terms of itself.

Although it appears that this form of functional abstraction is being used to define the CRE in terms of itself, trying to extend the definition of the naming function in terms of a definition that is already assumed to be available, in reality this only uses a finite function, a finite table look up, to define the naming function for an unlimited number of higher order signs.

In CL contexts, especially in the Lisp tradition, the quotation operator is recognized as an “evaluation inhibitor” and implemented as a function that maps each syntactic element into its unique numerical identifier or Gödel number. Perhaps one should pause to marvel at the fact that a form of delay, deference, and interruption akin to an inhibition should be associated with the creation of signs that refer in meaningful ways.
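
To make the point about identifiers concrete, here is a minimal sketch, assuming nothing about any particular Lisp and using Python only as a notation. It treats quotation as an interning operation: instead of being evaluated, each distinct expression is mapped to a unique numerical identifier, assigned on first encounter.

# A sketch of quotation as an "evaluation inhibitor": instead of evaluating
# an expression, intern it and return a unique numerical identifier for its text.
_intern_table = {}

def quote(expression_text):
    """Map each distinct expression to a unique identifier, assigned on first use."""
    if expression_text not in _intern_table:
        _intern_table[expression_text] = len(_intern_table)
    return _intern_table[expression_text]

print(quote('(+ 1 2)'))    # 0 : the expression's identifier, not its value 3
print(quote('(+ 1 2)'))    # 0 again : the same expression, the same code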

On reflection, though, the connection between attribution and inhibition, or acknowledgment and deference, begins to appear less remarkable, and in time it can even be understood as natural and necessary. For one thing, psychoanalytic and psychodynamic theories of mental functioning have long recognized that symbol formation and symptom formation are closely akin, being the twin founders of civilization and many of its discontents. For another thing, the following etymology can be rather instructive: The English word memory derives from the Latin memor for mindful, which is akin to the Latin mora for delay, the Greek mermera for care, and the Sanskrit smarati for he remembers. To explore the verbal complex a bit further, it merits remembering that the ideas of merit and membership, besides being connected with the due proportions, earned shares, and just deserts that are parceled out on parchment, are also tied up with the particular kind of care that is needed to take account of things part for part. (The Latin merere for earn or deserve, along with membrana for skin or parchment and memor for mindful, are all akin to the Greek merizein for divide and meros for part.) Although the voices of psychology and etymology are seldom heard at this depth in the wilderness of formal abstraction, I think it is worth heeding them on this point.

In CL environments of the Pascal variety there are several different ways that higher order signs are created. In these settings higher order signs, or signs for referring to signs as objects, can be implemented as the codes that serve as numerical identifiers of characters or the pointers that serve as accessory indices of symbolic expressions.

But not all the signs that are needed for referring to other signs can be constructed by means of quotation. Other forms of higher order signs have to be generated de novo, that is, constructed independently of previous successions and introduced directly into their appropriate orders. Among other things, this obviates the usual strategy for telling the order of a sign by counting its quota of quotation marks. Failing the chances of exploiting such a measure in absolute terms, and in the absence of a natural order for the construction of signs, the relative orders of signs can be assessed only by examining the complex network of denotative and connotative relationships that connect them, or the gaps that arise when they fail to do so.

In a CL context this often occurs when a constant is declared equal or a variable is set equal to a quoted character, as in the following sequence of Pascal expressions:

const comma = ',' ;
var x : char ;
begin x := comma end ;   { x now holds the character ',' }

In this passage, the sign “comma” is made to denote whatever it is that the sign “','” denotes, and the variable \(x\!\) is then set equal to this value.
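
For comparison, a rough Python analogue of the same passage, offered only as an illustration, makes the two levels explicit: the character itself plays the role of the value, while the numerical code returned by the built-in function ord serves as the kind of character identifier mentioned above.

comma = ','          # the constant comma denotes whatever ',' denotes
x = comma            # the variable x is set equal to this value
print(ord(comma))    # 44, the numerical code that identifies the character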

6.11. Higher Order Sign Relations : Application

Given the language in which a notation like \({}^{\backprime\backprime} \mathrm{De}(q, L) {}^{\prime\prime}\!\) makes sense, or in prospect of being given such a language, it is instructive to ask: “What must be assumed about the context of interpretation in which this language is supposed to make sense?” According to the theory of signs that is being examined here, the relevant formal aspects of that context are embodied in a particular sign relation, call it \({}^{\backprime\backprime} Q {}^{\prime\prime}.\!\) With respect to the hypothetical sign relation \(Q,\!\) commonly personified as the prospective reader or the ideal interpreter of the intended language, the denotation of the expression \({}^{\backprime\backprime} \mathrm{De}(q, L) {}^{\prime\prime}\!\) is given by:

\(\mathrm{De}( {}^{\backprime\backprime} \mathrm{De}(q, L) {}^{\prime\prime}, Q ).\!\)

If \(Q\!\) follows rules that are typical of many species of interpreters, then the value of this expression will depend on the values of the following three expressions:

\(\begin{array}{lccc} \mathrm{De}( & {}^{\backprime\backprime} \mathrm{De} {}^{\prime\prime} & , & Q) \\[6pt] \mathrm{De}( & {}^{\backprime\backprime} q {}^{\prime\prime} & , & Q) \\[6pt] \mathrm{De}( & {}^{\backprime\backprime} L {}^{\prime\prime} & , & Q) \end{array}\)

What are the roles of the signs \({}^{\backprime\backprime} \mathrm{De} {}^{\prime\prime},\!\) \({}^{\backprime\backprime} q {}^{\prime\prime},\!\) \({}^{\backprime\backprime} L {}^{\prime\prime}\!\) and what are they supposed to mean to \(Q\!\)? Evidently, \({}^{\backprime\backprime} \mathrm{De} {}^{\prime\prime}\!\) is a constant name that refers to a particular function, \({}^{\backprime\backprime} q {}^{\prime\prime}\!\) is a variable name that makes a PIR to a collection of signs, and \({}^{\backprime\backprime} L {}^{\prime\prime}\!\) is a variable name that makes a PIR to a collection of sign relations.

This is not the place to take up the possibility of an ideal, universal, or even a very comprehensive interpreter for the language indicated here, so I specialize the account to consider an interpreter \(Q_{\text{AB}} = Q(\text{A}, \text{B})\!\) that is competent to cover the initial level of reflections that arise from the dialogue of \(\text{A}\!\) and \(\text{B}.\!\)

For the interpreter \(Q_\text{AB},\!\) the sign variable \(q\!\) need only range over the syntactic domain \(S = \{ {}^{\backprime\backprime} \text{A} {}^{\prime\prime}, {}^{\backprime\backprime} \text{B} {}^{\prime\prime}, {}^{\backprime\backprime} \text{i} {}^{\prime\prime}, {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \}\!\) and the relation variable \(L\!\) need only range over the set of sign relations \(\{ L(\text{A}), L(\text{B}) \}.\!\) These requirements can be accomplished as follows:

  1. The variable name \({}^{\backprime\backprime} q {}^{\prime\prime}\) is a HA sign that makes a PIR to the elements of \(S.~\!\)
  2. The variable name \({}^{\backprime\backprime} L {}^{\prime\prime}\) is a HU sign that makes a PIR to the elements of \(\{ L(\text{A}), L(\text{B}) \}.~\!\)
  3. The constant name \({}^{\backprime\backprime} L(\text{A}) {}^{\prime\prime}\) is a HI sign that makes a PIR to the elements of \(L(\text{A}).~\!\)
  4. The constant name \({}^{\backprime\backprime} L(\text{B}) {}^{\prime\prime}\) is a HI sign that makes a PIR to the elements of \(L(\text{B}).~\!\)

This results in a higher order sign relation for \(Q_\text{AB},\!\) as shown in Table 46.


\(\text{Table 46.} ~~ \text{Higher Order Sign Relation for} ~ Q(\text{A}, \text{B})\!\)
\(\begin{array}{ccc}
\text{Object} & \text{Sign} & \text{Interpretant}
\\[8pt]
\text{A} & {}^{\langle} L {}^{\rangle} & {}^{\langle} L {}^{\rangle} \\
\text{B} & {}^{\langle} L {}^{\rangle} & {}^{\langle} L {}^{\rangle}
\\[8pt]
{}^{\langle} \text{A} {}^{\rangle} & {}^{\langle} q {}^{\rangle} & {}^{\langle} q {}^{\rangle} \\
{}^{\langle} \text{B} {}^{\rangle} & {}^{\langle} q {}^{\rangle} & {}^{\langle} q {}^{\rangle} \\
{}^{\langle} \text{i} {}^{\rangle} & {}^{\langle} q {}^{\rangle} & {}^{\langle} q {}^{\rangle} \\
{}^{\langle} \text{u} {}^{\rangle} & {}^{\langle} q {}^{\rangle} & {}^{\langle} q {}^{\rangle}
\\[8pt]
( \text{A} , {}^{\langle} \text{A} {}^{\rangle} , {}^{\langle} \text{A} {}^{\rangle} ) & {}^{\langle} \text{A} {}^{\rangle} & {}^{\langle} \text{A} {}^{\rangle} \\
( \text{A} , {}^{\langle} \text{A} {}^{\rangle} , {}^{\langle} \text{i} {}^{\rangle} ) & {}^{\langle} \text{A} {}^{\rangle} & {}^{\langle} \text{A} {}^{\rangle} \\
( \text{A} , {}^{\langle} \text{i} {}^{\rangle} , {}^{\langle} \text{A} {}^{\rangle} ) & {}^{\langle} \text{A} {}^{\rangle} & {}^{\langle} \text{A} {}^{\rangle} \\
( \text{A} , {}^{\langle} \text{i} {}^{\rangle} , {}^{\langle} \text{i} {}^{\rangle} ) & {}^{\langle} \text{A} {}^{\rangle} & {}^{\langle} \text{A} {}^{\rangle} \\
( \text{B} , {}^{\langle} \text{B} {}^{\rangle} , {}^{\langle} \text{B} {}^{\rangle} ) & {}^{\langle} \text{A} {}^{\rangle} & {}^{\langle} \text{A} {}^{\rangle} \\
( \text{B} , {}^{\langle} \text{B} {}^{\rangle} , {}^{\langle} \text{u} {}^{\rangle} ) & {}^{\langle} \text{A} {}^{\rangle} & {}^{\langle} \text{A} {}^{\rangle} \\
( \text{B} , {}^{\langle} \text{u} {}^{\rangle} , {}^{\langle} \text{B} {}^{\rangle} ) & {}^{\langle} \text{A} {}^{\rangle} & {}^{\langle} \text{A} {}^{\rangle} \\
( \text{B} , {}^{\langle} \text{u} {}^{\rangle} , {}^{\langle} \text{u} {}^{\rangle} ) & {}^{\langle} \text{A} {}^{\rangle} & {}^{\langle} \text{A} {}^{\rangle}
\\[8pt]
( \text{A} , {}^{\langle} \text{A} {}^{\rangle} , {}^{\langle} \text{A} {}^{\rangle} ) & {}^{\langle} \text{B} {}^{\rangle} & {}^{\langle} \text{B} {}^{\rangle} \\
( \text{A} , {}^{\langle} \text{A} {}^{\rangle} , {}^{\langle} \text{u} {}^{\rangle} ) & {}^{\langle} \text{B} {}^{\rangle} & {}^{\langle} \text{B} {}^{\rangle} \\
( \text{A} , {}^{\langle} \text{u} {}^{\rangle} , {}^{\langle} \text{A} {}^{\rangle} ) & {}^{\langle} \text{B} {}^{\rangle} & {}^{\langle} \text{B} {}^{\rangle} \\
( \text{A} , {}^{\langle} \text{u} {}^{\rangle} , {}^{\langle} \text{u} {}^{\rangle} ) & {}^{\langle} \text{B} {}^{\rangle} & {}^{\langle} \text{B} {}^{\rangle} \\
( \text{B} , {}^{\langle} \text{B} {}^{\rangle} , {}^{\langle} \text{B} {}^{\rangle} ) & {}^{\langle} \text{B} {}^{\rangle} & {}^{\langle} \text{B} {}^{\rangle} \\
( \text{B} , {}^{\langle} \text{B} {}^{\rangle} , {}^{\langle} \text{i} {}^{\rangle} ) & {}^{\langle} \text{B} {}^{\rangle} & {}^{\langle} \text{B} {}^{\rangle} \\
( \text{B} , {}^{\langle} \text{i} {}^{\rangle} , {}^{\langle} \text{B} {}^{\rangle} ) & {}^{\langle} \text{B} {}^{\rangle} & {}^{\langle} \text{B} {}^{\rangle} \\
( \text{B} , {}^{\langle} \text{i} {}^{\rangle} , {}^{\langle} \text{i} {}^{\rangle} ) & {}^{\langle} \text{B} {}^{\rangle} & {}^{\langle} \text{B} {}^{\rangle}
\\[8pt]
(( {}^{\langle} \text{A} {}^{\rangle} , \text{A} ) , \text{A} ) & {}^{\langle} \mathrm{De} {}^{\rangle} & {}^{\langle} \mathrm{De} {}^{\rangle} \\
(( {}^{\langle} \text{A} {}^{\rangle} , \text{B} ) , \text{A} ) & {}^{\langle} \mathrm{De} {}^{\rangle} & {}^{\langle} \mathrm{De} {}^{\rangle} \\
(( {}^{\langle} \text{B} {}^{\rangle} , \text{A} ) , \text{B} ) & {}^{\langle} \mathrm{De} {}^{\rangle} & {}^{\langle} \mathrm{De} {}^{\rangle} \\
(( {}^{\langle} \text{B} {}^{\rangle} , \text{B} ) , \text{B} ) & {}^{\langle} \mathrm{De} {}^{\rangle} & {}^{\langle} \mathrm{De} {}^{\rangle} \\
(( {}^{\langle} \text{i} {}^{\rangle} , \text{A} ) , \text{A} ) & {}^{\langle} \mathrm{De} {}^{\rangle} & {}^{\langle} \mathrm{De} {}^{\rangle} \\
(( {}^{\langle} \text{i} {}^{\rangle} , \text{B} ) , \text{B} ) & {}^{\langle} \mathrm{De} {}^{\rangle} & {}^{\langle} \mathrm{De} {}^{\rangle} \\
(( {}^{\langle} \text{u} {}^{\rangle} , \text{A} ) , \text{B} ) & {}^{\langle} \mathrm{De} {}^{\rangle} & {}^{\langle} \mathrm{De} {}^{\rangle} \\
(( {}^{\langle} \text{u} {}^{\rangle} , \text{B} ) , \text{A} ) & {}^{\langle} \mathrm{De} {}^{\rangle} & {}^{\langle} \mathrm{De} {}^{\rangle}
\end{array}\!\)


Following the manner of construction in this extremely reduced example, it is possible to see how answers to the above questions, concerning the meaning of \({}^{\backprime\backprime} \mathrm{De}(q, L) {}^{\prime\prime},\!\) might be worked out. In the present instance:

\(\begin{array}{lll} \mathrm{De} ({}^{\backprime\backprime} q {}^{\prime\prime}, Q_{\text{AB}}) & = & S \\[6pt] \mathrm{De} ({}^{\backprime\backprime} L {}^{\prime\prime}, Q_{\text{AB}}) & = & \{ L(\text{A}), L(\text{B}) \} \end{array}\)
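
These two values can be recomputed mechanically from a relational data structure. The sketch below, written in Python, uses a hypothetical fragment of a higher order sign relation, constructed only to reproduce the denotation values just stated rather than to transcribe Table 46, and defines the denotation of a sign as the set of objects paired with it in some triple.

# A hypothetical fragment of a higher order sign relation for Q_AB, built only
# to reproduce the two denotation values stated above.
S = ['"A"', '"B"', '"i"', '"u"']                       # the object signs
Q_AB = ([(s, '"q"', '"q"') for s in S]
        + [(L, '"L"', '"L"') for L in ['L(A)', 'L(B)']])   # (object, sign, interpretant)

def De(sign, Q):
    """Denotation of a sign in a sign relation: the set of objects it is paired with."""
    return {o for (o, s, i) in Q if s == sign}

print(De('"q"', Q_AB))   # {'"A"', '"B"', '"i"', '"u"'}  =  S
print(De('"L"', Q_AB))   # {'L(A)', 'L(B)'}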

6.12. Issue 1. The Status of Signs

This Section considers an issue that affects the status of signs and their mode of significance, as it appears under each of the three norms of significance. The concerns that arise with respect to this issue can be divided into two sets of questions. The first type of question has to do with the default assumptions that are made about the meanings of signs and the strategies that are used to deal with signs that fail to have meanings. The second type of question has to do with higher order signs, or signs that involve signs among their objects.

Only certain types of signs are able to make their appearance in a given medium or a particular style of text, while many others are not. But a sign is a sign by virtue of the fact that it is interpreted as a sign, and thus plays the role of a sign in a sign relation, and not of necessity because it has any special construction other than that of being construed as a sign.

The theory of formal languages, as pursued under the formal language perspective, is closely related to the theory of semigroups, as pursued under the IL perspective, in the sense that arbitrary formal languages can be studied as subsets of the semigroups that embody the primitive concatenation of linguistic symbols within their algebraic laws of composition. Thus, in staging any discussion of formal languages, the theory of semigroups is often taken for a neutral, indifferent, or undifferentiated background, but the wisdom of using this setting is contingent on understanding the distinct outlooks of the casual and formal norms of significance. What divides the two styles and their favorite subjects in practice is a certain difference in attitude toward the status and role of their subject materials. Namely, it turns on the question of whether their primitive and derived elements are valued as terminal objects in and of themselves or whether these syntactic objects and constructions are interpreted as mere signs and sundry expressions whose true value lies elsewhere.

In taking up the informal language attitude toward any mathematical system, semigroups in particular, one assumes that signs are available for denoting a class of formal objects, but the issue of how these notational matters come to be constellated is considered to be peripheral, lacking in a substantive weight of concern and enjoying a purely marginal interest.

In the discussion of formal languages the presumption of significance is shifted in the opposite direction. Signs are presumed to be innocent of meaning until it can be demonstrated otherwise. One begins with a set of primitive objects, formally called “signs”, but treated as meaningless tokens or as objects that are bare of all extraneous semantic trappings. From these simplest signs, a law of composition allows the construction of complex expressions in regular ways, but other than that anything goes, at least, at first.

A first cut taken in the space of expressions divides them into two classes: (a) the grammatical, well-formed, or meaningful, maybe, versus (b) the ungrammatical, ill-formed, or meaningless, for sure. This first bit of semantic information is usually regarded as marking a purely syntactic distinction. Typically one seeks a recursive function that computes this bit of meaningfulness as a property of its argument and thereby decides (or semi-decides) whether an arbitrary expression (string, strand, sequence) constitutes an expressive expression (word, sentence, message), or not. The means of computation is often presented in the form of various grammars or automata that can serve as acceptors or generators for the language.
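
As a minimal illustration of such a recursive decision function, the following Python sketch, with a toy language invented for the purpose, accepts exactly the balanced parenthesis strings over a two-character alphabet, and so computes the first, purely syntactic bit that separates well-formed from ill-formed expressions.

# A toy acceptor: decide whether a string over {'(', ')'} is a well-formed
# (balanced) parenthesis expression.  This is the "first bit" of meaningfulness.
def well_formed(expression):
    depth = 0
    for token in expression:
        if token == '(':
            depth += 1
        elif token == ')':
            depth -= 1
            if depth < 0:
                return False       # a closer with no matching opener
        else:
            return False           # not even in the alphabet
    return depth == 0

print(well_formed('(()())'))   # True  : grammatical
print(well_formed('(()'))      # False : ungrammatical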

Depending on one's school of thought, the syntactic bit of computation for interesting cases of natural languages is thought to be either (1) formally independent of all the more properly semantic features, or (2) heavily reliant on the construal of further bits of meaning to make its decision. Accordingly, the semantics proper for such a language ought to begin either (1) serially after or (2) concurrently while the syntactic bit is done. The first standpoint is usually described as a “declaration of syntactic independence”, while the second opinion is often called a “semantic bootstrapping hypothesis”.

Over and above both of these positions the pragmatic theory of signs poses a stronger thesis of irreducibility or non-independence that one might call a “pragmatic bootstrapping hypothesis”. Even though it is a more complex task initially to work with triadic relations themselves instead of their dyadic projections, this hypothesis suggests that the structural integrity of interesting natural languages, when taken over the long haul, may well depend on them. One part of this thesis is not a hypothesis but a fact. There do indeed exist triadic relations that cannot be reconstructed uniquely from their dyadic projections, and thus are called irreducibly triadic. The parts of the thesis that are hypothetical, and that need to be cleared up by empirical inquiry, suggest that many of the most important sign relations are irreducibly triadic, and that interesting cases of natural languages depend heavily on these kinds of sign relations for their salient properties, for example, their relevance and adaptability to the objective world, their structural integrity and internal coherence, and their learnability by human agents and other species of finitely informed creatures.

In practice, this question has little consequence for the present study, on account of the extremely simple and artificial kinds of languages that are needed to carry out its aims. If some reason develops to emulate the properties of interesting natural languages in this microcosm, then a decision about which strategy to use can be made at that time. For now it seems worthwhile to keep exploring all of the above options.

In a formal language context one begins with the imposition of a general inhibition against the notion that a specific class of signs has any meaning at all, or at least, that its elements have the meanings one is accustomed to think they do. It is significant that one does not proscribe all signs from having meaning, or else there is no point in having a discussion, and no point from which to carry on a discussion of anything at all. Therefore, the arena of formal discussion is a limited one and, except for the occasional resonance that its action induces in the surrounding discursive universe, most of the signs outside its bounds continue to be used in the habitual ways.

What can be done with the signs in question? Apparently, signs viewed as objects in the formal arena, temporarily cut off from their usual associations, treated as terminal values in themselves, and put under review to suggest explanations for themselves, can still be discussed. Doing this involves the use of other signs for denoting the signs in question. These extra signs, whose sense and use are not in question at the moment in question, are called into play as higher order signs, and it is their very meaningfulness and effectiveness that one must rely on to carry out the investigation of the lower order signs that are in question.

Detailed discussion of a set of signs in question requires the ability to classify the tokens of these signs according to their types. Doing this calls on the use of other higher order signs to denote these tokens, the transient instances of signs, and their types, the propertied classes of tokens that correspond to what is typically valued as a sign. The invocation of higher order signs can be iterated in a succession of higher orders that extends as far as one pleases, but no matter how much of this order is progressively formalized one eventually must resort to signs of such a high order that they are taken for granted as resting, for the moment, in an informal context of interpretation.

What is the sense and use of such a proceeding? Evidently, the signs in question, as a class, must present the inquirer with phenomena that are somehow simpler than, and yet convey instructive information about, the phenomenon known as the whole objective world (WOW). If their orders of complexity and perplexity are just as great as the world at large, then their investigation affords no advantage over the general empirical problem of trying to account for the WOW. If they enjoy no informative connection with the greater wonders of why the world is the way it is, and therefore fail to present a significant representation of the original question, then their isolated inquiry can serve no larger purpose in the world.

In situations like the one just described, where functions and relations on one order of arguments are clarified, defined, or explained in terms of functions and relations on another order of arguments, it is natural to understand the effort at clarification, definition, or explanation as a recursive process. What raises the potential for confusion in the given arrangement of formal and casual contexts is the circumstance that what seems natural to call the lower order arguments are being discussed in terms of what seems natural to call the higher order arguments. What is going on here? As it happens, the ordering of signs from lower order to higher order that seems obvious from the standpoint of their typical construction and their order of appearance on the stage of discussion does not reflect the measure of complexity that is relevant to the effort at recursive exposition.

The measure of complexity that is relevant to the formal exposition is the measure of doubt, uncertainty, or perplexity that one entertains about the sense and use of a sign beset by questions, whether this occurs by force of a voluntary effort to bracket its habitual senses or by dint of a puzzling event that brings its automatic uses to a halt.

It is the language being discussed that is the formal one, to be treated initially as an object, while the language that is used to carry out the discussion tries to maintain its informal viability, expecting in effect to be taken on faith as not undermining or vitiating the effort at inquiry due to unexamined flaws of its own. Nevertheless, if inquiry in general is expected to be self correcting, then a continuing series of failures to conclude inquiries by means of a given arrangement, that is, an inability to resolve uncertainties through a particular division of labor between formal and informal contexts, must lead to the grounds of attack being shifted.

In working out compromises between the formal and informal styles of usage one faces all the problems usually associated with integrating different frameworks of interpretation, but compounded by the additional factors (1) that this conflict of attitudes, or its practical importance, is seldom openly acknowledged and (2) that the frameworks in and of the negotiation to be transacted are rarely capable of being formalized, or even of being made conscious, to the same degree at the same time. These circumstances make the consequences of the underlying conflict difficult to address, and thus they continue to obstruct the desired implementation of a common computational language environment that could serve as a resource for work on both sides of the frame.

6.13. Issue 2. The Status of Sets

That the word “set” is being used indiscriminately for completely different notions and that this is the source of the apparent paradoxes of this young branch of science, that, moreover, set theory itself can no more dispense with axiomatic assumptions than can any other exact science and that these assumptions, just as in other disciplines, are subject to a certain arbitrariness, even if they lie much deeper here — I do not want to represent any of this as something new.

Julius König (1905), “On the Foundations of Set Theory and the Continuum Problem”, in Jean van Heijenoort (ed., 1967)

Set theory is not as young as it used to be, and not half as naive as it was when this statement was originally made, but the statement itself is just as apt in its application to the present scene and just as fresh in its lack of novelty as it was then. In the current setting, though, I am not so concerned with potentially different theoretical notions of a set that are represented by conventionally different axiom systems as I am with the actual diversity of practical notions that are used to deal with sets under each of the three norms of significance identified.

Even though all three norms of significance use set-theoretic constructions, the implicit theories of sets that are involved in their different uses are so varied in their assumptions and intentions that it amounts to a major source of friction between the casual and formal styles to try to pretend that the same subject is being invoked in every case. In particular, it makes a huge difference whether these sets are treated objectively, as belonging to the OF, or treated syntactically, as belonging to the IF.

In practical terms it makes all the difference in the world whether a set is viewed as a set of objects or whether it is viewed as a set of signs. The same set can be contemplated in each type of placement, but it does not always fit as well into both types of role. A set of objects is properly a part of the objective framework, and this is intended in its typical parts to model those realities whose laws and vagaries can extend outside the means of an agent's control. A set of signs is properly part of the interpretive framework, and this is constructed in its typical parts so that its variations and selections are subject to control for the ends of interpretive indication. The relevant variable is one of control, and its measure tells how well matched are the proper placement and the typical assignment of a given set.

Things referred to the objective world are not things that one expects to have much control over, at least, not at first, even though a reason for developing a language is to gain more control over events in time. Things referred to the realm of signs are things that one thinks oneself to have under control, at least, at first, even though their complexity can evolve in time beyond one's powers of oversight.

In an ordinary mathematical context, when one writes out the expression for a finite set in the form \(\{ x_1, \ldots, x_n \},\!\) one expects to see the names of objects appearing between the braces. Furthermore, even if these additional expectations are hardly ever formalized, these objects are typically expected to be the terminal objects of denotative value in the appropriate context of discussion and to inhabit a single order of objective existence. In other words, it is common to assume that all the objects named have the same type, with no relations of consequence, functional, semantic, or otherwise, obtaining among them. As soon as these assumptions are made explicit, of course, it is obvious that they do not have to be so.

In formal language contexts, when a set is taken as the alphabet or the lexicon of a formal language, then the objects named are themselves signs, but it is still only their names that are subject to appearing between the braces. Often one seeks to handle this case by saying that what really appears between the braces are signs of a sort that can suffice to represent themselves, and thus that these signs literally constitute their own names, but this is not ultimately a sensible tactic to try. As always, only the tokens of signs can appear on the page, and these come and go as the pages are turned. Although these tokens, by representing the types that encase them, partly succeed in referring to themselves, what they denote in principle is something much more abstract, general, and invariant than their own concrete, particular, and transient selves. Nevertheless, the expectation that all of the elements in the set reside at the same level of syntactic existence is still in effect.

The construction of a reflective interpretive framework demands a closer examination of these assumptions and requires a single discussion that can refer to mixed types of elements with significant relations among them.

In a formal language context one needs to be more self-conscious about the use of signs, and, after an initially painful period during which critical reflection seems to interfere with thought more than to facilitate understanding, it is hoped that the extra measure of reflection will pay off when it is time to mediate one's thinking in a computational language framework.

There are numerous devices that one can use to assist with the task of reflection. Rather than trying to divert the customary connections of informal language use and the conventional conduct of its interpretation, it is easier to introduce a collection of markedly novel signs, analogous to those already in use but whose interpretation is both free enough to be changed and controlled through a series of experimental variations and flexible enough to be altered when fitting and repaired when faulty.

If \(X = \{ x_1, \ldots, x_n \}\!\) is a set of objects under discussion, then one needs to consider several sets of signs that might be associated, element by element, with the elements of \(X.\!\)

  1. The nominal resource (nominal alphabet or nominal lexicon) for \(X\!\) is a set of signs that is notated and defined as follows:

    \(X^{\backprime\backprime\prime\prime} = \mathrm{Nom}(X) = \{ {}^{\backprime\backprime} x_1 {}^{\prime\prime}, \ldots, {}^{\backprime\backprime} x_n {}^{\prime\prime} \}.\)

    This concept is intended to capture the ordinary usage of this set of signs in one familiar context or another.

  2. The mediate resource (mediate alphabet or mediate lexicon) for \(X\!\) is a set of signs that is notated and defined as follows:

    \(X^{\langle\rangle} = \mathrm{Med}(X) = \{ {}^{\langle} x_1 {}^{\rangle}, \ldots, {}^{\langle} x_n {}^{\rangle} \}.\)

    This concept provides a middle ground between the nominal resource above and the literal resource described next.

  3. The literal resource (literal alphabet or literal lexicon) for \(X\!\) is a set of signs that is notated and defined as follows:

    \(X = \mathrm{Lit}(X) = \{ x_1, \ldots, x_n \}.\)

    This concept is intended to supply a set of signs that can be used in ways analogous to familiar usages, but which are more subject to free variation and thematic control.
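
A small sketch, written in Python with ASCII stand-ins for the raised quotes and arch quotes, shows the three resources being generated element by element from a given list of tokens; the token names are invented for the example.

# Generate the nominal, mediate, and literal resources for a toy X = {x1, x2, x3}.
# ASCII stand-ins: "..." for raised quotes, <...> for arch quotes.
X = ['x1', 'x2', 'x3']

Nom = ['"' + x + '"' for x in X]   # nominal resource:  "x1", "x2", "x3"
Med = ['<' + x + '>' for x in X]   # mediate resource:  <x1>, <x2>, <x3>
Lit = list(X)                      # literal resource: the tokens themselves, reused as signs

print(Nom, Med, Lit)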

6.14. Issue 3. The Status of Variables

Another issue on which the three styles of usage diverge most severely is the status of variables. Often this is posed as a question about the ontological status of variables, what kinds of objects they are, but it is better treated as a question about the pragmatic status of variables, what kinds of signs they are used as. In this section, I try to accommodate common practices in the use of variables in the process of building a bridge to the pragmatic perspective. The goal is to reconstruct customary ways of regarding variables within an overarching framework of sign relations, while disentangling the many confusions about the status of variables that obstruct their clear and consistent formalization.

Variables are the most problematic entities that have to be dealt with in the process of formalization, and this makes it useful to explore several different ways of approaching their treatment, either of accounting for them or explaining them away. The various tactics available for dealing with variables can be organized according to how they respond to two questions: Are variables good or bad, and what kinds of things are variables anyway? In other words:

  1. Are variables good things to have in a purified system of interpretation or a target formal system, or should variables be eliminated by the work of formalization?
  2. What sorts of things should variables be construed as?

The answers given to these questions determine several consequences. If variables are good things, things that ought to be retained in a purified formal system, then it must be possible to account for their valid uses in a sensible fashion. If variables are bad things, things that ought to be eliminated from a purified formal system, then it must be possible to “explain away” their properties and utilities in terms of more basic concepts and operations.

One approach is to eliminate variables altogether from the primitive conceptual basis of one's formalism, replacing every form of substitution with a form of application. In the abstract, this makes applications of constant operators to one another the only type of combination that needs to be considered. This is the strategy of the so-called combinator calculus.
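
For a glimpse of how that elimination works, the following sketch uses the standard K and S combinators of combinatory logic, written as curried Python functions. It is not specific to the present text, but it shows application of constant operators doing the work that variable binding would otherwise do.

# The K and S combinators, written as curried Python functions.
K = lambda x: lambda y: x                       # K x y   = x
S = lambda f: lambda g: lambda x: f(x)(g(x))    # S f g x = f x (g x)

# I = S K K behaves as the identity, so an explicit bound variable can be dispensed with.
I = S(K)(K)
print(I(42))    # 42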

If it is desired to retain a notion of variables in the formalism, and to maintain variables as objects of reference, then there are a couple of partial explanations of variables that still afford them with various measures of objective existence.

In the elemental construal of variables, a variable \(x\!\) is just an existing object \(x\!\) that is an element of a set \(X,\!\) the catch being “which element?” In spite of this lack of information, one is still permitted to write \({}^{\backprime\backprime} x \in X {}^{\prime\prime}\!\) as a syntactically well-formed expression and otherwise treat the variable name \({}^{\backprime\backprime} x {}^{\prime\prime}\!\) as a pronoun on a grammatical par with a noun. Given enough information about the contexts of usage and interpretation, this explanation of the variable \(x\!\) as an unknown object would complete itself in a determinate indication of the element intended, just as if a constant object had always been named by \({}^{\backprime\backprime} x {}^{\prime\prime}.\!\)

In the functional construal of variables, a variable is a function of unknown circumstances that results in a known range of definite values. This tactic pushes the ostensible location of the uncertainty back a bit, into the domain of a named function, but it cannot eliminate it entirely. Thus, a variable is a function \(x : X \to Y\!\) that maps a domain of unknown circumstances, or a sample space \(X,\!\) into a range \(Y\!\) of outcome values. Typically, variables of this sort come in sets of the form \(\{ x_i : X \to Y \},\!\) collectively called coordinate projections and together constituting a basis for a whole class of functions \(x : X \to Y\!\) sharing a similar type. This construal succeeds in giving each variable name \({}^{\backprime\backprime} x_i {}^{\prime\prime}\!\) an objective referent, namely, the coordinate projection \({x_i},\!\) but the explanation is partial to the extent that the domain of unknown circumstances remains to be explained. Completing this explanation of variables, to the extent that it can be accomplished, requires an account of how these unknown circumstances can be known exactly to the extent that they are in fact described, that is, in terms of their effects under the given projections.

As suggested by the whole direction of the present work, the ultimate explanation of variables is to be given by the pragmatic theory of signs, where variables are treated as a special class of signs called indices.

Because it was necessary to begin informally, I started out speaking of things called “variables” as if there really were such things, taking it for granted that a consistent concept of their existence could be formed that would substantiate the ordinary usages carried out in their name, and contemplating judgments of their worth as if it were a matter of judging existing objects rather than the very ideas of their existence, whereas it is precisely the whole question at issue whether any of these presumptions are justified. As concessions to common usage, encounters with these assumptions are probably unavoidable, but a formal approach requires one to backtrack a bit, to treat the descriptive term “variable” as nothing more substantial than a general name in common use, and to examine whether its uses can be maintained in a purely formal system. Further, each of the “variables” that is taken to fall under this term has to allow its various indications to be reconsidered in the guise of mere signs and to permit the question of their objective reference to be examined anew.

At this point, it is worth trying to apply the insights of nominalism to these questions, if only to see where they lead.

It is the general advice of nominalism not to confuse a general name with the name of a general, that is, a universal, or a property possessed in common by many individual things. To this, pragmatism adds the distinct recommendation not to confuse an individual name with the name of an individual, because a particular that seems perfectly determinate for some purposes may not be determinate enough for other purposes.

In the perspective that results from combining these two points of view, general properties and individual instances alike can take on from the start an equally provisional status as objects of discussion and thought, in the meantime treated as interpretive fictions, as mere potentials for meaning, awaiting the settlement of their reality at the end of inquiry. Meanwhile, the individual can be exactly as tentative as the general, and ultimately, the general can be precisely as real as the individual. Still, their provisional treatment as hypothetical objects of reasoning does not affect their yet to be determined status as realities. This is so because it is possible that a hypothesis hits the mark, and it remains so as long as a provisional fiction, something called a likely story on account of its origin, can still succeed in guessing the truth aright.

Unlike generals, individuals, and numerous other forms of logical and mathematical objects, whose treatment as fictions does not affect their status as realities, one way or the other, there does not seem to be any consistent way of treating variables as objects. Although each one of the elemental and the functional construals appears to work well enough when taken by itself in the appropriate context, trying to combine these two notions into a single concept of the variable can lead to the mistake of confusing a function with one of its values.

Whether one tries to account for variables or chooses to explain them away, it is still necessary to say what kinds of entities are really involved when one is using this form of speech and trying to reason with or about its terms, whether one is speaking about things described as “variables” or merely about their terms of description, whether there are really objects to be dealt with or merely signs to be dispensed with.

According to one way of understanding the term, there is no object called a “variable” unless that object is a sign, and so the name “variable name” is redundant. Variables, if they are anything at all, are analogous to numerals, not numbers, and thus they fall within the broad class of signs called identifiers, more specifically, indices. In the case of variables, the advice of nominalism, not to confuse a variable name with the name of a variable, seems to be well taken.

If the world of elements appropriate to this discussion is organized into objective and syntactic domains, then there are fundamentally just two different ways of regarding variables, as objects or as signs. One can say that a variable is a fictional object that is contrived to provide a variable name with a form of objective referent, or one can say that a variable is a sign itself, the same thing as a variable name. In the present setting, it is convenient to arrange these broad approaches to variables according to the respective norms of significance under which one finds them most often pursued.

  1. The informal language approach to the question takes the objective construal of variables as its most commonly chosen default. The informal language style that is used in ordinary mathematical discussion associates a variable with a determinate set, one that the variable is regarded as “ranging over”. As a result, this norm of significance is forced to invoke a version of set theory, usually naive, to account for its use of variables.
  2. The formal language styles are manifestly varied in their explanations of variables, since there are many ways to formalize their ordinary uses. Two of the main alternatives are: (a) formalizing the set theory that is invoked with the use of variables, and (b) formalizing the sign relations in which variables operate as indices. Since an index is a kind of sign that denotes its object by virtue of an actual connection with it, and since the nature and direction of these actual connections can vary immensely from moment to moment, a variable is an extremely flexible and adaptable kind of sign, hence its character as a “reusable sign”.
  3. The computational language styles are also legion in their approaches to variables, but they can be divided into those that eliminate variables as a primitive concept and those that retain a notion of variables in their conceptual basis.
    1. An instructive case is presented by what is the most complete working out of the computational programme, the combinator calculus. Here, the goal is to eliminate the notion of a variable altogether from the conceptual basis of a formal system. In other words, it is projected to reduce its status as a primitive concept, one that applies to symbols in the object language, and to reformulate it as a derived concept, one that is more appropriate to describing constructions in a metalanguage.
    2. In computational language contexts where variables are retained as a primitive notion, there is a form of distinction between variables and variable names, but here it takes on a different sense, being the distinction between a sign and its higher order sign. This is because a variable is conceived as a “store”, a component of state of the interpreting machine, that contains different values from time to time, while the variable name is a symbolic version of that store's address. The store when full, or the state when determinate, constitutes a form of numeral, not a number, and so it is still a sign, not the object itself. This makes the variable name in this setting a type of higher order sign.

It is not just the influence of different conventions about language use that forms the source of so much confusion. Different conventions that prevail in different contexts would generate conceptual turbulence only at their boundaries with each other, and not distribute the disturbance throughout the interiors of these contexts, as is currently the case. But there are higher order differential conventions, in other words, conventions about changing conventions, that apply without warning all throughout what is pretended to be a uniform context.

For example, suppose I make a casual reference to the following set of pronouns:

\(\{ ~ \text{I}, ~ \text{you}, ~ \text{he}, ~ \text{she}, ~ \text{we}, ~ \text{they} ~ \}.\!\)

Chances are that the reader will automatically shift to what I have called the sign convention to interpret this reference. Even without the instruction to expect a set of pronouns, it makes very little sense in this setting to think I am referring to a set of people, and so a charitable assumption about my intentions to make sense will lead to the intended interpretation.

However, suppose I make a similar reference to the following set of variables:

\(\{ x_1, \ldots, x_n \}.\!\)

In this case it is more likely that the reader will take the suggested set of variable names as though they were the names of some fictional objects called “variables”.

The rest of this section deals with the case of boolean variables, soon to be invoked in providing a functional interpretation of propositional calculus.

This discussion draws on concepts from two previous papers (Awbrey, 1989 and 1994), changing notations as needed to fit the current context. Except for a number of special sets like \(\mathbb{B},\!\) \(\mathbb{N},\!\) \(\mathbb{Z},\!\) and \(\mathbb{R},\!\) I use plain capital letters for ordinary sets, singly underlined capitals for coordinate spaces and vector spaces, and doubly underlined capitals for the alphabets and lexicons that generate formal languages and logical universes of discourse.

If \(X = \{ x_1, \ldots, x_n \}\!\) is a set of \(n\!\) elements, it is possible to construct a formal alphabet of \(n\!\) letters or a formal lexicon of \(n\!\) words corresponding to the elements of \(X\!\) and notated as follows:

\(\underline{\underline{X}} = \mathrm{Lit}(X) = \{ \underline{\underline{x_1}}, \ldots, \underline{\underline{x_n}} \}.\!\)

The set \(\underline{\underline{X}}\!\) is known in formal settings as the literal alphabet or the literal lexicon associated with \(X,\!\) but on more familiar grounds it can be called the double of \(X.\!\) Under conditions of careful interpretation, any finite set \(X\!\) can be construed as its own double, but for now it is safest to preserve the apparent distinction in roles until the sense of this double usage has become second nature.

This construction is often useful in situations where one has to deal with a set of signs \(\{ {}^{\backprime\backprime} s_1 {}^{\prime\prime}, \ldots, {}^{\backprime\backprime} s_n {}^{\prime\prime} \}\!\) with a fixed or a faulty interpretation. Here one needs a fresh set of signs \(\{ \underline{\underline{x_1}}, \ldots, \underline{\underline{x_n}} \}\!\) that can be used in ways analogous to the original, but free enough to be controlled and flexible enough to be repaired. In other words, the interpretation of the new list is subject to experimental variation, freely controllable in such a way that it can follow or assimilate the original interpretation whenever it makes sense to do so, but critically reflected and flexible enough to have its interpretation amended whenever necessary.

Interpreted on a casual basis, the set \(\underline{\underline{X}}\!\) can be treated as a list of boolean variables, or, according to another reading, as a list of boolean variable names, but both of these choices are subject to the eventual requirement of saying exactly what a “variable” is.

The overall problem about the “ontological status” of variables will also be the subject of an extended study at a later point in this project, but for now I am forced to side-step the whole issue, merely giving notice of a signal distinction that promises to yield a measure of effective advantage in finally disposing of the problem.

If a sign, as accepted and interpreted in a particular setting, has an existentially unique denotation, that is, if there exists a unique object that the sign denotes under the operative sign relation, then the sign is said to possess an EU-denotation, or to have an EU-object. When this is so, the sign is said to be eudenotational; otherwise it is said to be dysdenotational.
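
Operationally, the distinction reduces to a uniqueness test over the triples of a sign relation, as in the following Python sketch, where the sample sign relation is invented for illustration.

# A sign is eudenotational in L when it denotes exactly one object.
def eudenotational(sign, L):
    """True if there is exactly one object o with (o, sign, i) in L for some i."""
    return len({o for (o, s, i) in L if s == sign}) == 1

L = [('A', '"A"', '"A"'), ('A', '"i"', '"A"'), ('B', '"i"', '"B"')]
print(eudenotational('"A"', L))   # True  : "A" denotes only A
print(eudenotational('"i"', L))   # False : "i" denotes both A and B, so it is dysdenotational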

Using the distinction accorded to eudenotational signs, the issue about the ontological status of variables can be illustrated as turning on two different acceptations of the list \(X = \{ x_1, \ldots, x_n \}.\!\)

  1. The natural (or naive) acceptation is for a reader to interpret the list as referring to a set of objects, in effect, to pass without hesitation from impressions of the characters \({}^{\backprime\backprime} x_1 {}^{\prime\prime}, \ldots, {}^{\backprime\backprime} x_n {}^{\prime\prime}\!\) to thoughts of their respective EU-objects \(x_1, \ldots, x_n,\!\) all taken for granted to exist uniquely. The whole set of interpretive assumptions that go into this acceptation will be referred to as the object convention.
  2. The reflective (or critical) acceptation is to see the list before all else as a list of signs, each of which may or may not have an EU-object. This is the attitude that must be taken in formal language theory and in any setting where computational constraints on interpretation are being contemplated. In these contexts it cannot be assumed without question that every sign, whose participation in a denotation relation would have to be indicated by a recursive function and implemented by an effective program, does in fact have an existential denotation, much less a unique object. The entire body of implicit assumptions that go to make up this acceptation, although they operate more like interpretive suspicions than automatic dispositions, will be referred to as the sign convention.

In the present context, I can answer questions about the ontology of a “variable” by saying that each variable \(x_i\!\) is a kind of a sign, in the boolean case capable of denoting an element of \({\mathbb{B} = \{ 0, 1 \}}\!\) as its object, with the actual value depending on the interpretation of the moment. Note that \(x_i\!\) is a sign, and that \({}^{\backprime\backprime} x_i {}^{\prime\prime}\!\) is another sign that denotes it. This acceptation of the list \(X = \{ x_i \}\!\) corresponds to what was just called the sign convention.

In a context where all the signs that ought to have EU-objects are in fact safely assured to do so, then it is usually less bothersome to assume the object convention. Otherwise, discussion must resort to the less natural but more careful sign convention. This convention is only “artificial” in the sense that it recalls the artifactual nature and the instrumental purpose of signs, and does nothing more out of the way than to call an implement “an implement”.

I make one more remark to emphasize the importance of this issue, and then return to the main discussion. Even though there is no great difficulty in conceiving the sign \({}^{\backprime\backprime} x_i {}^{\prime\prime}\!\) to be interpreted as denoting different types of objects in different contexts, it is more of a problem to imagine that the same object \(x_i\!\) can literally be both a value (in \(\mathbb{B}\!\)) and a function (from \(\mathbb{B}^n\!\) to \(\mathbb{B}\!\)).

In the customary fashion, the name \({}^{\backprime\backprime} x_i {}^{\prime\prime}\!\) of the variable \(x_i\!\) is flexibly interpreted to serve two additional roles. In algebraic and geometric contexts \({}^{\backprime\backprime} x_i {}^{\prime\prime}\!\) is taken to name the \(i^\text{th}\!\) coordinate function \(\underline{\underline{x_i}} : \mathbb{B}^n \to \mathbb{B}.\!\) In logical contexts \({}^{\backprime\backprime} x_i {}^{\prime\prime}\!\) serves to name the \(i^\text{th}\!\) basic property or simple proposition, also called \({}^{\backprime\backprime} \underline{\underline{x_i}} {}^{\prime\prime},\!\) that goes into the construction of a propositional universe of discourse, in effect, becoming one of the sentence letters of a truth table and being used to label one of the simple enclosures of a venn diagram.

Rationalizing the usage of boolean variables to represent propositional features and functions in this manner, I can now discuss these concepts in greater detail, introducing additional notation along the way.

  1. The sign \({}^{\backprime\backprime} x_i {}^{\prime\prime},\!\) appearing in the contextual frame \({}^{\backprime\backprime} \underline{\underline{\,\cdot\,}} : \mathbb{B}^n \to \mathbb{B} {}^{\prime\prime},\!\) or interpreted as belonging to that frame, denotes the \(i^\text{th}\!\) coordinate function \(\underline{\underline{x_i}} : \mathbb{B}^n \to \mathbb{B}.\) The entire collection of coordinate maps in \({\underline{\underline{X}} = \{ \underline{\underline{x_i}} \}}\!\) contributes to the definition of the coordinate space or vector space \(\underline{X} : \mathbb{B}^n,\!\) notated as follows:

    \(\underline{X} = \langle \underline{\underline{X}} \rangle = \langle \underline{\underline{x_1}}, \ldots, \underline{\underline{x_n}} \rangle = \{ (\underline{\underline{x_1}}, \ldots, \underline{\underline{x_n}}) \} : \mathbb{B}^n.\!\)

    Associated with the coordinate space \(\underline{X}\!\) are various families of boolean-valued functions \(f : \underline{X} \to \mathbb{B}.\!\)

    1. The set of all functions \(f : \underline{X} \to \mathbb{B}\!\) has a cardinality of \(2^{2^n}\!\) (see the computational check following this list) and is denoted as follows:

      \(\underline{X}^{\to} = (\underline{X} \to \mathbb{B}) = \{ f : \underline{X} \to \mathbb{B} \}.\!\)

    2. The set of linear functions \(f : \underline{X} \to \mathbb{B}\!\) has a cardinality of \(2^n\!\) and is known as the dual space \(\underline{X}^{*}\!\) in vector space contexts. In formal language contexts, in order to avoid conflicts with the use of the kleene star operator, it needs to be given an alternate notation:

      \(\underline{X}^{\oplus\!\to} = (\underline{X} ~\oplus\!\!\to \mathbb{B}) = \{ f : \underline{X} ~\oplus\!\!\to \mathbb{B} \}.\!\)

    3. The set of positive functions \(f : \underline{X} \to \mathbb{B}\!\) has a cardinality of \(2^n\!\) and is notated as follows:

      \(\underline{X}^{\otimes\!\to} = (\underline{X} ~\otimes\!\!\to \mathbb{B}) = \{ f : \underline{X} ~\otimes\!\!\to \mathbb{B} \}.\!\)

    4. The set of singular functions \(f : \underline{X} \to \mathbb{B}\!\) has a cardinality of \(2^n\!\) and is notated as follows:

      \(\underline{X}^{\odot\!\to} = (\underline{X} ~\odot\!\!\to \mathbb{B}) = \{ f : \underline{X} ~\odot\!\!\to \mathbb{B} \}.\!\)

    5. The set of coordinate functions, also referred to as the set of basic or simple functions, has a cardinality of \(n\!\) and is denoted in the following ways:

      \(\underline{\underline{X}} = (\underline{X} ~\circ\!\!\to \mathbb{B}) = \{ f : \underline{X} ~\circ\!\!\to \mathbb{B} \} = \{ \underline{\underline{x_i}} : \underline{X} \to \mathbb{B} \}.\!\)

  2. The sign \({}^{\backprime\backprime} x_i {}^{\prime\prime},\!\) read or understood in a propositional context, can be interpreted as denoting one of the \(n\!\) features, qualities, basic properties, or simple propositions that go to define the \(n\!\)-dimensional universe of discourse \(X^\circ,\!\) also notated as follows:

    \(X^\circ = [X] = [x_1, \ldots, x_n] = (X, X^\to) : \mathbb{B}^n +\!\to \mathbb{B}.\!\)
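
As a check on the cardinalities cited in the list above, the following minimal Python sketch enumerates the families over a small basis. It is not part of the original notation: it takes a linear function to be an exclusive disjunction (mod 2 sum) of a subset of the coordinates, a positive function to be a conjunction of a subset, and a singular function to be one that is true at exactly one cell of \(\mathbb{B}^n\) — readings assumed here because they are consistent with the stated counts — and all names in the code are hypothetical.

```python
from itertools import product

def families(n):
    """Enumerate (as truth-table tuples) the families of functions f : B^n -> B."""
    cells = list(product((0, 1), repeat=n))      # the 2^n cells of B^n
    subsets = list(product((0, 1), repeat=n))    # indicator vectors for subsets of coordinates

    # Linear: exclusive disjunction (sum mod 2) of a subset of coordinates.
    linear = {tuple(sum(s[i] * c[i] for i in range(n)) % 2 for c in cells) for s in subsets}

    # Positive: conjunction of a subset of coordinates (empty conjunction = constant 1).
    positive = {tuple(int(all(c[i] for i in range(n) if s[i])) for c in cells) for s in subsets}

    # Singular: true at exactly one cell of B^n.
    singular = {tuple(int(c == point) for c in cells) for point in cells}

    # Coordinate (basic, simple) functions.
    coords = {tuple(c[i] for c in cells) for i in range(n)}

    # All functions f : B^n -> B.
    everything = set(product((0, 1), repeat=2 ** n))

    return everything, linear, positive, singular, coords

if __name__ == "__main__":
    for n in (1, 2, 3):
        allf, lin, pos, sing, coor = families(n)
        print(n, len(allf), len(lin), len(pos), len(sing), len(coor))
        # expected: 2**2**n, 2**n, 2**n, 2**n, n
```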

6.15. Propositional Calculus

The order of reasoning called propositional logic, as it is pursued from various perspectives, concerns itself with three domains of objects, with all three domains having analogous structures in the relationships of their objects to each other. There is a domain of logical objects called properties or propositions, a domain of functional objects called binary-, boolean-, or truth-valued functions, and a domain of geometric objects called regions or subsets of the relevant universe of discourse. Each domain of objects needs a domain of signs to refer to its elements, but if one's interest lies mainly in referring to the common aspects of structure exhibited by these domains, then it serves to maintain a single notation, variously interpreted for all three domains.

The first order of business is to comment on the logical significance of the rhetorical distinctions that appear to prevail among these objects. My reason for introducing these distinctions is not to multiply the number of entities beyond necessity but merely to summarize the variety of entities that have been used historically, to figure out a series of conversions between them, and to integrate suitable analogues of them within a unified system.

For many purposes the distinction between a property and a proposition does not affect the structural aspects of the domains being considered. Both properties and propositions are tantamount to fictional objects, made up to supply general signs with singular denotations, and serving as indirect ways to explain the plural indefinite references (PIRs) of general signs to the multitudes of their ultimately denoted objects. A property is signified by a sign called a term that achieves by a form of indirection a PIR to all the elements in a class of things. A proposition is signified by a sign called a sentence that achieves by a form of indirection a PIR to all the elements in a class of situations. But things are any objects of discussion and thought, in other words, a perfectly general category, and situations are just special cases of these things.

There is still something left to the logical distinction between properties and propositions, but it is largely immaterial to the order of reasoning that is found reflected in propositional logic. When it is useful to emphasize their commonalities, properties and propositions can both be referred to as Props. As a handle on the aspects of structure that are shared between these two domains and as a mechanism for ignoring irrelevant distinctions, it also helps to have a single acronym, DOP, that can stand indifferently for a domain of properties or a domain of propositions.

Because a Prop is introduced as an intermediate object of reference for a general sign, it factors a PIR of a general sign across two stages, the first appearing as a reference of a general sign to a singular Prop, and the second appearing as an application of a Prop to its proper objects. This affords a point of articulation that serves to unify and explain the manifold of references involved in a PIR, but it requires a distinction to be fashioned between the intermediate objects, whether real or invented, and the original, further, or ultimate objects of a general sign.

Next, it is necessary to consider the stylistic differences among the logical, functional, and geometric conceptions of propositional logic. Logically, a domain of properties or propositions is known by the axioms it is subject to. Concretely, one thinks of a particular property or proposition as applying to the things or situations it is true of. With the synthesis just indicated, this can be expressed in a unified form: In abstract logical terms, a DOP is known by the axioms to which it is subject. In concrete functional or geometric terms, a particular element of a DOP is known by the things of which it is true.

With the appropriate correspondences between these three domains in mind, the general term proposition can be interpreted in a flexible manner to cover logical, functional, and geometric types of objects. Thus, a locution like \({}^{\backprime\backprime} \text{the proposition}~ F {}^{\prime\prime}\!\) can be interpreted in three ways: (1) literally, to denote a logical proposition, (2) functionally, to denote a mapping from a space \(X\!\) of propertied or proposed objects to the domain \({\mathbb{B} = \{ 0, 1 \}}\!\) of truth values, and (3) geometrically, to denote the so-called fiber of truth \(F^{-1}(1)\!\) as a region or a subset of \(X.\!\) For all of these reasons, it is desirable to set up a suitably flexible interpretive framework for propositional logic, where an object introduced as a logical proposition \(F\!\) can be recast as a boolean function \(F : X \to \mathbb{B},\!\) and understood to indicate the region of the space \(X\!\) that is ruled by \(F.\!\)
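
As a concrete, if schematic, illustration of the functional and geometric readings just described, the following sketch codes a two-feature universe, a proposition as a map \(F : X \to \mathbb{B},\) and its fiber of truth \(F^{-1}(1)\) as a region of \(X.\) The universe, the features, and the particular proposition chosen are hypothetical and serve only to fix the pattern.

```python
from itertools import product

# A universe of discourse described by two boolean features (hypothetical example).
X = list(product((0, 1), repeat=2))      # points of X, coded as coordinate pairs

def F(x1, x2):
    """Functional reading: a proposition as a map F : X -> B (here, 'x1 and not x2')."""
    return int(x1 and not x2)

# Geometric reading: the fiber of truth F^{-1}(1), a region (subset) of X.
fiber_of_truth = {x for x in X if F(*x) == 1}

print(fiber_of_truth)    # {(1, 0)} -- the models of F
```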

Generally speaking, it does not seem possible to disentangle these three domains from each other or to determine which one is more fundamental. In practice, due to its concern with the computational implementations of every concept it uses, the present work is biased toward the functional interpretation of propositions. From this point of view, the abstract intention of a logical proposition \(F\!\) is regarded as being realized only when a program is found that computes the function \(F : X \to \mathbb{B}.\!\)

The functional interpretation of propositional calculus goes hand in hand with an approach to logical reasoning that incorporates semantic or model-theoretic methods, as distinguished from the purely syntactic or proof-theoretic option. Indeed, the functional conception of a proposition is model-theoretic in a double sense, not only because its notations denote functions as their semantic objects, but also because the domains of these functions are spaces of logical interpretations for the propositions, with the points of the domain that lie in the inverse image of truth under the function being the models of the proposition.

One of the reasons for pursuing a pragmatic hybrid of semantic and syntactic approaches, rather than keeping to the purely syntactic ways of manipulating meaningless tokens according to abstract rules of proof, is that the model theoretic strategy preserves the form of connection that exists between an agent's concrete particular experiences and the abstract propositions and general properties that it uses to describe its experience. This makes it more likely that a hybrid approach will serve in the realistic pursuits of inquiry, since these efforts involve the integration of deductive, inductive, and abductive sources of knowledge.

In this approach to propositional logic, with a view toward computational realization, one begins with a space \(X,\!\) called a universe of discourse, whose points can be reasonably well described by means of a finite set of logical features. Since the points of the space \(X\!\) are effectively known only in terms of their computable features, one can assume that there is a finite set of computable coordinate projections \(x_i : X \to \mathbb{B},\!\) for \({i = 1 ~\text{to}~ n,}\!\) for some \(n,\!\) that can serve to describe the points of \(X.\!\) This means that there is a computable coordinate representation for \(X,\!\) in other words, a computable map \(T : X \to \mathbb{B}^n\!\) that describes the points of \(X\!\) insofar as they are known. Thus, each proposition \(F : X \to \mathbb{B}\!\) can be factored through the coordinate representation \(T : X \to \mathbb{B}^n\!\) to yield a related proposition \(f : \mathbb{B}^n \to \mathbb{B},\!\) one that speaks directly about coordinate \(n\!\)-tuples but indirectly about points of \(X.\!\) Composing maps on the right, the mapping \(f\!\) is defined by the equation \(F = T \circ f.\!\) For all practical purposes served by the representation \(T,\!\) the proposition \(f\!\) can be taken as a proxy for the proposition \(F,\!\) saying things about the points of \(X\!\) by means of \(X\!\)'s encoding to \(\mathbb{B}^n.\!\)
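
The factorization just described can be sketched as follows. The points of \(X,\) the features chosen for \(T,\) and the proposition \(f\) are all hypothetical; the code simply applies \(T\) first and then \(f,\) which is what the right-handed composition \(F = T \circ f\) amounts to in the convention stated above.

```python
# Hypothetical points of X: strings, known only through computable features.
def T(x):
    """Coordinate representation T : X -> B^n (here n = 2): two computable features."""
    return (int(len(x) > 3), int(x.startswith("a")))

def f(x1, x2):
    """Proposition on coordinate tuples, f : B^n -> B."""
    return int(x1 or x2)

def F(x):
    """The proposition on X, recovered as the composite: apply T first, then f.
    (The text writes this right-handed composition as F = T o f.)"""
    return f(*T(x))

print(F("apple"), F("ox"))   # 1, 0
```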

Working under the functional perspective, the formal system known as propositional calculus is introduced as a general system of notations for referring to boolean functions. Typically, one takes a space \(X\!\) and a coordinate representation \(T : X \to \mathbb{B}^n\!\) as parameters of a particular system and speaks of the propositional calculus on a finite set of variables \(\{ \underline{\underline{x_i}} \}.\!\) In objective terms, this constitutes the domain of propositions on the basis \(\{ \underline{\underline{x_i}} \},\!\) notated as \(\mathrm{DOP}\{ \underline{\underline{x_i}} \}.\!\) Ideally, one does not want to become too fixed on a particular set of logical features or to let the momentary dimensions of the space be cast in stone. In practice, this means that the formalism and its computational implementation should allow for the automatic embedding of \(\mathrm{DOP}(\underline{\underline{X}})\!\) into \(\mathrm{DOP}(\underline{\underline{Y}})\!\) whenever \(\underline{\underline{X}} \subseteq \underline{\underline{Y}}.\!\)
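
The automatic embedding of \(\mathrm{DOP}(\underline{\underline{X}})\!\) into \(\mathrm{DOP}(\underline{\underline{Y}})\!\) can be sketched under the natural assumption that an embedded proposition simply ignores the features added in passing from \(\underline{\underline{X}}\) to \(\underline{\underline{Y}}.\) The helper name `embed` and the bases used are illustrative only.

```python
def embed(f, old_basis, new_basis):
    """Extend a proposition defined on old_basis to a proposition on new_basis,
    where old_basis is a subset of new_basis, by ignoring the added features
    (a sketch of the embedding DOP(X) -> DOP(Y))."""
    positions = [new_basis.index(v) for v in old_basis]
    def g(*values):
        return f(*(values[p] for p in positions))
    return g

# f is 'x1 and x2' on the basis (x1, x2); g is the same proposition read on (x1, x2, x3).
f = lambda x1, x2: int(x1 and x2)
g = embed(f, ("x1", "x2"), ("x1", "x2", "x3"))
print(g(1, 1, 0), g(1, 0, 1))   # 1, 0
```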

The rest of this section presents the elements of a particular calculus for propositional logic. First, I establish the basic notations and summarize the axiomatic presentation of the calculus, and then I give special attention to its functional and geometric interpretations.

This section reviews the elements of a calculus for propositional logic that I initially presented in two earlier papers (Awbrey, 1989 and 1994). This calculus belongs to a family of formal systems that hark back to C.S. Peirce's existential graphs (ExG) and it draws on ideas from Spencer Brown's Laws of Form (LOF). A feature that distinguishes the use of these formalisms can be summed up by saying that they treat logical expressions primarily as elements of a language and only secondarily as elements of an algebra. In other words, the most important thing about a logical expression is the logical object it denotes. To the extent that the object can be represented in syntax, this attitude puts the focus on the logical equivalence class (LEC) to which the expression belongs, relegating to the background the whole variety of ways that the expression can be generated from algebraically conceived operations. One of the benefits of this notation is that it facilitates the development of a differential extension for propositional logic that can be used to reason about changing universes of discourse.

A propositional language is a syntactic system that mediates the reasonings of a propositional logic. The objects of the language and the logic, that is, the logical entities denoted by the language and invoked by the operations of the logic, can be conceived to rest at various levels of abstraction, residing in spaces of functions that are basically of the types \(\mathbb{B}^n \to \mathbb{B}\!\) and remaining subject only to suitable choices of the parameter \(n.\!\)

Persistently reflective engagement in logical reasoning about any domain of objects leads to the identification of generic patterns of inference that appear to be universally valid, never disappointing the trust that is placed in them. After a time, a formal system naturally arises that commemorates one's continuing commitment to these patterns of logical conduct, and acknowledges one's conviction that further inquiry into their utility can be safely put beyond the reach of everyday concerns. At this juncture each descriptive pattern becomes a normative template, regulating all future ventures in reasoning until such time as a clearly overwhelming mass of doubtful outcomes cause one to question it anew.

Propositions about a coherent domain of objects tend to gather together and express themselves collectively in organized bodies of statements known as theories. As theories grow in size and complexity, one is faced with massive collections of propositional constraints and complex chains of logical inferences, and it becomes useful to support reasoning with the implementation of a propositional calculator.

At this point, variations in common and technical usage of the term proposition require a few comments on terminology. The heart of the issue is how to maintain a proper distinction between the logical form and the rhetorical style of a proposition, that is, how best to mark the difference between its invariant contents and its variant expressions. There are many ways to draw the required form of distinction between the objective situation and the significant expression in this relation. Here, I outline a compromise strategy that incorporates the advantages of several options and makes them available to intelligent choice as best fits the occasion.

  1. According to a prevailing technical usage, a proposition is a categorical object of abstract thought, something that is tantamount to an objective situation, a statistical event, or a state of affairs of a specified type. In distinction to the abstract proposition, a statement that a situation of the proposed type is actually in force is expressed in the form of a syntactic formula called a sentence.
  2. Another option enjoys a set of incidental advantages that makes it worth mentioning here and also worth exploring in a future discussion. Under this alternative, one refers to the signifying expressions as propositions, deliberately conflating propositions and sentences, but then introduces the needed distinction at another point of articulation, referring to the signified objects as positions.
  3. Attempting to strike a compromise with common usage, I often allow the word proposition to exploit the full range of its senses, denoting either object or sign according to context, and resorting to the phrase propositional expression whenever it is necessary to emphasize the involvement of the sign.

The operative distinction in every case, propositional or otherwise, is the difference in roles between objects and signs, not the names they are called by. To reconcile a logical account with the pragmatic theory of signs, one entity is construed as the propositional object (PO) and the other entity is recognized as the propositional sign (PS) at each moment of interpretation in a propositional sign relation. Once these roles are assigned, all the technology of sign relations applies to the logic of propositions as a special case. In the context of propositional sign relations, a semantic equivalence class (SEC) is referred to as a logical equivalence class (LEC). Each propositional object can then be associated, or even identified for all informative and practical purposes, with the LEC of its propositional signs. Accordingly, the proposition is reconstituted from its sentences in the appropriate way, as an abstract object existing in a semantic relation to its signs.

Taking this topic, the representation of sign relations, and seeking a computational formulation of its theory, leads to certain considerations about the best approach to the subject. Computational formulations are those with no recourse but to finitary resources. In setting up a computational formulation of any theory, one has to specify the finite set of axioms that are constantly available to subsequent reasoning. This makes it advisable to approach the topic of representations at a level of generality that will give the resulting theory as much power as possible, the kind of power to which inductive hypotheses can have easy and constant recourse. In order to furnish these resources with an ample supply of theoretical power …

In doing this, it is expeditious, if not absolutely necessary, to broaden the focus on sign relations in two ways: (1) to expand its extension from a special class of triadic relations to the wider sphere of \(n\!\)-place relations, and (2) to diffuse its intension from fully specified and concretely presented relations to incompletely specified and abstractly described relations.

6.16. Recursive Aspects

Note. This section will most likely be rendered obsolete once all the planned changes in notation are worked through the text, as I will make a point of always distinguishing the interpretive agents \(\text{A}\!\) and \(\text{B}\!\) from the corresponding sign relations \(L(\text{A})\!\) and \(L(\text{B}).\!\)

There is one piece of unfinished business concerning the presentation of this example that deserves further comment.

Since the objects of reference \(\text{A}\!\) and \(\text{B}\!\) are imagined to be interpretive agents, it is convenient to use their names to denote the corresponding sign relations. Thus, the interpreters \(\text{A}\!\) and \(\text{B}\!\) are self-referent and mutually referent to the extent that they have names for themselves and each other. However, their discussion as a whole fails to contain any term for itself, and it even lacks a full set of grammatical cases for the objects in it. Whether these recursions and omissions cause any problems for my discussion will depend on the level of interpretive sophistication, not of \(\text{A}\!\) and \(\text{B},\!\) but of the external systems of interpretation that are brought to bear on it.

In defining the activities of interpreters \(\text{A}\!\) and \(\text{B}\!\) as sign relations, I have implicitly specified set-theoretic equations of the following form:

\(\begin{array}{lllllll} \text{A} & = & \{ & (\text{A}, {}^{\backprime\backprime} \text{A} {}^{\prime\prime}, {}^{\backprime\backprime} \text{A} {}^{\prime\prime}), & \ldots, & (\text{A}, {}^{\backprime\backprime} \text{i} {}^{\prime\prime}, {}^{\backprime\backprime} \text{i} {}^{\prime\prime}), & \\ & & & (\text{B}, {}^{\backprime\backprime} \text{B} {}^{\prime\prime}, {}^{\backprime\backprime} \text{B} {}^{\prime\prime}), & \ldots, & (\text{B}, {}^{\backprime\backprime} \text{u} {}^{\prime\prime}, {}^{\backprime\backprime} \text{u} {}^{\prime\prime}) & \}, \\[10pt] \text{B} & = & \{ & (\text{A}, {}^{\backprime\backprime} \text{A} {}^{\prime\prime}, {}^{\backprime\backprime} \text{A} {}^{\prime\prime}), & \ldots, & (\text{A}, {}^{\backprime\backprime} \text{u} {}^{\prime\prime}, {}^{\backprime\backprime} \text{u} {}^{\prime\prime}), & \\ & & & (\text{B}, {}^{\backprime\backprime} \text{B} {}^{\prime\prime}, {}^{\backprime\backprime} \text{B} {}^{\prime\prime}), & \ldots, & (\text{B}, {}^{\backprime\backprime} \text{i} {}^{\prime\prime}, {}^{\backprime\backprime} \text{i} {}^{\prime\prime}) & \}. \end{array}\)

The way I read these equations, they do not attempt to define the entities \(\text{A}\!\) and \(\text{B}\!\) in terms of themselves and each other. Instead, they define the whole instrumental activity of each interpreter, a highly complex duty, in terms of the interpreters' more perfunctory roles as objects of reference, and in terms of their associated actions on signs as mere tokens of each other's existence. In other words, the recursion does not have recourse to the full fledged faculties of \(\text{A}\!\) and \(\text{B}\!\) but only to their more inert excipients, their rote performances as inactive objects and passive images whose properly reduced complexities provide grounds for permitting the recursion to fly.
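
For readers who want to see the benign character of this recursion in concrete terms, the following sketch builds the two sign relations from each interpreter's vocabulary of names. The vocabularies are read off the triples displayed above; the rule of pairing every sign of an object with every sign of the same object as interpretant is an assumption about the entries elided by the ellipses, not a listing taken from the text.

```python
def sign_relation(vocab):
    """Build a sign relation as a set of (object, sign, interpretant) triples.
    vocab maps each object token to the signs an interpreter uses for it.
    Pairing every sign of an object with every sign of the same object as
    interpretant is an assumption about the entries elided above."""
    return {(o, s, t) for o, signs in vocab.items() for s in signs for t in signs}

# The interpreters appear here only as the tokens "A" and "B", never as the
# sign relations themselves -- which is why the recursion does not bite.
L_A = sign_relation({"A": ("A", "i"), "B": ("B", "u")})
L_B = sign_relation({"A": ("A", "u"), "B": ("B", "i")})

print(len(L_A), len(L_B))   # 8 triples each under the assumed pairing
```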

6.17. Patterns of Self-Reference

In setting out the plan of a full scale RIF there is an unspoken promise to justify eventually the thematic motives that experimentally tolerate its indulgence in self-reference, and it seems that this implicit hope for a full atonement in time is a key to the tensions of the work being borne. This section, in order to inspire confidence in the prospects of a RIF being achievable, and by way of allaying widespread suspicions about all types of self-reference, examines several forms of circular referral and notes that not all contemplation of self-reference is incurably vicious.

In this section I consider signs, expressions, sign relations, and systems of interpretation (SOIs) that involve forms of self-reference. Because it is the abstract forms of self-reference that constitute the chief interest of this study, I collect this whole subject matter under the heading patterns of self-reference (POSRs). With respect to this domain I entertain the classification of POSRs in two different ways.

In this section I take notice of a broad family of formal structures that I refer to as patterns of self-reference (POSRs), because they seem to have in common the proposed description of a formal object by means of recursive or circular references. In their basic characters, POSRs range from the familiar to the strange, from the obvious to the problematic, and from the legitimate to the spurious. Often a POSR is best understood as a formal object in its own right, or as a formal sign that foreshadows a definite object, but occasionally a POSR can only be interpreted as something in the character of a syntactic pattern, one that goes into the making of a questionable specification and represents merely a dubious attempt to indicate or describe an object. All in all, POSRs range from the kinds of functions and objects, or programs and data structures, that are successfully defined by recursion to the sorts of vitiating circles that doom every attempt to define an unknown term in terms of itself.

Because POSRs span the spectrum from the moderately straightforward to the deliberately misleading, there is a need for ways to tell them apart, at least, before pursuing their consequences too far. Of course, if one cannot rest without having all computable functions at one's command, then no program can tell all the good and bad programs apart. But if one can be satisfied with a somewhat more modest domain, then there is hope for a way, an experimental, fallible, and incremental way, but a way nonetheless, that eventually leads one to know the good and ultimately keeps one away from the bad.

When it comes to their propriety, POSRs are found on empirical grounds to fall into two varieties: the exculpable and the indictable kinds. Thus, it is reasonable to attempt an empirical distinction, proposing to let experience mark each POSR as an excusable self-reference (ESR) or an improper self-reference (ISR), as the case may be. But empirical grounds can be a hard basis to fall back on, since a recourse to actual experience with POSRs can risk an agent's participation in pretended sign relations and promissory representations that amount in the end to nothing more than forms of interpretive futility. Therefore, one seeks an arrangement of methods in general or an ordering of options in these special cases that makes the empirical trial a court of last resort and that avoids resorting to the actual experience of interpretation as a routine matter of course.

First, I recognize an empirical distinction that seems to exist between the less problematic and the more problematic varieties of self-reference, allowing POSRs to be sorted according to the consequential features that they have in actual experience. There are the good sorts, those cleared up to the limits of accumulated experience as innocuous usages and even as probable utilities, and then there are the bad sorts, those marked by hard experience as definitely problematic.

Next, I search for an intuitive distinction that can be supposed to exist between the good and the bad sorts of POSRs, invoking a formal character or computable predicate of a POSR whose prior inspection can provide interpreters with a definitive indication or a decisive piece of information as to whether a POSR is good or bad, without forcing them to undergo the consequences of its actual use.

Before I can pin down what is involved in finding these intuitive characters and distinctions, it is necessary to discuss the concept of intuition that is relevant here. This issue requires a substantial digression and is taken up in the next section. After that, the concrete examples I take to be acceptable POSRs are presented.

6.18. Practical Intuitions

[Variant] I use the word intuition in a pragmatic sense, at least, in a sense that is available to the word after the critique of pragmatism has purged it of certain vacuities that occasionally affect its use, especially the illusions of incorrigibility that place it beyond practical competence. In essence and etymology, and thus rehabilitated to its practical senses, an intuition is just an awareness, perhaps with a certain wariness, but certainly with no aura of infallibility. Thus, when I use the word intuition without further qualifications, it is intended to refer to a practical intuition of this kind, the only kind that has a recurring usefulness, and it ought to suggest the kinds of casual intuitions and fallible insights that intelligent agents ordinarily have, and that constitute their unformalized approximate knowledge of a given domain.

[Variant] The concept of intuition I am using here is a pragmatic one, referring to the kinds of casual intuitions and fallible insights that go to make up an agent's incompletely formalized and approximate knowledge of an object or domain. This sense of intuition differs from its technical meaning in various other philosophies, where it refers to a supposed modality of knowledge that involves an immediate cognition of an object, for instance, a direct perception of a fact about an object or an infallible apprehension of a fundamental truth about the world. Whatever the case, this makes an intuition a piece of knowledge about an object that is determined solely by something that exists outside the knower, and this can only be the object in itself, or what is called the transcendental object.

[Variant] This involves a particular notion of what constitutes an intuition, not any direct perception, immediate cognition, or otherwise infallible piece of knowledge that an agent might be supposed to have about an object or domain but only the modality of unformalized approximate knowledge that an agent actually has at the beginning of inquiry, with all the risks of casual intuition and fallible insight that go into it.

[Variant] The pragmatic concept of intuition is at odds with its technical meaning in certain other philosophies, where an intuition is supposed to be an immediate cognition of an object, perhaps a direct perception of a fact about an object or a state of affairs, perhaps an assured apprehension of a fundamental truth about the entire world of possible experience. The inference from this immediacy is supposed to be that an intuition is unmediated, therefore pure, therefore infallible.

[Variant] This candidate for an argument is a bit too quick, I think, so let me review its qualifications at a pettier pace. According to the way of thinking under examination, intuition is knowledge of an object that is not determined by previous knowledge of the same object. That is, an intuition is a piece of knowledge about an object that is determined solely by something that exists outside the knower, and this can only be the object in itself, or what is called the transcendental object. Accordingly, if the process that plants a bit of information in an agent's mind is not mediated by anything else under the agent's control, or by any previous step that the agent can help to determine, then the data in question is beyond correction, and thus acquires a status that is literally incorrigible.

There is a reason why the issue of immediacy has come up at this point. If there is a truth in the idea that “all thought takes place in signs”, as signs are understood in their pragmatic theory, then it means that thought in general, and inquiry in particular, is mediated by a process of interpretation that advances through the connotative components of one or many sign relations in the orbits of their denotative objects. Depending partly on the other assumptions that one makes about the nature of physical processes in the world, this constrains the models of embodied reasoning that one can entertain as being available to inquiry. One way of reading the implications of this “mediation” leads to the conclusion that thought is mediated by a potentially continuous process of interpretation, whose formal study requires the contemplation of potentially continuous sign relations.

Under common assumptions about the nature of causal processes, the possibility of continuity in sign relations becomes a logical necessity. This forces the distinction of immediacy to be recognized as a purely interpretive value, one that is attributed to a sign by a particular interpreter, and it renders the character of an intuition relative to the interpreter that is so impressed by it. The decision to interpret a datum of experience as an immediate sign is itself the result of a process of inference that says it is OK to do so, but it can be simply indicative of one interpreter's lack of interest or lack of capacity for pursuing the matter further.

The decision by an interpreter to treat a fact as immediate, often in spite of every indication to the contrary, can still be respected as such, but there need be nothing in the fact of the matter that makes it so. Nothing about their interpretive designation affects the logical status of axioms and primitives to be regarded as unproven truths and undefined meanings, respectively, but it does mark these entitlements as privileges that can be enjoyed uncontested only within a circumscribed system of reasoning, the whole of which system remains subject to being judged in competition with contending systems.

In contrast, uncommon assumptions about causality can lead to the consideration of discrete sign relations as complete entities in and of themselves. It is on these grounds, where the conceptual possibility of continuous sign relations meets the practical necessity of discrete sign relations, that the broader philosophy of pragmatism must come to terms with the narrower constraints of computing, indeed, where both this theory and this practice must begin to reckon with the forms of bounded rationality that are available to finite information creatures.

6.19. Examples of Self-Reference

For ease of reference, I introduce the following terminology. With respect to the empirical dimension, a good POSR is described as an exculpable self-reference (ESR) while a bad POSR is described as an indictable self-reference (ISR). With respect to the intuitive dimension, a good POSR is depicted as an explicative self-reference (ESR) while a bad POSR is depicted as an implicative self-reference (ISR). Here, underscored acronyms are used to mark the provisionally settled, hypothetically tentative, or status quo condition of these casually intuitive categories.

These categories of POSRs can be discussed in greater detail as follows:

  1. There is an empirical distinction that appears to impose itself on the varieties of self-reference, separating the forms that lead to trouble in thought and communication from the forms that do not. And there is a pragmatic reason for being interested in this distinction, the motive being to avoid the corresponding types of trouble in reflective thinking. Whether this apparent distinction can hold up under close examination is a good question to consider at a later point. But the real trouble to be faced at the moment is that an empirical distinction is a post hoc mark, a difference that makes itself obvious only after the possibly unpleasant facts to be addressed are already present in experience. Consequently, its certain recognition comes too late to avert the adverse portions of those circumstances that its very recognition is desired to avoid.

    According to the form of this empirical distinction, a POSR can be classified either as an exculpable self-reference (ESR) or as an indictable self-reference (ISR). The distinction and the categories to either side of it are intended to sort out the POSRs that are safe and effective to use in thought and communication from the POSRs that can be hazardous to the health of inquiry.

    More explicitly, the distinction between ESRs and ISRs is intended to capture the differences that exist between the following cases:

    1. ESRs are POSRs that cause no apparent problems in thought or communication, often appearing as practically useful in many contexts and even as logically necessary in some contexts.

    2. ISRs are POSRs that lead to various sorts of trouble in the attempt to reason with them or to reason about them, that is, to use them consistently or even to decide for or against their use.

    I refer to this as an empirical distinction in spite of the fact that the domain of experience in question is decidedly a formal one, because it rests on the kinds of concrete experiences and grows through the kinds of unforeseen developments that are ever the hallmark of experimental knowledge.

    There is a pragmatic motive involved in this effort to classify the forms of self-reference, namely, to avoid certain types of trouble that seem to arise in reasoning by means of self referent forms. Accordingly, there is an obvious difference in the uses of self referent forms that is of focal interest here, but it presents itself as an empirical distinction, that is, an after the fact feature or post hoc mark. Namely, there are forms of self-reference that prove themselves useful in practice, being conducive to both thought and communication, and then there are forms that always seem to lead to trouble. The difference is evident enough after the impact of their effects has begun to set in, but it is not always easy to recognize these facts in advance of risking the very circumstances of confusion that one desires a classification to avoid.

    In summary, one has the following problem. There is found an empirical distinction between different kinds of self-reference, one that becomes evident and is easy to judge after the onset of their effects has begun to set in, between the kinds of self-reference that lead to trouble and the kinds that do not. But what kinds of intuitive features, properties that one could recognize before the fact, would serve to distinguish the immanent and imminent empirical categories before one has gone through the trouble of suffering their effects?

    Thus, one has the problem of translating between a given collection of empirical categories and a suitable collection of intuitive categories, the latter being of a kind that can be judged before the facts of experience have become inevitable, hoping thereby to correlate the two dimensions in such a way that the categories of intuition about POSRs can foretell the categories of experience with POSRs.

  2. In a tentative approach to the subject of self-reference, I notice a principled distinction between two varieties of self-reference, that I call constitutional, implicative, or intrinsic self-reference (ISR) and extra-constitutional, explicative, or extrinsic self-reference (ESR), respectively.

    1. ISR
    2. ESR

In the rest of this section I put aside the question of defining a thing, symbol, or concept in terms of itself, which promises to be an exercise in futility, and consider only the possibility of explaining, explicating, or elaborating a thing, symbol, or concept in terms of itself. In this connection I attach special importance to a particular style of exposition, one that reformulates one's initial idea of an object in terms of the active implications or the effective consequences that its presence in a situation or its recognition and use in an application constitutes for the practical agent concerned. This style of pragmatic reconstruction can serve a useful purpose in clarifying the information one possesses about the object, sign, or idea of concern. Properly understood, it marks the effective reformulation of ideas in ways that are akin to the more reductive sorts of operational definition, but overall is both more comprehensive and more pointedly related to the pragmatic agent, or the actual interpreter of the symbols and concepts in question.

The pending example of a POSR is, of course, the system composed of a pair of sign relations \(\{ L(\text{A}), L(\text{B}) \},\!\) where the nouns and pronouns in each sign relation refer to the hypostatic agents \(\text{A}\!\) and \(\text{B}\!\) that are known solely as embodiments of the sign relations \(L(\text{A})\!\) and \(L(\text{B}).\!\) But this example, as reduced as it is, already involves an order of complexity that needs to be approached in more discrete stages than the ones enumerated in the current account. Therefore, it helps to take a step back from the full variety of sign relations and to consider related classes of POSRs that are typically simpler in principle.

  1. The first class of POSRs I want to consider is diverse in form and content and has many names, but the feature that seems to unite all its instances is a self-commenting or self-documenting character. Typically, this means a partially self-documenting (PSD) character. As species of formal structures, PSD data structures are rife throughout computer science, and PSD developmental sequences turn up repeatedly in mathematics, logic, and proof theory. For the sake of euphony and ease of reference I collect this class of PSD POSRs under the name of auto-graphs (AGs).

    The archetype of all auto-graphs is perhaps the familiar model of the natural numbers \(\mathbb{N}\!\) as a sequence of sets, each of whose successive sets collects all and only the previous sets of the sequence:

    \(\{\}, \quad \{\{\}\}, \quad \{\{\}, \{\{\}\}\}, \quad \{\{\}, \{\{\}\}, \{\{\}, \{\{\}\}\}\}, \quad \ldots\!\)

    This is the purest example of a PSD developmental sequence, where each member of the sequence documents the prior history of the development. This AG is akin to many kinds of PSD data structures that are found to be of constant use in computing. As a natural precursor to many kinds of intelligent data structures, it forms the inveterate backbone of a primitive capacity for intelligence. That is, this sequence has the sort of developing structure that can support the initial growth of learning in many species of creature constructions with adaptive constitutions, while it remains supple enough to supply an articulate skeleton for the evolving process of reflective inquiry. But this takes time to see.

    For future reference, I refer to this model of natural numbers as “MON”. The very familiarity of this MON means that one reflexively proceeds from reading the signs of its set notation to thinking of its sets as mathematical objects, with little awareness of the sign relation that mediates the process, or even much reflection after the fact that is independent of the reflections recorded. Thus, even though this MON documents a process of reflective development, it need inspire no extra reflection on the acts of understanding needed to follow its directions.

    In order to render this MON instructive for the development of a RIF, something intended to be a deliberately self-conscious construction, it is important to remedy the excessive lucidity of this MON's reflections, the confusing mix of opacity and transparency that comes in proportion to one's very familiarity with an object and that is compounded by one's very fluency in a language. To do this, it is incumbent on a proper analysis of the situation to slow the MON down, to interrupt one's own comprehension of its developing intent, and to articulate the details of the sign process that mediates it much more carefully than is customary.

    These goals can be achieved by singling out the formal language that is used by this MON to denote its set theoretic objects. This involves separating the object domain \({O = O_\text{MON}}\!\) from the sign domain \({S = S_\text{MON}},\!\) paying closer attention to the naive level of set notation that is actually used by this MON, and treating its primitive set theoretic expressions as a formal language all its own.

    Thus, I need to discuss a variety of formal languages on the following alphabet:

    \(\underline{\underline{X}} = \underline{\underline{X}}_\text{MON} = \{ ~ {}^{\backprime\backprime} ~ {}^{\prime\prime} ~ , ~ {}^{\backprime\backprime} , {}^{\prime\prime} ~ , ~ {}^{\backprime\backprime} \{ {}^{\prime\prime} ~ , ~ {}^{\backprime\backprime} \} {}^{\prime\prime} ~ \}.\!\)

    Because references to an alphabet of punctuation marks can be difficult to process in the ordinary style of text, it helps to have alternative ways of naming these symbols.

    First, I use raised angle brackets \({}^{\langle} \ldots {}^{\rangle},\!\) or supercilia, as alternate forms of quotation marks.

    \(\underline{\underline{X}} = \underline{\underline{X}}_\text{MON} = \{ ~ {}^{\langle} ~ {}^{\rangle} ~ , ~ {}^{\langle} , {}^{\rangle} ~ , ~ {}^{\langle} \{ {}^{\rangle} ~ , ~ {}^{\langle} \} {}^{\rangle} ~ \}.\!\)

    Second, I use a collection of conventional names to refer to the symbols.

    \(\underline{\underline{X}} = \underline{\underline{X}}_\text{MON} = \{ \text{blank}, \text{comma}, \text{lbrace}, \text{rbrace} \}.\!\)

    Although it is possible to present this MON in a way that dispenses with blanks and commas, the more expansive language laid out here turns out to have capacities that are useful beyond this immediate context.

  2. Reflection principles in propositional calculus. Many statements about the order are also statements in the order. Many statements in the order are already statements about the order.

  3. Next, I consider a class of POSRs that turns up in group theory. [Variant] The next class of POSRs I want to discuss is one that arises in group theory.

  4. Although it is seldom recognized, a similar form of self-reference appears in the study of group representations, and more generally, in the study of homomorphic representations of any mathematical structure. In particular, this type of ESR arises from the regular representation of a group in terms of its action on itself, that is, in the collection of effects that each element has on all the individual elements of the group (a small computational sketch follows this list).

    There are several ways to side-step the issue of self-reference in this situation. Typically, they are used in combination to avoid the problematic features of a self-referential procedure and thus to effectively rationalize the representation.

    [Variant] As a preliminary study, it is useful to take up the slightly simpler brand of self-reference occurring in the topic of regular representations and to use it to make a first reconnaissance of the larger terrain.

    [Variant] As a first foray into the area I use the topic of group representations to illustrate the theme of extra-constitutional self-reference. To provide the discussion with concrete material I examine a couple of small groups, picking examples that incidentally serve a double purpose and figure more substantially in a later stage of this project.

    Each way of rationalizing the apparent self-reference begins by examining more carefully one of the features of the ostensibly circular formulation:

    \(x_i = \{ (x_1, ~ x_1 \cdot x_i), \ldots, (x_n, ~ x_n \cdot x_i) \}.\!\)
    1. One approach examines the apparent equality of the expressions.
    2. Another approach examines the nature of the objects that are invoked.
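
The following sketch renders the ostensibly circular formulation above in executable form for one small group. The group chosen, \(\mathbb{Z}_3\) under addition mod 3, is my own illustrative pick and is not necessarily one of the small groups examined later in this project.

```python
# A small example group (chosen only for illustration): Z_3 under addition mod 3.
elements = [0, 1, 2]
op = lambda a, b: (a + b) % 3

# Regular representation: each element g is "represented" by the set of pairs
# (x, x * g) recording its effect on every element -- the form the text writes as
#   x_i = {(x_1, x_1 . x_i), ..., (x_n, x_n . x_i)}.
regular = {g: {(x, op(x, g)) for x in elements} for g in elements}

for g, effect in regular.items():
    print(g, sorted(effect))
# The apparent self-reference: g occurs among the x's on the left of the pairs that
# define g itself, yet only in its inert role as an operand, so the circle is benign.
```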

6.20. Three Views of Systems

In this work I am using the word system in three different ways, in senses that refer to an object system (OS), a temporal system (TS), and a formal system (FS), respectively. This section describes these three ways of looking at a system, first in abstract isolation from each other, as though they reflected wholly separate species of systems, and then in concrete connection with each other, as the wholly apparent aspects of a single, underlying, systematic integrity. Finally, I close out the purely speculative parts of these considerations by showing how they come to bear on the present example, a collection of potentially meaningful actions pressed into the form of dialogue between \(\text{A}\!\) and \(\text{B}.\!\)

  1. An object system (OS) is an arbitrary collection of elements that present themselves to be of interest in a particular situation of inquiry. Formally, an OS is little more than a set. It represents a first attempt to unify a manifold of phenomena under a common concept, to aggregate the objects of discussion and thought that are relevant to the situation, and to include them in a general class. Typically, an OS begins as nothing more than a gathering together of actual or proposed objects. To serve its purpose, it need afford no more than an initial point of departure for staking out a tentative course of inquiry, and it can continue to be useful throughout inquiry, if only as a peg to hang new observations and contemplations on as the investigation proceeds.
  2. A temporal system (TS) has states of being and the ability to move through sequences of states. Thus, it exists at a point in a space of states, undergoes transitions from state to state, and has the power, potential, or possibility of moving through various sequences of states. In doing this, the moment to moment existence of the typical TS sweeps out a characteristic succession of points in a space of states. When there is a definite constraint on the sequence of states that can occur, then one can begin to speak of a determinate, though not necessarily a deterministic dynamic process. In the sequel, the concept of a TS is used in an informal way, to refer to the most general kind of dynamic system conceivable, that is, an OS in which there is at least the barest notion of change or process that can serve to initiate discussion and that can continue to form the subject of further analysis.
  3. A formal system (FS) contains the signs, expressions, and forms of argumentation that embody a particular way of talking and thinking about the objects in a designated OS. For the agent that uses a given FS, its design determines the way that these objects are perceived, described, and reasoned about, and the details of its constitution have consequences for all the processes of observation, contemplation, logical expression, articulate communication, and controlled action that it helps to mediate. Thus, the FS serves two main types of purposes: (a) As a formal language, it permits the articulation of an agent's observations with respect to the actual and proposed properties of an object system. (b) In addition, it embodies a system of practices, including techniques of argumentation, that are useful in representing reasoning about the properties and activities of the object system and that give the FS meaning and bearing with respect to the objective world.

There is a standard form of disclaimer that needs to be attached to this scheme of categories, qualifying any claim that it might be interpreted as making about the ontological status of the proposed distinctions. As often as not, the three categories of systems identified above do not correspond to materially different types of underlying entities so much as different stages in their development, or only in the development of discussions about them. As always, these distinctions do not reveal the essential categories and the substantial divergences of real systems so much as they reflect different ways of viewing them.

The need for a note of caution at this point is due to a persistent but unfortunate tendency of the symbol-using mentality, one that forms a potentially deleterious side effect to the necessary analytic capacity. Namely, having once discovered the many splendored facets of each real object worth looking into, the mind never ceases from trying to force its imagined categories of descriptive expressions (CODEs) down into the original categories of real entities (COREs). In spite of every contrary impression, the deeper-lying substrate of existence is solely responsible for funding the phenomenal appearances of the world.

Out of this tendency of the symbol-using mentality arises a constant difficulty with every theory of every reality. Namely, every use of a theoretical framework to view an underlying reality leads the user to forget, temporarily, that the reality is anything but its appearance, image, or representation in that framework. Logically speaking, there is an inalienable spectre of negation involved in every form of apparition, imagination, or representation. This abnegation would be complete if it were not for the possibility held out that some underlying realities may nevertheless be capable of representing themselves over time.

The relationship of objects in an underlying reality to their images in a theoretical framework is a topic that this discussion will return to repeatedly as the work progresses. In sum, for now, all of the following statements are approximations to the truth. At any given moment, the image is usually not the object. At times, it can almost be anything but the object. It is even entirely possible, oddly enough, that the image is nothing but the negation of the object, but as often as not it enjoys a more complex relationship than that of sheer opposition. Over time, in some instances, the image can become nearly indistinguishable from its object, but whether this is a good thing or not, in the long run, I cannot tell. The sense of the resulting identification, the bearing of the image on its object, depends on exactly how and how exactly this final coincidence comes about.

One of the goals of this work, indeed, of the whole pragmatic theory of sign relations, is an adequate understanding of the relationship between underlying reality objects and theoretical framework images. The purpose and also the criterion of an adequate understanding is this: It would prevent an interpretive agent, even while immersed in the context of a pertinent sign relation and deliberately taking part in a share of its conduct, from ever being confused again about the different roles of objects and images.

If one assumes that there is a unique and all-inclusive universe, and thus only one kind of system in essence that generates the phenomenon known as the whole objective world, then this integral form of universe is bound to enjoy all three aspects of systems phenomena in full measure. Then the task for a fully system-theoretic and reflective inquiry is to see how all of these aspects of systems can be integrated into a single mode of realization.

In many cases the three senses of the word system reflect distinctive orders of structure and function in the types of systems indicated, suggesting that there is something essential and substantive about the distinctions between objects, changes, and forms. With regard to the underlying reality, however, these differences can be as artificial as any that conventional language poses between nouns, verbs, and sentences. Of course, when the underlying system is degenerate, or not fully realized in all the relevant aspects, then it is fair to say that it falls under some categories more than others. In the general case, however, the three senses of the word system merely embody the spectrum of attitudes and intentions that observing and interpreting agents can take up with respect to the same underlying type of system.

An object system may seem little more than a set, the barest attempt to unify a manifold of interesting phenomena under a common concept, but no object system becomes an object of discussion and thought without invoking the informal precursors of formal systems, in other words, systems of practices, casually taken up, that reflection has the power to formalize in time. And any formal system, put to work in practice, has a temporal and dynamic aspect, especially in the transitions taking place from sign to interpretant sign that fill out its connotative component. Thus, a formal system implicitly involves a temporal system, even if its own object system is not itself temporal in nature but rests in a stable, a static, or an abstract state.

Formal systems and their systems of practice are subject to conversion into object systems, becoming the objects of higher order formal systems through the operation of a critical intellectual step usually called reflection.

Using the pragmatic theory of sign relations, I regard every object system in the context of a particular formal system. I take these two as one, for now, because a formal system and its object system are defined in relation to each other and are not really separable in practice. Later, I will discuss a form of independence that can exist between the two, but only in the derivative sense that many formal systems can be brought to bear on what turn out to be equivalent object systems.

Any physical system, subject to recognizably lawful constraints, can generally be turned to use as a channel of communication, contingent only on the limitations imposed by its inherent informational capacity. Therefore, any object system of sufficient capacity that resides under an agent's interpretive control can be used as a medium for language and converted to convey the more specialized formal system.

In every situation the three kinds of system, or views of a system, are naturally related to each other through the concept of a sign relation. Applied in their turn, sign relations contain within themselves the germ of a particular idea, that no system can be called complete until it has the means to reflect on its own nature, at least in some measure. Thus, by integrating the three senses of the word system within the notion of a sign relation, I am trying to make it as easy as possible to move around in a space of apparently indispensable perspectives. To wit, regarding sign relations as formal objects in and of themselves, an intelligent agent needs the capacities: (1) to reflect on the objective forms of their phenomenal appearances, and (2) to participate in the active forms of their interpretive conduct. Further, an agent needs the flexibility to take up each of these stances toward sign relations at will, reflecting on them or joining in them as the situation demands.

I close this section by discussing the relationship among the three views of systems that are relevant to the example of \(\text{A}\!\) and \(\text{B}.\!\)

[Variant] How do these three perspectives bear on the example of \(\text{A}\!\) and \(\text{B}\!\)?

[Variant] In order to show how these three perspectives bear on the present inquiry, I will now discuss the relationship they exhibit in the example of \(\text{A}\!\) and \(\text{B}.\!\)

In the present example, concerned with the form of communication that takes place between the interpreters \(\text{A}\!\) and \(\text{B},\!\) the topic of interest is not the type of dynamics that would change one of the original objects, \(\text{A}\!\) or \(\text{B},\!\) into the other. Thus, the object system is nothing more than the object domain \(O = \{ \text{A}, \text{B} \}\!\) shared between the sign relations \(L(\text{A})\!\) and \(L(\text{B}).\!\) In this case, where the object system reduces to an abstract set, falling under the action of a trivial dynamics, one says that the object system is stable or static. In more developed examples, when the dynamics at the level of the object system becomes more interesting, the objects in the object system are usually referred to as objective configurations or object states. Later examples will take on object systems that enjoy significant variations in the sequences of their objective states.

6.21. Building Bridges Between Representations

On the way to integrating dynamic and symbolic approaches to systems there is one important watershed that has to be crossed and recrossed, time and time again. This is a form of continental divide that decides between two alternative and exclusive modes of description (MODs) or categories of representation (CORs), and marks a writer's moment to moment selection of extensional representation (ER), on the one side, or intensional representation (IR), on the other. To apply the theme, in this section I address the task of building conceptual bridges between two different ways of describing or representing sign relations: (1) the ER that describes a sign relation in terms of its instances, and (2) the IR that describes a sign relation in terms of its properties.

It is best to begin the work of bridge-building on informal grounds, using concrete examples of ERs and IRs and taking advantage of basic ideas about their relationship that are readily available to every reader. After the overall scheme of construction is roughed out in this fashion, I plan to revisit the concept of representation in a more formal style, examining the balance of its in- and ex- “tensions” with a sharper eye to the relevant details and a greater chance of compassing the depths of form that arise between the two points of view.

The task of building this bridge is not trivial. In places, the basic elements of construction are yet to be forged from the available stocks; in others, the needed materials still lie in their ores, awaiting a suitable process to extract them, refine them, and bring them to a usable state. Due to the difficulties of this task and the length of time it will take to carry it out, I think it is advisable to establish two points of reference before setting to work.

  1. As a way of providing sufficient motivation for the effort, I will indicate the importance of this bridge with respect to the aims of inquiry in general.
  2. As a guard against a host of precipitous shortcuts that have been tried in the past, I will point out as clearly as possible a few of the obstacles that need to be surmounted. Once their structures are rightly understood, the obstructions that lie in the path of this bridge can be chalked up to experience with the reality of its construction, turned to use as stepping stones in the advance of its ultimate course, and given a fitting place in the progress of instruction.

Terms referring to properties of sign relations make it possible to formulate propositions about sign relations, either as occasioned by a clear and present example or in abstraction from any concrete instance. In turn, this makes it possible to carry on chains of reasoning about the properties of sign relations in detachment from the presence of actual cases that may or may not come to mind in the immediate present. This mode of abstraction, invoking the kind of IR that is involved in mediating every form of propositional reasoning, gives logic its wings and can lead to theories of great conceptual power, but it incurs the risk of leading reasoning astray into realms of irreferent pretension, eventually degenerating into spurious sounds that signify nothing.

It is only by means of an IR that logical reasoning, properly speaking, is able to begin. The stringency of this precept, if it is taken too strictly as a starting condition and applied solely in absolute terms, would be correctly perceived as demanding a provision that is jarring to every brand of good sense. But it was never meant to be taken this severely. In practice, the starkness of this tentative stipulation is moderated by the degree of fuzziness that still continues to reside in the interpretive distinction between ERs and IRs.

The alleged distinction between ERs and IRs, when it is projected to have a global application, remains arbitrary so long as it is taken at that level of abstraction, and it comes to take on the semblance of a definition only in relation to the interpretive conduct of a particular arbiter. No representation in actual practice is purely of one sort or the other, nor fails to have the characters of both types as a part of its mix. In other words, extensions and intensions are only abstractions from a profounder “tension” that is logically prior but functionally intermediate to them both, and every representation of any use will have its aspect of extensional particularity permeated by its aspect of intensional generality.

Toward the end of this construction I hope it will become clear that this bridge is a project intermediate in scale between the elementary linkage of signs to interpretants that is built into every sign relation and all the courses of conduct that go to span the gulf and build communication between vastly different systems of interpretation. In the meantime, there are strong analogies that make the architecture of this bridge parallel in form to the structures existing at both ends of the scale, shaping it in congruence with patterns of action that reside at both the micro and the macro levels. Observing these similarities and their lines of potential use as they arise will serve to guide the current work.

A sign relation is a complex object and its representations, insofar as they faithfully preserve its structure, are complex signs. Accordingly, the problems of translating between ERs and IRs of sign relations, of detecting when representations alleged to be of sign relations do indeed represent objects of the specified character, and of recognizing whether different representations do or do not represent the same sign relation as their common object — these are the familiar questions that would be asked of the signs and interpretants in a simple sign relation, but this time asked at a higher level, in regard to the complex signs and complex interpretants that are posed by the different stripes of representation. At the same time, it should be obvious that these are also the natural questions to be faced in building a bridge between representations.

How many different sorts of entities are conceivably involved in translating between ERs and IRs of sign relations? To address this question it helps to introduce a system of type notations that can be used to keep track of the various sorts of things, or the varieties of objects of thought, that are generated in the process of answering it. Table 47.1 summarizes the basic types of things that are needed in this pursuit, while the rest can be derived by constructions of the form \(X ~\mathrm{of}~ Y,\!\) notated \(X(Y)\!\) or just \(XY,\!\) for any basic types \(X\!\) and \(Y.\!\) The constructed types of things involved in the ERs and IRs of sign relations are listed in Tables 47.2 and 47.3, respectively.


\(\text{Table 47.1} ~~ \text{Basic Types for ERs and IRs of Sign Relations}\!\)
\(\text{Type}\!\) \(\text{Symbol}\!\)

\(\begin{array}{l} \text{Property} \\ \text{Sign} \\ \text{Set} \\ \text{Triple}\\ \text{Underlying Element} \end{array}\)

\(\begin{matrix} P \\ \underline{S} \\ S \\ T \\ U \end{matrix}\)


\(\text{Table 47.2} ~~ \text{Derived Types for ERs of Sign Relations}\!\)
\(\text{Type}\!\) \(\text{Symbol}\!\) \(\text{Construction}\!\)
\(\text{Relation}\!\) \({R}\!\) \({S(T(U))}\!\)


\(\text{Table 47.3} ~~ \text{Derived Types for IRs of Sign Relations}\!\)
\(\text{Type}\!\) \(\text{Symbol}\!\) \(\text{Construction}\!\)
\(\text{Relation}\!\) \(P(R)\!\) \(P(S(T(U)))\!\)


Nothing as yet in this scheme of types says that all of the entities playing a part in the discussion are necessarily distinct, but only that there are this many roles to fill.

Let \(P\!\) be the type of properties, \(\underline{S}\!\) the type of signs, \(S\!\) the type of sets, \(T\!\) the type of triples, and \(U\!\) the type of underlying elements. Now consider the various sorts of things, or the varieties of objects of thought, that are invoked on each side, annotating each type as it is mentioned:

ERs of sign relations describe them as sets \((Ss)\!\) of triples \((Ts)\!\) of underlying elements \((Us).\!\) This makes for three levels of objective structure that must be put in coordination with each other, a task that is projected to be carried out in the appropriate OF of sign relations. Corresponding to this aspect of structure in the OF, there is a parallel aspect of structure in the IF of sign relations. Namely, the accessory sign relations that are used to discuss a targeted sign relation need to have signs for sets \({(\underline{S}Ss)},\!\) signs for triples \({(\underline{S}Ts)},\!\) and signs for the underlying elements \({(\underline{S}Us)}.\!\) This accounts for three levels of syntactic structure in the IF of sign relations that must be coordinated with each other and also with the targeted levels of objective structure.

IRs of sign relations describe them in terms of properties \((Ps)\!\) that are taken as primitive entities in their own right, in particular, properties of sets \((PSs),\!\) properties of triples \((PTs),\!\) and properties of underlying elements \((PUs).\!\) This amounts to three more levels of objective structure in the OF of the IR that need to be coordinated with each other and interlaced with the OF of the ER if the two are to be brought into the same discussion, possibly for the purpose of translating either into the other. Accordingly, the accessory sign relations that are used to discuss an IR of a targeted sign relation need to have \(\underline{S}PSs,\!\) \(\underline{S}PTs,\!\) and \(\underline{S}PUs.\!\)
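
To make the type scheme concrete, the following sketch (in Python, a choice of my own, with all identifiers likewise mine) realizes the constructions of Tables 47.1, 47.2, and 47.3: an ER of a sign relation inhabits the type \(S(T(U)),\!\) a set of triples of underlying elements, while an IR trades in properties of the type \(P(S(T(U))).\!\)

  # Sketch only: U = underlying elements, T(U) = triples of them,
  # S(T(U)) = a set of triples (an ER of a sign relation),
  # P(X)    = a property of things of type X, modeled here as a predicate.
  from typing import Callable, FrozenSet, Tuple

  U = str                           # underlying elements (plain strings will do)
  TU = Tuple[U, U, U]               # T(U): triples of underlying elements
  STU = FrozenSet[TU]               # S(T(U)): a set of triples, i.e. a relation in extension
  PSTU = Callable[[STU], bool]      # P(S(T(U))): a property of such relations

  # A sample property of relations, chosen only for illustration:
  # "every triple has its sign equal to its interpretant."
  every_sign_self_interpreting: PSTU = lambda L: all(s == i for (_, s, i) in L)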

6.22. Extensional Representations of Sign Relations

Up to this point, the concept of a sign relation has been discussed largely in terms of ERs. The sign relations \(L(\text{A})\!\) and \(L(\text{B})\!\) were initially described as collections of transactions among three participants and formalized as sets of triples of underlying elements.

Other examples of ERs are widely distributed throughout the foregoing discussion of \(\text{A}\!\) and \(\text{B}.\!\) The extensional mode of description is prevalent, not only in the presentation of sign relations by means of relational data tables, but also in the presentation of dyadic projections by means of digraphs. This manner of presentation follows the natural order of acquaintance with abstract relations, since the extensional mode of description is the category of representation that usually prevails whenever it is necessary to provide a detailed treatment of simple examples or an exhaustive account of individual instances.

Starting from a standpoint in concrete constructions, the easiest way to begin developing an explicit treatment of ERs is to gather the relevant materials in the forms already presented, to fill out the missing details and expand the abbreviated contents of these forms, and to review their full structures in a more formal light. Consequently, this section inaugurates the formal discussion of ERs by taking a second look at the interpreters \(\text{A}\!\) and \(\text{B},\!\) recollecting the Tables of their sign relations and finishing up the Tables of their dyadic components. Since the form of the sign relations \(L(\text{A})\!\) and \(L(\text{B})\!\) no longer presents any novelty, I can exploit their second presentation as a first opportunity to examine a selection of finer points, previously overlooked. Also, in the process of reviewing this material it is useful to anticipate a number of incidental issues that are reaching the point of becoming critical within this discussion and to begin introducing the generic types of technical devices that are needed to deal with them.

The next set of Tables summarizes the ERs of \(L(\text{A})\!\) and \(L(\text{B}).\!\) For ease of reference, Tables 48.1 and 49.1 repeat the contents of Tables 1 and 2, respectively, the only difference being that appearances of ordinary quotation marks \(({}^{\backprime\backprime} \ldots {}^{\prime\prime})\!\) are transcribed as invocations of the arch operator \(({}^{\langle} \ldots {}^{\rangle}).\!\) The reason for this slight change of notation will be explained shortly. The denotative components \(\mathrm{Den}(\text{A})\!\) and \(\mathrm{Den}(\text{B})\!\) are shown in the first two columns of Tables 48.2 and 49.2, respectively, while the third column gives the transition from sign to object as an ordered pair \((s, o).\!\) The connotative components \(\mathrm{Con}(\text{A})\!\) and \(\mathrm{Con}(\text{B})\!\) are shown in the first two columns of Tables 48.3 and 49.3, respectively, while the third column gives the transition from sign to interpretant as an ordered pair \((s, i).\!\)


\(\text{Table 48.1} ~~ \mathrm{ER}(L_\text{A}) : \text{Extensional Representation of} ~ L_\text{A}\!\)
\(\text{Object}\!\) \(\text{Sign}\!\) \(\text{Interpretant}\!\)

\(\begin{matrix} \text{A} \\ \text{A} \\ \text{A} \\ \text{A} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} \text{B} \\ \text{B} \\ \text{B} \\ \text{B} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \end{matrix}\)


\(\text{Table 48.2} ~~ \mathrm{ER}(\mathrm{Den}(L_\text{A})) : \text{Denotative Component of} ~ L_\text{A}\!\)
\(\text{Object}\!\) \(\text{Sign}\!\) \(\text{Transition}\!\)

\(\begin{matrix} \text{A} \\ \text{A} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} ({}^{\langle} \text{A} {}^{\rangle}, \text{A}) \\ ({}^{\langle} \text{i} {}^{\rangle}, \text{A}) \end{matrix}\)

\(\begin{matrix} \text{B} \\ \text{B} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} ({}^{\langle} \text{B} {}^{\rangle}, \text{B}) \\ ({}^{\langle} \text{u} {}^{\rangle}, \text{B}) \end{matrix}\)


\(\text{Table 48.3} ~~ \mathrm{ER}(\mathrm{Con}(L_\text{A})) : \text{Connotative Component of} ~ L_\text{A}\!\)
\(\text{Sign}\!\) \(\text{Interpretant}\!\) \(\text{Transition}\!\)

\(\begin{matrix} {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} ({}^{\langle} \text{A} {}^{\rangle}, {}^{\langle} \text{A} {}^{\rangle}) \\ ({}^{\langle} \text{A} {}^{\rangle}, {}^{\langle} \text{i} {}^{\rangle}) \\ ({}^{\langle} \text{i} {}^{\rangle}, {}^{\langle} \text{A} {}^{\rangle}) \\ ({}^{\langle} \text{i} {}^{\rangle}, {}^{\langle} \text{i} {}^{\rangle}) \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} ({}^{\langle} \text{B} {}^{\rangle}, {}^{\langle} \text{B} {}^{\rangle}) \\ ({}^{\langle} \text{B} {}^{\rangle}, {}^{\langle} \text{u} {}^{\rangle}) \\ ({}^{\langle} \text{u} {}^{\rangle}, {}^{\langle} \text{B} {}^{\rangle}) \\ ({}^{\langle} \text{u} {}^{\rangle}, {}^{\langle} \text{u} {}^{\rangle}) \end{matrix}\)


\(\text{Table 49.1} ~~ \mathrm{ER}(L_\text{B}) : \text{Extensional Representation of} ~ L_\text{B}\!\)
\(\text{Object}\!\) \(\text{Sign}\!\) \(\text{Interpretant}\!\)

\(\begin{matrix} \text{A} \\ \text{A} \\ \text{A} \\ \text{A} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} \text{B} \\ \text{B} \\ \text{B} \\ \text{B} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \end{matrix}\)


\(\text{Table 49.2} ~~ \mathrm{ER}(\mathrm{Den}(L_\text{B})) : \text{Denotative Component of} ~ L_\text{B}~\!\)
\(\text{Object}\!\) \(\text{Sign}\!\) \(\text{Transition}\!\)

\(\begin{matrix} \text{A} \\ \text{A} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} ({}^{\langle} \text{A} {}^{\rangle}, \text{A}) \\ ({}^{\langle} \text{u} {}^{\rangle}, \text{A}) \end{matrix}\!\)

\(\begin{matrix} \text{B} \\ \text{B} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} ({}^{\langle} \text{B} {}^{\rangle}, \text{B}) \\ ({}^{\langle} \text{i} {}^{\rangle}, \text{B}) \end{matrix}\)


\(\text{Table 49.3} ~~ \mathrm{ER}(\mathrm{Con}(L_\text{B})) : \text{Connotative Component of} ~ L_\text{B}\!\)
\(\text{Sign}\!\) \(\text{Interpretant}\!\) \(\text{Transition}\!\)

\(\begin{matrix} {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} ({}^{\langle} \text{A} {}^{\rangle}, {}^{\langle} \text{A} {}^{\rangle}) \\ ({}^{\langle} \text{A} {}^{\rangle}, {}^{\langle} \text{u} {}^{\rangle}) \\ ({}^{\langle} \text{u} {}^{\rangle}, {}^{\langle} \text{A} {}^{\rangle}) \\ ({}^{\langle} \text{u} {}^{\rangle}, {}^{\langle} \text{u} {}^{\rangle}) \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} ({}^{\langle} \text{B} {}^{\rangle}, {}^{\langle} \text{B} {}^{\rangle}) \\ ({}^{\langle} \text{B} {}^{\rangle}, {}^{\langle} \text{i} {}^{\rangle}) \\ ({}^{\langle} \text{i} {}^{\rangle}, {}^{\langle} \text{B} {}^{\rangle}) \\ ({}^{\langle} \text{i} {}^{\rangle}, {}^{\langle} \text{i} {}^{\rangle}) \end{matrix}\)
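
For readers who like to check the bookkeeping by machine, the following sketch (Python, with ordinary string quotes standing in for the arch operator, both choices mine rather than anything prescribed by the text) reconstructs \(\mathrm{ER}(L_\text{A})\!\) and \(\mathrm{ER}(L_\text{B})\!\) as sets of triples and derives their denotative and connotative components by projection, in agreement with Tables 48.1 through 49.3.

  # Sketch only: q(x) stands in for the arch-quoted sign of x.
  q = lambda x: '"' + x + '"'

  # Triples (object, sign, interpretant), read off Tables 48.1 and 49.1.
  L_A = {("A", q("A"), q("A")), ("A", q("A"), q("i")), ("A", q("i"), q("A")), ("A", q("i"), q("i")),
         ("B", q("B"), q("B")), ("B", q("B"), q("u")), ("B", q("u"), q("B")), ("B", q("u"), q("u"))}
  L_B = {("A", q("A"), q("A")), ("A", q("A"), q("u")), ("A", q("u"), q("A")), ("A", q("u"), q("u")),
         ("B", q("B"), q("B")), ("B", q("B"), q("i")), ("B", q("i"), q("B")), ("B", q("i"), q("i"))}

  def den(L):
      """Denotative component: the (object, sign) pairs of a sign relation."""
      return {(o, s) for (o, s, i) in L}

  def con(L):
      """Connotative component: the (sign, interpretant) pairs of a sign relation."""
      return {(s, i) for (o, s, i) in L}

  def transitions_to_objects(L):
      """Sign-to-object transitions (s, o), third columns of Tables 48.2 and 49.2."""
      return {(s, o) for (o, s, i) in L}

  assert len(L_A) == len(L_B) == 8 and len(den(L_A)) == 4 and len(con(L_A)) == 8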


6.23. Intensional Representations of Sign Relations

The next three sections consider how the ERs of \(L(\text{A})\!\) and \(L(\text{B})\!\) can be translated into a variety of different IRs. For the purposes of this introduction, only “faithful” translations between the different categories of representation are contemplated. This means that the conversion from ER to IR is intended to convey what is essentially the same information about \(L(\text{A})\!\) and \(L(\text{B}),\!\) to preserve all the relevant structural details implied by their various modes of description, but to do it in a way that brings selected aspects of their objective forms to light. General considerations surrounding the task of translation are taken up in this section, while the next two sections lay out different ways of carrying it through.

The larger purpose of this discussion is to serve as an introduction, not just to the special topic of devising IRs for sign relations, but to the general issue of producing, using, and comprehending IRs for any kind of relation or any domain of formal objects. It is hoped that a careful study of these simple IRs can inaugurate a degree of insight into the broader arenas of formalism in which they occupy an initial niche and into the wider landscapes of discourse in which they inhabit a natural corner, in time progressing up to the axiomatic presentation of formal theories about combinatorial domains and other mathematical objects.

For the sake of maximum clarity and reusability of results, I begin by articulating the abstract skeleton of the paradigm structure, treating the sign relations \(L(\text{A})\!\) and \(L(\text{B})\!\) as sundry aspects of a single, unitary, but still uninterpreted object. Then I return at various successive stages to differentiate and individualize the two interpreters, to arrange more functional flesh on the basis provided by their structural bones, and to illustrate how their bare forms can be arrayed in many different styles of qualitative detail.

In building connections between ERs and IRs of sign relations the discussion turns on two types of partially ordered sets, or posets. Suppose that \(L\!\) is one of the sign relations \(L(\text{A})\!\) and \(L(\text{B}),\!\) and let \(\mathrm{ER}(L)\!\) be an ER of \(L.\!\)

The ERs of the sign relations \(L(\text{A})\!\) and \(L(\text{B})\!\) are both based on a common world set:

\(\begin{array}{*{15}{c}} W & = & \{ & \text{A} & , & \text{B} & , & {}^{\backprime\backprime} \text{A} {}^{\prime\prime} & , & {}^{\backprime\backprime} \text{B} {}^{\prime\prime} & , & {}^{\backprime\backprime} \text{i} {}^{\prime\prime} & , & {}^{\backprime\backprime} \text{u} {}^{\prime\prime} & \} \\ & = & \{ & w_1 & , & w_2 & , & w_3 & , & w_4 & , & w_5 & , & w_6 & \} \end{array}\)

An IR of any object is a description of that object in terms of its properties. A successful description of a particular object usually involves a selection of properties, those that are relevant to a particular purpose. An IR of \(L(\text{A})\!\) or \(L(\text{B})\!\) involves properties of its elementary points \(w \in W\!\) and properties of its elementary relations \(\ell \in O \times S \times I.\!\)

To devise an IR of any relation \(L\!\) one needs to describe \(L\!\) in terms of properties of its ingredients. Broadly speaking, the ingredients of a relation include its elementary relations or \(n\!\)-tuples and the elementary components of these \(n\!\)-tuples that reside in the relational domains.

The poset \(\mathrm{Pos}(W)\!\) of interest here is the power set \(\mathcal{P}(W) = \mathrm{Pow}(W).\!\)

The elements of these posets are abstractly regarded as properties or propositions that apply to the elements of \(W.\!\) These properties and propositions are independently given entities. In other words, they are primitive elements in their own right, and cannot in general be defined in terms of points, but they exist in relation to these points, and their extensions can be represented as sets of points.

There is a foundational issue arising in this context that I do not pretend to fully understand and cannot attempt to finally dispatch, perhaps most of all because theoretically given structures have their real foundations outside the realm of theory, in empirically given structures. What I do understand I will try to express in terms of an aesthetic principle: on balance, it seems best to regard extensional elements and intensional features as independently given entities. This involves treating points, properties, and propositions as fundamental realities in their own rights, placing them on an equal basis with each other, and seeking their relation to each other, but not trying to reduce one to the other or to define one in terms of the other, analogous to the undefined elements of a geometry.
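
One way to honor this parity in practice, sketched below in Python with names of my own choosing, is to carry a property in two interchangeable forms: as a predicate on the points of \(W\!\) and as the subset of \(W\!\) that serves as its extension.

  # W as in the text: two objects and four quoted signs (string quotes stand in for arch quotes).
  W = {"A", "B", '"A"', '"B"', '"i"', '"u"'}

  def extension(prop, universe):
      """The extension of a property: the set of points of the universe that possess it."""
      return {w for w in universe if prop(w)}

  def as_predicate(ext):
      """Recover a predicate from an extension (the passage back again)."""
      return lambda w: w in ext

  is_a_sign = lambda w: w.startswith('"')        # a sample property: "is a quoted sign"
  assert extension(is_a_sign, W) == {'"A"', '"B"', '"i"', '"u"'}
  assert as_predicate(extension(is_a_sign, W))('"i"') is True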

The discussion is now specialized to consider the IRs of the sign relations \(L(\text{A})\!\) and \(L(\text{B}),\!\) their denotative projections as the digraphs \(\mathrm{Den}(L_\text{A})\!\) and \(\mathrm{Den}(L_\text{B}),\!\) and their connotative projections as the digraphs \(\mathrm{Con}(L_\text{A})\!\) and \(\mathrm{Con}(L_\text{B}).\!\) In doing this I take up two different strategies of representation:

  1. The first strategy is called the literal coding, because it sticks to obvious features of each syntactic element to contrive its code, or the \({\mathcal{O}(n)}\!\) coding, because it uses a number on the order of \(n\!\) logical features to represent a domain of \(n\!\) elements.
  2. The second strategy is called the analytic coding, because it attends to the nuances of each sign's interpretation to fashion its code, or the \(\log (n)\!\) coding, because it uses roughly \(\log_2 (n)\!\) binary features to represent a domain of \(n\!\) elements.
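
As a rough quantitative illustration of the difference between the two strategies (a sketch only, in Python, and using a plain binary index as a stand-in for the interpretive nuances that a genuine analytic coding would attend to), the literal coding spends one logical feature per element while the analytic coding gets by with about \(\log_2 (n)\!\) features.

  from math import ceil, log2

  def literal_code(domain):
      """O(n) coding: one exclusive feature per element (a one-hot bit vector)."""
      elems = list(domain)
      return {e: tuple(int(e == f) for f in elems) for e in elems}

  def indexed_code(domain):
      """A log(n)-feature coding: the binary index, a stand-in for a true analytic coding."""
      elems = list(domain)
      width = max(1, ceil(log2(len(elems))))
      return {e: tuple(int(b) for b in format(k, f"0{width}b")) for k, e in enumerate(elems)}

  W = ["A", "B", "a", "b", "i", "u"]                        # six elements, coded mnemonically
  assert len(next(iter(literal_code(W).values()))) == 6     # six logical features
  assert len(next(iter(indexed_code(W).values()))) == 3     # ceil(log2 6) = 3 binary features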

6.24. Literal Intensional Representations

In this section I prepare the grounds for building bridges between ERs and IRs of sign relations. To establish an initial foothold on either side of the distinction and to gain a first march on connecting the two sites of the intended construction, I introduce an intermediate mode of description called a literal intensional representation (LIR).

Any LIR is a nominal form of IR that has exactly the same level of detail as an ER, merely shifting the interpretation of primitive terms from an extensional to an intensional modality, namely, from a frame of reference terminating in points, atomic elements, elementary objects, or real particulars to a frame of reference terminating in qualities, basic features, fundamental properties, or simple propositions. This modification, that translates the entire set of elementary objects in an ER into a parallel set of fundamental properties in a LIR, constitutes a form of modulation that can be subtle or trivial, depending on one's point of view. Regarded as trivial, it tends to go unmarked, leaving it up to the judgment of the interpreter to decide whether the same sign is meant to denote a point, a particular, a property, or a proposition. An interpretive variance that goes unstated tends to be treated as final. It is always possible to bring in more signs in an attempt to signify the variants intended, but it needs to be noted that every effort to control the interpretive variance by means of these epithets and expletives only increases the level of liability for accidental errors, if not the actual probability of misinterpretation. For the sake of this introduction, and in spite of these risks, I treat the distinction between extensional and intensional modes of interpretation as worthy of note and deserving of an explicit notation.


\(\text{Table 50.} ~~ \text{Notations for Objects and Their Signs}\!\)
\(\text{Object}\!\) \(\text{Sign of Object}\!\)

\(\begin{matrix} \text{A} & \text{A} & w_1 \\[6pt] \text{B} & \text{B} & w_2 \\[12pt] {}^{\backprime\backprime} \text{A} {}^{\prime\prime} & {}^{\langle} \text{A} {}^{\rangle} & w_3 \\[6pt] {}^{\backprime\backprime} \text{B} {}^{\prime\prime} & {}^{\langle} \text{B} {}^{\rangle} & w_4 \\[6pt] {}^{\backprime\backprime} \text{i} {}^{\prime\prime} & {}^{\langle} \text{i} {}^{\rangle} & w_5 \\[6pt] {}^{\backprime\backprime} \text{u} {}^{\prime\prime} & {}^{\langle} \text{u} {}^{\rangle} & w_6 \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{A} {}^{\rangle} & {}^{\langle} \text{A} {}^{\rangle} & {}^{\langle} w_1 {}^{\rangle} \\[6pt] {}^{\langle} \text{B} {}^{\rangle} & {}^{\langle} \text{B} {}^{\rangle} & {}^{\langle} w_2 {}^{\rangle} \\[12pt] {}^{\langle\backprime\backprime} \text{A} {}^{\prime\prime\rangle} & {}^{\langle\langle} \text{A} {}^{\rangle\rangle} & {}^{\langle} w_3 {}^{\rangle} \\[6pt] {}^{\langle\backprime\backprime} \text{B} {}^{\prime\prime\rangle} & {}^{\langle\langle} \text{B} {}^{\rangle\rangle} & {}^{\langle} w_4 {}^{\rangle} \\[6pt] {}^{\langle\backprime\backprime} \text{i} {}^{\prime\prime\rangle} & {}^{\langle\langle} \text{i} {}^{\rangle\rangle} & {}^{\langle} w_5 {}^{\rangle} \\[6pt] {}^{\langle\backprime\backprime} \text{u} {}^{\prime\prime\rangle} & {}^{\langle\langle} \text{u} {}^{\rangle\rangle} & {}^{\langle} w_6 {}^{\rangle} \end{matrix}\)


\(\text{Table 51.1} ~~ \text{Notations for Properties and Their Signs (1)}\!\)
\(\text{Property}\!\) \(\text{Sign of Property}\!\)

\(\begin{matrix} {}^{\lbrace} \text{A} {}^{\rbrace} & {}^{\lbrace} \text{A} {}^{\rbrace} & {}^{\lbrace} w_1 {}^{\rbrace} \\[6pt] {}^{\lbrace} \text{B} {}^{\rbrace} & {}^{\lbrace} \text{B} {}^{\rbrace} & {}^{\lbrace} w_2 {}^{\rbrace} \\[12pt] {}^{\lbrace\backprime\backprime} \text{A} {}^{\prime\prime\rbrace} & {}^{\lbrace\langle} \text{A} {}^{\rangle\rbrace} & {}^{\lbrace} w_3 {}^{\rbrace} \\[6pt] {}^{\lbrace\backprime\backprime} \text{B} {}^{\prime\prime\rbrace} & {}^{\lbrace\langle} \text{B} {}^{\rangle\rbrace} & {}^{\lbrace} w_4 {}^{\rbrace} \\[6pt] {}^{\lbrace\backprime\backprime} \text{i} {}^{\prime\prime\rbrace} & {}^{\lbrace\langle} \text{i} {}^{\rangle\rbrace} & {}^{\lbrace} w_5 {}^{\rbrace} \\[6pt] {}^{\lbrace\backprime\backprime} \text{u} {}^{\prime\prime\rbrace} & {}^{\lbrace\langle} \text{u} {}^{\rangle\rbrace} & {}^{\lbrace} w_6 {}^{\rbrace} \end{matrix}\)

\(\begin{matrix} {}^{\langle\lbrace} \text{A} {}^{\rbrace\rangle} & {}^{\langle\lbrace} \text{A} {}^{\rbrace\rangle} & {}^{\langle\lbrace} w_1 {}^{\rbrace\rangle} \\[6pt] {}^{\langle\lbrace} \text{B} {}^{\rbrace\rangle} & {}^{\langle\lbrace} \text{B} {}^{\rbrace\rangle} & {}^{\langle\lbrace} w_2 {}^{\rbrace\rangle} \\[12pt] {}^{\langle\lbrace\backprime\backprime} \text{A} {}^{\prime\prime\rbrace\rangle} & {}^{\langle\lbrace\langle} \text{A} {}^{\rangle\rbrace\rangle} & {}^{\langle\lbrace} w_3 {}^{\rbrace\rangle} \\[6pt] {}^{\langle\lbrace\backprime\backprime} \text{B} {}^{\prime\prime\rbrace\rangle} & {}^{\langle\lbrace\langle} \text{B} {}^{\rangle\rbrace\rangle} & {}^{\langle\lbrace} w_4 {}^{\rbrace\rangle} \\[6pt] {}^{\langle\lbrace\backprime\backprime} \text{i} {}^{\prime\prime\rbrace\rangle} & {}^{\langle\lbrace\langle} \text{i} {}^{\rangle\rbrace\rangle} & {}^{\langle\lbrace} w_5 {}^{\rbrace\rangle} \\[6pt] {}^{\langle\lbrace\backprime\backprime} \text{u} {}^{\prime\prime\rbrace\rangle} & {}^{\langle\lbrace\langle} \text{u} {}^{\rangle\rbrace\rangle} & {}^{\langle\lbrace} w_6 {}^{\rbrace\rangle} \end{matrix}\)


\(\text{Table 51.2} ~~ \text{Notations for Properties and Their Signs (2)}\!\)
\(\text{Property}\!\) \(\text{Sign of Property}\!\)

\(\begin{matrix} \underline{\underline{\text{A}}} & \underline{\underline{\text{A}}} & \underline{\underline{w_1}} \\[6pt] \underline{\underline{\text{B}}} & \underline{\underline{\text{B}}} & \underline{\underline{w_2}} \\[12pt] \underline{\underline{{}^{\backprime\backprime} \text{A} {}^{\prime\prime}}} & \underline{\underline{{}^{\langle} \text{A} {}^{\rangle}}} & \underline{\underline{w_3}} \\[6pt] \underline{\underline{{}^{\backprime\backprime} \text{B} {}^{\prime\prime}}} & \underline{\underline{{}^{\langle} \text{B} {}^{\rangle}}} & \underline{\underline{w_4}} \\[6pt] \underline{\underline{{}^{\backprime\backprime} \text{i} {}^{\prime\prime}}} & \underline{\underline{{}^{\langle} \text{i} {}^{\rangle}}} & \underline{\underline{w_5}} \\[6pt] \underline{\underline{{}^{\backprime\backprime} \text{u} {}^{\prime\prime}}} & \underline{\underline{{}^{\langle} \text{u} {}^{\rangle}}} & \underline{\underline{w_6}} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \underline{\underline{\text{A}}} {}^{\rangle} & {}^{\langle} \underline{\underline{\text{A}}} {}^{\rangle} & {}^{\langle} \underline{\underline{w_1}} {}^{\rangle} \\[6pt] {}^{\langle} \underline{\underline{\text{B}}} {}^{\rangle} & {}^{\langle} \underline{\underline{\text{B}}} {}^{\rangle} & {}^{\langle} \underline{\underline{w_2}} {}^{\rangle} \\[12pt] {}^{\langle} \underline{\underline{{}^{\backprime\backprime} \text{A} {}^{\prime\prime}}} {}^{\rangle} & {}^{\langle} \underline{\underline{{}^{\langle} \text{A} {}^{\rangle}}} {}^{\rangle} & {}^{\langle} \underline{\underline{w_3}} {}^{\rangle} \\[6pt] {}^{\langle} \underline{\underline{{}^{\backprime\backprime} \text{B} {}^{\prime\prime}}} {}^{\rangle} & {}^{\langle} \underline{\underline{{}^{\langle} \text{B} {}^{\rangle}}} {}^{\rangle} & {}^{\langle} \underline{\underline{w_4}} {}^{\rangle} \\[6pt] {}^{\langle} \underline{\underline{{}^{\backprime\backprime} \text{i} {}^{\prime\prime}}} {}^{\rangle} & {}^{\langle} \underline{\underline{{}^{\langle} \text{i} {}^{\rangle}}} {}^{\rangle} & {}^{\langle} \underline{\underline{w_5}} {}^{\rangle} \\[6pt] {}^{\langle} \underline{\underline{{}^{\backprime\backprime} \text{u} {}^{\prime\prime}}} {}^{\rangle} & {}^{\langle} \underline{\underline{{}^{\langle} \text{u} {}^{\rangle}}} {}^{\rangle} & {}^{\langle} \underline{\underline{w_6}} {}^{\rangle} \end{matrix}\)


\(\text{Table 51.3} ~~ \text{Notations for Properties and Their Signs (3)}\!\)
\(\text{Property}\!\) \(\text{Sign of Property}\!\)

\(\begin{matrix} \underline{\underline{\text{A}}} & \underline{\underline{o_1}} & \underline{\underline{w_1}} \\[6pt] \underline{\underline{\text{B}}} & \underline{\underline{o_2}} & \underline{\underline{w_2}} \\[12pt] \underline{\underline{\text{a}}} & \underline{\underline{s_1}} & \underline{\underline{w_3}} \\[6pt] \underline{\underline{\text{b}}} & \underline{\underline{s_2}} & \underline{\underline{w_4}} \\[6pt] \underline{\underline{\text{i}}} & \underline{\underline{s_3}} & \underline{\underline{w_5}} \\[6pt] \underline{\underline{\text{u}}} & \underline{\underline{s_4}} & \underline{\underline{w_6}} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \underline{\underline{\text{A}}} {}^{\rangle} & {}^{\langle} \underline{\underline{o_1}} {}^{\rangle} & {}^{\langle} \underline{\underline{w_1}} {}^{\rangle} \\[6pt] {}^{\langle} \underline{\underline{\text{B}}} {}^{\rangle} & {}^{\langle} \underline{\underline{o_2}} {}^{\rangle} & {}^{\langle} \underline{\underline{w_2}} {}^{\rangle} \\[12pt] {}^{\langle} \underline{\underline{\text{a}}} {}^{\rangle} & {}^{\langle} \underline{\underline{s_1}} {}^{\rangle} & {}^{\langle} \underline{\underline{w_3}} {}^{\rangle} \\[6pt] {}^{\langle} \underline{\underline{\text{b}}} {}^{\rangle} & {}^{\langle} \underline{\underline{s_2}} {}^{\rangle} & {}^{\langle} \underline{\underline{w_4}} {}^{\rangle} \\[6pt] {}^{\langle} \underline{\underline{\text{i}}} {}^{\rangle} & {}^{\langle} \underline{\underline{s_3}} {}^{\rangle} & {}^{\langle} \underline{\underline{w_5}} {}^{\rangle} \\[6pt] {}^{\langle} \underline{\underline{\text{u}}} {}^{\rangle} & {}^{\langle} \underline{\underline{s_4}} {}^{\rangle} & {}^{\langle} \underline{\underline{w_6}} {}^{\rangle} \end{matrix}\)


\(\text{Table 52.1} ~~ \text{Notations for Instances and Their Signs (1)}\!\)
\(\text{Instance}\!\) \(\text{Sign of Instance}\!\)

\(\begin{matrix} {}^{\lbrack} \text{A} {}^{\rbrack} & {}^{\lbrack} \text{A} {}^{\rbrack} & {}^{\lbrack} w_1 {}^{\rbrack} \\[6pt] {}^{\lbrack} \text{B} {}^{\rbrack} & {}^{\lbrack} \text{B} {}^{\rbrack} & {}^{\lbrack} w_2 {}^{\rbrack} \\[12pt] {}^{\lbrack\backprime\backprime} \text{A} {}^{\prime\prime\rbrack} & {}^{\lbrack\langle} \text{A} {}^{\rangle\rbrack} & {}^{\lbrack} w_3 {}^{\rbrack} \\[6pt] {}^{\lbrack\backprime\backprime} \text{B} {}^{\prime\prime\rbrack} & {}^{\lbrack\langle} \text{B} {}^{\rangle\rbrack} & {}^{\lbrack} w_4 {}^{\rbrack} \\[6pt] {}^{\lbrack\backprime\backprime} \text{i} {}^{\prime\prime\rbrack} & {}^{\lbrack\langle} \text{i} {}^{\rangle\rbrack} & {}^{\lbrack} w_5 {}^{\rbrack} \\[6pt] {}^{\lbrack\backprime\backprime} \text{u} {}^{\prime\prime\rbrack} & {}^{\lbrack\langle} \text{u} {}^{\rangle\rbrack} & {}^{\lbrack} w_6 {}^{\rbrack} \end{matrix}\)

\(\begin{matrix} {}^{\langle\lbrack} \text{A} {}^{\rbrack\rangle} & {}^{\langle\lbrack} \text{A} {}^{\rbrack\rangle} & {}^{\langle\lbrack} w_1 {}^{\rbrack\rangle} \\[6pt] {}^{\langle\lbrack} \text{B} {}^{\rbrack\rangle} & {}^{\langle\lbrack} \text{B} {}^{\rbrack\rangle} & {}^{\langle\lbrack} w_2 {}^{\rbrack\rangle} \\[12pt] {}^{\langle\lbrack\backprime\backprime} \text{A} {}^{\prime\prime\rbrack\rangle} & {}^{\langle\lbrack\langle} \text{A} {}^{\rangle\rbrack\rangle} & {}^{\langle\lbrack} w_3 {}^{\rbrack\rangle} \\[6pt] {}^{\langle\lbrack\backprime\backprime} \text{B} {}^{\prime\prime\rbrack\rangle} & {}^{\langle\lbrack\langle} \text{B} {}^{\rangle\rbrack\rangle} & {}^{\langle\lbrack} w_4 {}^{\rbrack\rangle} \\[6pt] {}^{\langle\lbrack\backprime\backprime} \text{i} {}^{\prime\prime\rbrack\rangle} & {}^{\langle\lbrack\langle} \text{i} {}^{\rangle\rbrack\rangle} & {}^{\langle\lbrack} w_5 {}^{\rbrack\rangle} \\[6pt] {}^{\langle\lbrack\backprime\backprime} \text{u} {}^{\prime\prime\rbrack\rangle} & {}^{\langle\lbrack\langle} \text{u} {}^{\rangle\rbrack\rangle} & {}^{\langle\lbrack} w_6 {}^{\rbrack\rangle} \end{matrix}\)


\(\text{Table 52.2} ~~ \text{Notations for Instances and Their Signs (2)}\!\)
\(\text{Instance}\!\) \(\text{Sign of Instance}\!\)

\(\begin{matrix} \overline{\text{A}} & \overline{\text{A}} & \overline{w_1} \\[6pt] \overline{\text{B}} & \overline{\text{B}} & \overline{w_2} \\[12pt] \overline{{}^{\backprime\backprime} \text{A} {}^{\prime\prime}} & \overline{{}^{\langle} \text{A} {}^{\rangle}} & \overline{w_3} \\[6pt] \overline{{}^{\backprime\backprime} \text{B} {}^{\prime\prime}} & \overline{{}^{\langle} \text{B} {}^{\rangle}} & \overline{w_4} \\[6pt] \overline{{}^{\backprime\backprime} \text{i} {}^{\prime\prime}} & \overline{{}^{\langle} \text{i} {}^{\rangle}} & \overline{w_5} \\[6pt] \overline{{}^{\backprime\backprime} \text{u} {}^{\prime\prime}} & \overline{{}^{\langle} \text{u} {}^{\rangle}} & \overline{w_6} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \overline{\text{A}} {}^{\rangle} & {}^{\langle} \overline{\text{A}} {}^{\rangle} & {}^{\langle} \overline{w_1} {}^{\rangle} \\[6pt] {}^{\langle} \overline{\text{B}} {}^{\rangle} & {}^{\langle} \overline{\text{B}} {}^{\rangle} & {}^{\langle} \overline{w_2} {}^{\rangle} \\[12pt] {}^{\langle} \overline{{}^{\backprime\backprime} \text{A} {}^{\prime\prime}} {}^{\rangle} & {}^{\langle} \overline{{}^{\langle} \text{A} {}^{\rangle}} {}^{\rangle} & {}^{\langle} \overline{w_3} {}^{\rangle} \\[6pt] {}^{\langle} \overline{{}^{\backprime\backprime} \text{B} {}^{\prime\prime}} {}^{\rangle} & {}^{\langle} \overline{{}^{\langle} \text{B} {}^{\rangle}} {}^{\rangle} & {}^{\langle} \overline{w_4} {}^{\rangle} \\[6pt] {}^{\langle} \overline{{}^{\backprime\backprime} \text{i} {}^{\prime\prime}} {}^{\rangle} & {}^{\langle} \overline{{}^{\langle} \text{i} {}^{\rangle}} {}^{\rangle} & {}^{\langle} \overline{w_5} {}^{\rangle} \\[6pt] {}^{\langle} \overline{{}^{\backprime\backprime} \text{u} {}^{\prime\prime}} {}^{\rangle} & {}^{\langle} \overline{{}^{\langle} \text{u} {}^{\rangle}} {}^{\rangle} & {}^{\langle} \overline{w_6} {}^{\rangle} \end{matrix}\)


\(\text{Table 52.3} ~~ \text{Notations for Instances and Their Signs (3)}\!\)
\(\text{Instance}\!\) \(\text{Sign of Instance}\!\)

\(\begin{matrix} \overline{\text{A}} & \overline{o_1} & \overline{w_1} \\[6pt] \overline{\text{B}} & \overline{o_2} & \overline{w_2} \\[12pt] \overline{\text{a}} & \overline{s_1} & \overline{w_3} \\[6pt] \overline{\text{b}} & \overline{s_2} & \overline{w_4} \\[6pt] \overline{\text{i}} & \overline{s_3} & \overline{w_5} \\[6pt] \overline{\text{u}} & \overline{s_4} & \overline{w_6} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \overline{\text{A}} {}^{\rangle} & {}^{\langle} \overline{o_1} {}^{\rangle} & {}^{\langle} \overline{w_1} {}^{\rangle} \\[6pt] {}^{\langle} \overline{\text{B}} {}^{\rangle} & {}^{\langle} \overline{o_2} {}^{\rangle} & {}^{\langle} \overline{w_2} {}^{\rangle} \\[12pt] {}^{\langle} \overline{\text{a}} {}^{\rangle} & {}^{\langle} \overline{s_1} {}^{\rangle} & {}^{\langle} \overline{w_3} {}^{\rangle} \\[6pt] {}^{\langle} \overline{\text{b}} {}^{\rangle} & {}^{\langle} \overline{s_2} {}^{\rangle} & {}^{\langle} \overline{w_4} {}^{\rangle} \\[6pt] {}^{\langle} \overline{\text{i}} {}^{\rangle} & {}^{\langle} \overline{s_3} {}^{\rangle} & {}^{\langle} \overline{w_5} {}^{\rangle} \\[6pt] {}^{\langle} \overline{\text{u}} {}^{\rangle} & {}^{\langle} \overline{s_4} {}^{\rangle} & {}^{\langle} \overline{w_6} {}^{\rangle} \end{matrix}\)


Consider first the literal coding. As noted above, this strategy is so called because it sticks to obvious features of each syntactic element to contrive its code; it is also called the \({\mathcal{O}(n)}\!\) coding because it uses a number of logical features on the order of \(n\!\) to represent a domain of \(n\!\) elements.

Being superficial as a matter of principle, or adhering to the surface appearances of signs, enjoys the initial advantage that the very same codes can be used by any interpreter that is capable of observing them. The down side of resorting to this technique is that it typically uses an excessive number of logical dimensions to get each point of the intended space across.

Even while operating within the general lines of the literal, superficial, or \({\mathcal{O}(n)}\!\) strategy, there are still a number of choices to be made in the style of coding to be employed. For example, if there is an obvious distinction between different components of the world, like that between the objects in \(O = \{ \text{A}, \text{B} \}\!\) and the signs in \(S = \{ {}^{\backprime\backprime} \text{A} {}^{\prime\prime}, {}^{\backprime\backprime} \text{B} {}^{\prime\prime}, {}^{\backprime\backprime} \text{i} {}^{\prime\prime}, {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \},\!\) then it is common to let this distinction go formally unmarked in the LIR, that is, to omit the requirement of declaring an explicit logical feature to make a note of it in the formal coding. The distinction itself, as a property of reality, is in no danger of being obliterated or permanently erased, but it can be obscured and temporarily ignored. In practice, the distinction is not so much ignored as it is casually observed and informally attended to, usually being marked by incidental indices in the context of the representation.

Literal Coding

For the domain \(W = \{ \text{A}, \text{B}, {}^{\backprime\backprime} \text{A} {}^{\prime\prime}, {}^{\backprime\backprime} \text{B} {}^{\prime\prime}, {}^{\backprime\backprime} \text{i} {}^{\prime\prime}, {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \}\!\) of six elements one needs to use six logical features, in effect, elevating each individual object to the status of an exclusive ontological category in its own right. The easiest way to do this is simply to reuse the world domain \(W\!\) as a logical alphabet \(\underline{\underline{W}},\!\) taking element-wise identifications as follows:

\(\begin{array}{*{15}{c}} W & = & \{ & o_1 & , & o_2 & , & s_1 & , & s_2 & , & s_3 & , & s_4 & \} \\ & = & \{ & \text{A} & , & \text{B} & , & {}^{\backprime\backprime} \text{A} {}^{\prime\prime} & , & {}^{\backprime\backprime} \text{B} {}^{\prime\prime} & , & {}^{\backprime\backprime} \text{i} {}^{\prime\prime} & , & {}^{\backprime\backprime} \text{u} {}^{\prime\prime} & \} \\[10pt] \underline{\underline{W}} & = & \{ & \underline{\underline{w_1}} & , & \underline{\underline{w_2}} & , & \underline{\underline{w_3}} & , & \underline{\underline{w_4}} & , & \underline{\underline{w_5}} & , & \underline{\underline{w_6}} & \} \\ & = & \{ & \underline{\underline{\text{A}}} & , & \underline{\underline{\text{B}}} & , & \underline{\underline{\text{a}}} & , & \underline{\underline{\text{b}}} & , & \underline{\underline{\text{i}}} & , & \underline{\underline{\text{u}}} & \} \end{array}\)

Tables 53.1 and 53.2 show three different ways of coding the elements of an ER and the features of a LIR, respectively, for the world set \(W = W(\text{A}, \text{B}),\!\) that is, for the set of objects, signs, and interpretants that are common to the sign relations \(L(\text{A})\!\) and \(L(\text{B}).\!\) Successive columns of these Tables give the mnemonic code, the pragmatic code, and the abstract code, respectively, for each element.


\(\text{Table 53.1} ~~ \text{Elements of} ~ \mathrm{ER}(W)\!\)
\(\text{Mnemonic Element}\!\)

\(w \in W\!\)
\(\text{Pragmatic Element}\!\)

\(w \in W\!\)
\(\text{Abstract Element}\!\)

\(w_i \in W\!\)

\(\begin{matrix} \text{A} \\[4pt] \text{B} \\[4pt] {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\[4pt] {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\[4pt] {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \\[4pt] {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} o_1 \\[4pt] o_2 \\[4pt] s_1 \\[4pt] s_2 \\[4pt] s_3 \\[4pt] s_4 \end{matrix}\)

\(\begin{matrix} w_1 \\[4pt] w_2 \\[4pt] w_3 \\[4pt] w_4 \\[4pt] w_5 \\[4pt] w_6 \end{matrix}\)


\(\text{Table 53.2} ~~ \text{Features of} ~ \mathrm{LIR}(W)\!\)

\(\text{Mnemonic Feature}\!\)

\(\underline{\underline{w}} \in \underline{\underline{W}}\!\)

\(\text{Pragmatic Feature}\!\)

\(\underline{\underline{w}} \in \underline{\underline{W}}\!\)

\(\text{Abstract Feature}\!\)

\(\underline{\underline{w_i}} \in \underline{\underline{W}}\!\)

\(\begin{matrix} \underline{\underline{\text{A}}} \\[4pt] \underline{\underline{\text{B}}} \\[4pt] \underline{\underline{\text{a}}} \\[4pt] \underline{\underline{\text{b}}} \\[4pt] \underline{\underline{\text{i}}} \\[4pt] \underline{\underline{\text{u}}} \end{matrix}\)

\(\begin{matrix} \underline{\underline{o_1}} \\[4pt] \underline{\underline{o_2}} \\[4pt] \underline{\underline{s_1}} \\[4pt] \underline{\underline{s_2}} \\[4pt] \underline{\underline{s_3}} \\[4pt] \underline{\underline{s_4}} \end{matrix}\)

\(\begin{matrix} \underline{\underline{w_1}} \\[4pt] \underline{\underline{w_2}} \\[4pt] \underline{\underline{w_3}} \\[4pt] \underline{\underline{w_4}} \\[4pt] \underline{\underline{w_5}} \\[4pt] \underline{\underline{w_6}} \end{matrix}\)


If the world of \(\text{A}\!\) and \(\text{B},\!\) the set \(W = \{ \text{A}, \text{B}, {}^{\backprime\backprime} \text{A} {}^{\prime\prime}, {}^{\backprime\backprime} \text{B} {}^{\prime\prime}, {}^{\backprime\backprime} \text{i} {}^{\prime\prime}, {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \},\!\) is viewed abstractly, as an arbitrary set of six atomic points, then there are exactly \(2^6 = 64\!\) abstract properties or potential attributes that might be applied to or recognized in these points. The elements of \(W\!\) that possess a given property form a subset of \(W\!\) called the extension of that property. Thus the extensions of abstract properties are exactly the subsets of \(W.\!\) The set of all subsets of \(W\!\) is called the power set of \(W,\!\) notated as \(\mathrm{Pow}(W)\!\) or \(\mathcal{P}(W).\!\) In order to make this way of talking about properties consistent with the previous definition of reality, it is necessary to say that one potential property is never realized, since no point has it, and its extension is the empty set \(\varnothing = \{ \}.\!\) All the natural properties of points that one observes in a concrete situation, properties whose extensions are known as natural kinds, can be recognized among the abstract, arbitrary, or set-theoretic properties that are systematically generated in this way. Typically, however, many of these abstract properties will not be recognized as falling among the more natural kinds.
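
The count of abstract properties is easy to confirm mechanically. The sketch below (Python, identifiers mine) enumerates the power set of a six element world set and checks that exactly one of the \(64\!\) potential properties has the empty extension.

  from itertools import combinations

  W = ("A", "B", '"A"', '"B"', '"i"', '"u"')

  def power_set(xs):
      """All subsets of xs, i.e. the extensions of all abstract properties on xs."""
      return [frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

  properties = power_set(W)
  assert len(properties) == 2 ** len(W) == 64       # 64 abstract properties on six points
  assert sum(1 for p in properties if not p) == 1   # exactly one has the empty extension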

Tables 54.1, 54.2, and 54.3 show three different ways of representing the elements of the world set \(W\!\) as vectors in the coordinate space \(\underline{W}\!\) and as singular propositions in the universe of discourse \(W^\circ.\!\) Altogether, these Tables present the literal codes for the elements of \(\underline{W}\!\) and \(W^\circ\!\) in their mnemonic, pragmatic, and abstract versions, respectively. In each Table, Column 1 lists the element \(w \in W,\!\) while Column 2 gives the corresponding coordinate vector \(\underline{w} \in \underline{W}\!\) in the form of a bit string. The next two Columns represent each \(w \in W\!\) as a proposition in \(W^\circ,\!\) in effect, reconstituting it as a function \(w : \underline{W} \to \mathbb{B}.\!\) Column 3 shows the propositional expression of each element in the form of a conjunct term, in other words, as a logical product of positive and negative features. Column 4 gives the compact code for each element, using a conjunction of positive features in subscripted angle brackets to represent the singular proposition corresponding to each element.


\(\text{Table 54.1} ~~ \text{Mnemonic Literal Codes for Interpreters A and B}\!\)
\(\text{Element}\!\) \(\text{Vector}\!\) \(\text{Conjunct Term}\!\) \(\text{Code}\!\)

\(\begin{matrix} \text{A} \\[4pt] \text{B} \\[4pt] {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\[4pt] {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\[4pt] {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \\[4pt] {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} 100000 \\[4pt] 010000 \\[4pt] 001000 \\[4pt] 000100 \\[4pt] 000010 \\[4pt] 000001 \end{matrix}\)

\(\begin{matrix} ~\underline{\underline{A}}~ (\underline{\underline{B}}) (\underline{\underline{a}}) (\underline{\underline{b}}) (\underline{\underline{i}}) (\underline{\underline{u}}) \\[4pt] (\underline{\underline{A}}) ~\underline{\underline{B}}~ (\underline{\underline{a}}) (\underline{\underline{b}}) (\underline{\underline{i}}) (\underline{\underline{u}}) \\[4pt] (\underline{\underline{A}}) (\underline{\underline{B}}) ~\underline{\underline{a}}~ (\underline{\underline{b}}) (\underline{\underline{i}}) (\underline{\underline{u}}) \\[4pt] (\underline{\underline{A}}) (\underline{\underline{B}}) (\underline{\underline{a}}) ~\underline{\underline{b}}~ (\underline{\underline{i}}) (\underline{\underline{u}}) \\[4pt] (\underline{\underline{A}}) (\underline{\underline{B}}) (\underline{\underline{a}}) (\underline{\underline{b}}) ~\underline{\underline{i}}~ (\underline{\underline{u}}) \\[4pt] (\underline{\underline{A}}) (\underline{\underline{B}}) (\underline{\underline{a}}) (\underline{\underline{b}}) (\underline{\underline{i}}) ~\underline{\underline{u}}~ \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{A}}\rangle}_W \\[4pt] {\langle\underline{\underline{B}}\rangle}_W \\[4pt] {\langle\underline{\underline{a}}\rangle}_W \\[4pt] {\langle\underline{\underline{b}}\rangle}_W \\[4pt] {\langle\underline{\underline{i}}\rangle}_W \\[4pt] {\langle\underline{\underline{u}}\rangle}_W \end{matrix}\)


\(\text{Table 54.2} ~~ \text{Pragmatic Literal Codes for Interpreters A and B}\!\)
\(\text{Element}\!\) \(\text{Vector}\!\) \(\text{Conjunct Term}\!\) \(\text{Code}\!\)

\(\begin{matrix} \text{A} \\[4pt] \text{B} \\[4pt] {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\[4pt] {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\[4pt] {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \\[4pt] {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} 100000 \\[4pt] 010000 \\[4pt] 001000 \\[4pt] 000100 \\[4pt] 000010 \\[4pt] 000001 \end{matrix}\)

\(\begin{matrix} ~\underline{\underline{o_1}}~ (\underline{\underline{o_2}}) (\underline{\underline{s_1}}) (\underline{\underline{s_2}}) (\underline{\underline{s_3}}) (\underline{\underline{s_4}}) \\[4pt] (\underline{\underline{o_1}}) ~\underline{\underline{o_2}}~ (\underline{\underline{s_1}}) (\underline{\underline{s_2}}) (\underline{\underline{s_3}}) (\underline{\underline{s_4}}) \\[4pt] (\underline{\underline{o_1}}) (\underline{\underline{o_2}}) ~\underline{\underline{s_1}}~ (\underline{\underline{s_2}}) (\underline{\underline{s_3}}) (\underline{\underline{s_4}}) \\[4pt] (\underline{\underline{o_1}}) (\underline{\underline{o_2}}) (\underline{\underline{s_1}}) ~\underline{\underline{s_2}}~ (\underline{\underline{s_3}}) (\underline{\underline{s_4}}) \\[4pt] (\underline{\underline{o_1}}) (\underline{\underline{o_2}}) (\underline{\underline{s_1}}) (\underline{\underline{s_2}}) ~\underline{\underline{s_3}}~ (\underline{\underline{s_4}}) \\[4pt] (\underline{\underline{o_1}}) (\underline{\underline{o_2}}) (\underline{\underline{s_1}}) (\underline{\underline{s_2}}) (\underline{\underline{s_3}}) ~\underline{\underline{s_4}}~ \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{o_1}}\rangle}_W \\[4pt] {\langle\underline{\underline{o_2}}\rangle}_W \\[4pt] {\langle\underline{\underline{s_1}}\rangle}_W \\[4pt] {\langle\underline{\underline{s_2}}\rangle}_W \\[4pt] {\langle\underline{\underline{s_3}}\rangle}_W \\[4pt] {\langle\underline{\underline{s_4}}\rangle}_W \end{matrix}\)


\(\text{Table 54.3} ~~ \text{Abstract Literal Codes for Interpreters A and B}\!\)
\(\text{Element}\!\) \(\text{Vector}\!\) \(\text{Conjunct Term}\!\) \(\text{Code}\!\)

\(\begin{matrix} \text{A} \\[4pt] \text{B} \\[4pt] {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\[4pt] {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\[4pt] {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \\[4pt] {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} 100000 \\[4pt] 010000 \\[4pt] 001000 \\[4pt] 000100 \\[4pt] 000010 \\[4pt] 000001 \end{matrix}\)

\(\begin{matrix} ~\underline{\underline{w_1}}~ (\underline{\underline{w_2}}) (\underline{\underline{w_3}}) (\underline{\underline{w_4}}) (\underline{\underline{w_5}}) (\underline{\underline{w_6}}) \\[4pt] (\underline{\underline{w_1}}) ~\underline{\underline{w_2}}~ (\underline{\underline{w_3}}) (\underline{\underline{w_4}}) (\underline{\underline{w_5}}) (\underline{\underline{w_6}}) \\[4pt] (\underline{\underline{w_1}}) (\underline{\underline{w_2}}) ~\underline{\underline{w_3}}~ (\underline{\underline{w_4}}) (\underline{\underline{w_5}}) (\underline{\underline{w_6}}) \\[4pt] (\underline{\underline{w_1}}) (\underline{\underline{w_2}}) (\underline{\underline{w_3}}) ~\underline{\underline{w_4}}~ (\underline{\underline{w_5}}) (\underline{\underline{w_6}}) \\[4pt] (\underline{\underline{w_1}}) (\underline{\underline{w_2}}) (\underline{\underline{w_3}}) (\underline{\underline{w_4}}) ~\underline{\underline{w_5}}~ (\underline{\underline{w_6}}) \\[4pt] (\underline{\underline{w_1}}) (\underline{\underline{w_2}}) (\underline{\underline{w_3}}) (\underline{\underline{w_4}}) (\underline{\underline{w_5}}) ~\underline{\underline{w_6}}~ \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{w_1}}\rangle}_W \\[4pt] {\langle\underline{\underline{w_2}}\rangle}_W \\[4pt] {\langle\underline{\underline{w_3}}\rangle}_W \\[4pt] {\langle\underline{\underline{w_4}}\rangle}_W \\[4pt] {\langle\underline{\underline{w_5}}\rangle}_W \\[4pt] {\langle\underline{\underline{w_6}}\rangle}_W \end{matrix}\)
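
The literal codes tabulated above follow a mechanical pattern, and a minimal sketch (Python, with the mnemonic feature alphabet of Table 53.2 and a plain text rendering of the conjunct terms, all of it my own transcription) can generate them.

  # Mnemonic feature alphabet, in the order used by Tables 54.1 through 54.3.
  features = ["A", "B", "a", "b", "i", "u"]

  def vector(k):
      """One-hot coordinate vector for the k-th element of W, written as a bit string."""
      return "".join("1" if j == k else "0" for j in range(len(features)))

  def conjunct_term(k):
      """Conjunct term: the k-th feature asserted, every other feature negated as (...)."""
      return " ".join(f if j == k else f"({f})" for j, f in enumerate(features))

  for k, f in enumerate(features):
      print(vector(k), "|", conjunct_term(k), "|", f"<{f}>_W")
  # The first line printed reads:  100000 | A (B) (a) (b) (i) (u) | <A>_W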


\(\text{Table 55.1} ~~ \mathrm{LIR}_1 (L_\text{A}) : \text{Literal Representation of} ~ L_\text{A}\!\)
\(\text{Object}\!\) \(\text{Sign}\!\) \(\text{Interpretant}\!\)

\(\begin{matrix} {\langle\underline{\underline{\text{A}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{A}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{A}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{A}}}\rangle}_W \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{\text{a}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{a}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{i}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{i}}}\rangle}_W \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{\text{a}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{i}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{a}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{i}}}\rangle}_W \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{\text{B}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{B}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{B}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{B}}}\rangle}_W \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{\text{b}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{b}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{u}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{u}}}\rangle}_W \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{\text{b}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{u}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{b}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{u}}}\rangle}_W \end{matrix}\!\)


\(\text{Table 55.2} ~~ \mathrm{LIR}_1 (\mathrm{Den}(L_\text{A})) : \text{Denotative Component of} ~ L_\text{A}\!\)
\(\text{Object}\!\) \(\text{Sign}\!\) \(\text{Transition}\!\)

\(\begin{matrix} {\langle\underline{\underline{\text{A}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{A}}}\rangle}_W \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{\text{a}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{i}}}\rangle}_W \end{matrix}\!\)

\(\begin{matrix} ({\langle\underline{\underline{\text{a}}}\rangle}_W, {\langle\underline{\underline{\text{A}}}\rangle}_W) \\[4pt] ({\langle\underline{\underline{\text{i}}}\rangle}_W, {\langle\underline{\underline{\text{A}}}\rangle}_W) \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{\text{B}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{B}}}\rangle}_W \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{\text{b}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{u}}}\rangle}_W \end{matrix}\)

\(\begin{matrix} ({\langle\underline{\underline{\text{b}}}\rangle}_W, {\langle\underline{\underline{\text{B}}}\rangle}_W) \\[4pt] ({\langle\underline{\underline{\text{u}}}\rangle}_W, {\langle\underline{\underline{\text{B}}}\rangle}_W) \end{matrix}\)


\(\text{Table 55.3} ~~ \mathrm{LIR}_1 (\mathrm{Con}(L_\text{A})) : \text{Connotative Component of} ~ L_\text{A}\!\)
\(\text{Sign}\!\) \(\text{Interpretant}\!\) \(\text{Transition}\!\)

\(\begin{matrix} {\langle\underline{\underline{\text{a}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{a}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{i}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{i}}}\rangle}_W \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{\text{a}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{i}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{a}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{i}}}\rangle}_W \end{matrix}\)

\(\begin{matrix} 0_{\mathrm{d}W} \\[4pt] {\langle \mathrm{d}\underline{\underline{\text{a}}} ~ \mathrm{d}\underline{\underline{\text{i}}} \rangle}_{\mathrm{d}W} \\[4pt] {\langle \mathrm{d}\underline{\underline{\text{a}}} ~ \mathrm{d}\underline{\underline{\text{i}}} \rangle}_{\mathrm{d}W} \\[4pt] 0_{\mathrm{d}W} \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{\text{b}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{b}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{u}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{u}}}\rangle}_W \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{\text{b}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{u}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{b}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{u}}}\rangle}_W \end{matrix}\!\)

\(\begin{matrix} 0_{\mathrm{d}W} \\[4pt] {\langle \mathrm{d}\underline{\underline{\text{b}}} ~ \mathrm{d}\underline{\underline{\text{u}}} \rangle}_{\mathrm{d}W} \\[4pt] {\langle \mathrm{d}\underline{\underline{\text{b}}} ~ \mathrm{d}\underline{\underline{\text{u}}} \rangle}_{\mathrm{d}W} \\[4pt] 0_{\mathrm{d}W} \end{matrix}\)


\(\text{Table 56.1} ~~ \mathrm{LIR}_1 (L_\text{B}) : \text{Literal Representation of} ~ L_\text{B}\!\)
\(\text{Object}\!\) \(\text{Sign}\!\) \(\text{Interpretant}\!\)

\(\begin{matrix} {\langle\underline{\underline{\text{A}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{A}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{A}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{A}}}\rangle}_W \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{\text{a}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{a}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{u}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{u}}}\rangle}_W \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{\text{a}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{u}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{a}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{u}}}\rangle}_W \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{\text{B}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{B}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{B}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{B}}}\rangle}_W \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{\text{b}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{b}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{i}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{i}}}\rangle}_W \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{\text{b}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{i}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{b}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{i}}}\rangle}_W \end{matrix}\!\)


\(\text{Table 56.2} ~~ \mathrm{LIR}_1 (\mathrm{Den}(L_\text{B})) : \text{Denotative Component of} ~ L_\text{B}\!\)
\(\text{Object}\!\) \(\text{Sign}\!\) \(\text{Transition}\!\)

\(\begin{matrix} {\langle\underline{\underline{\text{A}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{A}}}\rangle}_W \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{\text{a}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{u}}}\rangle}_W \end{matrix}\)

\(\begin{matrix} ({\langle\underline{\underline{\text{a}}}\rangle}_W, {\langle\underline{\underline{\text{A}}}\rangle}_W) \\[4pt] ({\langle\underline{\underline{\text{u}}}\rangle}_W, {\langle\underline{\underline{\text{A}}}\rangle}_W) \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{\text{B}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{B}}}\rangle}_W \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{\text{b}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{i}}}\rangle}_W \end{matrix}\)

\(\begin{matrix} ({\langle\underline{\underline{\text{b}}}\rangle}_W, {\langle\underline{\underline{\text{B}}}\rangle}_W) \\[4pt] ({\langle\underline{\underline{\text{i}}}\rangle}_W, {\langle\underline{\underline{\text{B}}}\rangle}_W) \end{matrix}\)


\(\text{Table 56.3} ~~ \mathrm{LIR}_1 (\mathrm{Con}(L_\text{B})) : \text{Connotative Component of} ~ L_\text{B}\!\)
\(\text{Sign}\!\) \(\text{Interpretant}\!\) \(\text{Transition}\!\)

\(\begin{matrix} {\langle\underline{\underline{\text{a}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{a}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{u}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{u}}}\rangle}_W \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{\text{a}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{u}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{a}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{u}}}\rangle}_W \end{matrix}\)

\(\begin{matrix} 0_{\mathrm{d}W} \\[4pt] {\langle \mathrm{d}\underline{\underline{\text{a}}} ~ \mathrm{d}\underline{\underline{\text{u}}} \rangle}_{\mathrm{d}W} \\[4pt] {\langle \mathrm{d}\underline{\underline{\text{a}}} ~ \mathrm{d}\underline{\underline{\text{u}}} \rangle}_{\mathrm{d}W} \\[4pt] 0_{\mathrm{d}W} \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{\text{b}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{b}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{i}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{i}}}\rangle}_W \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{\text{b}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{i}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{b}}}\rangle}_W \\[4pt] {\langle\underline{\underline{\text{i}}}\rangle}_W \end{matrix}\!\)

\(\begin{matrix} 0_{\mathrm{d}W} \\[4pt] {\langle \mathrm{d}\underline{\underline{\text{b}}} ~ \mathrm{d}\underline{\underline{\text{i}}} \rangle}_{\mathrm{d}W} \\[4pt] {\langle \mathrm{d}\underline{\underline{\text{b}}} ~ \mathrm{d}\underline{\underline{\text{i}}} \rangle}_{\mathrm{d}W} \\[4pt] 0_{\mathrm{d}W} \end{matrix}\)


Lateral Coding

For the domain \(O = \{ \text{A}, \text{B} \}\!\) of two elements one needs to use two logical features, in effect, elevating each individual object to the status of an exclusive grammatical category in its own right.  The easiest way to do this is simply to reuse the object domain \(O\!\) as a logical alphabet \(\underline{\underline{X}},\!\) taking element-wise identifications as follows:

\(\begin{array}{*{7}{c}} X & = & \{ & o_1 & , & o_2 & \} \\ & = & \{ & \text{A} & , & \text{B} & \} \\[10pt] \underline{\underline{X}} & = & \{ & \underline{\underline{o_1}} & , & \underline{\underline{o_2}} & \} \\ & = & \{ & \underline{\underline{\text{A}}} & , & \underline{\underline{\text{B}}} & \} \end{array}\!\)

For the domain \(S = I = \{ {}^{\backprime\backprime} \text{A} {}^{\prime\prime}, {}^{\backprime\backprime} \text{B} {}^{\prime\prime}, {}^{\backprime\backprime} \text{i} {}^{\prime\prime}, {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \}\!\) of four elements one needs to use four logical features, in effect, elevating each individual sign to the status of an exclusive grammatical category in its own right. The easiest way to do this is simply to reuse the syntactic domain \(S = I\!\) as a logical alphabet \(\underline{\underline{Y}},\!\) taking element-wise identifications as follows:

\(\begin{array}{*{11}{c}} Y & = & \{ & s_1 & , & s_2 & , & s_3 & , & s_4 & \} \\ & = & \{ & {}^{\backprime\backprime} \text{A} {}^{\prime\prime} & , & {}^{\backprime\backprime} \text{B} {}^{\prime\prime} & , & {}^{\backprime\backprime} \text{i} {}^{\prime\prime} & , & {}^{\backprime\backprime} \text{u} {}^{\prime\prime} & \} \\[10pt] \underline{\underline{Y}} & = & \{ & \underline{\underline{s_1}} & , & \underline{\underline{s_2}} & , & \underline{\underline{s_3}} & , & \underline{\underline{s_4}} & \} \\ & = & \{ & \underline{\underline{{}^{\backprime\backprime} \text{A} {}^{\prime\prime}}} & , & \underline{\underline{{}^{\backprime\backprime} \text{B} {}^{\prime\prime}}} & , & \underline{\underline{{}^{\backprime\backprime} \text{i} {}^{\prime\prime}}} & , & \underline{\underline{{}^{\backprime\backprime} \text{u} {}^{\prime\prime}}} & \} \end{array}\!\)
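As a concrete illustration of this sort of lateral coding, the following minimal sketch in Python (the function and variable names are illustrative assumptions, not part of the original presentation) generates the one-feature-per-element bit strings that appear in the Vector column of Tables 57.1, 57.2, and 57.3.

# Minimal sketch of lateral (one feature per element) coding.
# The domains O and S are taken from the text; the helper names are
# illustrative assumptions, not part of the original presentation.

O = ["A", "B"]                      # object domain
S = ['"A"', '"B"', '"i"', '"u"']    # syntactic domain S = I

def lateral_code(domain, element):
    """Return the one-hot bit string that codes `element` over `domain`."""
    return "".join("1" if x == element else "0" for x in domain)

for o in O:
    print(o, lateral_code(O, o))    # A -> 10, B -> 01
for s in S:
    print(s, lateral_code(S, s))    # "A" -> 1000, ..., "u" -> 0001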

Tables 57.1, 57.2, and 57.3 show several ways of representing the elements of \(O\!\) and \(S,\!\) presenting the lateral codes for world elements in their mnemonic, pragmatic, and abstract versions, respectively. In each Table, Column 2 gives the coordinate vector \(\underline{x} \in \underline{X}\!\) or \(\underline{y} \in \underline{Y}\!\) as a bit string, using a subscript to indicate the relevant space, \(\underline{X}\!\) or \(\underline{Y}.\!\) Column 3 lists the propositional expression of each element in the form of a conjunct term, in other words, as a logical product of positive and negative features, using doubly underlined capital letters for literal features of objects and doubly underlined lower case letters for literal features of quoted signs. Finally, Column 4 shows the compact code for each element, using a conjunction of positive features in subscripted angle brackets to represent the corresponding conjunct term as a singular proposition.
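A short companion sketch, again in Python with hypothetical names, indicates how the Conjunct Term and Code columns can be derived from the bit-string vector, writing a negated feature in parentheses and listing only the positive features inside the subscripted angle brackets.

# Sketch: derive the conjunct term (Column 3) and the compact code
# (Column 4) from the one-hot vector of Column 2.  Plain letters stand
# in for the doubly underlined literal features of the text.

def conjunct_term(features, vector):
    """Logical product of positive and negated features, e.g. '(a) (b) i (u)'."""
    return " ".join(f if bit == "1" else f"({f})"
                    for f, bit in zip(features, vector))

def compact_code(features, vector, space):
    """Singular proposition listing only the positive features, e.g. '<i>_Y'."""
    positives = " ".join(f for f, bit in zip(features, vector) if bit == "1")
    return f"<{positives}>_{space}"

Y_features = ["a", "b", "i", "u"]
print(conjunct_term(Y_features, "0010"))      # (a) (b) i (u)
print(compact_code(Y_features, "0010", "Y"))  # <i>_Y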


\(\text{Table 57.1} ~~ \text{Mnemonic Lateral Codes for Interpreters A and B}\!\)
\(\text{Element}\!\) \(\text{Vector}\!\) \(\text{Conjunct Term}\!\) \(\text{Code}\!\)

\(\begin{matrix} \text{A} \\[4pt] \text{B} \\[4pt] {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\[4pt] {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\[4pt] {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \\[4pt] {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} {10}_X \\[4pt] {01}_X \\[4pt] {1000}_Y \\[4pt] {0100}_Y \\[4pt] {0010}_Y \\[4pt] {0001}_Y \end{matrix}\)

\(\begin{matrix} ~\underline{\underline{A}}~ (\underline{\underline{B}}) \\[4pt] (\underline{\underline{A}}) ~\underline{\underline{B}}~ \\[4pt] ~\underline{\underline{a}}~ (\underline{\underline{b}}) (\underline{\underline{i}}) (\underline{\underline{u}}) \\[4pt] (\underline{\underline{a}}) ~\underline{\underline{b}}~ (\underline{\underline{i}}) (\underline{\underline{u}}) \\[4pt] (\underline{\underline{a}}) (\underline{\underline{b}}) ~\underline{\underline{i}}~ (\underline{\underline{u}}) \\[4pt] (\underline{\underline{a}}) (\underline{\underline{b}}) (\underline{\underline{i}}) ~\underline{\underline{u}}~ \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{A}}\rangle}_X \\[4pt] {\langle\underline{\underline{B}}\rangle}_X \\[4pt] {\langle\underline{\underline{a}}\rangle}_Y \\[4pt] {\langle\underline{\underline{b}}\rangle}_Y \\[4pt] {\langle\underline{\underline{i}}\rangle}_Y \\[4pt] {\langle\underline{\underline{u}}\rangle}_Y \end{matrix}\)


\(\text{Table 57.2} ~~ \text{Pragmatic Lateral Codes for Interpreters A and B}\!\)
\(\text{Element}\!\) \(\text{Vector}\!\) \(\text{Conjunct Term}\!\) \(\text{Code}\!\)

\(\begin{matrix} \text{A} \\[4pt] \text{B} \\[4pt] {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\[4pt] {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\[4pt] {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \\[4pt] {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} {10}_X \\[4pt] {01}_X \\[4pt] {1000}_Y \\[4pt] {0100}_Y \\[4pt] {0010}_Y \\[4pt] {0001}_Y \end{matrix}\)

\(\begin{matrix} ~\underline{\underline{o_1}}~ (\underline{\underline{o_2}}) \\[4pt] (\underline{\underline{o_1}}) ~\underline{\underline{o_2}}~ \\[4pt] ~\underline{\underline{s_1}}~ (\underline{\underline{s_2}}) (\underline{\underline{s_3}}) (\underline{\underline{s_4}}) \\[4pt] (\underline{\underline{s_1}}) ~\underline{\underline{s_2}}~ (\underline{\underline{s_3}}) (\underline{\underline{s_4}}) \\[4pt] (\underline{\underline{s_1}}) (\underline{\underline{s_2}}) ~\underline{\underline{s_3}}~ (\underline{\underline{s_4}}) \\[4pt] (\underline{\underline{s_1}}) (\underline{\underline{s_2}}) (\underline{\underline{s_3}}) ~\underline{\underline{s_4}}~ \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{o_1}}\rangle}_X \\[4pt] {\langle\underline{\underline{o_2}}\rangle}_X \\[4pt] {\langle\underline{\underline{s_1}}\rangle}_Y \\[4pt] {\langle\underline{\underline{s_2}}\rangle}_Y \\[4pt] {\langle\underline{\underline{s_3}}\rangle}_Y \\[4pt] {\langle\underline{\underline{s_4}}\rangle}_Y \end{matrix}\)


\(\text{Table 57.3} ~~ \text{Abstract Lateral Codes for Interpreters A and B}\!\)
\(\text{Element}\!\) \(\text{Vector}\!\) \(\text{Conjunct Term}\!\) \(\text{Code}\!\)

\(\begin{matrix} \text{A} \\[4pt] \text{B} \\[4pt] {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\[4pt] {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\[4pt] {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \\[4pt] {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} {10}_X \\[4pt] {01}_X \\[4pt] {1000}_Y \\[4pt] {0100}_Y \\[4pt] {0010}_Y \\[4pt] {0001}_Y \end{matrix}\)

\(\begin{matrix} ~\underline{\underline{x_1}}~ (\underline{\underline{x_2}}) \\[4pt] (\underline{\underline{x_1}}) ~\underline{\underline{x_2}}~ \\[4pt] ~\underline{\underline{y_1}}~ (\underline{\underline{y_2}}) (\underline{\underline{y_3}}) (\underline{\underline{y_4}}) \\[4pt] (\underline{\underline{y_1}}) ~\underline{\underline{y_2}}~ (\underline{\underline{y_3}}) (\underline{\underline{y_4}}) \\[4pt] (\underline{\underline{y_1}}) (\underline{\underline{y_2}}) ~\underline{\underline{y_3}}~ (\underline{\underline{y_4}}) \\[4pt] (\underline{\underline{y_1}}) (\underline{\underline{y_2}}) (\underline{\underline{y_3}}) ~\underline{\underline{y_4}}~ \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{x_1}}\rangle}_X \\[4pt] {\langle\underline{\underline{x_2}}\rangle}_X \\[4pt] {\langle\underline{\underline{y_1}}\rangle}_Y \\[4pt] {\langle\underline{\underline{y_2}}\rangle}_Y \\[4pt] {\langle\underline{\underline{y_3}}\rangle}_Y \\[4pt] {\langle\underline{\underline{y_4}}\rangle}_Y \end{matrix}\)


\(\text{Table 58.1} ~~ \mathrm{LIR}_2 (L_\text{A}) : \text{Lateral Representation of} ~ L_\text{A}\!\)
\(\text{Object}\!\) \(\text{Sign}\!\) \(\text{Interpretant}\!\)

\(\begin{matrix} ~\underline{\underline{\text{A}}}~ (\underline{\underline{\text{B}}}) \\[4pt] ~\underline{\underline{\text{A}}}~ (\underline{\underline{\text{B}}}) \\[4pt] ~\underline{\underline{\text{A}}}~ (\underline{\underline{\text{B}}}) \\[4pt] ~\underline{\underline{\text{A}}}~ (\underline{\underline{\text{B}}}) \end{matrix}\)

\(\begin{matrix} ~\underline{\underline{\text{a}}}~ (\underline{\underline{\text{b}}}) (\underline{\underline{\text{i}}}) (\underline{\underline{\text{u}}}) \\[4pt] ~\underline{\underline{\text{a}}}~ (\underline{\underline{\text{b}}}) (\underline{\underline{\text{i}}}) (\underline{\underline{\text{u}}}) \\[4pt] (\underline{\underline{\text{a}}}) (\underline{\underline{\text{b}}}) ~\underline{\underline{\text{i}}}~ (\underline{\underline{\text{u}}}) \\[4pt] (\underline{\underline{\text{a}}}) (\underline{\underline{\text{b}}}) ~\underline{\underline{\text{i}}}~ (\underline{\underline{\text{u}}}) \end{matrix}\!\)

\(\begin{matrix} ~\underline{\underline{\text{a}}}~ (\underline{\underline{\text{b}}}) (\underline{\underline{\text{i}}}) (\underline{\underline{\text{u}}}) \\[4pt] (\underline{\underline{\text{a}}}) (\underline{\underline{\text{b}}}) ~\underline{\underline{\text{i}}}~ (\underline{\underline{\text{u}}}) \\[4pt] ~\underline{\underline{\text{a}}}~ (\underline{\underline{\text{b}}}) (\underline{\underline{\text{i}}}) (\underline{\underline{\text{u}}}) \\[4pt] (\underline{\underline{\text{a}}}) (\underline{\underline{\text{b}}}) ~\underline{\underline{\text{i}}}~ (\underline{\underline{\text{u}}}) \end{matrix}\)

\(\begin{matrix} (\underline{\underline{\text{A}}}) ~\underline{\underline{\text{B}}}~ \\[4pt] (\underline{\underline{\text{A}}}) ~\underline{\underline{\text{B}}}~ \\[4pt] (\underline{\underline{\text{A}}}) ~\underline{\underline{\text{B}}}~ \\[4pt] (\underline{\underline{\text{A}}}) ~\underline{\underline{\text{B}}}~ \end{matrix}\)

\(\begin{matrix} (\underline{\underline{\text{a}}}) ~\underline{\underline{\text{b}}}~ (\underline{\underline{\text{i}}}) (\underline{\underline{\text{u}}}) \\[4pt] (\underline{\underline{\text{a}}}) ~\underline{\underline{\text{b}}}~ (\underline{\underline{\text{i}}}) (\underline{\underline{\text{u}}}) \\[4pt] (\underline{\underline{\text{a}}}) (\underline{\underline{\text{b}}}) (\underline{\underline{\text{i}}}) ~\underline{\underline{\text{u}}}~ \\[4pt] (\underline{\underline{\text{a}}}) (\underline{\underline{\text{b}}}) (\underline{\underline{\text{i}}}) ~\underline{\underline{\text{u}}}~ \end{matrix}\)

\(\begin{matrix} (\underline{\underline{\text{a}}}) ~\underline{\underline{\text{b}}}~ (\underline{\underline{\text{i}}}) (\underline{\underline{\text{u}}}) \\[4pt] (\underline{\underline{\text{a}}}) (\underline{\underline{\text{b}}}) (\underline{\underline{\text{i}}}) ~\underline{\underline{\text{u}}}~ \\[4pt] (\underline{\underline{\text{a}}}) ~\underline{\underline{\text{b}}}~ (\underline{\underline{\text{i}}}) (\underline{\underline{\text{u}}}) \\[4pt] (\underline{\underline{\text{a}}}) (\underline{\underline{\text{b}}}) (\underline{\underline{\text{i}}}) ~\underline{\underline{\text{u}}}~ \end{matrix}\)


\(\text{Table 58.2} ~~ \mathrm{LIR}_2 (\mathrm{Den}(L_\text{A})) : \text{Denotative Component of} ~ L_\text{A}\!\)
\(\text{Object}\!\) \(\text{Sign}\!\) \(\text{Transition}\!\)

\(\begin{matrix} ~\underline{\underline{\text{A}}}~ (\underline{\underline{\text{B}}}) \\[4pt] ~\underline{\underline{\text{A}}}~ (\underline{\underline{\text{B}}}) \end{matrix}\)

\(\begin{matrix} ~\underline{\underline{\text{a}}}~ (\underline{\underline{\text{b}}}) (\underline{\underline{\text{i}}}) (\underline{\underline{\text{u}}}) \\[4pt] (\underline{\underline{\text{a}}}) (\underline{\underline{\text{b}}}) ~\underline{\underline{\text{i}}}~ (\underline{\underline{\text{u}}}) \end{matrix}\)

\(\begin{matrix} ({\langle\underline{\underline{\text{a}}}\rangle}_Y, {\langle\underline{\underline{\text{A}}}\rangle}_X) \\[4pt] ({\langle\underline{\underline{\text{i}}}\rangle}_Y, {\langle\underline{\underline{\text{A}}}\rangle}_X) \end{matrix}\)

\(\begin{matrix} (\underline{\underline{\text{A}}}) ~\underline{\underline{\text{B}}}~ \\[4pt] (\underline{\underline{\text{A}}}) ~\underline{\underline{\text{B}}}~ \end{matrix}\)

\(\begin{matrix} (\underline{\underline{\text{a}}}) ~\underline{\underline{\text{b}}}~ (\underline{\underline{\text{i}}}) (\underline{\underline{\text{u}}}) \\[4pt] (\underline{\underline{\text{a}}}) (\underline{\underline{\text{b}}}) (\underline{\underline{\text{i}}}) ~\underline{\underline{\text{u}}}~ \end{matrix}\!\)

\(\begin{matrix} ({\langle\underline{\underline{\text{b}}}\rangle}_Y, {\langle\underline{\underline{\text{B}}}\rangle}_X) \\[4pt] ({\langle\underline{\underline{\text{u}}}\rangle}_Y, {\langle\underline{\underline{\text{B}}}\rangle}_X) \end{matrix}\)


\(\text{Table 58.3} ~~ \mathrm{LIR}_2 (\mathrm{Con}(L_\text{A})) : \text{Connotative Component of} ~ L_\text{A}\!\)
\(\text{Sign}\!\) \(\text{Interpretant}\!\) \(\text{Transition}\!\)

\(\begin{matrix} ~\underline{\underline{\text{a}}}~ (\underline{\underline{\text{b}}}) (\underline{\underline{\text{i}}}) (\underline{\underline{\text{u}}}) \\[4pt] ~\underline{\underline{\text{a}}}~ (\underline{\underline{\text{b}}}) (\underline{\underline{\text{i}}}) (\underline{\underline{\text{u}}}) \\[4pt] (\underline{\underline{\text{a}}}) (\underline{\underline{\text{b}}}) ~\underline{\underline{\text{i}}}~ (\underline{\underline{\text{u}}}) \\[4pt] (\underline{\underline{\text{a}}}) (\underline{\underline{\text{b}}}) ~\underline{\underline{\text{i}}}~ (\underline{\underline{\text{u}}}) \end{matrix}\!\)

\(\begin{matrix} ~\underline{\underline{\text{a}}}~ (\underline{\underline{\text{b}}}) (\underline{\underline{\text{i}}}) (\underline{\underline{\text{u}}}) \\[4pt] (\underline{\underline{\text{a}}}) (\underline{\underline{\text{b}}}) ~\underline{\underline{\text{i}}}~ (\underline{\underline{\text{u}}}) \\[4pt] ~\underline{\underline{\text{a}}}~ (\underline{\underline{\text{b}}}) (\underline{\underline{\text{i}}}) (\underline{\underline{\text{u}}}) \\[4pt] (\underline{\underline{\text{a}}}) (\underline{\underline{\text{b}}}) ~\underline{\underline{\text{i}}}~ (\underline{\underline{\text{u}}}) \end{matrix}\)

\(\begin{matrix} (\underline{\underline{\text{da}}}) (\underline{\underline{\text{db}}}) (\underline{\underline{\text{di}}}) (\underline{\underline{\text{du}}}) \\[4pt] ~\underline{\underline{\text{da}}}~ (\underline{\underline{\text{db}}}) ~\underline{\underline{\text{di}}}~ (\underline{\underline{\text{du}}}) \\[4pt] ~\underline{\underline{\text{da}}}~ (\underline{\underline{\text{db}}}) ~\underline{\underline{\text{di}}}~ (\underline{\underline{\text{du}}}) \\[4pt] (\underline{\underline{\text{da}}}) (\underline{\underline{\text{db}}}) (\underline{\underline{\text{di}}}) (\underline{\underline{\text{du}}}) \end{matrix}\)

\(\begin{matrix} (\underline{\underline{\text{a}}}) ~\underline{\underline{\text{b}}}~ (\underline{\underline{\text{i}}}) (\underline{\underline{\text{u}}}) \\[4pt] (\underline{\underline{\text{a}}}) ~\underline{\underline{\text{b}}}~ (\underline{\underline{\text{i}}}) (\underline{\underline{\text{u}}}) \\[4pt] (\underline{\underline{\text{a}}}) (\underline{\underline{\text{b}}}) (\underline{\underline{\text{i}}}) ~\underline{\underline{\text{u}}}~ \\[4pt] (\underline{\underline{\text{a}}}) (\underline{\underline{\text{b}}}) (\underline{\underline{\text{i}}}) ~\underline{\underline{\text{u}}}~ \end{matrix}\)

\(\begin{matrix} (\underline{\underline{\text{a}}}) ~\underline{\underline{\text{b}}}~ (\underline{\underline{\text{i}}}) (\underline{\underline{\text{u}}}) \\[4pt] (\underline{\underline{\text{a}}}) (\underline{\underline{\text{b}}}) (\underline{\underline{\text{i}}}) ~\underline{\underline{\text{u}}}~ \\[4pt] (\underline{\underline{\text{a}}}) ~\underline{\underline{\text{b}}}~ (\underline{\underline{\text{i}}}) (\underline{\underline{\text{u}}}) \\[4pt] (\underline{\underline{\text{a}}}) (\underline{\underline{\text{b}}}) (\underline{\underline{\text{i}}}) ~\underline{\underline{\text{u}}}~ \end{matrix}\)

\(\begin{matrix} (\underline{\underline{\text{da}}}) (\underline{\underline{\text{db}}}) (\underline{\underline{\text{di}}}) (\underline{\underline{\text{du}}}) \\[4pt] (\underline{\underline{\text{da}}}) ~\underline{\underline{\text{db}}}~ (\underline{\underline{\text{di}}}) ~\underline{\underline{\text{du}}}~ \\[4pt] (\underline{\underline{\text{da}}}) ~\underline{\underline{\text{db}}}~ (\underline{\underline{\text{di}}}) ~\underline{\underline{\text{du}}}~ \\[4pt] (\underline{\underline{\text{da}}}) (\underline{\underline{\text{db}}}) (\underline{\underline{\text{di}}}) (\underline{\underline{\text{du}}}) \end{matrix}\)


\(\text{Table 59.1} ~~ \mathrm{LIR}_2 (L_\text{B}) : \text{Lateral Representation of} ~ L_\text{B}\!\)
\(\text{Object}\!\) \(\text{Sign}\!\) \(\text{Interpretant}\!\)

\(\begin{matrix} ~\underline{\underline{\text{A}}}~ (\underline{\underline{\text{B}}}) \\[4pt] ~\underline{\underline{\text{A}}}~ (\underline{\underline{\text{B}}}) \\[4pt] ~\underline{\underline{\text{A}}}~ (\underline{\underline{\text{B}}}) \\[4pt] ~\underline{\underline{\text{A}}}~ (\underline{\underline{\text{B}}}) \end{matrix}\)

\(\begin{matrix} ~\underline{\underline{\text{a}}}~ (\underline{\underline{\text{b}}}) (\underline{\underline{\text{i}}}) (\underline{\underline{\text{u}}}) \\[4pt] ~\underline{\underline{\text{a}}}~ (\underline{\underline{\text{b}}}) (\underline{\underline{\text{i}}}) (\underline{\underline{\text{u}}}) \\[4pt] (\underline{\underline{\text{a}}}) (\underline{\underline{\text{b}}}) (\underline{\underline{\text{i}}}) ~\underline{\underline{\text{u}}}~ \\[4pt] (\underline{\underline{\text{a}}}) (\underline{\underline{\text{b}}}) (\underline{\underline{\text{i}}}) ~\underline{\underline{\text{u}}}~ \end{matrix}\)

\(\begin{matrix} ~\underline{\underline{\text{a}}}~ (\underline{\underline{\text{b}}}) (\underline{\underline{\text{i}}}) (\underline{\underline{\text{u}}}) \\[4pt] (\underline{\underline{\text{a}}}) (\underline{\underline{\text{b}}}) (\underline{\underline{\text{i}}}) ~\underline{\underline{\text{u}}}~ \\[4pt] ~\underline{\underline{\text{a}}}~ (\underline{\underline{\text{b}}}) (\underline{\underline{\text{i}}}) (\underline{\underline{\text{u}}}) \\[4pt] (\underline{\underline{\text{a}}}) (\underline{\underline{\text{b}}}) (\underline{\underline{\text{i}}}) ~\underline{\underline{\text{u}}}~ \end{matrix}\)

\(\begin{matrix} (\underline{\underline{\text{A}}}) ~\underline{\underline{\text{B}}}~ \\[4pt] (\underline{\underline{\text{A}}}) ~\underline{\underline{\text{B}}}~ \\[4pt] (\underline{\underline{\text{A}}}) ~\underline{\underline{\text{B}}}~ \\[4pt] (\underline{\underline{\text{A}}}) ~\underline{\underline{\text{B}}}~ \end{matrix}\)

\(\begin{matrix} (\underline{\underline{\text{a}}}) ~\underline{\underline{\text{b}}}~ (\underline{\underline{\text{i}}}) (\underline{\underline{\text{u}}}) \\[4pt] (\underline{\underline{\text{a}}}) ~\underline{\underline{\text{b}}}~ (\underline{\underline{\text{i}}}) (\underline{\underline{\text{u}}}) \\[4pt] (\underline{\underline{\text{a}}}) (\underline{\underline{\text{b}}}) ~\underline{\underline{\text{i}}}~ (\underline{\underline{\text{u}}}) \\[4pt] (\underline{\underline{\text{a}}}) (\underline{\underline{\text{b}}}) ~\underline{\underline{\text{i}}}~ (\underline{\underline{\text{u}}}) \end{matrix}\)

\(\begin{matrix} (\underline{\underline{\text{a}}}) ~\underline{\underline{\text{b}}}~ (\underline{\underline{\text{i}}}) (\underline{\underline{\text{u}}}) \\[4pt] (\underline{\underline{\text{a}}}) (\underline{\underline{\text{b}}}) ~\underline{\underline{\text{i}}}~ (\underline{\underline{\text{u}}}) \\[4pt] (\underline{\underline{\text{a}}}) ~\underline{\underline{\text{b}}}~ (\underline{\underline{\text{i}}}) (\underline{\underline{\text{u}}}) \\[4pt] (\underline{\underline{\text{a}}}) (\underline{\underline{\text{b}}}) ~\underline{\underline{\text{i}}}~ (\underline{\underline{\text{u}}}) \end{matrix}\)


\(\text{Table 59.2} ~~ \mathrm{LIR}_2 (\mathrm{Den}(L_\text{B})) : \text{Denotative Component of} ~ L_\text{B}\!\)
\(\text{Object}\!\) \(\text{Sign}\!\) \(\text{Transition}\!\)

\(\begin{matrix} ~\underline{\underline{\text{A}}}~ (\underline{\underline{\text{B}}}) \\[4pt] ~\underline{\underline{\text{A}}}~ (\underline{\underline{\text{B}}}) \end{matrix}\)

\(\begin{matrix} ~\underline{\underline{\text{a}}}~ (\underline{\underline{\text{b}}}) (\underline{\underline{\text{i}}}) (\underline{\underline{\text{u}}}) \\[4pt] (\underline{\underline{\text{a}}}) (\underline{\underline{\text{b}}}) (\underline{\underline{\text{i}}}) ~\underline{\underline{\text{u}}}~ \end{matrix}\)

\(\begin{matrix} ({\langle\underline{\underline{\text{a}}}\rangle}_Y, {\langle\underline{\underline{\text{A}}}\rangle}_X) \\[4pt] ({\langle\underline{\underline{\text{u}}}\rangle}_Y, {\langle\underline{\underline{\text{A}}}\rangle}_X) \end{matrix}\)

\(\begin{matrix} (\underline{\underline{\text{A}}}) ~\underline{\underline{\text{B}}}~ \\[4pt] (\underline{\underline{\text{A}}}) ~\underline{\underline{\text{B}}}~ \end{matrix}\)

\(\begin{matrix} (\underline{\underline{\text{a}}}) ~\underline{\underline{\text{b}}}~ (\underline{\underline{\text{i}}}) (\underline{\underline{\text{u}}}) \\[4pt] (\underline{\underline{\text{a}}}) (\underline{\underline{\text{b}}}) ~\underline{\underline{\text{i}}}~ (\underline{\underline{\text{u}}}) \end{matrix}\)

\(\begin{matrix} ({\langle\underline{\underline{\text{b}}}\rangle}_Y, {\langle\underline{\underline{\text{B}}}\rangle}_X) \\[4pt] ({\langle\underline{\underline{\text{i}}}\rangle}_Y, {\langle\underline{\underline{\text{B}}}\rangle}_X) \end{matrix}\)


\(\text{Table 59.3} ~~ \mathrm{LIR}_2 (\mathrm{Con}(L_\text{B})) : \text{Connotative Component of} ~ L_\text{B}\!\)
\(\text{Sign}\!\) \(\text{Interpretant}\!\) \(\text{Transition}\!\)

\(\begin{matrix} ~\underline{\underline{\text{a}}}~ (\underline{\underline{\text{b}}}) (\underline{\underline{\text{i}}}) (\underline{\underline{\text{u}}}) \\[4pt] ~\underline{\underline{\text{a}}}~ (\underline{\underline{\text{b}}}) (\underline{\underline{\text{i}}}) (\underline{\underline{\text{u}}}) \\[4pt] (\underline{\underline{\text{a}}}) (\underline{\underline{\text{b}}}) (\underline{\underline{\text{i}}}) ~\underline{\underline{\text{u}}}~ \\[4pt] (\underline{\underline{\text{a}}}) (\underline{\underline{\text{b}}}) (\underline{\underline{\text{i}}}) ~\underline{\underline{\text{u}}}~ \end{matrix}\)

\(\begin{matrix} ~\underline{\underline{\text{a}}}~ (\underline{\underline{\text{b}}}) (\underline{\underline{\text{i}}}) (\underline{\underline{\text{u}}}) \\[4pt] (\underline{\underline{\text{a}}}) (\underline{\underline{\text{b}}}) (\underline{\underline{\text{i}}}) ~\underline{\underline{\text{u}}}~ \\[4pt] ~\underline{\underline{\text{a}}}~ (\underline{\underline{\text{b}}}) (\underline{\underline{\text{i}}}) (\underline{\underline{\text{u}}}) \\[4pt] (\underline{\underline{\text{a}}}) (\underline{\underline{\text{b}}}) (\underline{\underline{\text{i}}}) ~\underline{\underline{\text{u}}}~ \end{matrix}\)

\(\begin{matrix} (\underline{\underline{\text{da}}}) (\underline{\underline{\text{db}}}) (\underline{\underline{\text{di}}}) (\underline{\underline{\text{du}}}) \\[4pt] ~\underline{\underline{\text{da}}}~ (\underline{\underline{\text{db}}}) (\underline{\underline{\text{di}}}) ~\underline{\underline{\text{du}}}~ \\[4pt] ~\underline{\underline{\text{da}}}~ (\underline{\underline{\text{db}}}) (\underline{\underline{\text{di}}}) ~\underline{\underline{\text{du}}}~ \\[4pt] (\underline{\underline{\text{da}}}) (\underline{\underline{\text{db}}}) (\underline{\underline{\text{di}}}) (\underline{\underline{\text{du}}}) \end{matrix}\!\)

\(\begin{matrix} (\underline{\underline{\text{a}}}) ~\underline{\underline{\text{b}}}~ (\underline{\underline{\text{i}}}) (\underline{\underline{\text{u}}}) \\[4pt] (\underline{\underline{\text{a}}}) ~\underline{\underline{\text{b}}}~ (\underline{\underline{\text{i}}}) (\underline{\underline{\text{u}}}) \\[4pt] (\underline{\underline{\text{a}}}) (\underline{\underline{\text{b}}}) ~\underline{\underline{\text{i}}}~ (\underline{\underline{\text{u}}}) \\[4pt] (\underline{\underline{\text{a}}}) (\underline{\underline{\text{b}}}) ~\underline{\underline{\text{i}}}~ (\underline{\underline{\text{u}}}) \end{matrix}\)

\(\begin{matrix} (\underline{\underline{\text{a}}}) ~\underline{\underline{\text{b}}}~ (\underline{\underline{\text{i}}}) (\underline{\underline{\text{u}}}) \\[4pt] (\underline{\underline{\text{a}}}) (\underline{\underline{\text{b}}}) ~\underline{\underline{\text{i}}}~ (\underline{\underline{\text{u}}}) \\[4pt] (\underline{\underline{\text{a}}}) ~\underline{\underline{\text{b}}}~ (\underline{\underline{\text{i}}}) (\underline{\underline{\text{u}}}) \\[4pt] (\underline{\underline{\text{a}}}) (\underline{\underline{\text{b}}}) ~\underline{\underline{\text{i}}}~ (\underline{\underline{\text{u}}}) \end{matrix}\)

\(\begin{matrix} (\underline{\underline{\text{da}}}) (\underline{\underline{\text{db}}}) (\underline{\underline{\text{di}}}) (\underline{\underline{\text{du}}}) \\[4pt] (\underline{\underline{\text{da}}}) ~\underline{\underline{\text{db}}}~ ~\underline{\underline{\text{di}}}~ (\underline{\underline{\text{du}}}) \\[4pt] (\underline{\underline{\text{da}}}) ~\underline{\underline{\text{db}}}~ ~\underline{\underline{\text{di}}}~ (\underline{\underline{\text{du}}}) \\[4pt] (\underline{\underline{\text{da}}}) (\underline{\underline{\text{db}}}) (\underline{\underline{\text{di}}}) (\underline{\underline{\text{du}}}) \end{matrix}\)


\(\text{Table 60.1} ~~ \mathrm{LIR}_3 (L_\text{A}) : \text{Lateral Representation of} ~ L_\text{A}\!\)
\(\text{Object}\!\) \(\text{Sign}\!\) \(\text{Interpretant}\!\)

\(\begin{matrix} {\langle\underline{\underline{\text{A}}}\rangle}_X \\[4pt] {\langle\underline{\underline{\text{A}}}\rangle}_X \\[4pt] {\langle\underline{\underline{\text{A}}}\rangle}_X \\[4pt] {\langle\underline{\underline{\text{A}}}\rangle}_X \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{\text{a}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{a}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{i}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{i}}}\rangle}_Y \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{\text{a}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{i}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{a}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{i}}}\rangle}_Y \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{\text{B}}}\rangle}_X \\[4pt] {\langle\underline{\underline{\text{B}}}\rangle}_X \\[4pt] {\langle\underline{\underline{\text{B}}}\rangle}_X \\[4pt] {\langle\underline{\underline{\text{B}}}\rangle}_X \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{\text{b}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{b}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{u}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{u}}}\rangle}_Y \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{\text{b}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{u}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{b}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{u}}}\rangle}_Y \end{matrix}\)


\(\text{Table 60.2} ~~ \mathrm{LIR}_3 (\mathrm{Den}(L_\text{A})) : \text{Denotative Component of} ~ L_\text{A}\!\)
\(\text{Object}\!\) \(\text{Sign}\!\) \(\text{Transition}\!\)

\(\begin{matrix} {\langle\underline{\underline{\text{A}}}\rangle}_X \\[4pt] {\langle\underline{\underline{\text{A}}}\rangle}_X \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{\text{a}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{i}}}\rangle}_Y \end{matrix}\)

\(\begin{matrix} ({\langle\underline{\underline{\text{a}}}\rangle}_Y, {\langle\underline{\underline{\text{A}}}\rangle}_X) \\[4pt] ({\langle\underline{\underline{\text{i}}}\rangle}_Y, {\langle\underline{\underline{\text{A}}}\rangle}_X) \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{\text{B}}}\rangle}_X \\[4pt] {\langle\underline{\underline{\text{B}}}\rangle}_X \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{\text{b}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{u}}}\rangle}_Y \end{matrix}\)

\(\begin{matrix} ({\langle\underline{\underline{\text{b}}}\rangle}_Y, {\langle\underline{\underline{\text{B}}}\rangle}_X) \\[4pt] ({\langle\underline{\underline{\text{u}}}\rangle}_Y, {\langle\underline{\underline{\text{B}}}\rangle}_X) \end{matrix}\)


\(\text{Table 60.3} ~~ \mathrm{LIR}_3 (\mathrm{Con}(L_\text{A})) : \text{Connotative Component of} ~ L_\text{A}\!\)
\(\text{Sign}\!\) \(\text{Interpretant}\!\) \(\text{Transition}\!\)

\(\begin{matrix} {\langle\underline{\underline{\text{a}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{a}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{i}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{i}}}\rangle}_Y \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{\text{a}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{i}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{a}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{i}}}\rangle}_Y \end{matrix}\)

\(\begin{matrix} 0_{\mathrm{d}Y} \\[4pt] {\langle \mathrm{d}\underline{\underline{\text{a}}} ~ \mathrm{d}\underline{\underline{\text{i}}} \rangle}_{\mathrm{d}Y} \\[4pt] {\langle \mathrm{d}\underline{\underline{\text{a}}} ~ \mathrm{d}\underline{\underline{\text{i}}} \rangle}_{\mathrm{d}Y} \\[4pt] 0_{\mathrm{d}Y} \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{\text{b}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{b}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{u}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{u}}}\rangle}_Y \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{\text{b}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{u}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{b}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{u}}}\rangle}_Y \end{matrix}\)

\(\begin{matrix} 0_{\mathrm{d}Y} \\[4pt] {\langle \mathrm{d}\underline{\underline{\text{b}}} ~ \mathrm{d}\underline{\underline{\text{u}}} \rangle}_{\mathrm{d}Y} \\[4pt] {\langle \mathrm{d}\underline{\underline{\text{b}}} ~ \mathrm{d}\underline{\underline{\text{u}}} \rangle}_{\mathrm{d}Y} \\[4pt] 0_{\mathrm{d}Y} \end{matrix}\)


\(\text{Table 61.1} ~~ \mathrm{LIR}_3 (L_\text{B}) : \text{Lateral Representation of} ~ L_\text{B}\!\)
\(\text{Object}\!\) \(\text{Sign}\!\) \(\text{Interpretant}\!\)

\(\begin{matrix} {\langle\underline{\underline{\text{A}}}\rangle}_X \\[4pt] {\langle\underline{\underline{\text{A}}}\rangle}_X \\[4pt] {\langle\underline{\underline{\text{A}}}\rangle}_X \\[4pt] {\langle\underline{\underline{\text{A}}}\rangle}_X \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{\text{a}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{a}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{u}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{u}}}\rangle}_Y \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{\text{a}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{u}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{a}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{u}}}\rangle}_Y \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{\text{B}}}\rangle}_X \\[4pt] {\langle\underline{\underline{\text{B}}}\rangle}_X \\[4pt] {\langle\underline{\underline{\text{B}}}\rangle}_X \\[4pt] {\langle\underline{\underline{\text{B}}}\rangle}_X \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{\text{b}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{b}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{i}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{i}}}\rangle}_Y \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{\text{b}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{i}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{b}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{i}}}\rangle}_Y \end{matrix}\)


\(\text{Table 61.2} ~~ \mathrm{LIR}_3 (\mathrm{Den}(L_\text{B})) : \text{Denotative Component of} ~ L_\text{B}\!\)
\(\text{Object}\!\) \(\text{Sign}\!\) \(\text{Transition}\!\)

\(\begin{matrix} {\langle\underline{\underline{\text{A}}}\rangle}_X \\[4pt] {\langle\underline{\underline{\text{A}}}\rangle}_X \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{\text{a}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{u}}}\rangle}_Y \end{matrix}\)

\(\begin{matrix} ({\langle\underline{\underline{\text{a}}}\rangle}_Y, {\langle\underline{\underline{\text{A}}}\rangle}_X) \\[4pt] ({\langle\underline{\underline{\text{u}}}\rangle}_Y, {\langle\underline{\underline{\text{A}}}\rangle}_X) \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{\text{B}}}\rangle}_X \\[4pt] {\langle\underline{\underline{\text{B}}}\rangle}_X \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{\text{b}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{i}}}\rangle}_Y \end{matrix}\)

\(\begin{matrix} ({\langle\underline{\underline{\text{b}}}\rangle}_Y, {\langle\underline{\underline{\text{B}}}\rangle}_X) \\[4pt] ({\langle\underline{\underline{\text{i}}}\rangle}_Y, {\langle\underline{\underline{\text{B}}}\rangle}_X) \end{matrix}\)


\(\text{Table 61.3} ~~ \mathrm{LIR}_3 (\mathrm{Con}(L_\text{B})) : \text{Connotative Component of} ~ L_\text{B}\!\)
\(\text{Sign}\!\) \(\text{Interpretant}\!\) \(\text{Transition}\!\)

\(\begin{matrix} {\langle\underline{\underline{\text{a}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{a}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{u}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{u}}}\rangle}_Y \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{\text{a}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{u}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{a}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{u}}}\rangle}_Y \end{matrix}\)

\(\begin{matrix} 0_{\mathrm{d}Y} \\[4pt] {\langle \mathrm{d}\underline{\underline{\text{a}}} ~ \mathrm{d}\underline{\underline{\text{u}}} \rangle}_{\mathrm{d}Y} \\[4pt] {\langle \mathrm{d}\underline{\underline{\text{a}}} ~ \mathrm{d}\underline{\underline{\text{u}}} \rangle}_{\mathrm{d}Y} \\[4pt] 0_{\mathrm{d}Y} \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{\text{b}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{b}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{i}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{i}}}\rangle}_Y \end{matrix}\)

\(\begin{matrix} {\langle\underline{\underline{\text{b}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{i}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{b}}}\rangle}_Y \\[4pt] {\langle\underline{\underline{\text{i}}}\rangle}_Y \end{matrix}\)

\(\begin{matrix} 0_{\mathrm{d}Y} \\[4pt] {\langle \mathrm{d}\underline{\underline{\text{b}}} ~ \mathrm{d}\underline{\underline{\text{i}}} \rangle}_{\mathrm{d}Y} \\[4pt] {\langle \mathrm{d}\underline{\underline{\text{b}}} ~ \mathrm{d}\underline{\underline{\text{i}}} \rangle}_{\mathrm{d}Y} \\[4pt] 0_{\mathrm{d}Y} \end{matrix}\)


6.25. Analytic Intensional Representations

In this section the ERs of \(L(\text{A})\!\) and \(L(\text{B})\!\) are translated into a variety of different IRs that actually accomplish some measure of analytic work.  These are referred to as analytic intensional representations (AIRs).  This strategy of representation can be described as a structural coding or a sensitive coding, because it pays attention to the structure of its object domain and attends to the nuances of each sign's interpretation in fashioning its code.  It may also be characterized as a \(\log(n)\!\) coding, because it uses roughly \(\log_2(n)\!\) binary features to represent a domain of \(n\!\) elements.
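A back-of-the-envelope comparison, sketched here in Python rather than taken from the text, makes the economy explicit: a lateral coding spends one feature per element, while an analytic coding needs only about \(\log_2(n)\!\) features.

from math import ceil, log2

def lateral_features(n):
    """Features used by a lateral (one per element) coding."""
    return n

def analytic_features(n):
    """Features used by a log(n) (structural) coding."""
    return ceil(log2(n)) if n > 1 else 1

for n in (2, 4, 6):   # |O| = 2, |S| = 4, |W| = 6 in the running example
    print(n, lateral_features(n), analytic_features(n))
# 2 -> 2 vs 1, 4 -> 4 vs 2, 6 -> 6 vs 3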

For the domain \(O = \{ \text{A}, \text{B} \}\!\) of two elements one needs to use a single logical feature. It is often convenient to use an object feature that is relative to the interpreter using it, for instance, telling whether the object described is the self or the other.

For the domain \(S = I = \{ {}^{\backprime\backprime} \text{A} {}^{\prime\prime}, {}^{\backprime\backprime} \text{B} {}^{\prime\prime}, {}^{\backprime\backprime} \text{i} {}^{\prime\prime}, {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \}\!\) of four elements one needs to use two logical features. One possibility is to classify each element according to its syntactic category, as being a noun or a pronoun, and according to its semantic category, as denoting the self or the other.

Tables 62.1, 62.2, and 62.3 show several ways of representing these categories in terms of feature-value pairs and propositional codes. In each Table, Column 1 describes the category in question, Column 2 gives the mnemonic form of a propositional expression for that category, and Column 3 gives the abbreviated form of that expression, using a notation for propositional calculus where parentheses circumscribing a term or expression are interpreted as forming its logical negation.
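The parenthesis-as-negation convention can be made operational with a small sketch (Python, with illustrative helper names) that abbreviates the mnemonic expressions of Column 2 into the codes of Column 3 and evaluates a code against an assignment of feature values.

# Sketch of the mnemonic/code correspondence of Tables 62.1-62.3.
# Only the parenthesis-as-negation convention is taken from the text;
# the dictionaries and function names are illustrative assumptions.

ABBREV = {"self": "s", "my": "m", "name": "n"}

def abbreviate(mnemonic):
    """'(my) name' -> '(m) n'."""
    out = mnemonic
    for word, letter in ABBREV.items():
        out = out.replace(word, letter)
    return out

def satisfies(expr, assignment):
    """Evaluate a conjunct term like '(m) n' against {'m': 0, 'n': 1}."""
    ok = True
    for token in expr.split():
        negated = token.startswith("(")
        name = token.strip("()")
        ok &= (assignment[name] == (0 if negated else 1))
    return ok

print(abbreviate("(my) name"))               # (m) n
print(satisfies("(m) n", {"m": 0, "n": 1}))  # True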


\(\text{Table 62.1} ~~ \text{Analytic Codes for Object Features}\!\)
\(\text{Category}\!\) \(\text{Mnemonic}\!\) \(\text{Code}\!\)

\(\begin{array}{l} \text{Self} \\[4pt] \text{Other} \end{array}\)

\(\begin{matrix} \text{self} \\[4pt] \text{(self)} \end{matrix}\)

\(\begin{matrix} \text{s} \\[4pt] \text{(s)} \end{matrix}\)


\(\text{Table 62.2} ~~ \text{Analytic Codes for Semantic Features}\!\)
\(\text{Category}\!\) \(\text{Mnemonic}\!\) \(\text{Code}\!\)

\(\begin{array}{l} \text{1st Person} \\[4pt] \text{2nd Person} \end{array}\)

\(\begin{matrix} \text{my} \\[4pt] \text{(my)} \end{matrix}\)

\(\begin{matrix} \text{m} \\[4pt] \text{(m)} \end{matrix}\)


\(\text{Table 62.3} ~~ \text{Analytic Codes for Syntactic Features}\!\)
\(\text{Category}\!\) \(\text{Mnemonic}\!\) \(\text{Code}\!\)

\(\begin{array}{l} \text{Noun} \\[4pt] \text{Pronoun} \end{array}\)

\(\begin{matrix} \text{name} \\[4pt] \text{(name)} \end{matrix}\)

\(\begin{matrix} \text{n} \\[4pt] \text{(n)} \end{matrix}\)


Tables 63 and 64 list the codes for each element of the world domain \(W = \{ \text{A}, \text{B}, {}^{\backprime\backprime} \text{A} {}^{\prime\prime}, {}^{\backprime\backprime} \text{B} {}^{\prime\prime}, {}^{\backprime\backprime} \text{i} {}^{\prime\prime}, {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \},\!\) giving all features relative to the interpreters \(\text{A}\!\) and \(\text{B},\!\) respectively.
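The interpreter-relative character of these codes can be indicated by a minimal sketch in Python; the helper names and dictionaries below are assumptions introduced for illustration, but the outputs agree with the Code columns of Tables 63 and 64.

# Sketch of interpreter-relative analytic codes.
# Object feature:   s  = the object is the interpreter itself.
# Semantic feature: m  = the sign denotes the interpreter itself.
# Syntactic feature: n = the sign is a noun (proper name), else a pronoun.

W = ["A", "B", '"A"', '"B"', '"i"', '"u"']

DENOTATION = {'"A"': "A", '"B"': "B"}           # nouns denote fixed objects
INDEXICAL  = {'"i"': "self", '"u"': "other"}    # pronouns denote relative to the speaker

def analytic_code(element, interpreter):
    if element in ("A", "B"):                   # object elements
        return "s" if element == interpreter else "(s)"
    if element in DENOTATION:                   # nouns
        m = "m" if DENOTATION[element] == interpreter else "(m)"
        return f"{m} n"
    m = "m" if INDEXICAL[element] == "self" else "(m)"   # pronouns
    return f"{m} (n)"

for w in W:
    print(w, analytic_code(w, "A"))
# A s | B (s) | "A" m n | "B" (m) n | "i" m (n) | "u" (m) (n)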


\(\text{Table 63.} ~~ \text{Analytic Codes for Interpreter A}\!\)
\(\text{Name}\!\) \(\text{Vector}\!\) \(\text{Conjunct Term}\!\) \(\text{Mnemonic}\!\) \(\text{Code}\!\)

\(\begin{matrix} \text{A} \\[4pt] \text{B} \\[4pt] {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\[4pt] {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\[4pt] {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \\[4pt] {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} {1}_X \\[4pt] {0}_X \\[4pt] {11}_Y \\[4pt] {01}_Y \\[4pt] {10}_Y \\[4pt] {00}_Y \end{matrix}\)

\(\begin{matrix} ~x_1~ \\[4pt] (x_1) \\[4pt] ~y_1~~y_2~ \\[4pt] (y_1)~y_2~ \\[4pt] ~y_1~(y_2) \\[4pt] (y_1)(y_2) \end{matrix}\)

\(\begin{matrix} ~\text{self}~ \\[4pt] (\text{self}) \\[4pt] ~\text{my}~~\text{name}~ \\[4pt] (\text{my})~\text{name}~ \\[4pt] ~\text{my}~(\text{name}) \\[4pt] (\text{my})(\text{name}) \end{matrix}\)

\(\begin{matrix} ~\text{s}~ \\[4pt] (\text{s}) \\[4pt] ~\text{m}~~\text{n}~ \\[4pt] (\text{m})~\text{n}~ \\[4pt] ~\text{m}~(\text{n}) \\[4pt] (\text{m})(\text{n}) \end{matrix}\)


\(\text{Table 64.} ~~ \text{Analytic Codes for Interpreter B}\!\)
\(\text{Name}\!\) \(\text{Vector}\!\) \(\text{Conjunct Term}\!\) \(\text{Mnemonic}\!\) \(\text{Code}\!\)

\(\begin{matrix} \text{A} \\[4pt] \text{B} \\[4pt] {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\[4pt] {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\[4pt] {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \\[4pt] {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} {0}_X \\[4pt] {1}_X \\[4pt] {01}_Y \\[4pt] {11}_Y \\[4pt] {10}_Y \\[4pt] {00}_Y \end{matrix}\)

\(\begin{matrix} (x_1) \\[4pt] ~x_1~ \\[4pt] (y_1)~y_2~ \\[4pt] ~y_1~~y_2~ \\[4pt] ~y_1~(y_2) \\[4pt] (y_1)(y_2) \end{matrix}\)

\(\begin{matrix} (\text{self}) \\[4pt] ~\text{self}~ \\[4pt] (\text{my})~\text{name}~ \\[4pt] ~\text{my}~~\text{name}~ \\[4pt] ~\text{my}~(\text{name}) \\[4pt] (\text{my})(\text{name}) \end{matrix}\)

\(\begin{matrix} (\text{s}) \\[4pt] ~\text{s}~ \\[4pt] (\text{m})~\text{n}~ \\[4pt] ~\text{m}~~\text{n}~ \\[4pt] ~\text{m}~(\text{n}) \\[4pt] (\text{m})(\text{n}) \end{matrix}\)


Tables 65.1 and 66.1 transcribe the sign relations \(L(\text{A})\!\) and \(L(\text{B}),\!\) respectively, into the forms of the AIR just suggested. Tables 65.2 and 66.2 extract the denotative components of \(L(\text{A})\!\) and \(L(\text{B}),\!\) respectively, and isolate the transitions from signs to objects as ordered pairs of the form \((s, o).\!\) Tables 65.3 and 66.3 extract the connotative components of \(L(\text{A})\!\) and \(L(\text{B}),\!\) respectively, and represent the transitions from signs to interpretants in terms of differential features, in other words, as propositions in the differential extension of the syntactic domain.
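The way the Transition columns are obtained can be suggested by the following sketch (Python, with hypothetical data structures): the denotative transitions are just the \((s, o)\!\) pairs, while a connotative transition asserts the differential feature \(\mathrm{d}f\!\) exactly when the feature \(f\!\) changes value from sign to interpretant.

# Sketch: compute transition entries like those of Tables 65.2 and 65.3,
# using feature dictionaries in place of the propositional codes of the text.

# Sign relation L_A in analytic form: (object, sign, interpretant) triples,
# each coded by its feature values (s = self, m = my, n = name).
L_A = [
    ({"s": 1}, {"m": 1, "n": 1}, {"m": 1, "n": 1}),
    ({"s": 1}, {"m": 1, "n": 1}, {"m": 1, "n": 0}),
    ({"s": 1}, {"m": 1, "n": 0}, {"m": 1, "n": 1}),
    ({"s": 1}, {"m": 1, "n": 0}, {"m": 1, "n": 0}),
    ({"s": 0}, {"m": 0, "n": 1}, {"m": 0, "n": 1}),
    ({"s": 0}, {"m": 0, "n": 1}, {"m": 0, "n": 0}),
    ({"s": 0}, {"m": 0, "n": 0}, {"m": 0, "n": 1}),
    ({"s": 0}, {"m": 0, "n": 0}, {"m": 0, "n": 0}),
]

def denotative(L):
    """Sign-to-object transitions, as (sign, object) pairs."""
    return {(tuple(s.items()), tuple(o.items())) for o, s, i in L}

def connotative(L):
    """Sign-to-interpretant transitions as differential feature maps:
    df = 1 exactly when feature f changes value from sign to interpretant."""
    return [{"d" + f: s[f] ^ i[f] for f in s} for o, s, i in L]

print(len(denotative(L_A)))   # 4 distinct (sign, object) pairs
print(connotative(L_A)[1])    # {'dm': 0, 'dn': 1}, i.e. (dm) dn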


\(\text{Table 65.1} ~~ \mathrm{AIR}_1 (L_\text{A}) : \text{Analytic Representation of} ~ L_\text{A}\!\)
\(\text{Object}\!\) \(\text{Sign}\!\) \(\text{Interpretant}\!\)

\(\begin{matrix} \text{s} \\[4pt] \text{s} \\[4pt] \text{s} \\[4pt] \text{s} \end{matrix}\)

\(\begin{matrix} ~\text{m}~~\text{n}~ \\[4pt] ~\text{m}~~\text{n}~ \\[4pt] ~\text{m}~(\text{n}) \\[4pt] ~\text{m}~(\text{n}) \end{matrix}\)

\(\begin{matrix} ~\text{m}~~\text{n}~ \\[4pt] ~\text{m}~(\text{n}) \\[4pt] ~\text{m}~~\text{n}~ \\[4pt] ~\text{m}~(\text{n}) \end{matrix}\)

\(\begin{matrix} (\text{s}) \\[4pt] (\text{s}) \\[4pt] (\text{s}) \\[4pt] (\text{s}) \end{matrix}\)

\(\begin{matrix} (\text{m})~\text{n}~ \\[4pt] (\text{m})~\text{n}~ \\[4pt] (\text{m})(\text{n}) \\[4pt] (\text{m})(\text{n}) \end{matrix}\)

\(\begin{matrix} (\text{m})~\text{n}~ \\[4pt] (\text{m})(\text{n}) \\[4pt] (\text{m})~\text{n}~ \\[4pt] (\text{m})(\text{n}) \end{matrix}\)


\(\text{Table 65.2} ~~ \mathrm{AIR}_1 (\mathrm{Den}(L_\text{A})) : \text{Denotative Component of} ~ L_\text{A}\!\)
\(\text{Object}\!\) \(\text{Sign}\!\) \(\text{Transition}\!\)

\(\begin{matrix} \text{s} \\[4pt] \text{s} \end{matrix}\)

\(\begin{matrix} ~\text{m}~~\text{n}~ \\[4pt] ~\text{m}~(\text{n}) \end{matrix}\)

\(\begin{matrix} ~\text{m}~~\text{n}~ \mapsto ~\text{s}~ \\[4pt] ~\text{m}~(\text{n}) \mapsto ~\text{s}~ \end{matrix}\)

\(\begin{matrix} (\text{s}) \\[4pt] (\text{s}) \end{matrix}\)

\(\begin{matrix} (\text{m})~\text{n}~ \\[4pt] (\text{m})(\text{n}) \end{matrix}\)

\(\begin{matrix} (\text{m})~\text{n}~ \mapsto (\text{s}) \\[4pt] (\text{m})(\text{n}) \mapsto (\text{s}) \end{matrix}\)


\(\text{Table 65.3} ~~ \mathrm{AIR}_1 (\mathrm{Con}(L_\text{A})) : \text{Connotative Component of} ~ L_\text{A}\!\)
\(\text{Sign}\!\) \(\text{Interpretant}\!\) \(\text{Transition}\!\)

\(\begin{matrix} ~\text{m}~~\text{n}~ \\[4pt] ~\text{m}~~\text{n}~ \\[4pt] ~\text{m}~(\text{n}) \\[4pt] ~\text{m}~(\text{n}) \end{matrix}\)

\(\begin{matrix} ~\text{m}~~\text{n}~ \\[4pt] ~\text{m}~(\text{n}) \\[4pt] ~\text{m}~~\text{n}~ \\[4pt] ~\text{m}~(\text{n}) \end{matrix}\)

\(\begin{matrix} (\text{dm})(\text{dn}) \\[4pt] (\text{dm})~\text{dn}~ \\[4pt] (\text{dm})~\text{dn}~ \\[4pt] (\text{dm})(\text{dn}) \end{matrix}\)

\(\begin{matrix} (\text{m})~\text{n}~ \\[4pt] (\text{m})~\text{n}~ \\[4pt] (\text{m})(\text{n}) \\[4pt] (\text{m})(\text{n}) \end{matrix}\)

\(\begin{matrix} (\text{m})~\text{n}~ \\[4pt] (\text{m})(\text{n}) \\[4pt] (\text{m})~\text{n}~ \\[4pt] (\text{m})(\text{n}) \end{matrix}\)

\(\begin{matrix} (\text{dm})(\text{dn}) \\[4pt] (\text{dm})~\text{dn}~ \\[4pt] (\text{dm})~\text{dn}~ \\[4pt] (\text{dm})(\text{dn}) \end{matrix}\)


\(\text{Table 66.1} ~~ \mathrm{AIR}_1 (L_\text{B}) : \text{Analytic Representation of} ~ L_\text{B}\!\)
\(\text{Object}\!\) \(\text{Sign}\!\) \(\text{Interpretant}\!\)

\(\begin{matrix} (\text{s}) \\[4pt] (\text{s}) \\[4pt] (\text{s}) \\[4pt] (\text{s}) \end{matrix}\)

\(\begin{matrix} (\text{m})~\text{n}~ \\[4pt] (\text{m})~\text{n}~ \\[4pt] (\text{m})(\text{n}) \\[4pt] (\text{m})(\text{n}) \end{matrix}\)

\(\begin{matrix} (\text{m})~\text{n}~ \\[4pt] (\text{m})(\text{n}) \\[4pt] (\text{m})~\text{n}~ \\[4pt] (\text{m})(\text{n}) \end{matrix}\)

\(\begin{matrix} \text{s} \\[4pt] \text{s} \\[4pt] \text{s} \\[4pt] \text{s} \end{matrix}\)

\(\begin{matrix} ~\text{m}~~\text{n}~ \\[4pt] ~\text{m}~~\text{n}~ \\[4pt] ~\text{m}~(\text{n}) \\[4pt] ~\text{m}~(\text{n}) \end{matrix}\)

\(\begin{matrix} ~\text{m}~~\text{n}~ \\[4pt] ~\text{m}~(\text{n}) \\[4pt] ~\text{m}~~\text{n}~ \\[4pt] ~\text{m}~(\text{n}) \end{matrix}\)


\(\text{Table 66.2} ~~ \mathrm{AIR}_1 (\mathrm{Den}(L_\text{B})) : \text{Denotative Component of} ~ L_\text{B}\!\)
\(\text{Object}\!\) \(\text{Sign}\!\) \(\text{Transition}\!\)

\(\begin{matrix} (\text{s}) \\[4pt] (\text{s}) \end{matrix}\)

\(\begin{matrix} (\text{m})~\text{n}~ \\[4pt] (\text{m})(\text{n}) \end{matrix}\)

\(\begin{matrix} (\text{m})~\text{n}~ \mapsto (\text{s}) \\[4pt] (\text{m})(\text{n}) \mapsto (\text{s}) \end{matrix}\)

\(\begin{matrix} \text{s} \\[4pt] \text{s} \end{matrix}\)

\(\begin{matrix} ~\text{m}~~\text{n}~ \\[4pt] ~\text{m}~(\text{n}) \end{matrix}\)

\(\begin{matrix} ~\text{m}~~\text{n}~ \mapsto ~\text{s}~ \\[4pt] ~\text{m}~(\text{n}) \mapsto ~\text{s}~ \end{matrix}\)


\(\text{Table 66.3} ~~ \mathrm{AIR}_1 (\mathrm{Con}(L_\text{B})) : \text{Connotative Component of} ~ L_\text{B}\!\)
\(\text{Sign}\!\) \(\text{Interpretant}\!\) \(\text{Transition}\!\)

\(\begin{matrix} (\text{m})~\text{n}~ \\[4pt] (\text{m})~\text{n}~ \\[4pt] (\text{m})(\text{n}) \\[4pt] (\text{m})(\text{n}) \end{matrix}\)

\(\begin{matrix} (\text{m})~\text{n}~ \\[4pt] (\text{m})(\text{n}) \\[4pt] (\text{m})~\text{n}~ \\[4pt] (\text{m})(\text{n}) \end{matrix}\)

\(\begin{matrix} (\text{dm})(\text{dn}) \\[4pt] (\text{dm})~\text{dn}~ \\[4pt] (\text{dm})~\text{dn}~ \\[4pt] (\text{dm})(\text{dn}) \end{matrix}\)

\(\begin{matrix} ~\text{m}~~\text{n}~ \\[4pt] ~\text{m}~~\text{n}~ \\[4pt] ~\text{m}~(\text{n}) \\[4pt] ~\text{m}~(\text{n}) \end{matrix}\)

\(\begin{matrix} ~\text{m}~~\text{n}~ \\[4pt] ~\text{m}~(\text{n}) \\[4pt] ~\text{m}~~\text{n}~ \\[4pt] ~\text{m}~(\text{n}) \end{matrix}\)

\(\begin{matrix} (\text{dm})(\text{dn}) \\[4pt] (\text{dm})~\text{dn}~ \\[4pt] (\text{dm})~\text{dn}~ \\[4pt] (\text{dm})(\text{dn}) \end{matrix}\)


\(\text{Table 67.1} ~~ \mathrm{AIR}_2 (L_\text{A}) : \text{Analytic Representation of} ~ L_\text{A}\!\)
\(\text{Object}\!\) \(\text{Sign}\!\) \(\text{Interpretant}\!\)

\(\begin{matrix} {\langle * \rangle}_X \\[4pt] {\langle * \rangle}_X \\[4pt] {\langle * \rangle}_X \\[4pt] {\langle * \rangle}_X \end{matrix}\)

\(\begin{matrix} {\langle * \rangle}_Y \\[4pt] {\langle * \rangle}_Y \\[4pt] {\langle\text{m}\rangle}_Y \\[4pt] {\langle\text{m}\rangle}_Y \end{matrix}\)

\(\begin{matrix} {\langle * \rangle}_Y \\[4pt] {\langle\text{m}\rangle}_Y \\[4pt] {\langle * \rangle}_Y \\[4pt] {\langle\text{m}\rangle}_Y \end{matrix}\)

\(\begin{matrix} {\langle ! \rangle}_X \\[4pt] {\langle ! \rangle}_X \\[4pt] {\langle ! \rangle}_X \\[4pt] {\langle ! \rangle}_X \end{matrix}\)

\(\begin{matrix} {\langle\text{n}\rangle}_Y \\[4pt] {\langle\text{n}\rangle}_Y \\[4pt] {\langle ! \rangle}_Y \\[4pt] {\langle ! \rangle}_Y \end{matrix}\)

\(\begin{matrix} {\langle\text{n}\rangle}_Y \\[4pt] {\langle ! \rangle}_Y \\[4pt] {\langle\text{n}\rangle}_Y \\[4pt] {\langle ! \rangle}_Y \end{matrix}\)


\(\text{Table 67.2} ~~ \mathrm{AIR}_2 (\mathrm{Den}(L_\text{A})) : \text{Denotative Component of} ~ L_\text{A}\!\)
\(\text{Object}\!\) \(\text{Sign}\!\) \(\text{Transition}\!\)

\(\begin{matrix} {\langle * \rangle}_X \\[4pt] {\langle * \rangle}_X \end{matrix}\)

\(\begin{matrix} {\langle * \rangle}_Y \\[4pt] {\langle\text{m}\rangle}_Y \end{matrix}\)

\(\begin{array}{r} {\langle * \rangle}_Y \mapsto {\langle * \rangle}_X \\[4pt] {\langle\text{m}\rangle}_Y \mapsto {\langle * \rangle}_X \end{array}\)

\(\begin{matrix} {\langle ! \rangle}_X \\[4pt] {\langle ! \rangle}_X \end{matrix}\)

\(\begin{matrix} {\langle\text{n}\rangle}_Y \\[4pt] {\langle ! \rangle}_Y \end{matrix}\)

\(\begin{array}{r} {\langle\text{n}\rangle}_Y \mapsto {\langle ! \rangle}_X \\[4pt] {\langle ! \rangle}_Y \mapsto {\langle ! \rangle}_X \end{array}\)


\(\text{Table 67.3} ~~ \mathrm{AIR}_2 (\mathrm{Con}(L_\text{A})) : \text{Connotative Component of} ~ L_\text{A}\!\)
\(\text{Sign}\!\) \(\text{Interpretant}\!\) \(\text{Transition}\!\)

\(\begin{matrix} {\langle * \rangle}_Y \\[4pt] {\langle * \rangle}_Y \\[4pt] {\langle\text{m}\rangle}_Y \\[4pt] {\langle\text{m}\rangle}_Y \end{matrix}\)

\(\begin{matrix} {\langle * \rangle}_Y \\[4pt] {\langle\text{m}\rangle}_Y \\[4pt] {\langle * \rangle}_Y \\[4pt] {\langle\text{m}\rangle}_Y \end{matrix}\)

\(\begin{matrix} {\langle\mathrm{d}!\rangle}_{\mathrm{d}Y} \\[4pt] {\langle\mathrm{d}\text{n}\rangle}_{\mathrm{d}Y} \\[4pt] {\langle\mathrm{d}\text{n}\rangle}_{\mathrm{d}Y} \\[4pt] {\langle\mathrm{d}!\rangle}_{\mathrm{d}Y} \end{matrix}\)

\(\begin{matrix} {\langle\text{n}\rangle}_Y \\[4pt] {\langle\text{n}\rangle}_Y \\[4pt] {\langle ! \rangle}_Y \\[4pt] {\langle ! \rangle}_Y \end{matrix}\)

\(\begin{matrix} {\langle\text{n}\rangle}_Y \\[4pt] {\langle ! \rangle}_Y \\[4pt] {\langle\text{n}\rangle}_Y \\[4pt] {\langle ! \rangle}_Y \end{matrix}\)

\(\begin{matrix} {\langle\mathrm{d}!\rangle}_{\mathrm{d}Y} \\[4pt] {\langle\mathrm{d}\text{n}\rangle}_{\mathrm{d}Y} \\[4pt] {\langle\mathrm{d}\text{n}\rangle}_{\mathrm{d}Y} \\[4pt] {\langle\mathrm{d}!\rangle}_{\mathrm{d}Y} \end{matrix}\)


\(\text{Table 68.1} ~~ \mathrm{AIR}_2 (L_\text{B}) : \text{Analytic Representation of} ~ L_\text{B}\!\)
\(\text{Object}\!\) \(\text{Sign}\!\) \(\text{Interpretant}\!\)

\(\begin{matrix} {\langle ! \rangle}_X \\[4pt] {\langle ! \rangle}_X \\[4pt] {\langle ! \rangle}_X \\[4pt] {\langle ! \rangle}_X \end{matrix}\)

\(\begin{matrix} {\langle\text{n}\rangle}_Y \\[4pt] {\langle\text{n}\rangle}_Y \\[4pt] {\langle ! \rangle}_Y \\[4pt] {\langle ! \rangle}_Y \end{matrix}\)

\(\begin{matrix} {\langle\text{n}\rangle}_Y \\[4pt] {\langle ! \rangle}_Y \\[4pt] {\langle\text{n}\rangle}_Y \\[4pt] {\langle ! \rangle}_Y \end{matrix}\)

\(\begin{matrix} {\langle * \rangle}_X \\[4pt] {\langle * \rangle}_X \\[4pt] {\langle * \rangle}_X \\[4pt] {\langle * \rangle}_X \end{matrix}\)

\(\begin{matrix} {\langle * \rangle}_Y \\[4pt] {\langle * \rangle}_Y \\[4pt] {\langle\text{m}\rangle}_Y \\[4pt] {\langle\text{m}\rangle}_Y \end{matrix}\)

\(\begin{matrix} {\langle * \rangle}_Y \\[4pt] {\langle\text{m}\rangle}_Y \\[4pt] {\langle * \rangle}_Y \\[4pt] {\langle\text{m}\rangle}_Y \end{matrix}\)


\(\text{Table 68.2} ~~ \mathrm{AIR}_2 (\mathrm{Den}(L_\text{B})) : \text{Denotative Component of} ~ L_\text{B}\!\)
\(\text{Object}\!\) \(\text{Sign}\!\) \(\text{Transition}\!\)

\(\begin{matrix} {\langle ! \rangle}_X \\[4pt] {\langle ! \rangle}_X \end{matrix}\)

\(\begin{matrix} {\langle\text{n}\rangle}_Y \\[4pt] {\langle ! \rangle}_Y \end{matrix}\)

\(\begin{array}{r} {\langle\text{n}\rangle}_Y \mapsto {\langle ! \rangle}_X \\[4pt] {\langle ! \rangle}_Y \mapsto {\langle ! \rangle}_X \end{array}\)

\(\begin{matrix} {\langle * \rangle}_X \\[4pt] {\langle * \rangle}_X \end{matrix}\)

\(\begin{matrix} {\langle * \rangle}_Y \\[4pt] {\langle\text{m}\rangle}_Y \end{matrix}\)

\(\begin{array}{r} {\langle * \rangle}_Y \mapsto {\langle * \rangle}_X \\[4pt] {\langle\text{m}\rangle}_Y \mapsto {\langle * \rangle}_X \end{array}\)


\(\text{Table 68.3} ~~ \mathrm{AIR}_2 (\mathrm{Con}(L_\text{B})) : \text{Connotative Component of} ~ L_\text{B}\!\)
\(\text{Sign}\!\) \(\text{Interpretant}\!\) \(\text{Transition}\!\)

\(\begin{matrix} {\langle\text{n}\rangle}_Y \\[4pt] {\langle\text{n}\rangle}_Y \\[4pt] {\langle ! \rangle}_Y \\[4pt] {\langle ! \rangle}_Y \end{matrix}\)

\(\begin{matrix} {\langle\text{n}\rangle}_Y \\[4pt] {\langle ! \rangle}_Y \\[4pt] {\langle\text{n}\rangle}_Y \\[4pt] {\langle ! \rangle}_Y \end{matrix}\)

\(\begin{matrix} {\langle\mathrm{d}!\rangle}_{\mathrm{d}Y} \\[4pt] {\langle\mathrm{d}\text{n}\rangle}_{\mathrm{d}Y} \\[4pt] {\langle\mathrm{d}\text{n}\rangle}_{\mathrm{d}Y} \\[4pt] {\langle\mathrm{d}!\rangle}_{\mathrm{d}Y} \end{matrix}\)

\(\begin{matrix} {\langle * \rangle}_Y \\[4pt] {\langle * \rangle}_Y \\[4pt] {\langle\text{m}\rangle}_Y \\[4pt] {\langle\text{m}\rangle}_Y \end{matrix}\)

\(\begin{matrix} {\langle * \rangle}_Y \\[4pt] {\langle\text{m}\rangle}_Y \\[4pt] {\langle * \rangle}_Y \\[4pt] {\langle\text{m}\rangle}_Y \end{matrix}\)

\(\begin{matrix} {\langle\mathrm{d}!\rangle}_{\mathrm{d}Y} \\[4pt] {\langle\mathrm{d}\text{n}\rangle}_{\mathrm{d}Y} \\[4pt] {\langle\mathrm{d}\text{n}\rangle}_{\mathrm{d}Y} \\[4pt] {\langle\mathrm{d}!\rangle}_{\mathrm{d}Y} \end{matrix}\)


6.26. Differential Logic and Directed Graphs

This section extracts the graph-theoretic content of the previous series of Tables, using it to illustrate the logical description, or intensional representation (IR), of graphs and digraphs. Where the points of graphs and digraphs are described by conjunctions of logical features, the edges and arcs are described by differential features, possibly in conjunction with the ordinary features that depict their points of origin and destination.

Because of the formal confound that I mentioned earlier, anchored in the essentially accidental circumstance that \(\text{A}\!\) and \(\text{B}\!\) and I all use the same proper names for \(\text{A}\!\) and \(\text{B},\!\) I cannot analyze the denotative aspects of the sign relations \(L(\text{A})\!\) and \(L(\text{B})\!\) without analyzing the corresponding parts of my own denotative conduct, namely, those actions of mine that involve parallel uses of the signs \({}^{\backprime\backprime} \text{A} {}^{\prime\prime}\!\) and \({}^{\backprime\backprime} \text{B} {}^{\prime\prime}.\!\) However, it will soon become obvious that I have not prepared the discussion at this point with the technical means it needs to carry out this task in any meaningful way. In order to do this, it would be necessary to consider the common world \(W =\!\) \(\{ \text{A}, \text{B}, {}^{\backprime\backprime} \text{A} {}^{\prime\prime}, {}^{\backprime\backprime} \text{B} {}^{\prime\prime}, {}^{\backprime\backprime} \text{i} {}^{\prime\prime}, {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \},\!\) of the two sign relations as an initially homogeneous set and then to provide explicit logical features that mark the distinction between objects and signs. For the sake of simplicity, I am putting these considerations off to a subsequent round of analysis. On this pass, the denotative sections of each analytic scheme are filled in with what amount to inert proxies for the actual analyses to be carried out later.

6.27. Differential Logic and Group Operations

This section isolates the group-theoretic content of the previous series of Tables, using it to illustrate the following principle: When a geometric object, like a graph or digraph, is given an intensional representation (IR) in terms of a set of logical properties or propositional features, then many of the transformational aspects of that object can be represented in the differential extension of that IR.

One approach to the study of a temporal system is through the paradigm or principle of sequential inference.

Principle of sequential inference. A sequential inference rule is operative in any setting where the following list of ingredients can be identified.

  1. There is a frame of observation that affords, arranges for, or determines a sequence of observations on a system.
  2. There is an observable property or a logical feature \(x\!\) that can be true or false of the system at any given moment \(t\!\) of observation.
  3. There is a pair \((t, t')\!\) of succeeding moments of observation.

Relative to a setting of this kind, the rules of sequential inference are exemplified by the schematism shown in Table 69.


\(\text{Table 69.} ~~ \text{Schematism of Sequential Inference}\!\)
\(\text{Initial Premiss}\!\) \(\text{Differential Premiss}\!\) \(\text{Inferred Sequel}\!\)

\(\begin{matrix} ~x~ ~\mathrm{at}~ t \\[4pt] ~x~ ~\mathrm{at}~ t \\[4pt] (x) ~\mathrm{at}~ t \\[4pt] (x) ~\mathrm{at}~ t \end{matrix}\)

\(\begin{matrix} ~\mathrm{d}x~ ~\mathrm{at}~ t \\[4pt] (\mathrm{d}x) ~\mathrm{at}~ t \\[4pt] ~\mathrm{d}x~ ~\mathrm{at}~ t \\[4pt] (\mathrm{d}x) ~\mathrm{at}~ t \end{matrix}\)

\(\begin{matrix} (x) ~\mathrm{at}~ t' \\[4pt] ~x~ ~\mathrm{at}~ t' \\[4pt] ~x~ ~\mathrm{at}~ t' \\[4pt] (x) ~\mathrm{at}~ t' \end{matrix}\)


It might be thought that a notion of real time \((t \in \mathbb{R})\!\) is needed at this point to fund the account of sequential processes. From a logical point of view, however, I think it will be found that it is precisely out of such data that the notion of time has to be constructed.

The symbol \({}^{\backprime\backprime} \ominus\!\!- {}^{\prime\prime},\) read thus, then, or yields, can be used to mark sequential inferences, allowing for expressions like \(x \land \mathrm{d}x \ominus\!\!-~ (x).\!\) In each case, a suitable context of temporal moments \((t, t')\!\) is understood to underlie the inference.

A sequential inference constraint is a logical condition that applies to a temporal system, providing information about the kinds of sequential inference that apply to the system in as wide a range of situations as possible. Typically, a sequential inference constraint is formulated in intensional terms and expressed by means of a collection of sequential inference rules or schemata that tell what sequential inferences apply to the system in particular situations. Since it has the status of a logical theory about an empirical system, a sequential inference constraint is subject to being reformulated in terms of its set-theoretic extension, and it can be established as existing in the customary sort of dual relationship with this extension. Logically, it determines, and, empirically, it is determined by, the corresponding set of sequential inference triples, the \((x, y, z)\!\) such that \(x \land y \ominus\!\!-~ z.\!\) The set-theoretic extension of a sequential inference constraint is thus a triadic relation, generically notated as \(\ominus,\!\) where \(\ominus \subseteq X \times \mathrm{d}X \times X\!\) is defined as follows.

\(\ominus ~=~ \{ (x, y, z) \in X \times \mathrm{d}X \times X : x \land y \ominus\!\!-~ z \}.\!\)

Using the appropriate isomorphisms, or recognizing that, in terms of the information given, each of several descriptions is tantamount to the same object, the triadic relation \(\ominus \subseteq X \times \mathrm{d}X \times X\!\) constituted by a sequential inference constraint can be interpreted as a proposition \(\ominus : X \times \mathrm{d}X \times X \to \mathbb{B}\!\) about sequential inference triples, and thus as a map \(\ominus : \mathrm{d}X \to (X \times X \to \mathbb{B})\!\) from the space \(\mathrm{d}X\!\) of differential states to the space of propositions about transitions in \(X.\!\)
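
To make the schematism of Table 69 and this extensional reading of \(\ominus\!\) concrete, here is a minimal sketch in Python. The boolean coding of features, the function names, and the sample assertions are illustrative assumptions, not part of the text's formalism.

```python
# A minimal sketch of the sequential inference schematism of Table 69,
# coding a logical feature x and its differential feature dx as booleans.
# The encoding (True for an asserted feature, False for a negated one) and
# all names below are illustrative assumptions, not fixed by the text.

def sequel(x: bool, dx: bool) -> bool:
    """Infer the state of x at the next moment t' from x and dx at t."""
    #  x  and  dx   yields (x)  : change asserted, so x toggles off
    #  x  and (dx)  yields  x   : no change asserted, so x persists
    # (x) and  dx   yields  x   : change asserted, so x toggles on
    # (x) and (dx)  yields (x)  : no change asserted, so (x) persists
    return x != dx   # exclusive-or of x and dx

# The set-theoretic extension of the constraint: the triadic relation of all
# triples (x, dx, x') such that x and dx yield x'.
ominus = {(x, dx, sequel(x, dx)) for x in (False, True) for dx in (False, True)}

# The same relation read as a map from dX to propositions about transitions
# in X, rendered here extensionally as a set of ordered pairs for each dx.
def transitions(dx: bool) -> set:
    return {(x, z) for (x, y, z) in ominus if y == dx}

assert transitions(True)  == {(False, True), (True, False)}   # dx asserted: x toggles
assert transitions(False) == {(False, False), (True, True)}   # dx negated: x persists
```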


Question. Group Actions? \(r : \mathrm{d}X \to (X \to X)\!\)


\(\text{Table 70.1} ~~ \text{Group Representation} ~ \mathrm{Rep}^\text{A} (V_4)\!\)
\(\begin{matrix} \text{Abstract} \\ \text{Element} \end{matrix}\) \(\begin{matrix} \text{Logical} \\ \text{Element} \end{matrix}\) \(\begin{matrix} \text{Active} \\ \text{List} \end{matrix}\) \(\begin{matrix} \text{Active} \\ \text{Term} \end{matrix}\) \(\begin{matrix} \text{Genetic} \\ \text{Element} \end{matrix}\)

\(\begin{matrix} 1 \\[4pt] r \\[4pt] s \\[4pt] t \end{matrix}\)

\(\begin{matrix} (\mathrm{d}\underline{\underline{\text{a}}}) (\mathrm{d}\underline{\underline{\text{b}}}) (\mathrm{d}\underline{\underline{\text{i}}}) (\mathrm{d}\underline{\underline{\text{u}}}) \\[4pt] ~\mathrm{d}\underline{\underline{\text{a}}}~ (\mathrm{d}\underline{\underline{\text{b}}}) ~\mathrm{d}\underline{\underline{\text{i}}}~ (\mathrm{d}\underline{\underline{\text{u}}}) \\[4pt] (\mathrm{d}\underline{\underline{\text{a}}}) ~\mathrm{d}\underline{\underline{\text{b}}}~ (\mathrm{d}\underline{\underline{\text{i}}}) ~\mathrm{d}\underline{\underline{\text{u}}}~ \\[4pt] ~\mathrm{d}\underline{\underline{\text{a}}}~ ~\mathrm{d}\underline{\underline{\text{b}}}~ ~\mathrm{d}\underline{\underline{\text{i}}}~ ~\mathrm{d}\underline{\underline{\text{u}}}~ \end{matrix}\)

\(\begin{matrix} \langle \mathrm{d}! \rangle \\[4pt] \langle \mathrm{d}\underline{\underline{\text{a}}} ~ \mathrm{d}\underline{\underline{\text{i}}} \rangle \\[4pt] \langle \mathrm{d}\underline{\underline{\text{b}}} ~ \mathrm{d}\underline{\underline{\text{u}}} \rangle \\[4pt] \langle \mathrm{d}* \rangle \end{matrix}\)

\(\begin{matrix} \mathrm{d}! \\[4pt] \mathrm{d}\underline{\underline{\text{a}}} \cdot \mathrm{d}\underline{\underline{\text{i}}} ~ ! \\[4pt] \mathrm{d}\underline{\underline{\text{b}}} \cdot \mathrm{d}\underline{\underline{\text{u}}} ~ ! \\[4pt] \mathrm{d}* \end{matrix}\)

\(\begin{matrix} 1 \\[4pt] \mathrm{d}_{\text{ai}} \\[4pt] \mathrm{d}_{\text{bu}} \\[4pt] \mathrm{d}_{\text{ai}} * \mathrm{d}_{\text{bu}} \end{matrix}\)


\(\text{Table 70.2} ~~ \text{Group Representation} ~ \mathrm{Rep}^\text{B} (V_4)\!\)
\(\begin{matrix} \text{Abstract} \\ \text{Element} \end{matrix}\) \(\begin{matrix} \text{Logical} \\ \text{Element} \end{matrix}\) \(\begin{matrix} \text{Active} \\ \text{List} \end{matrix}\) \(\begin{matrix} \text{Active} \\ \text{Term} \end{matrix}\) \(\begin{matrix} \text{Genetic} \\ \text{Element} \end{matrix}\)

\(\begin{matrix} 1 \\[4pt] r \\[4pt] s \\[4pt] t \end{matrix}\)

\(\begin{matrix} (\mathrm{d}\underline{\underline{\text{a}}}) (\mathrm{d}\underline{\underline{\text{b}}}) (\mathrm{d}\underline{\underline{\text{i}}}) (\mathrm{d}\underline{\underline{\text{u}}}) \\[4pt] ~\mathrm{d}\underline{\underline{\text{a}}}~ (\mathrm{d}\underline{\underline{\text{b}}}) (\mathrm{d}\underline{\underline{\text{i}}}) ~\mathrm{d}\underline{\underline{\text{u}}}~ \\[4pt] (\mathrm{d}\underline{\underline{\text{a}}}) ~\mathrm{d}\underline{\underline{\text{b}}}~ ~\mathrm{d}\underline{\underline{\text{i}}}~ (\mathrm{d}\underline{\underline{\text{u}}}) \\[4pt] ~\mathrm{d}\underline{\underline{\text{a}}}~ ~\mathrm{d}\underline{\underline{\text{b}}}~ ~\mathrm{d}\underline{\underline{\text{i}}}~ ~\mathrm{d}\underline{\underline{\text{u}}}~ \end{matrix}\)

\(\begin{matrix} \langle \mathrm{d}! \rangle \\[4pt] \langle \mathrm{d}\underline{\underline{\text{a}}} ~ \mathrm{d}\underline{\underline{\text{u}}} \rangle \\[4pt] \langle \mathrm{d}\underline{\underline{\text{b}}} ~ \mathrm{d}\underline{\underline{\text{i}}} \rangle \\[4pt] \langle \mathrm{d}* \rangle \end{matrix}\)

\(\begin{matrix} \mathrm{d}! \\[4pt] \mathrm{d}\underline{\underline{\text{a}}} \cdot \mathrm{d}\underline{\underline{\text{u}}} ~ ! \\[4pt] \mathrm{d}\underline{\underline{\text{b}}} \cdot \mathrm{d}\underline{\underline{\text{i}}} ~ ! \\[4pt] \mathrm{d}* \end{matrix}\)

\(\begin{matrix} 1 \\[4pt] \mathrm{d}_{\text{au}} \\[4pt] \mathrm{d}_{\text{bi}} \\[4pt] \mathrm{d}_{\text{au}} * \mathrm{d}_{\text{bi}} \end{matrix}\)


\({\text{Table 70.3} ~~ \text{Group Representation} ~ \mathrm{Rep}^\text{C} (V_4)}\!\)
\(\begin{matrix} \text{Abstract} \\ \text{Element} \end{matrix}\) \(\begin{matrix} \text{Logical} \\ \text{Element} \end{matrix}\) \(\begin{matrix} \text{Active} \\ \text{List} \end{matrix}\) \(\begin{matrix} \text{Active} \\ \text{Term} \end{matrix}\) \(\begin{matrix} \text{Genetic} \\ \text{Element} \end{matrix}\)

\(\begin{matrix} 1 \\[4pt] r \\[4pt] s \\[4pt] t \end{matrix}\)

\(\begin{matrix} (\mathrm{d}\text{m}) (\mathrm{d}\text{n}) \\[4pt] ~\mathrm{d}\text{m}~ (\mathrm{d}\text{n}) \\[4pt] (\mathrm{d}\text{m}) ~\mathrm{d}\text{n}~ \\[4pt] ~\mathrm{d}\text{m}~ ~\mathrm{d}\text{n}~ \end{matrix}\)

\(\begin{matrix} \langle\mathrm{d}!\rangle \\[4pt] \langle\mathrm{d}\text{m}\rangle \\[4pt] \langle\mathrm{d}\text{n}\rangle \\[4pt] \langle\mathrm{d}*\rangle \end{matrix}\)

\(\begin{matrix} \mathrm{d}! \\[4pt] \mathrm{d}\text{m}! \\[4pt] \mathrm{d}\text{n}! \\[4pt] \mathrm{d}* \end{matrix}\)

\(\begin{matrix} 1 \\[4pt] \mathrm{d}_{\text{m}} \\[4pt] \mathrm{d}_{\text{n}} \\[4pt] \mathrm{d}_{\text{m}} * \mathrm{d}_{\text{n}} \end{matrix}\)


\(\text{Table 71.1} ~~ \text{The Differential Group} ~ G = V_4\!\)
\(\begin{matrix} \text{Abstract} \\ \text{Element} \end{matrix}\) \(\begin{matrix} \text{Logical} \\ \text{Element} \end{matrix}\) \(\begin{matrix} \text{Active} \\ \text{List} \end{matrix}\) \(\begin{matrix} \text{Active} \\ \text{Term} \end{matrix}\) \(\begin{matrix} \text{Genetic} \\ \text{Element} \end{matrix}\)

\(\begin{matrix} 1 \\[4pt] r \\[4pt] s \\[4pt] t \end{matrix}\)

\(\begin{matrix} (\mathrm{d}\text{m}) (\mathrm{d}\text{n}) \\[4pt] ~\mathrm{d}\text{m}~ (\mathrm{d}\text{n}) \\[4pt] (\mathrm{d}\text{m}) ~\mathrm{d}\text{n}~ \\[4pt] ~\mathrm{d}\text{m}~ ~\mathrm{d}\text{n}~ \end{matrix}\)

\(\begin{matrix} \langle\mathrm{d}!\rangle \\[4pt] \langle\mathrm{d}\text{m}\rangle \\[4pt] \langle\mathrm{d}\text{n}\rangle \\[4pt] \langle\mathrm{d}*\rangle \end{matrix}\)

\(\begin{matrix} \mathrm{d}! \\[4pt] \mathrm{d}\text{m}! \\[4pt] \mathrm{d}\text{n}! \\[4pt] \mathrm{d}* \end{matrix}\)

\(\begin{matrix} 1 \\[4pt] \mathrm{d}_{\text{m}} \\[4pt] \mathrm{d}_{\text{n}} \\[4pt] \mathrm{d}_{\text{m}} * \mathrm{d}_{\text{n}} \end{matrix}\)


\(\text{Table 71.2} ~~ \text{Cosets of} ~ G_\text{m} ~ \text{in} ~ G\!\)
\(\text{Group Coset}\!\) \(\text{Logical Coset}~\!\) \(\text{Logical Element}\!\) \(\text{Group Element}\!\)
\(G_\text{m}\!\) \((\mathrm{d}\text{m})\!\)

\(\begin{matrix} (\mathrm{d}\text{m})(\mathrm{d}\text{n}) \\[4pt] (\mathrm{d}\text{m})~\mathrm{d}\text{n}~ \end{matrix}\)

\(\begin{matrix} 1 \\[4pt] \mathrm{d}_\text{n} \end{matrix}\)

\(G_\text{m} * \mathrm{d}_\text{m}\!\) \(\mathrm{d}\text{m}\!\)

\(\begin{matrix} ~\mathrm{d}\text{m}~(\mathrm{d}\text{n}) \\[4pt] ~\mathrm{d}\text{m}~~\mathrm{d}\text{n}~ \end{matrix}\)

\(\begin{matrix} \mathrm{d}_\text{m} \\[4pt] \mathrm{d}_\text{n} * \mathrm{d}_\text{m} \end{matrix}\)


\(\text{Table 71.3} ~~ \text{Cosets of} ~ G_\text{n} ~ \text{in} ~ G\!\)
\(\text{Group Coset}\!\) \(\text{Logical Coset}~\!\) \(\text{Logical Element}\!\) \(\text{Group Element}\!\)
\(G_\text{n}\!\) \((\mathrm{d}\text{n})\!\)

\(\begin{matrix} (\mathrm{d}\text{m})(\mathrm{d}\text{n}) \\[4pt] ~\mathrm{d}\text{m}~(\mathrm{d}\text{n}) \end{matrix}\)

\(\begin{matrix} 1 \\[4pt] \mathrm{d}_\text{m} \end{matrix}\)

\(G_\text{n} * \mathrm{d}_\text{n}\!\) \(\mathrm{d}\text{n}\!\)

\(\begin{matrix} (\mathrm{d}\text{m})~\mathrm{d}\text{n}~ \\[4pt] ~\mathrm{d}\text{m}~~\mathrm{d}\text{n}~ \end{matrix}\)

\(\begin{matrix} \mathrm{d}_\text{n} \\[4pt] \mathrm{d}_\text{m} * \mathrm{d}_\text{n} \end{matrix}\)
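
Before leaving these Tables, it may help to check their group-theoretic content computationally. The sketch below is one minimal rendering of the differential group \(G = V_4\!\) of Table 71.1, coding each group element by the pair of differential features it asserts and letting it act on a state by toggling exactly those features, which is one natural reading of the question raised above about a map \(r : \mathrm{d}X \to (X \to X).\!\) All names and the bit encoding are illustrative assumptions.

```python
# A minimal sketch of the differential group G = V_4 of Table 71.1,
# coding each group element by the pair of toggle bits (dm, dn) it asserts.
# The dictionary keys and the 0/1 encoding are illustrative assumptions.

from itertools import product

elements = {             # abstract element -> logical element (dm, dn)
    '1': (0, 0),          # (dm)(dn)
    'r': (1, 0),          #  dm (dn)
    's': (0, 1),          # (dm) dn
    't': (1, 1),          #  dm  dn
}

def mult(g, h):
    """Group multiplication: toggles compose by exclusive-or."""
    (a, b), (c, d) = elements[g], elements[h]
    pair = (a ^ c, b ^ d)
    return next(k for k, v in elements.items() if v == pair)

def act(g, state):
    """Action of g on a state (m, n): toggle exactly the features g asserts."""
    (dm, dn), (m, n) = elements[g], state
    return (m ^ dm, n ^ dn)

# Every element is its own inverse and the operation is commutative,
# as expected of the Klein four-group.
assert all(mult(g, g) == '1' for g in elements)
assert all(mult(g, h) == mult(h, g) for g, h in product(elements, repeat=2))
assert mult('r', 's') == 't'          # d_m * d_n gives the double toggle
assert act('t', (0, 1)) == (1, 0)     # t flips both m and n
```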


6.28. The Bridge : From Obstruction to Opportunity

There are many reasons for using intensional representations to describe formal objects, especially as the size and complexity of these objects grows beyond the bounds of finite information capacities to represent them in practical terms. This is extremely pertinent to the progress of the present discussion. As often happens, when a top-down investigation of complex families of formal objects actually succeeds in arriving at examples that are simple enough to contemplate in extensional terms, it can be difficult to see the relation of such impoverished examples to the cases of original interest, all of them typically having infinite cardinality and indefinite complexity. In short, once a discussion is brought down to the level of its smallest cases it can be nearly impossible to bring it back up to the level of its intended application. Without invoking intensional representations of sign relations there is little hope that this discussion can rise far beyond its present level, eternally elaborating the subtleties of cases as elementary as \(L(\text{A})\!\) and \(L(\text{B}).\!\)

There are many obstacles to building this bridge, but if these forms of obstruction are understood in the proper fashion, it is possible to use them as stepping stones, to capitalize on their redoubtable structures, and to convert their recalcitrant materials into a formal calculus that can serve the aims and means of instruction.

This approach requires me to consider a chain of relationships that connects signs, names, concepts, properties, sets, and objects, along with various ways that these classes of entities have been viewed at different periods in the development of mathematical logic.

I would like to begin by giving an “impressionistic capsule history” of the relevant developments in mathematical logic, admittedly as viewed from a certain perspective, but hoping to allow room for alternative perspectives to have their way and present themselves in their own best light.

Variant 1. The human mind, boggling at the many to many relation between objects and signs that it finds in the world as soon as it begins to reflect on its own reasoning process, hits upon the strategy of interposing a realm of intermediate nodes between objects and signs, and looking through this medium for ways to factor the original relation into simpler components.

Variant 2. At the beginning of logic, the human mind, as soon as it begins to reflect on its own reasoning process, boggles at the many to many relation between objects and signs that it finds itself conducting through the world.

There are two methods for attempting to disentangle this confusion that are generally tried, the first more rarely, the second quite frequently, though apparently in opposite proportion to their respective chances of actual success. In order to describe the rationales of these methods I need to introduce a number of technical concepts.

Suppose \(P\!\) and \(Q\!\) are dyadic relations, with \(P \subseteq X \times Y\!\) and \(Q \subseteq Y \times Z.\!\) Then the contension of \(P\!\) and \(Q\!\) is a triadic relation \(R \subseteq X \times Y \times Z\!\) that is notated as \(R = P\!\!\And\!\!Q\) and defined as follows.

\(P\!\!\And\!\!Q ~=~ \{ (x, y, z) \in X \times Y \times Z : (x, y) \in P ~\text{and}~ (y, z) \in Q \}.\)

In other words, \(P\!\!\And\!\!Q\) is the intersection of the inverse projections \(P' = \mathrm{Pr}_{12}^{-1}(P)\!\) and \(Q' = \mathrm{Pr}_{23}^{-1}(Q),\!\) which are defined as follows:

\(\begin{matrix} \mathrm{Pr}_{12}^{-1}(P) & = & P \times Z & = & \{ (x, y, z) \in X \times Y \times Z : (x, y) \in P \}. \\[4pt] \mathrm{Pr}_{23}^{-1}(Q) & = & X \times Q & = & \{ (x, y, z) \in X \times Y \times Z : (y, z) \in Q \}. \end{matrix}\)

Inverse projections are often referred to as extensions, in spite of the conflict this creates with the extensions of concepts and terms.

One of the standard turns of phrase that finds use in this setting, not only for translating between extensional representations and intensional representations, but for converting both into computational forms, is to associate any set \(S\!\) contained in a space \(X\!\) with two other types of formal objects: (1) a logical proposition \(p_S\!\) known as the characteristic, indicative, or selective proposition of \(S,\!\) and (2) a boolean-valued function \(f_S : X \to \mathbb{B}\!\) known as the characteristic, indicative, or selective function of \(S.\!\)

Strictly speaking, the logical entity \(p_S\!\) is the intensional representation of the tribe, presiding at the highest level of abstraction, while \(f_S\!\) and \(S\!\) are its more concrete extensional representations, rendering its concept in functional and geometric materials, respectively. Whenever it is possible to do so without confusion, I try to use identical or similar names for the corresponding objects and species of each type, and I generally ignore the distinctions that otherwise set them apart. For instance, in moving toward computational settings, \(f_S\!\) makes the best computational proxy for \(p_S,\!\) so I commonly refer to the mapping \(f_S : X \to \mathbb{B}\!\) as a proposition on \(X.\!\)
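
For a small finite universe, the passage between a set \(S,\!\) its selective function \(f_S,\!\) and back again can be sketched as follows. The universe and subset chosen are illustrative assumptions.

```python
# A minimal sketch of the correspondence between a subset S of a space X
# and its characteristic (indicative, selective) function f_S : X -> B.
# The universe X and the subset S below are illustrative assumptions.

X = {0, 1, 2, 3, 4, 5}
S = {1, 3, 5}

def f_S(x):
    """Characteristic function of S: returns 1 on S, 0 elsewhere in X."""
    return 1 if x in S else 0

# Recovering the extension from the boolean-valued function, and vice versa.
assert {x for x in X if f_S(x) == 1} == S
```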

Regarded as logical models, the elements of the contension \(P\!\!\And\!\!Q\) satisfy the proposition referred to as the conjunction of extensions \(P^\prime\!\) and \(Q^\prime.\!\)

Next, the composition of \(P\!\) and \(Q\!\) is a dyadic relation \(R' \subseteq X \times Z\!\) that is notated as \(R' = P \circ Q\!\) and defined as follows.

\(P \circ Q ~=~ \mathrm{Pr}_{13} (P\!\!\And\!\!Q) ~=~ \{ (x, z) \in X \times Z : (x, y, z) \in P\!\!\And\!\!Q ~\text{for some}~ y \in Y \}.\)

In other words:

\(P \circ Q ~=~ \{ (x, z) \in X \times Z : (x, y) \in P ~\text{and}~ (y, z) \in Q ~\text{for some}~ y \in Y \}.\!\)
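
The contension and the composition lend themselves to a direct computational rendering when the dyadic relations are given as finite sets of pairs. The following sketch assumes sample relations chosen purely for illustration.

```python
# A minimal sketch of the contension P&Q and the composition P o Q of two
# dyadic relations P in X x Y and Q in Y x Z, represented as sets of pairs.
# The sample relations below are illustrative assumptions.

P = {('x1', 'y1'), ('x1', 'y2'), ('x2', 'y2')}   # P in X x Y
Q = {('y1', 'z1'), ('y2', 'z2')}                 # Q in Y x Z

def contension(P, Q):
    """Triadic relation of all (x, y, z) with (x, y) in P and (y, z) in Q."""
    return {(x, y, z) for (x, y) in P for (y2, z) in Q if y == y2}

def compose(P, Q):
    """Dyadic relation obtained by projecting the contension onto X x Z."""
    return {(x, z) for (x, y, z) in contension(P, Q)}

assert contension(P, Q) == {('x1', 'y1', 'z1'), ('x1', 'y2', 'z2'), ('x2', 'y2', 'z2')}
assert compose(P, Q) == {('x1', 'z1'), ('x1', 'z2'), ('x2', 'z2')}
```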

Begin Fragment. I will have to find my notes on this.

Using these notions, the customary methods for disentangling a many-to-many relation can be explained as follows:

  1.  
  2.  

In the logic of the ancients, the many-to-one relation of things to general names ...

End Fragment.

In early approaches to mathematical logic, from Leibniz to Peirce and Frege, one ordinarily spoke of the extensions and intensions of concepts.

Typically, one starts a work of bridge-building by casting a thin line across the intervening gap, using this expedient to conduct a slightly more substantial linkage over the rift, and then proceeding through a train of successors to draw increasingly stronger connections between the opposing shores until a load-bearing framework can be established. There is an analogue of this operation that fits the current situation, and this is something I can do by taking up the sign relations \(L(\text{A})\!\) and \(L(\text{B}),\!\) already introduced in extensional terms, and re-describing the abstract features of their structures in intensional terms.

This would be the ideal plan. But bridging the “tensions”, “ex-” and “in-”, that subsist within the forms of representation is not as easy as that. In order to convey the importance of the task and provide a motivation for carrying it out, I will plot a chain of relationships that stretches from signs, names, and concepts to properties, sets, and objects.

As a way of resolving the discerned “tensions”, posed here to fall into “ex-” and “in-” kinds, the strategy just described affords a way of approaching the problem that is less like a bridge than a pole vault, taking its pivot on a fixed set of narrowly circumscribed sign relations to make a transit from extensional to intensional outlooks on their form. With time and reflection, the logical depth of the supposed distinction, the “pretension” of maintaining a couple of separate but equal tensions in isolation from each other, does not withstand a persistent probing. Accordingly, the gulf between the two realms can always be fathomed by a finitely informed creature, in fact, by the very form of interpreter that created the fault in the first place. Consequently, converting the form of a transient vault into the substance of a usable bridge requires in adjunction only that initially pliable and ultimately tensile sorts of connecting lines be conducted along the tracery of the vault until the work of castling the gap can begin.

In the pragmatic theory of signs, the word representation is a technical term that is synonymous with the word sign; in other words, it applies to an entity in the most general category of things that can enter into sign relations in the roles of signs and interpretants. In this usage the scope of the term representation includes all sorts of syntactic, descriptive, and conceptual entities, a range of options I will frequently find it convenient to suggest by drawing on a pair of stock phrases: terms and concepts (TACs) in a conjunctive context versus terms or concepts (TOCs) in a disjunctive context.

In mathematics, the word representation is commonly reserved for referring to a homomorphism, that is, a linear transformation or a structure-preserving mapping \(h : X \to Y\!\) between mathematical objects, that is, structure-bearing spaces in a category of comparable domains.

In keeping with the spirit of the current discussion, I will first present a set of examples that are designed to illustrate what I mean by an intensional representation. In general, an intensional representation of any object is a sign, description, or concept that denotes, describes, or conceives its object in terms of its properties, that is, in terms of the logical attributes that the object possesses or the propositional features that the object is supposed to have. If the object to be represented is a complex formal object like a sign relation, then there needs to be an intensional representation of each elementary sign relation and an intensional representation of the sign relation as a whole.

But first, before I try to tackle this project, it is advisable to seek a measure of theoretical advantage that I can bring to bear on the task. This I can do by anchoring my focal outlook on sign relations within a more global consideration of \(n\!\)-place relations. Not only will this help with the conceptual recasting of \(L(\text{A})\!\) and \(L(\text{B}),\!\) but it will also support later stages of the present work, especially in the effort to build a collection of readily accessible linkages between the extensions and the intensions of each construct that I try to use, and ultimately of each concept and term that might conceivably find a use in inquiry.

6.29. Projects of Representation

Note. This section is very rough and will need to be rewritten.

There are numerous modalities of description and representation that are involved in linking the extensions and intensions of terms and concepts. To facilitate the building of a suitable analytic and synthetic framework for this task, and to abbreviate future references to the categories of modalities that come into play, I will employ a set of technical notions, along with their aliases and acronyms, to be indicated next.

First of all, I want to call attention to a mode of description or a category of representation that logically precedes all forms of extensional representation and intensional representation. To do this, I introduce the notion of a project of representation and the mode or category of pre-tensional representation that contains the elements for attempting its actualization.

A project of representation, in the interval before its completion or its viability can be assured with any measurable degree of certainty, initiates a mode of description called a prospective representation or a pretended representation. Taken over time, the project of representation issues in a series of prospective representation notices, a sequence of explicit expressions that constitute its prospectus or pretension. If the project of representation turns out to be feasible, and if all goes well with its realization, then the promises tendered by these notes can be redeemed in full. Regarded from a retrospective vantage point, and ever contingent on the eventual success of the project of representation, what amounts to little more than a species of potentially attractive prospective representation bulletins can be valued ultimately as a set of successive approximations to the intended concept and, further, to the objective reality intended. It is only at this point that a piece of prospective representation achieves the virtues of an actual representation, embodying in its purported description a modicum of actually resolved tension, and becoming amenable to formalization as a non-degenerate example of a sign relation. This tells how a project of representation develops in time under conditions ideal enough to achieve its aim. Next, I describe the features of prospective representation as a form of existence sub specie aeternitatis.

Under the banner of prospective representation parades an uncontrolled and wide-ranging variety of pretended signs and prospective signs, entities with the alleged or apparent significance of signs, of likely stories that are potentially meaningful and useful as descriptions and representations, but the character of whose participation in a sign relation is yet to be judged and warranted. A certain quality of PR marks the brand of significance that a candidate sign possesses before it is known for certain whether it will be genuine or spurious in its role as a sign, the meagre benefit of the doubt that can be granted any reputed sign before its character as an authentic sign has been tested in the performance of its assigned functions.

The prospects and pretensions of prospective representation are associated with a quality of tension that prevails on the scene of representation even before it is definite that objects have signs and signs have objects, an uncertain character that supervenes on the stage before one is sure that a project of representation will succeed in its intentions — far enough for its extension to stretch over many determinate instances, or well enough for its intension to hold forth any distinctive properties. This whole modality of allegation and aspiration, given the self described significance and uncertified self-advertisement that signs in all their immature phases and developmental crises cannot help but affect, taken along with the valid ambitions of potential signs to become signs in fact, is a class of pretended and prospective meaning that it seems fitting to label as a category of pretensional representation.

The assortment of prospective representation mechanisms that goes into a run-of-the-mill project of representation is a rather odd lot, drawing into its curious train every style of potential, preliminary, prospective, provisional, purported, putative, and otherwise pretensional representation. The motley array of artifice and device that is permitted under this gangly heading seems ill-suited to becoming recognized as a natural category, and perhaps it is destined to persist for all time pretty much as one presently finds it, too quixotic to regiment fully and too recalcitrant to organize under compact terms. For the purposes of the current project, the point of distinguishing the category of prospective representation as a mode of description is twofold:

  1. The category of prospective representation is drawn up to encompass the categories of extensional representation and intensional representation and to allow the creation or recognition of a collection of continuities and correspondences between them.
  2. The prospective representation mode of description draws attention to several important facts about the more problematic phases of interpretation and inquiry processes, especially including the inchoate actions of their initial stages and the moments when global paradigm shifts are manifestly in progress. All that interpreters have to go on for much of the intermediate time that they spend involved in the sign formation processes of inquiry is a type of prospective representation.

Variant. The purpose of these prospective representation labels is to draw attention to the indirect character and the allusory nature of the interpretive processes that contribute to the initial stages and non-routine phases of inquiry.

Variant. The point of pointing out the indirect character and the allusory nature of all prospective representation is to draw attention to the circumstance that intelligent agents of interpretation and inquiry have to be capable of waiting, trading, and acting on purported and potential representations. To proceed at all from the conditions prevailing at the outset of inquiry, they have to lead off from the slightest inklings that a problematic phenomenon may disclose an objective reality that is a key to its resolution, to take their initial direction from uncertain signs that an object of value might be in the offing, and to open the bidding on a brand of improper symbols whose very qualifications as representations are required by the nature of interpretive inquiry to remain in question for much of the mean time taken up by the pursuit of the alleged objects. It is part of the task that prospective representation is all these agents have to go on, …

Extensional descriptions of \(n\!\)-place relations are the kinds that can be presented in relational data tables, or at least initiated and partially illustrated in this form. If an \(n\!\)-place relation is constituted as a finite information construction, where the relation as a whole plus each of its elements is specified in discrete and finite terms, then the prospective tabulation can be carried out to completion, at least in principle, explicitly enumerating the elementary relations or the \(n\!\)-tuples of relational domain elements that enter into the relation.

Extensional descriptions are so close to what one casually and commonly regards as “immediate experience” that the knowledge of a relation one gains by means of their indications is often not thought to lie in the medium of description at all. The acquaintance with the character of an objective relation that extensional descriptions so successfully manage to record and convey is a type of impression that one often fails to reflect on as arriving through signs at all, and thus it can leave an impression of knowing its object that is susceptible to being confounded with a direct experience of the relation itself.

This does not have to be a bad thing in practice. Indeed, it is a factor contributing to the success of extensional descriptions that an agent can usually afford to remain oblivious to the more indirect aspects of their interpretation, to proceed with impunity to take them at face value, and unless there is some obvious trouble to work on the assumption that they are in fact nothing more than what they seem to denote. Thus, one often finds extensional presentations treated as though they arose from radically empirical sources of data, instituting fundamentally pure modes of “knowledge by acquaintance” and reputed to lie in meaningful contradistinction to every other type of representation, the rest falling into categories that grade them as mere “knowledge by description”. However, when it comes to the ends of deliberate design and analysis, the usual assumptions can no longer be relied on to justify their own usage, and they need to make themselves available for examination whenever the limits of their usefulness are called into question.

One of the purposes of introducing an explicit theory of sign relations into the present study of inquiry is to examine the status of this idea, namely, that it makes sense to posit an absolute distinction between knowledge by acquaintance and knowledge by description. Pragmatic thinking begins with a certain amount of skepticism toward this notion, on account of the many illusions that appear to trace their origins to it, but overall it seeks to arrive at a language in which to examine the question thoroughly, and to devise a means for individual interpreters to make a clear choice for themselves, with respect to the possibility of their purposes being divergent in the mean time, one way or the other.

Whatever the outcome of these individual decisions, independent in principle no matter how they turn out in practice, and without forcing the preliminary acknowledgement of any unavoidable gulf or unbridgeable abyss that might be imagined to separate the modalities of acquaintance and description, one nevertheless wants to preserve the practical uses of the comparative, interpretive, relative, and otherwise sufficiently qualified and circumspect orderings of data and descriptions along these lines that can be organized by individual interpreters and communities of interpretation. It is for these reasons that I have taken the trouble to conduct this discussion in a way that diverts some attention toward its own casual context, and I hope that this strategy is able to reflect, or at least scatter, enough light back on the enfolding context of informal sign relations that it can dispel any inkling of an automatic distinction of this form, or at least to cast doubts on the remaining traces of its illusion.

For the rest of this section I restrict the discussion to sign relations of the type \(L \subseteq O \times S \times I\!\) and elementary sign relations of the form \((o, s, i) \in L.\!\)

A discussion of concrete examples, intended to serve as a preparatory treatment for approaching a significantly more complex area, is necessarily limited in its focus to isolated cases, in effect, to those that remain simple enough to be instructive in a preliminary approach to the topic. This means that the observable properties of the initial examples, with respect to the class they are aimed to exemplify, will sort themselves into two kinds: (1) their essential, generic, or genuine properties, and (2) their accidental, factitious, or spurious properties.

But the present discussion of sign relations cannot illustrate the properties of even these elementary examples in an adequate way without considering extended multitudes of other relations, both those that share the same properties and those that do not. Consequently, by way of getting the comparative study of sign relations started on a casual basis, an end that is served in addition by placing sign relations within the broader setting of \(n\!\)-place relations, I will exploit a few devices of taxonomic nomenclature, intending them to be applied for the moment in a purely informal way.

An order, genus, or species of relations is a class or set of relations that obey a particular collection of axioms, or that satisfy a certain combination of operational constraints and axiomatic properties. These respective terms, given in order of increasing specificity, are not intended to be applied too systematically, but only roughly to indicate how many axioms are listed in the specification of a class of relations and thus how narrowly the indicated class is pinned down relative to other classes within the context of a particular discussion.

For example, this terminology allows me to indicate a general order \(G\!\) of sign relations \(L,\!\) each of whose connotative components \(L_{SI}\!\) is an equivalence relation, and then it allows me to extend this investigation by pursuing the prospective existence of a generalized order \(G'\!\) of sign relations \(L,\!\) each of which has many properties analogous to the sign relations in \(G,\!\) with the exception or extension that \(G'\!\) is more broadly formulated in certain designated respects.

The purpose of these informal taxonomic distinctions is not to specify absolute levels of generality, something that could not be achieved in a global manner without splitting hierarchies of hereditary properties and the whole host of their successive heirs down to the ultimate pedigrees, but merely to organize properties relative to each other in comparative terms, in which case three levels of generality are usually enough to orient oneself locally in any ontology, no matter how wide or deep. Thus, the main interest that the terms order, genus, and species will subserve in this connection is to indicate the taxonomic directions of generalization and specialization that a particular investigation is trying to achieve among classes of relations: generalizing a class by abstracting features or removing constraints from its original definition, and specializing a class by concretizing features or adding constraints to its initial characterization.

In order to talk and think about any sign relation at all, not to mention addressing the topic of a generic order of sign relations, one has to use signs to do it, and this requires one's taking part in what can be called a higher order of sign relations. By way of definition, a sign relation is a higher order sign relation if some of its signs refer to objects that are themselves sign relations or classes of sign relations.

So long as one expects to deal with only a few sign relations at a time, managing to use only a few conventional names to denote each of them, then one's participation in a higher order sign relation hardly ever becomes too problematic, and it rarely needs to be formalized in order for one to cope with the duties of serving as its unofficial interpreter. Once a reflective involvement with higher order sign relations gets started, however, there will be difficulties that continue to grow and lurk just beneath the apparently conversant surface of their all too facile fluency.

By way of example, a singular sign that denotes an entire sign relation refers by extension to a class of elementary sign relations, or a set of transaction triples \((o, s, i).\!\) So far, this is still not too much of a problem. But when one begins to develop large numbers of conventional symbols and complicated formulas for referring to the same classes of sign transactions, then considerations of effective and efficient interpretation will demand that these symbols and formulas be organized into semantic equivalence classes with recognizable characters. That is, one is forced to find computable types of similarity relations defined on pairs of symbols and pairs of formulas that tell whether they refer to the same class of sign transactions or not. It is almost inevitable in such a situation that canonical representatives of these equivalence classes will have to be developed, and a means for transforming arbitrarily complex and obscure expressions into optimally simple and clear equivalents will also become necessary.

At this stage one is brought face to face with the task of implementing a full fledged interpreter for a particular higher order sign relation, summarized as follows:

  1. The objects of \(Q\!\) are the abstract classes of transactions that constitute the sign relations in question.
  2. The signs of \(Q\!\) are the collection of symbols and formulas used as conventional names and analytic expressions for the sign relations in question.
  3. The interpretants of \(Q\!\) …

But a generic name intended to reference a whole class of sign relations is another matter altogether, especially when it comes into play in a comparative study of many different orders of relations.

6.30. Connected, Integrated, Reflective Symbols

Triadic relations need to be recognized as the minimal subsistents or staple elements of continuity that are capable of keeping the symbols for generalized objects or “hypostatic abstractions” viable in practice. In order to remain fully functioning in all the ways that initially make them useful, abstract terms have to stay connected in each of the many directions of relationship that make their use both flexible and stable, namely, (1) attached to the substantive particulars of their denotations, (2) dedicated to the associational and definitional connotations that constitute their law-abiding participation in a commonwealth of other abstract terms, (3) relevant to the ongoing understanding of inquiring agents and interpretive communities. Anything less, any attempt to use staple structural elements of lower arity than triadic bonds is bound to corrupt in time the dimensional solidity of these symbolic amalgamations.

In order for its knowledge to be reflective, an intelligent system must have the ability to reason about sign relations, not only the ones in which it operates but also the ones in which it might participate. A natural way of approaching this task is to consider the domain of sign relations set within the embedding framework of \(n\!\)-place relations, since resourcefulness with relations in general is something that a reasonably competent knowledge-based system will need anyway.

Now here is a class of mathematical objects, \(n\!\)-place relations, that are worthy of some thought, no matter what application might be intended, and given the levels of combinatorial complexity that their study raises, it is likely that suitable software will need to play a role in their investigation.

One of the ways that the design principles declared above bear on the application to \(n\!\)-place relations is as follows. In order to support reasoning about general classes of relations, and sign relations in particular, a computational system (or implemented formal system) must have signs or names that are available to refer to the subject matter of particular relations and symbols or formulas that are able to represent predicates of relations. If these references and representations are to avoid all the various ways of becoming logically empty and effectively vacuous — something they can do (1) by failing to have sufficient denotation from the very outset or (2) by exceeding the conceptual and computational bounds needed to maintain consistency and tractability at any subsequent stage of processing their indications — then …

6.31. Relations in General

In a realistic computational framework, where incomplete and inconsistent information is the rule, it is necessary to work with genera of relations that are increasingly relaxed in their constraining characters but still preserve a measure of analogy with the fundamental species of relations that are found to be prevalent in perfect information contexts.

In the present application the kinds of relations of primary interest are functions, equivalence relations, and other species of relations defined by axiomatic properties. Thus, the information-theoretic generalizations of these structures lead to partially defined functions and partially constrained versions of these specially defined classes of relations.

The purpose of this Section is to outline the kinds of generalized functions and other families of relations that are needed to extend the discussion of the present example. In this connection, to frame the problem in concrete terms, I need to adapt the square bracket notation for two generalizations of equivalence relations, to be defined below. But first, a number of broader issues need to be treated.

Generally speaking, one is free to interpret references to generalized objects either as indications of partially formed analogues or as partially informed descriptions of their corresponding species. I refer to these alternatives as the object-theoretic and the sign-theoretic options, respectively. The first interpretation assumes that vague and general references still have denotations, merely to vague and general objects. The second interpretation ascribes the partialities of information to the characters of the signs and expressions that are doing the denoting. In most cases that arise in casual discussion the choice between these conventions is purely stylistic. However, in many of the more intricate situations that arise in formal discussion the object choice often fails utterly, and whenever the utmost care is required it will usually be the attention to signs that saves the day.

In order to speak of generalized orders of relations I need to outline the dimensions of variation along which I intend the characters of already familiar orders of relations to be broadened. Generally speaking, the taxonomic features of \(n\!\)-place relations that I wish to liberalize can be read off from their local incidence properties (LIPs).

Definition. A local incidence property of a \(k\!\)-place relation \(L \subseteq X_1 \times \ldots \times X_k\!\) is one that is based on the following type of data. Pick an element \(x\!\) in one of the domains \({X_j}\!\) of \(L.\!\) Let \(L_{x \,\text{at}\, j}\!\) be a subset of \(L\!\) called the flag of \(L\!\) with \(x\!\) at \({j},\!\) or the \(x \,\text{at}\, j\!\) flag of \(L.\!\) The local flag \(L_{x \,\text{at}\, j} \subseteq L\!\) is defined as follows.

\(L_{x \,\text{at}\, j} = \{ (x_1, \ldots, x_j, \ldots, x_k) \in L : x_j = x \}.\!\)

Any property \(P\!\) of \(L_{x \,\text{at}\, j}\!\) constitutes a local incidence property of \(L\!\) with reference to the locus \(x \,\text{at}\, j.\!\)

Definition. A \(k\!\)-place relation \(L \subseteq X_1 \times \ldots \times X_k\!\) is \(P\!\)-regular at \(j\!\) if and only if every flag of \(L\!\) with \(x\!\) at \(j\!\) is \(P,\!\) letting \(x\!\) range over the domain \(X_j,\!\) in symbols, if and only if \(P(L_{x \,\text{at}\, j})\!\) is true for all \({x \in X_j}.\!\)

Of particular interest are the local incidence properties of relations that can be calculated from the cardinalities of their local flags, and these are naturally called numerical incidence properties (NIPs).

For example, \(L\!\) is \(c\text{-regular at}~ j\!\) if and only if the cardinality of the local flag \(L_{x \,\text{at}\, j}\!\) is equal to \(c\!\) for all \(x \in X_j,\!\) coded in symbols, if and only if \(|L_{x \,\text{at}\, j}| = c\!\) for all \({x \in X_j}.\!\)

In a similar fashion, it is possible to define the numerical incidence properties \((< c)\text{-regular at}~ j,\!\) \((> c)\text{-regular at}~ j,\!\) and so on. For ease of reference, a few of these definitions are recorded below.

\(\begin{array}{lll} L ~\text{is}~ c\text{-regular at}~ j & \iff & |L_{x \,\text{at}\, j}| = c ~\text{for all}~ x \in X_j. \\[6pt] L ~\text{is}~ (< c)\text{-regular at}~ j & \iff & |L_{x \,\text{at}\, j}| < c ~\text{for all}~ x \in X_j. \\[6pt] L ~\text{is}~ (> c)\text{-regular at}~ j & \iff & |L_{x \,\text{at}\, j}| > c ~\text{for all}~ x \in X_j. \\[6pt] L ~\text{is}~ (\le c)\text{-regular at}~ j & \iff & |L_{x \,\text{at}\, j}| \le c ~\text{for all}~ x \in X_j. \\[6pt] L ~\text{is}~ (\ge c)\text{-regular at}~ j & \iff & |L_{x \,\text{at}\, j}| \ge c ~\text{for all}~ x \in X_j. \end{array}\!\)
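
The local flags and numerical incidence properties lend themselves to a direct computational check. The following sketch assumes a \(k\!\)-place relation given as a finite set of tuples, with places indexed from 0 rather than 1; the sample relation and domains are illustrative assumptions.

```python
# A minimal sketch of local flags and numerical incidence properties for a
# k-place relation represented as a finite set of tuples. Places are indexed
# from 0 here rather than 1, and the sample relation and domains are
# illustrative assumptions.

L = {(1, 'a'), (1, 'b'), (2, 'a'), (3, 'b')}   # a 2-place relation on X1 x X2
X1, X2 = {1, 2, 3}, {'a', 'b'}

def flag(L, x, j):
    """The flag of L with x at j: all tuples of L whose entry at place j is x."""
    return {t for t in L if t[j] == x}

def is_c_regular(L, c, j, dom):
    """L is c-regular at j iff every local flag at j has cardinality exactly c."""
    return all(len(flag(L, x, j)) == c for x in dom)

def is_at_most_c_regular(L, c, j, dom):
    """L is (<= c)-regular at j iff every local flag at j has at most c tuples."""
    return all(len(flag(L, x, j)) <= c for x in dom)

assert flag(L, 1, 0) == {(1, 'a'), (1, 'b')}
assert is_at_most_c_regular(L, 2, 0, X1)   # every x in X1 meets at most 2 tuples
assert is_c_regular(L, 2, 1, X2)           # every y in X2 meets exactly 2 tuples
```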

The definition of local flags can be broadened to give a definition of regional flags. Suppose \(L \subseteq X_1 \times \ldots \times X_k\!\) and choose a subset \(M \subseteq X_j.\!\) Let \(L_{M \,\text{at}\, j}\!\) be a subset of \(L\!\) called the flag of \(L\!\) with \(M\!\) at \({j},\!\) or the \(M \,\text{at}\, j\!\) flag of \(L,\!\) defined as follows.

\(L_{M \,\text{at}\, j} = \{ (x_1, \ldots, x_j, \ldots, x_k) \in L : x_j \in M \}.\!\)

Returning to dyadic relations, it is useful to describe some familiar classes of objects in terms of their local and numerical incidence properties. Let \(L \subseteq X \times Y\!\) be an arbitrary dyadic relation. The following properties of \(L\!\) can then be defined.

\(\begin{array}{lll} L ~\text{is total at}~ X & \iff & L ~\text{is}~ (\ge 1)\text{-regular}~ \text{at}~ X. \\[6pt] L ~\text{is total at}~ Y & \iff & L ~\text{is}~ (\ge 1)\text{-regular}~ \text{at}~ Y. \\[6pt] L ~\text{is tubular at}~ X & \iff & L ~\text{is}~ (\le 1)\text{-regular}~ \text{at}~ X. \\[6pt] L ~\text{is tubular at}~ Y & \iff & L ~\text{is}~ (\le 1)\text{-regular}~ \text{at}~ Y. \end{array}\)

If \(L\!\) is tubular at \(X,\!\) then \(L\!\) is known as a partial function or a prefunction from \(X\!\) to \(Y,\!\) indicated by writing \(L : X \rightharpoonup Y.\!\) We have the following definitions and notations.

\(\begin{array}{lll} L ~\text{is a prefunction}~ L : X \rightharpoonup Y & \iff & L ~\text{is tubular at}~ X. \\[6pt] L ~\text{is a prefunction}~ L : X \leftharpoonup Y & \iff & L ~\text{is tubular at}~ Y. \end{array}\)

If \(L\!\) is a prefunction \(L : X \rightharpoonup Y\!\) that happens to be total at \(X,\!\) then \(L\!\) is known as a function from \(X\!\) to \(Y,\!\) indicated by writing \(L : X \to Y.\!\) To say that a relation \(L \subseteq X \times Y\!\) is totally tubular at \(X\!\) is to say that \(L\!\) is 1-regular at \(X.\!\) Thus, we may formalize the following definitions.

\(\begin{array}{lll} L ~\text{is a function}~ L : X \to Y & \iff & L ~\text{is}~ 1\text{-regular at}~ X. \\[6pt] L ~\text{is a function}~ L : X \leftarrow Y & \iff & L ~\text{is}~ 1\text{-regular at}~ Y. \end{array}\!\)

In the case of a 2-adic relation \(L \subseteq X \times Y\!\) that has the qualifications of a function \(f : X \to Y,\!\) there are a number of further differentia that arise.

\(\begin{array}{lll} f ~\text{is surjective} & \iff & f ~\text{is total at}~ Y. \\[6pt] f ~\text{is injective} & \iff & f ~\text{is tubular at}~ Y. \\[6pt] f ~\text{is bijective} & \iff & f ~\text{is}~ 1\text{-regular at}~ Y. \end{array}\)
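
Pulling the preceding definitions together, a dyadic relation can be classified by testing the regularity of its local flags. The sketch below is one way to do this under the same finite, 0-indexed conventions as before; the sample function is an illustrative assumption.

```python
# A minimal sketch classifying a dyadic relation by the numerical incidence
# properties of its local flags, following the definitions given above.
# Place 0 plays the role of X and place 1 the role of Y; the sample sets
# and relation are illustrative assumptions.

def flag_sizes(L, j, dom):
    """Cardinality of the local flag of L at each element of the given domain."""
    return {x: sum(1 for t in L if t[j] == x) for x in dom}

def is_total(L, j, dom):   return all(n >= 1 for n in flag_sizes(L, j, dom).values())
def is_tubular(L, j, dom): return all(n <= 1 for n in flag_sizes(L, j, dom).values())

def is_prefunction(L, X): return is_tubular(L, 0, X)                      # partial function
def is_function(L, X):    return is_tubular(L, 0, X) and is_total(L, 0, X)
def is_surjective(f, Y):  return is_total(f, 1, Y)
def is_injective(f, Y):   return is_tubular(f, 1, Y)
def is_bijective(f, Y):   return is_surjective(f, Y) and is_injective(f, Y)

X, Y = {1, 2, 3}, {'a', 'b', 'c'}
f = {(1, 'a'), (2, 'b'), (3, 'b')}   # a function from X to Y, neither injective nor surjective
assert is_function(f, X)
assert not is_injective(f, Y) and not is_surjective(f, Y)
```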

A few more comments on terminology are needed in further preparation. One of the constant practical demands encountered in this project is to have available a language and a calculus for relations that can permit discussion and calculation to range over functions, dyadic relations, and \(n\!\)-place relations with a minimum amount of trouble in making transitions from subject to subject and in drawing the appropriate generalizations.

Up to this point in the discussion, the analysis of the \(\text{A}\!\) and \(\text{B}\!\) dialogue has concerned itself almost exclusively with the relationship of triadic sign relations to the dyadic relations obtained from them by taking their projections onto various relational planes. In particular, a major focus of interest was the extent to which salient properties of sign relations can be gleaned from a study of their dyadic projections.

Two important topics for later discussion will be concerned with: (1) the sense in which every \(n\!\)-place relation can be decomposed in terms of triadic relations, and (2) the fact that not every triadic relation can be further reduced to conjunctions of dyadic relations.

Variant. It is one of the constant technical needs of this project to maintain a flexible language for talking about relations, one that permits discussion to shift from functional to relational emphases and from dyadic relations to \(n\!\)-place relations with a maximum of ease. It is not possible to do this without violating the favored conventions of one technical linguistic community or another. I have chosen a strategy of use that respects as many different usages as possible, but in the end it cannot help but to reflect a few personal choices. To some extent my choices are guided by an interest in developing the information, computation, and decision-theoretic aspects of the mathematical language used. Eventually, this requires one to render every distinction, even that of appearing or not in a particular category, as being relative to an interpretive framework.

While operating in this context, it is necessary to distinguish domains in the broad sense from domains of definition in the narrow sense. For \(k\!\)-place relations it is convenient to use the terms domain and quorum as references to the wider and narrower sets, respectively.

For a \(k\!\)-place relation \(L \subseteq X_1 \times \ldots \times X_k,\!\) we have the following usages.

  1. The notation \({}^{\backprime\backprime} \mathrm{Dom}_j (L) {}^{\prime\prime}\!\) denotes the set \(X_j,\!\) called the domain of \(L\!\) at \(j\!\) or the \(j^\text{th}\!\) domain of \(L.\!\)
  2. The notation \({}^{\backprime\backprime} \mathrm{Quo}_j (L) {}^{\prime\prime}\!\) denotes a subset of \(X_j,\!\) called the quorum of \(L\!\) at \(j\!\) or the \(j^\text{th}\!\) quorum of \(L,\!\) defined as follows.

\(\begin{array}{lll} \mathrm{Quo}_j (L) & = & \text{the largest}~ Q \subseteq X_j ~\text{such that}~ L_{Q \,\text{at}\, j} ~\text{is}~ (\geq 1)\text{-regular at}~ j, \\[6pt] & = & \text{the largest}~ Q \subseteq X_j ~\text{such that}~ |L_{x \,\text{at}\, j}| \geq 1 ~\text{for all}~ x \in Q \subseteq X_j. \end{array}\)

In the special case of a dyadic relation \(L \subseteq X_1 \times X_2 = X \times Y,\!\) including the case of a partial function \(p : X \rightharpoonup Y\!\) or a total function \(f : X \to Y,\!\) we have the following conventions.

  1. The arbitrarily designated domains \(X_1 = X\!\) and \(X_2 = Y\!\) that form the widest sets admitted to the dyadic relation are referred to as the domain or source and the codomain or target, respectively, of the relation in question.
  2. The terms quota and range are reserved for those uniquely defined sets whose elements actually appear as the first and second members, respectively, of the ordered pairs in that relation. Thus, for a dyadic relation \(L \subseteq X \times Y,\!\) we identify \(\mathrm{Quo} (L) = \mathrm{Quo}_1 (L) \subseteq X\!\) with what is usually called the domain of definition of \(L\!\) and we identify \(\mathrm{Ran} (L) = \mathrm{Quo}_2 (L) \subseteq Y\!\) with the usual range of \(L.\!\)
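
For finite relations these sets can be read off directly from the tuples present. The following sketch in Python, with names chosen only for illustration, computes the \(j^\text{th}\!\) quorum of a \(k\!\)-place relation, and hence the domain of definition and range of a dyadic relation.

def quorum(L, j):
    """The quorum of a k-place relation L (a set of k-tuples) at place j
    (0-indexed): the set of values actually occurring in that place."""
    return {t[j] for t in L}

# For a dyadic relation L over X x Y, quorum(L, 0) is the domain of definition
# Quo(L) and quorum(L, 1) is the range Ran(L).
L = {('A', 1), ('A', 2), ('B', 2)}
assert quorum(L, 0) == {'A', 'B'}
assert quorum(L, 1) == {1, 2}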

A partial equivalence relation (PER) on a set \(X\!\) is a relation \(L \subseteq X \times X\!\) that is an equivalence relation on its domain of definition \(\mathrm{Quo} (L) \subseteq X.\!\) In this situation, \([x]_L\!\) is empty for each \(x\!\) in \(X\!\) that is not in \(\mathrm{Quo} (L).\!\) Another way of reaching the same concept is to call a PER a dyadic relation that is symmetric and transitive, but not necessarily reflexive. Like the “self-identical elements” of old that epitomized the very definition of self-consistent existence in classical logic, the property of being a self-related or self-equivalent element in the purview of a PER on \(X\!\) singles out the members of \(\mathrm{Quo} (L)\!\) as those for which a properly meaningful existence can be contemplated.

A moderate equivalence relation (MER) on the modus \(M \subseteq X\!\) is a relation on \(X\!\) whose restriction to \(M\!\) is an equivalence relation on \(M.\!\) In symbols, \(L \subseteq X \times X\!\) such that \(L|M \subseteq M \times M\!\) is an equivalence relation. Notice that the subset of restriction, or modus \(M,\!\) is a part of the definition, so the same relation \(L\!\) on \(X\!\) could be a MER or not depending on the choice of \(M.\!\) In spite of how it sounds, a moderate equivalence relation can have more ordered pairs in it than the ordinary sort of equivalence relation on the same set.

In applying the equivalence class notation to a sign relation \(L,\!\) the definitions and examples considered so far cover only the case where the connotative component \(L_{SI}\!\) is a total equivalence relation on the whole syntactic domain \(S.\!\) The next job is to adapt this usage to PERs.

If \(L\!\) is a sign relation whose syntactic projection \(L_{SI}\!\) is a PER on \(S\!\) then we may still write \({}^{\backprime\backprime} [s]_L {}^{\prime\prime}\!\) for the “equivalence class of \(s\!\) under \(L_{SI}\!\)”. But now, \([s]_L\!\) can be empty if \(s\!\) has no interpretant, that is, if \(s\!\) lies outside the “adequately meaningful” subset of the syntactic domain, where synonymy and equivalence of meaning are defined. Otherwise, if \(s\!\) has an \(i\!\) then it also has an \(o,\!\) by the definition of \(L_{SI}.\!\) In this case, there is a triple \({(o, s, i) \in L},\!\) and it is permissible to let \([o]_L = [s]_L.\!\)

6.32. Partiality : Selective Operations

One of the main subtasks of this project is to develop a computational framework for carrying out set-theoretic operations on abstractly represented classes and for reasoning about their indicated results. This effort has the general aim of enabling one to articulate the structures of \(n\!\)-place relations and the special aim of allowing one to reflect theoretically on the properties and projections of sign relations. A prototype system that makes a beginning in this direction has already been implemented, to which the current work contributes a major part of the design philosophy and technical documentation. This section presents the rudiments of set-theoretic notation in a way that conforms to these goals, taking the development only so far as needed for immediate application to sign relations like \(L(\text{A})\!\) and \(L(\text{B}).\!\)

One of the most important design considerations that goes into building the requisite software system is how well it furthers certain lines of abstraction and generalization. One of these dimensions of abstraction or directions of generalization is discussed in this section, where I attempt to unify its many appearances under the theme of partiality. This name is chosen to suggest the desired sense of abstract intention since the extensions of concepts that it favors and for which it leaves room are outgrowths of the limitation that finite signs and expressions can never provide more than partial information about the richness of individual detail that is always involved in any real object. All in all, this modicum of tolerance for uncertainty is the very play in the wheels of determinism that provides a significant chance for luck to play a part in the finer steps toward finishing every real objective.

If a slogan is needed to charge this form of propagation, it is only that “Necessity is the mother of invention.” In other words, it is precisely this lack of perfect information that yields the opportunity for novel forms of speciation to develop among finitely informed creatures (FICs), and just this need of perfect information that drives the evolving forms of independent determination and spontaneous creation in any area, no matter how well the arena is circumscribed by the restrictions of signs.

In tracing the echoes of this theme, it is necessary to reflect on the circumstance that degenerate sign relations happen to be perfectly possible in practice, and it is desirable to provide a critical method that can address the facts of their flaws in theoretically insightful terms. Relative to particular environments of interpretation, nothing proscribes the occurrence of sign relations that are defective in any of their various facets, namely: (1) with signs that fail to denote or connote, (2) with interpretants that fail to be faithfully represented or reliably objectified, and (3) with objects that make no impression or remain ineffable in the preferred medium.

A cursory examination of the topic of partiality, as just surveyed, reveals two strains that determine how this quality of murkiness reigns in general. This division depends on the disposition of \(n\!\)-tuples as the individual elements that inhabit an \(n\!\)-place relation.

  1. If the integrity of elementary relations as \(n\!\)-tuples is maintained, then the predicate of partiality characterizes only the state of information that one has, either about elementary relations or about entire relations, or both. Thus, this strain of partiality affects the determination of relations at two distinct levels of their formation:
    1. At the level of elementary relations, it loosens the degree to which \(n\!\)-tuples are pinned down by the signs or expressions of relations, by modifying the name that indicates a relation or the formula that specifies it.
    2. At the level of entire relations, it relaxes the grip that axioms and constraints have on the character of a relation by modifying the strictness or generalizing the form of their application.
  2. If partial \(n\!\)-tuples are admitted, and not permitted to be confused with \((< n)\!\)-tuples, then one arrives at the concept of an \(n\!\)-place relational complex.

Relational Complex?

\(L ~=~ L^{(1)} \cup \ldots \cup L^{(k)}\!\)

Sign Relational Complex?

\(L ~=~ L^{(1)} \cup L^{(2)} \cup L^{(3)}\!\)

It is possible to see two directions of remove that signs and concepts can take in departing from complete specifications of individual objects, and thus to see two dimensions of variation in the requisite varieties of partiality, each of which leads off into its own distinctive realm of abstraction.

  1. In a direction of generality, with general signs and concepts, one loses an amount of certainty as to exactly what object the sign or concept applies at any given moment, and thus this can be recognized as an extensional type of abstraction.
  2. In a direction of vagueness, with vague signs and concepts, one loses a degree of security as to exactly what property the sign or concept implies in the current context, and thus this can be classified as an intensional mode of abstraction.

The first order of business is to draw some distinctions, and at the same time to note some continuities, between the varieties of partiality that remain to be sufficiently clarified and the more mundane brands of partiality that are already familiar enough for present purposes, but lack perhaps only the formality of being recognized under that heading.

The most familiar illustrations of information-theoretic partiality, partial indication, or “signs bearing partial information about objects” occur every time one uses a general name, for example, the name of a class, genus, or set. Almost as commonly, the formula that expresses a logical proposition can be regarded as a partial specification of its logical models or satisfying interpretations. Just as the name of a class or genus can be taken as a partially informed reference or a plural indefinite reference (PIR) to one of its elements or species, so the name of an \(n\!\)-place relation can be viewed as a PIR to one of its elementary relations or \(n\!\)-tuples, and the formula or expression of a proposition can be understood as a PIR to one of its models or satisfying interpretations. For brevity, this variety of referential indetermination can be called the generic partiality of signs as information bearers.

Note. In this discussion I will not systematically distinguish between the logical entity typically called a proposition or a statement and the syntactic entity usually called an expression, formula, or sentence. Instead, I work on the assumption that both types of entity are always involved in everything one proposes and also on the hope that context will determine which aspect of proposing is most apt. For precision, the abstract category of propositions proper will have to be reconstituted as logical equivalence classes of syntactically diverse expressions. For the present, I will use the phrase propositional expression whenever it is necessary to call particular attention to the syntactic entity. Likewise, I will not always separate higher order propositions, that is, propositions about propositions, from their corresponding formulations in the guise of higher order propositional expressions.

Even though partial information is the usual case of information (as rendered by signs about objects) I will continue to use this phrase, for all its informative redundancy, to emphasize the issues of partial definition, determination, and specification that arise under the pervasive theme of partiality.

In speaking of properties and classes of relations, one would like to allude to all relations as the implicit domain of discussion, setting each particular topic against this optimally generous and neutral background. But even before discussion is restricted to a computational framework, the notion of all (of almost anything) proves to be problematic in its very conception, not always amenable to assuming a consistent concept. So the connotation of all relations — really just a passing phrase that pops up in casual and careless discussions — must be relegated to the status of an informal concept, one that takes on definite meaning only when related to a context of constructive examples and formal models.

Thus, in talking sensibly about properties and classes of relations, one is always invoking, explicitly or implicitly, a preconceived domain of discussion or an established universe of discourse \(X,\!\) and in relation to this \(X\!\) one is always talking, expressly or otherwise, about a selected subset \(A \subseteq X\!\) that exhibits the property in question and a binary-valued selector function \(f_A : X \to \mathbb{B}\!\) that picks out the class in question.

When the subject matter of discussion is bounded by a universal set \(X,\!\) out of which all objects referred to must come, then every PIR to an object can be identified with the name or formula (sign or expression) of a subset \(A \subseteq X\!\) or else with that of its selector function \(f_A : X \to \mathbb{B}.\!\) Conceptually, one imagines generating all the objects in \(X\!\) and then selecting the ones that satisfy a definitive test for membership in \(A.\!\)

In a realistic computational framework, however, when the domain of interest is given generatively in a genuine sense of the word, that is, defined solely in terms of the primitive elements and operations that are needed to generate it, and when the resource limitations in actual effect make it impractical to enumerate all the possibilities in advance of selecting the adumbrated subset, then the implementation of PIRs becomes a genuine computational problem.

Considered in its application to \(n\!\)-place relations, the generic brand of partial specification constitutes a rather limited type of partiality, in that every element conceived as falling under the specified relation, no matter how indistinctly indicated, is still envisioned to maintain its full arity and to remain every bit a complete, though unknown, \(n\!\)-tuple. Still, there is a simple way to extend the concept of generic partiality in a significant fashion, achieving a form of PIRs to relations by making use of higher order propositions.

Extending the concept of generic partiality, by iterating the principle on which it is based, leads to higher order propositions about elementary relations, or propositions about relations, as one way to achieve partial specifications of relations, or PIRs to relations.

This direction of generalization expands the scope of PIRs by means of an analogical extension, and can be charted in the following manner. If the sign or expression (name or formula) of an \(n\!\)-place relation can be interpreted as a proposition about \(n\!\)-tuples and thus as a PIR to an elementary relation, then a higher order proposition about \(n\!\)-tuples is a proposition about \(n\!\)-place relations that can be used to formulate a PIR to an \(n\!\)-place relation.

In order to formalize these ideas, it is helpful to have notational devices for switching back and forth among different ways of exemplifying what is abstractly the same contents of information, in particular, for translating among sets, their logical expressions, and their functional indications.

Given a set \(X\!\) and a subset \(A \subseteq X,\!\) let the selector function of \(A\!\) in \(X\!\) be notated as \(A^\sharp\!\) and defined as follows.

\(\begin{array}{lll} A^\sharp : X \to \mathbb{B} & \text{such that} & A^\sharp (x) = 1 \iff x \in A. \end{array}\)

Other names for the same concept, appearing under various notations, are the characteristic function or the indicator function of \(A\!\) in \(X.\!\)

Conversely, given a boolean-valued function \(f : X \to \mathbb{B},\!\) let the selected set of \(f\!\) in \(X\!\) be notated as \(f_\flat\!\) and defined as follows.

\(\begin{array}{lll} f_\flat \subseteq X & \text{such that} & f_\flat = f^{-1}(1) = \{ x \in X : f(x) = 1 \}. \end{array}\)

Other names for the same concept are the fiber, level set, or pre-image of 1 under the mapping \(f : X \to \mathbb{B}.\!\)

Obviously, the relation between these operations is such that the following equations hold.

\(\begin{array}{lll} (A^\sharp)_\flat = A & \text{and} & (f_\flat)^\sharp = f. \end{array}\)

It will facilitate future discussions to go through the details of applying these selective operations to the case of \(n\!\)-place relations. If \(L \subseteq X_1 \times \ldots \times X_n\!\) is an \(n\!\)-place relation, then \(L^\sharp : X_1 \times \ldots \times X_n \to \mathbb{B}\!\) is the selector of \(L\!\) defined as follows.

\(\begin{array}{lll} L^\sharp (x_1, \ldots, x_n) = 1 & \iff & (x_1, \ldots, x_n) \in L. \end{array}\)
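
Over finite universes the selective operations are immediate to realize. The sketch below, in Python with purely illustrative names, constructs the selector of a subset and the selected set of a Boolean-valued function, checks the two equations above, and applies the same construction to a small 3-place relation.

from itertools import product

def selector(A):
    """The selector A# of a subset A: maps a point to 1 iff it lies in A."""
    return lambda x: 1 if x in A else 0

def selected_set(f, X):
    """The selected set f_flat of a Boolean-valued function f on X: f^{-1}(1)."""
    return {x for x in X if f(x) == 1}

# (A#)_flat = A over a small universe X.
X = {1, 2, 3, 4}
A = {2, 4}
assert selected_set(selector(A), X) == A

# For an n-place relation the points are n-tuples, so the same operations apply.
L = {t for t in product((0, 1), repeat=3) if sum(t) == 2}
L_sharp = selector(L)
assert L_sharp((1, 1, 0)) == 1 and L_sharp((1, 0, 0)) == 0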

6.33. Sign Relational Complexes

In a computational framework, indeed, in any constructively analytic and practically applied setting, the problem of working with insufficient information to fully determine one's object is a constant feature that goes with the territory of finite information constructions (FICs). The fineness of detail that is able to be specified by formal symbols is continually bedeviled by the frustrating truncations of every signal to a finite code and by the resistive constrictions of every intention to the restrictive confines of what can actually be conducted. Of course, one tries to get around the more finessible limitations, but the figurative extensions that one hopes to achieve by recourse to quasi-circular definitions and by reversion to parable and hyperbole — all of these tactics appeal to a pre-established aptness of reception on the part of interpreters that begs the very question of a determinate understanding and that risks falling short of the exact attitude needed for success. At any rate, the indirect strategy of approach relies on such large reserves of enthymeme to fuel its course that the grasp of a period to set bounds on its argument and fix a term to its conclusion is often found diverging in ways that both underreach and overreach its object, and well-founded or not the search for a generic method of definition typically ends so completely dumbfounded that it often trails off into the inescapable vacuity of a quasi terminal ellipsis …

This section treats the problems of insufficient information and indeterminate objects under the heading of partializations, using this as a briefer term for the information-theoretic generalizations of the relevant object domains that take the use of indeterminate denotations, or partial determinations of objects, explicitly into account.

In working with partializations or information-theoretic generalizations of any subject matter, one has a choice between two options:

  1. Under the object-theoretic alternative one views the partiality as something attaching to the objects of discussion. Consequently, one operates as if the problems distinctive of the extended subject matter were questions of managing ordinary information about a strange new breed of partial objects.
  2. Under the sign-theoretic alternative one takes the partiality as something affecting only the signs used in discussion. Accordingly, one approaches the task as a matter of handling partial information about ordinary objects, namely, the same domains of objects initially given at the outset of discussion.

But a working maxim of information theory says that “Partial information is your ordinary information.” Applied to the principle regulating the sign-theoretic convention, this means that the adjective partial is swallowed up by the substantive information, so that the ostensibly more general case is always already subsumed within the ordinary case. Because partiality is part and parcel of the usual nature of information, it is a perfectly typical feature of the signs and expressions bearing it to provide normally only partial information about ordinary objects.

The only time when a finite sign or expression can give the appearance of determining a perfectly precise content or a post-finite amount of information, for example, when the symbol \({}^{\backprime\backprime} e {}^{\prime\prime}\!\) is used to denote the number also known as “the unique base of the natural logarithms” — this can only happen when interpreters are prepared, by dint of the information embodied in their prior design and preliminary training, to accept as meaningful and be terminally satisfied with what is still only a finite content, syntactically speaking. Every remaining impression that a perfectly determinate object, an individual in the original sense of the word, has nevertheless been successfully specified — this can only be the aftermath of some prestidigitation, that is, the effect of some pre-arranged consensus, for example, of accepting a finite system of definitions and axioms that are supposed to define the space \(\mathbb{R}\!\) and the element \(e\!\) within it, and of remembering or imagining that an effective proof system has once been able or will yet be able to convince one of its demonstrations.

Ultimately, one must be prepared to work with probability distributions that are defined on entire spaces \(O\!\) of the relevant objects or outcomes. But probability distributions are just a special class of functions \(f : O \to [0, 1] \subseteq \mathbb{R},\!\) where \(\mathbb{R}\!\) is the real line, and this means that the corresponding theory of partializations involves the dual aspect of the domain \(O,\!\) dealing with the functionals defined on it, or the functions that map it into coefficient spaces. And since it is unavoidable in a computational framework, one way or another every type of coefficient information, real or otherwise, must be approached bit by bit. That is, all information is defined in terms of the either-or decisions that must be made to determine it. So, to make a long story short, one might as well approach this dual aspect by starting with the functions \(f : O \to \{ 0, 1 \} = \mathbb{B},\!\) in effect, with the logic of propositions.

I turn now to the question of partially specified relations, or partially informed relations (PIRs), in other words, to the explicit treatment of relations in terms of the information that is logically possessed or actually expressed about them. There seem to be several ways to approach the concept of an \(n\!\)-place PIR and the supporting notion of a partially specified \(n\!\)-tuple. Since the term partial relation is already implicitly in use for the general class of relations that are not necessarily total on any of their domains, I will coin the term pro-relation, on analogy with pronoun and proposition, to denote an expression of information about a relation, a contingent indication that, if and when completed, conceivably points to a particular relation.

One way to deal with partially informed categories of \(n\!\)-place relations is to contemplate incomplete relational forms or schemata. Regarded over the years chiefly in logical and intensional terms, constructs of roughly this type have been variously referred to as rhemes or rhemata (Peirce), unsaturated relations (Frege), or frames (in current AI parlance). Expressed in extensional terms, talking about partially informed categories of \(n\!\)-place relations is tantamount to admitting elementary relations with missing elements. The question is not just syntactic — How to represent an \(n\!\)-tuple with empty places? — but also semantic — How to make sense of an \(n\!\)-tuple with less than \(n\!\) elements?

In order to deal with PIRs in a thoroughly consistent fashion, it appears necessary to contemplate elementary relations that present themselves as being unsaturated (in Frege's sense of that term), in other words, to consider elements of a presumptive product space that in some sense want to be \(n\!\)-tuples or would-be sequences of a certain length, but are currently missing components in some of their places.

To the extent that the issues of partialization become obvious at the level of symbols and can be dealt with by elementary syntactic means, they initially make their appearance in terms of the various ways that data can go missing.

The alternate notation \(a \widehat{~} b\!\) is provided for the ordered pair \((a, b).\!\) This choice of representation for ordered pairs is especially apt in the case of concrete indices and localized addresses, where one wants the lead item to serve as a pointed reminder of the itemized content, as in \(j \widehat{~} X_j = (j, X_j),\!\) and it helps to stress the individuality of each member in the indexed family, as in the following set of equivalent notations.

\(\begin{matrix} G & = & \{ G_j \} & = & \{ j \widehat{~} G_j \} & = & \{ (j, G_j ) \}. \end{matrix}\)

The link device \((\,\widehat{~}~)\!\) works well in any situation where one desires to accentuate the fact that a formal subscript is being reclaimed and elevated to the status of an actual parameter. By way of the operation indicated by the link symbol the index bound to an object term can be rehabilitated as a full-fledged component of an elementary relation, thereby schematically embedding the indicated object in the experiential space of a typical agent.

The form of the link notation is intended to suggest the use of pointers and views in computational frameworks, letting one interpret \(j \widehat{~} x\!\) in several different ways, for example, any one of the following.

\(\begin{array}{lllll} j \widehat{~} x & = & j^\texttt{,}\text{s access to}~ x, & j^\texttt{,}\text{s allusion to}~ x, & j^\texttt{,}\text{s copy of}~ x, \\[4pt] & & j^\texttt{,}\text{s indication of}~ x, & j^\texttt{,}\text{s information on}~ x, & j^\texttt{,}\text{s view of}~ x. \end{array}\)

Presently, the distinction between indirect pointers and direct pointers, that is, between virtual copies and actual views of an objective domain, is not yet relevant here, being a dimension of variation that the discussion is currently abstracting over.

6.34. Set-Theoretic Constructions

The next few sections deal with the informational relationships that exist between \(n\!\)-place relations and the relations of fewer dimensions that arise as their projections. A number of set-theoretic constructions of constant use in this investigation are brought together and described in the present section. Because their intended application is mainly to sign relations and other triadic relations, and since the current focus is restricted to discrete examples of these types, no attempt is made to present these constructions in their most general and elegant fashions, but only to deck them out in the forms that are most readily pressed into immediate service.

The initial operations required to establish the subsequent constructions all have in common the property that they do exactly the opposite of what is normally done in abstracting sets from situations. These operations reconstitute, though still in a generic, schematic, or stereotypical manner, some of the details of concrete context and interpretive nuance that are commonly suppressed in forming sets. Stretching points back along the direction of their initial pointing out, these extensions return to the mix a well-chosen selection of features, putting back in those dimensions that ordinary sets are forced to abstract away and, in their ordination, are bound to treat as distractions.

In setting up these constructions, one typically makes use of two kinds of index sets, in colloquial terms, clipboards and scrapbooks.

  1. The smaller and shorter-term index sets, typically having the form \(I = \{ 1, \ldots, n \},\!\) are used to keep tabs on the terms of finite sets and sequences, unions and intersections, sums and products.

    In this context and elsewhere, the notation \({[n] = \{ 1, \ldots, n \}}\!\) will be used to refer to a standard segment (finite initial subset) of the natural numbers \(\mathbb{N} = \{ 1, 2, 3, \ldots \}.\!\)

  2. The larger and longer-term index sets, typically having the form \(J \subseteq \mathbb{N} = \{ 1, 2, 3, \ldots \},\!\) are used to enumerate families of objects that enjoy a more abiding reference throughout the course of a discussion.

Definition. An indicated set \(j \widehat{~} S\!\) is an ordered pair \(j \widehat{~} S = (j, S),\!\) where \(j \in J\!\) is the indicator of the set and \(S\!\) is the set indicated.

Definition. An indited set \(j \widehat{~} S\!\) extends the incidental and extraneous indication of a set into a constant indictment of its entire membership.

\(\begin{array}{lll} j \widehat{~} S & = & j \widehat{~} \{ j \widehat{~} s : s \in S \} \\[4pt] & = & j \widehat{~} \{ (j, s) : s \in S \} \\[4pt] & = & (j, \{ j \} \times S) \end{array}\)

Notice the difference between these notions and the more familiar concepts of an indexed set, numbered set, and enumerated set. In each of these cases the construct that results is one where each element has a distinctive index attached to it. In contrast, the above indications and indictments attach to the set \(S\!\) as a whole, and respectively to each element of it, the same index number \(j.\!\)

Definition. An indexed set \((S, L)\!\) is constructed from two components: its underlying set \(S\!\) and its indexing relation \(L \subseteq S \times \mathbb{N},\!\) where \(L\!\) is total at \(S\!\) and tubular at \(\mathbb{N}.\!\) It is defined as follows:

\((S, L) ~=~ \textstyle\bigcup \, \{ \{ s \} \times L(s) : s \in S \} ~=~ \{ (s, j) : s \in S, ~ j \in L(s) \}.\!\)

\(L\!\) assigns a unique set of “local habitations” \(L(s)\!\) to each element \(s\!\) in the underlying set \(S.\!\)
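
A concrete rendering of the indicated, indited, and indexed constructions, sketched in Python with names chosen only for illustration, may help to keep the three apart.

def indicated_set(j, S):
    """The indicated set j^S = (j, S): the index attaches to the set as a whole."""
    return (j, frozenset(S))

def indited_set(j, S):
    """The indited set: the index j attaches to the set and to each of its members,
    giving (j, {j} x S)."""
    return (j, frozenset((j, s) for s in S))

def indexed_set(S, L):
    """An indexed set (S, L): L maps each s in S to its set of indices L(s),
    and the result collects all pairs (s, j) with j in L(s)."""
    return {(s, j) for s in S for j in L[s]}

print(indited_set(7, {'a', 'b'}))                          # e.g. (7, frozenset({(7, 'a'), (7, 'b')}))
print(indexed_set({'a', 'b'}, {'a': {1}, 'b': {2, 3}}))    # e.g. {('a', 1), ('b', 2), ('b', 3)}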

Definition. A numbered set \((S, f),\!\) based on the set \(S\!\) and the injective function \({f : S \to \mathbb{N}},\) is defined as follows. …

Definition. An enumerated set \((S, f)\!\) is a numbered set with a bijective \(f.\!\) …

Definition. The \(n\!\)-fold sum (co-product, disjoint union) of the sets \(X_1, \ldots, X_n\!\) is notated and defined as follows:

\(\coprod_{i=1}^n X_i ~=~ X_1 + \ldots + X_n ~=~ 1 \widehat{~} X_1 \cup \ldots \cup n \widehat{~} X_n.\!\)

Definition. The \(n\!\)-fold product (cartesian product) of the sets \(X_1, \ldots, X_n\!\) is notated and defined as follows:

\(\prod_{i=1}^n X_i ~=~ X_1 \times \ldots \times X_n ~=~ \{ (x_1, \ldots, x_n) : x_i \in X_i \}.\!\)

As an alternative definition, the \(n\!\)-tuples of \(\prod_{i=1}^n X_i\!\) can be regarded as sequences of elements from the successive \(X_i\!\) and thus as functions that map \([n]\!\) into the sum of the \(X_i,\!\) namely, as the functions \(f : [n] \to \coprod_{i=1}^n X_i\!\) that obey the condition \(f(i) \in i \widehat{~} X_i.\!\)

\(\prod_{i=1}^n X_i ~=~ X_1 \times \ldots \times X_n ~=~ \{ f : [n] \to \coprod_{i=1}^n X_i ~|~ f(i) \in i \widehat{~} X_i \}.\!\)
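
The contrast between the two constructions, and the reading of \(n\!\)-tuples as functions into the sum, can be sketched computationally as follows (Python, names merely illustrative).

from itertools import product

def disjoint_union(*sets):
    """The n-fold sum X_1 + ... + X_n: each element is tagged with the index
    of its summand, that is, represented by the pair i^x = (i, x)."""
    return {(i, x) for i, X in enumerate(sets, start=1) for x in X}

def product_as_functions(*sets):
    """The n-fold product, each n-tuple regarded as a function f : [n] -> sum
    with f(i) in i^X_i, represented here as a dict {i: (i, x_i)}."""
    return [{i: (i, x) for i, x in enumerate(t, start=1)} for t in product(*sets)]

X1, X2 = {'a', 'b'}, {0, 1}
print(disjoint_union(X1, X2))         # {(1, 'a'), (1, 'b'), (2, 0), (2, 1)}
print(product_as_functions(X1, X2))   # four functions, e.g. {1: (1, 'a'), 2: (2, 0)}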

Viewing these functions as relations \(f \subseteq J \times J \times X,\!\) where \(X = \bigcup_{i=1}^n X_i\!\) …

Another way to view these elements is as triples \(i \widehat{~} j \widehat{~} x\!\) such that \(i = j\!\) …

6.35. Reducibility of Sign Relations

This Section introduces a topic of fundamental importance to the whole theory of sign relations, namely, the question of whether triadic relations are determined by, reducible to, or reconstructible from their dyadic projections.

Suppose \(L \subseteq X \times Y \times Z\!\) is an arbitrary triadic relation and consider the information about \(L\!\) that is provided by collecting its dyadic projections. To formalize this information define the projective triple of \(L\!\) as follows:

\(\mathrm{Proj}^{(2)} L ~=~ (\mathrm{proj}_{12} L, ~ \mathrm{proj}_{13} L, ~ \mathrm{proj}_{23} L).\!\)

If \(L\!\) is visualized as a solid body in the 3-dimensional space \(X \times Y \times Z,\!\) then \(\mathrm{Proj}^{(2)} L\!\) can be visualized as the arrangement or ordered collection of shadows it throws on the \(XY, ~ XZ, ~ YZ\!\) planes, respectively.
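
Computing the projective triple of a finite triadic relation is a routine matter of dropping coordinates. A sketch in Python (illustrative names only), applied to the sign relation of interpreter \(\text{A}\!\) as listed in Table 72.1 below, with quoted signs rendered simply as Python strings:

def proj(L, places):
    """Project a relation L (a set of tuples) onto the given places, 0-indexed."""
    return {tuple(t[p] for p in places) for t in L}

def proj2(L):
    """The projective triple Proj^(2) L = (proj_12 L, proj_13 L, proj_23 L)."""
    return (proj(L, (0, 1)), proj(L, (0, 2)), proj(L, (1, 2)))

# The sign relation of interpreter A (cf. Table 72.1).
LA = {('A', '"A"', '"A"'), ('A', '"A"', '"i"'), ('A', '"i"', '"A"'), ('A', '"i"', '"i"'),
      ('B', '"B"', '"B"'), ('B', '"B"', '"u"'), ('B', '"u"', '"B"'), ('B', '"u"', '"u"')}
LA_OS, LA_OI, LA_SI = proj2(LA)
assert LA_OS == {('A', '"A"'), ('A', '"i"'), ('B', '"B"'), ('B', '"u"')}   # Table 72.2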

Two more set-theoretic constructions are worth introducing at this point, in particular for describing the source and target domains of the projection operator \(\mathrm{Proj}^{(2)}.\!\)

The set of subsets of a set \(S\!\) is called the power set of \(S.\!\) This object is denoted by either of the forms \(\mathrm{Pow}(S)\!\) or \(2^S\!\) and defined as follows:

\(\mathrm{Pow}(S) ~=~ 2^S ~=~ \{ T : T \subseteq S \}.\!\)

The power set notation can be used to provide an alternative description of relations. In the case where \(S\!\) is a cartesian product, say \({S = X_1 \times \ldots \times X_n},\!\) then each \(n\!\)-place relation \(L\!\) described as a subset of \(S,\!\) say \(L \subseteq X_1 \times \ldots \times X_n,\!\) is equally well described as an element of \(\mathrm{Pow}(S),\!\) in other words, as \(L \in \mathrm{Pow}(X_1 \times \ldots \times X_n).\!\)

The set of triples of dyadic relations, with pairwise cartesian products chosen in a pre-arranged order from a triple of three sets \((X, Y, Z),\!\) is called the dyadic explosion of \(X \times Y \times Z.\!\) This object is denoted \(\mathrm{Explo}(X, Y, Z ~|~ 2),\!\) read as the explosion of \(X \times Y \times Z\!\) by twos, or more simply as \(X, Y, Z ~\mathrm{choose}~ 2,\!\) and defined as follows:

\(\mathrm{Explo}(X, Y, Z ~|~ 2) ~=~ \mathrm{Pow}(X \times Y) \times \mathrm{Pow}(X \times Z) \times \mathrm{Pow}(Y \times Z).\!\)

This domain is defined well enough to serve the immediate purposes of this section, but later it will become necessary to examine its construction more closely.

By means of these constructions the operation that forms \(\mathrm{Proj}^{(2)} L\!\) for each triadic relation \(L \subseteq X \times Y \times Z\!\) can be expressed as a function:

\(\mathrm{Proj}^{(2)} : \mathrm{Pow}(X \times Y \times Z) \to \mathrm{Explo}(X, Y, Z ~|~ 2).\!\)

In this setting the issue of whether triadic relations are reducible to or reconstructible from their dyadic projections, both in general and in specific cases, can be identified with the question of whether \(\mathrm{Proj}^{(2)}\!\) is injective. The mapping \(\mathrm{Proj}^{(2)}\!\) is said to preserve information about the triadic relations \(L \in \mathrm{Pow}(X \times Y \times Z)\!\) if and only if it is injective; otherwise one says that some loss of information has occurred in taking the projections. Given a specific instance of a triadic relation \(L \in \mathrm{Pow}(X \times Y \times Z),\!\) it can be said that \(L\!\) is determined by (reducible to or reconstructible from) its dyadic projections if and only if \((\mathrm{Proj}^{(2)})^{-1}(\mathrm{Proj}^{(2)}L)\!\) is the singleton set \(\{ L \}.\!\) Otherwise, there exists a distinct \(L'\!\) such that \(\mathrm{Proj}^{(2)}L = \mathrm{Proj}^{(2)}L',\!\) and in this case \(L\!\) is said to be irreducibly triadic or genuinely triadic. Notice that irreducible or genuine triadic relations, when they exist, naturally occur in sets of two or more, the whole collection of them being equated or confounded with one another under \(\mathrm{Proj}^{(2)}.\!\)
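
Over very small domains the question can even be settled by brute force, enumerating every triadic relation over \(X \times Y \times Z\!\) and counting how many of them share a given projective triple. The following sketch (Python, names chosen for illustration, exponential in \(|X \times Y \times Z|\!\) and so practical only for tiny examples) does just that; for spaces the size of \(O \times S \times I\!\) in the \(\text{A}\!\) and \(\text{B}\!\) example it is no longer feasible, and the argument given later in this Section serves instead.

from itertools import chain, combinations, product

def proj2(L):
    """Projective triple of a triadic relation L, given as a set of triples."""
    p = lambda ps: frozenset(tuple(t[i] for i in ps) for t in L)
    return (p((0, 1)), p((0, 2)), p((1, 2)))

def preimage_size(L, X, Y, Z):
    """Count the triadic relations over X x Y x Z sharing Proj^(2) L.
    L is dyadically reducible iff this count is exactly 1."""
    cells = list(product(X, Y, Z))
    target = proj2(L)
    candidates = chain.from_iterable(combinations(cells, r) for r in range(len(cells) + 1))
    return sum(1 for M in candidates if proj2(set(M)) == target)

# Example: for the parity relation L_0 of a later Section over B^3,
# preimage_size(L_0, (0, 1), (0, 1), (0, 1)) > 1, so L_0 is not reducible.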

The next series of Tables illustrates the operation of \(\mathrm{Proj}^{(2)}\!\) by means of its actions on the sign relations \(L_\text{A}\!\) and \(L_\text{B}.\!\) For ease of reference, Tables 72.1 and 73.1 repeat the contents of Tables 1 and 2, respectively, while the dyadic relations comprising \(\mathrm{Proj}^{(2)}L_\text{A}\!\) and \(\mathrm{Proj}^{(2)}L_\text{B}\!\) are shown in Tables 72.2 to 72.4 and Tables 73.2 to 73.4, respectively.


\(\text{Table 72.1} ~~ \text{Sign Relation of Interpreter A}~\!\)
\(\text{Object}\!\) \(\text{Sign}\!\) \(\text{Interpretant}\!\)

\(\begin{matrix} \text{A} \\ \text{A} \\ \text{A} \\ \text{A} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} \text{B} \\ \text{B} \\ \text{B} \\ \text{B} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \end{matrix}\)


\(\text{Table 72.2} ~~ \text{Dyadic Projection} ~ L(\text{A})_{OS}\!\)
\(\text{Object}\!\) \(\text{Sign}\!\)

\(\begin{matrix} \text{A} \\ \text{A} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} \text{B} \\ \text{B} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \end{matrix}\)


\(\text{Table 72.3} ~~ \text{Dyadic Projection} ~ L(\text{A})_{OI}\!\)
\(\text{Object}\!\) \(\text{Interpretant}\!\)

\(\begin{matrix} \text{A} \\ \text{A} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} \text{B} \\ \text{B} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \end{matrix}\)


\(\text{Table 72.4} ~~ \text{Dyadic Projection} ~ L(\text{A})_{SI}\!\)
\(\text{Sign}\!\) \(\text{Interpretant}\!\)

\(\begin{matrix} {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \end{matrix}\)


\(\text{Table 73.1} ~~ \text{Sign Relation of Interpreter B}~\!\)
\(\text{Object}\!\) \(\text{Sign}\!\) \(\text{Interpretant}\!\)

\(\begin{matrix} \text{A} \\ \text{A} \\ \text{A} \\ \text{A} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} \text{B} \\ \text{B} \\ \text{B} \\ \text{B} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \end{matrix}\)


\(\text{Table 73.2} ~~ \text{Dyadic Projection} ~ L(\text{B})_{OS}\!\)
\(\text{Object}\!\) \(\text{Sign}\!\)

\(\begin{matrix} \text{A} \\ \text{A} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} \text{B} \\ \text{B} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \end{matrix}\)


\(\text{Table 73.3} ~~ \text{Dyadic Projection} ~ L(\text{B})_{OI}\!\)
\(\text{Object}\!\) \(\text{Interpretant}\!\)

\(\begin{matrix} \text{A} \\ \text{A} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} \text{B} \\ \text{B} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \end{matrix}\)


\(\text{Table 73.4} ~~ \text{Dyadic Projection} ~ L(\text{B})_{SI}\!\)
\(\text{Sign}\!\) \(\text{Interpretant}\!\)

\(\begin{matrix} {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \end{matrix}\)


A comparison of the corresponding projections in \(\mathrm{Proj}^{(2)} L(\text{A})\!\) and \(\mathrm{Proj}^{(2)} L(\text{B})\!\) shows that the distinction between the triadic relations \(L(\text{A})\!\) and \(L(\text{B})\!\) is preserved by \(\mathrm{Proj}^{(2)},\!\) and this circumstance allows one to say that this much information, at least, can be derived from the dyadic projections. However, to say that a triadic relation \(L \in \mathrm{Pow} (O \times S \times I)\!\) is reducible in this sense, it is necessary to show that no distinct \(L' \in \mathrm{Pow} (O \times S \times I)\!\) exists such that \(\mathrm{Proj}^{(2)} L = \mathrm{Proj}^{(2)} L',\!\) and this may require a rather more exhaustive or comprehensive investigation of the space \(\mathrm{Pow} (O \times S \times I).\!\)

As it happens, each of the relations \(L = L(\text{A})\!\) or \(L = L(\text{B})\!\) is uniquely determined by its projective triple \(\mathrm{Proj}^{(2)} L.\!\) This can be seen as follows.

Consider any coordinate position \((s, i)\!\) in the plane \(S \times I.\!\) If \((s, i)\!\) is not in \(L_{SI}\!\) then there can be no element \((o, s, i)\!\) in \(L,\!\) therefore we may restrict our attention to positions \((s, i)\!\) in \(L_{SI},\!\) knowing that there exist at least \(|L_{SI}| = 8\!\) elements in \(L,\!\) and seeking only to determine what objects \(o\!\) exist such that \((o, s, i)\!\) is an element in the objective fiber of \((s, i).\!\) In other words, for what \({o \in O}\!\) is \((o, s, i) \in \mathrm{proj}_{SI}^{-1}((s, i))?\!\) The fact that \(L_{OS}\!\) has exactly one element \((o, s)\!\) for each coordinate \(s \in S\!\) and that \(L_{OI}\!\) has exactly one element \((o, i)\!\) for each coordinate \(i \in I,\!\) plus the “coincidence” of it being the same \(o\!\) at any one choice for \((s, i),\!\) tells us that \(L\!\) has just the one element \((o, s, i)\!\) over each point of \(S \times I.\!\) This proves that both \(L(\text{A})\!\) and \(L(\text{B})\!\) are reducible in an informational sense to triples of dyadic relations, that is, they are dyadically reducible.

6.36. Irreducibly Triadic Relations

Most likely, any triadic relation \(L \subseteq X \times Y \times Z\!\) imposed on arbitrary domains \(X, Y, Z\!\) could find use as a sign relation, provided it embodies any constraint at all, in other words, so long as it forms a proper subset of its total space, a relationship symbolized by writing \(L \subset X \times Y \times Z.\!\) However, triadic relations of this sort are not guaranteed to form the most natural examples of sign relations.

In order to show what an irreducibly triadic relation looks like, this Section presents a pair of triadic relations that have the same dyadic projections, and thus cannot be distinguished from each other on this basis alone. As it happens, these examples of triadic relations can be discussed independently of sign relational concerns, but structures of their general ilk are frequently found arising in signal-theoretic applications, and they are undoubtedly closely associated with problems of reliable coding and communication.

Tables 74.1 and 75.1 show a pair of irreducibly triadic relations \(L_0\!\) and \(L_1,\!\) respectively. Tables 74.2 to 74.4 and Tables 75.2 to 75.4 show the dyadic relations comprising \(\mathrm{Proj}^{(2)} L_0\!\) and \(\mathrm{Proj}^{(2)} L_1,\!\) respectively.


\(\text{Table 74.1} ~~ \text{Relation} ~ L_0 =\{ (x, y, z) \in \mathbb{B}^3 : x + y + z = 0 \}\!\)
\(x\!\) \(y\!\) \(z\!\)
\(\begin{matrix}0\\0\\1\\1\end{matrix}\) \(\begin{matrix}0\\1\\0\\1\end{matrix}\) \(\begin{matrix}0\\1\\1\\0\end{matrix}\)


\(\text{Table 74.2} ~~ \text{Dyadic Projection} ~ (L_0)_{12}\!\)
\(x\!\) \(y\!\)
\(\begin{matrix}0\\0\\1\\1\end{matrix}\) \(\begin{matrix}0\\1\\0\\1\end{matrix}\)


\(\text{Table 74.3} ~~ \text{Dyadic Projection} ~ (L_0)_{13}\!\)
\(x\!\) \(z\!\)
\(\begin{matrix}0\\0\\1\\1\end{matrix}\) \(\begin{matrix}0\\1\\1\\0\end{matrix}\)


\(\text{Table 74.4} ~~ \text{Dyadic Projection} ~ (L_0)_{23}\!\)
\(y\!\) \(z\!\)
\(\begin{matrix}0\\1\\0\\1\end{matrix}\) \(\begin{matrix}0\\1\\1\\0\end{matrix}\)


\(\text{Table 75.1} ~~ \text{Relation} ~ L_1 =\{ (x, y, z) \in \mathbb{B}^3 : x + y + z = 1 \}\!\)
\(x\!\) \(y\!\) \(z\!\)
\(\begin{matrix}0\\0\\1\\1\end{matrix}\) \(\begin{matrix}0\\1\\0\\1\end{matrix}\) \(\begin{matrix}1\\0\\0\\1\end{matrix}\)


\(\text{Table 75.2} ~~ \text{Dyadic Projection} ~ (L_1)_{12}\!\)
\(x\!\) \(y\!\)
\(\begin{matrix}0\\0\\1\\1\end{matrix}\) \(\begin{matrix}0\\1\\0\\1\end{matrix}\)


\(\text{Table 75.3} ~~ \text{Dyadic Projection} ~ (L_1)_{13}\!\)
\(x\!\) \(z\!\)
\(\begin{matrix}0\\0\\1\\1\end{matrix}\) \(\begin{matrix}1\\0\\0\\1\end{matrix}\)


\(\text{Table 75.4} ~~ \text{Dyadic Projection} ~ (L_1)_{23}\!\)
\(y\!\) \(z\!\)
\(\begin{matrix}0\\1\\0\\1\end{matrix}\) \(\begin{matrix}1\\0\\0\\1\end{matrix}\)


The relations \(L_0, L_1 \subseteq \mathbb{B}^3\!\) are defined by the following equations, with algebraic operations taking place as in \(\text{GF}(2),\!\) that is, with \(1 + 1 = 0.\!\)

  1. The triple \((x, y, z)\!\) in \(\mathbb{B}^3\!\) belongs to \(L_0\!\) if and only if \({x + y + z = 0}.\!\) Thus, \(L_0\!\) is the set of even-parity bit vectors, with \(x + y = z.\!\)
  2. The triple \((x, y, z)\!\) in \(\mathbb{B}^3\!\) belongs to \(L_1\!\) if and only if \({x + y + z = 1}.\!\) Thus, \(L_1\!\) is the set of odd-parity bit vectors, with \(x + y = z + 1.\!\)

The corresponding projections of \(\mathrm{Proj}^{(2)} L_0\!\) and \(\mathrm{Proj}^{(2)} L_1\!\) are identical. In fact, all six projections, taken at the level of logical abstraction, constitute precisely the same dyadic relation, isomorphic to the whole of \(\mathbb{B} \times \mathbb{B}\!\) and expressed by the universal constant proposition \(1 : \mathbb{B} \times \mathbb{B} \to \mathbb{B}.\!\) In summary:

\(\begin{array}{lllll} (L_0)_{12} & = & (L_1)_{12} & \cong & \mathbb{B}^2 \\[4pt] (L_0)_{13} & = & (L_1)_{13} & \cong & \mathbb{B}^2 \\[4pt] (L_0)_{23} & = & (L_1)_{23} & \cong & \mathbb{B}^2 \end{array}\)

Thus, \(L_0\!\) and \(L_1\!\) are both examples of irreducibly triadic relations.
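
The whole comparison can be verified mechanically. A short sketch in Python (illustrative only) constructs \(L_0\!\) and \(L_1,\!\) takes their dyadic projections, and confirms that the projective triples coincide while the relations themselves differ.

from itertools import product

B = (0, 1)
L0 = {t for t in product(B, repeat=3) if sum(t) % 2 == 0}   # even parity, Table 74.1
L1 = {t for t in product(B, repeat=3) if sum(t) % 2 == 1}   # odd parity, Table 75.1

def proj2(L):
    p = lambda ps: {tuple(t[i] for i in ps) for t in L}
    return (p((0, 1)), p((0, 2)), p((1, 2)))

assert L0 != L1                                                  # distinct triadic relations
assert proj2(L0) == proj2(L1)                                    # identical projective triples
assert all(P == set(product(B, repeat=2)) for P in proj2(L0))    # each projection is B x B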

6.37. Propositional Types

This Section describes a formal system of type expressions that are analogous to formulas of propositional logic and discusses their use as a calculus of predicates for classifying, analyzing, and drawing typical inferences about \(k\!\)-place relations, in particular, for reasoning about the results of operations on relations and about the properties of their transformations and combinations.

Definition. Given a cartesian product \(X \times Y,\!\) an ordered pair \((x, y) \in X \times Y\!\) has the type \(S \cdot T,\!\) written \((x, y) : S \cdot T,\!\) if and only if \(x \in S \subseteq X\!\) and \(y \in T \subseteq Y.\!\) Notice that an ordered pair may have many types.

Definition. A relation \(L \subseteq X \times Y\!\) has type \(S \cdot T,\!\) written \(L : S \cdot T,\!\) if and only if every \((x, y) \in L\!\) has type \(S \cdot T,\!\) that is, if and only if \(L \subseteq S \times T,\!\) where \(S \subseteq X\!\) and \(T \subseteq Y.\!\)
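
Checking a type assignment of this kind is a simple containment test. A sketch, with names chosen only for illustration:

def has_type(L, S, T):
    """A dyadic relation L has type S.T iff every (x, y) in L satisfies
    x in S and y in T, that is, iff L is a subset of S x T."""
    return all(x in S and y in T for (x, y) in L)

L = {(1, 'a'), (2, 'b')}
assert has_type(L, {1, 2, 3}, {'a', 'b'})      # L : S.T holds
assert not has_type(L, {1}, {'a', 'b'})        # fails at the first place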

Notation. Parentheses in the Courier or Teletype font, \(\texttt{( ... )},\!\) are used to indicate the negations of propositions and the complements of sets. When a \(k\!\)-place relation \(L\!\) is initially given relative to the domains \(X_1, \ldots, X_k\!\) and a set \(S\!\) is mentioned as a subset of one of them, say \(S \subseteq X_j,\!\) then the relevant complement of \(S\!\) in such a context is the one taken relative to \(X_j.\!\) Thus we have the following equivalents.

\(\texttt{(} S \texttt{)} ~=~ -\!S ~=~ X_j - S\!\)

In case of ambiguities that are not resolved by context, indices may be used as follows.

\(\texttt{(} S \texttt{)}_j ~=~ X_j - S\!\)

In any case, the intended term can always be written out in full, as \(X_j - S.\!\)


Fragments

Consider a relation \(L\!\) of the following type.

\(L : \texttt{(} S \texttt{(} T \texttt{))}\!\)

[The following piece occurs in § 6.35.]

The set of triples of dyadic relations, with pairwise cartesian products chosen in a pre-arranged order from a triple of three sets \((X, Y, Z),\!\) is called the dyadic explosion of \(X \times Y \times Z.\!\) This object is denoted \(\mathrm{Explo}(X, Y, Z ~|~ 2),\!\) read as the explosion of \(X \times Y \times Z\!\) by twos, or more simply as \(X, Y, Z ~\mathrm{choose}~ 2,\!\) and defined as follows:

\(\mathrm{Explo}(X, Y, Z ~|~ 2) ~=~ \mathrm{Pow}(X \times Y) \times \mathrm{Pow}(X \times Z) \times \mathrm{Pow}(Y \times Z)\!\)

This domain is defined well enough to serve the immediate purposes of this section, but later it will become necessary to examine its construction more closely.

[Maybe the following piece belongs there, too.]

Just to provide a hint of what's at stake, consider the following suggestive identity:

\(2^{XY} \times 2^{XZ} \times 2^{YZ} ~=~ 2^{(XY + XZ + YZ)}\!\)

What sense would have to be found for the sums on the right in order to interpret this equation as a set theoretic isomorphism? Answering this question requires the concept of a co-product, roughly speaking, a “disjointed union” of sets. By the time this discussion has detailed the forms of indexing necessary to maintain these constructions, it should have become patently obvious that the forms of analysis and synthesis that are called on to achieve the putative reductions to and reconstructions from dyadic relations in actual fact never really leave the realm of genuinely triadic relations, but merely reshuffle its contents in various convenient fashions.
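
One concrete way to see the correspondence, sketched here under the tagging convention used above for sums (Python, names merely illustrative): a triple of dyadic relations corresponds to a single subset of the disjoint union of the three planes, obtained by tagging each pair with the plane it comes from.

from itertools import product

X, Y, Z = {0, 1}, {'a'}, {'p', 'q'}
XY, XZ, YZ = set(product(X, Y)), set(product(X, Z)), set(product(Y, Z))

def glue(rxy, rxz, ryz):
    """Send a triple of dyadic relations to one subset of XY + XZ + YZ
    by tagging each pair with the plane it comes from."""
    return {(1, p) for p in rxy} | {(2, p) for p in rxz} | {(3, p) for p in ryz}

def split(s):
    """The inverse map: sort the tagged pairs back into their planes."""
    return ({p for (i, p) in s if i == 1},
            {p for (i, p) in s if i == 2},
            {p for (i, p) in s if i == 3})

# 2^|XY| * 2^|XZ| * 2^|YZ| = 2^(|XY| + |XZ| + |YZ|)
assert 2**len(XY) * 2**len(XZ) * 2**len(YZ) == 2**(len(XY) + len(XZ) + len(YZ))
triple = ({(0, 'a')}, set(), {('a', 'p')})
assert split(glue(*triple)) == triple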

6.38. Considering the Source

There are several ways to contemplate the supplementation of signs, the sorts of augmentation that are crucial to meaning in the case of indices. Some approaches are analytic, in the sense that they regard signs as derivative compounds and try to break up the unitary concept of an individual sign into a congeries of seemingly more real, more actual, or more determinate sign instances. Other approaches are synthetic, in the sense that they accept a given collection of signs at face value and try to reconstruct more objective realities through the formation of abstract categories on this basis.

6.38.1. Attributed Signs

One type of analytic method takes it as a maxim for the logic of context that “Every sign or text is indexed by the context in which it occurs”. This means that all signs, including indices, are themselves indexed, though initially only tacitly, by the objective situation, the syntactic context, and the actual interpreter that makes use of them.

To begin formalizing this brand of supplementation, it is necessary to mark salient aspects of the situational, contextual, and inclusively interpretive features of sign usage that were previously held tacit. In effect, signs once regarded as primitive objects need to be newly analyzed as categorical abstractions that cover multitudes of existential sign instances or signs in use.

One way to develop these dimensions of the \(\text{A}\!\) and \(\text{B}\!\) example is to articulate the interpretive parameters of signs by means of subscripts or superscripts attached to the signs or their quotations, in this way forming a corresponding set of situated signs or attributed remarks.

The attribution of signs to their interpreters preserves the original object domain but produces an expanded syntactic domain, a corresponding set of attributed signs. In our \(\text{A}\!\) and \(\text{B}\!\) example this gives the following domains.

\(\begin{array}{ccl} O & = & \{ \text{A}, \text{B} \} \\[6pt] S & = & \{ {}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{A}}, {}^{\backprime\backprime} \text{B} {}^{\prime\prime\text{A}}, {}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{A}}, {}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{A}}, {}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{B}}, {}^{\backprime\backprime} \text{B} {}^{\prime\prime\text{B}}, {}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{B}}, {}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{B}} \} \\[6pt] I & = & \{ {}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{A}}, {}^{\backprime\backprime} \text{B} {}^{\prime\prime\text{A}}, {}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{A}}, {}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{A}}, {}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{B}}, {}^{\backprime\backprime} \text{B} {}^{\prime\prime\text{B}}, {}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{B}}, {}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{B}} \} \end{array}\)
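
A sketch of the attribution step, in Python with names chosen only for illustration: each sign is tagged with the interpreter who uses it, and, following the rule suggested by Table 76 below, every attributed sign of an object is paired with every attributed sign of the same object as its interpretant.

# Which signs each interpreter uses for each object (from the A and B example).
uses = {
    'A': {'A': {'"A"', '"i"'}, 'B': {'"B"', '"u"'}},
    'B': {'A': {'"A"', '"u"'}, 'B': {'"B"', '"i"'}},
}

def attributed_signs(o):
    """All attributed signs of the object o: each sign tagged with its source."""
    return {(s, j) for j, table in uses.items() for s in table[o]}

# The attributed sign relation At(A, B), following the rule suggested by Table 76:
# every attributed sign of o is paired with every attributed sign of o.
At = {(o, s, i) for o in ('A', 'B')
                for s in attributed_signs(o)
                for i in attributed_signs(o)}

assert len(At) == 32    # 4 attributed signs per object, hence 16 triples per object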

Table 76 displays the results of indexing every sign of the \(\text{A}\!\) and \(\text{B}\!\) example with a superscript indicating its source or exponent, namely, the interpreter who actively communicates or transmits the sign. The operation of attribution produces two new sign relations, but it turns out that both sign relations have the same form and content, so a single Table will do. The new sign relation generated by this operation will be denoted \(\mathrm{At} (\text{A}, \text{B})\!\) and called the attributed sign relation for the \(\text{A}\!\) and \(\text{B}\!\) example.


\(\text{Table 76.} ~~ \text{Attributed Sign Relation for Interpreters A and B}\!\)
\(\text{Object}\!\) \(\text{Sign}\!\) \(\text{Interpretant}\!\)

\(\begin{matrix} \text{A} \\ \text{A} \\ \text{A} \\ \text{A} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{A}} \\ {}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{A}} \\ {}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{A}} \\ {}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{A}} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{A}} \\ {}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{B}} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{A}} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{B}} \end{matrix}\)

\(\begin{matrix} \text{A} \\ \text{A} \\ \text{A} \\ \text{A} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{B}} \\ {}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{B}} \\ {}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{B}} \\ {}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{B}} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{A}} \\ {}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{B}} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{A}} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{B}} \end{matrix}\)

\(\begin{matrix} \text{A} \\ \text{A} \\ \text{A} \\ \text{A} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{A}} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{A}} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{A}} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{A}} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{A}} \\ {}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{B}} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{A}} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{B}} \end{matrix}\)

\(\begin{matrix} \text{A} \\ \text{A} \\ \text{A} \\ \text{A} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{B}} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{B}} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{B}} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{B}} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{A}} \\ {}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{B}} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{A}} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{B}} \end{matrix}\)

\(\begin{matrix} \text{B} \\ \text{B} \\ \text{B} \\ \text{B} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{B} {}^{\prime\prime\text{A}} \\ {}^{\backprime\backprime} \text{B} {}^{\prime\prime\text{A}} \\ {}^{\backprime\backprime} \text{B} {}^{\prime\prime\text{A}} \\ {}^{\backprime\backprime} \text{B} {}^{\prime\prime\text{A}} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{B} {}^{\prime\prime\text{A}} \\ {}^{\backprime\backprime} \text{B} {}^{\prime\prime\text{B}} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{B}} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{A}} \end{matrix}\)

\(\begin{matrix} \text{B} \\ \text{B} \\ \text{B} \\ \text{B} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{B} {}^{\prime\prime\text{B}} \\ {}^{\backprime\backprime} \text{B} {}^{\prime\prime\text{B}} \\ {}^{\backprime\backprime} \text{B} {}^{\prime\prime\text{B}} \\ {}^{\backprime\backprime} \text{B} {}^{\prime\prime\text{B}} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{B} {}^{\prime\prime\text{A}} \\ {}^{\backprime\backprime} \text{B} {}^{\prime\prime\text{B}} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{B}} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{A}} \end{matrix}\)

\(\begin{matrix} \text{B} \\ \text{B} \\ \text{B} \\ \text{B} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{B}} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{B}} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{B}} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{B}} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{B} {}^{\prime\prime\text{A}} \\ {}^{\backprime\backprime} \text{B} {}^{\prime\prime\text{B}} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{B}} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{A}} \end{matrix}\)

\(\begin{matrix} \text{B} \\ \text{B} \\ \text{B} \\ \text{B} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{A}} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{A}} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{A}} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{A}} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{B} {}^{\prime\prime\text{A}} \\ {}^{\backprime\backprime} \text{B} {}^{\prime\prime\text{B}} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{B}} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{A}} \end{matrix}\)


Thus informed, the semiotic equivalence relation for interpreter \(\text{A}\!\) yields the following semiotic equations.

\(\begin{array}{lllllll} [{}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{A}}]_\text{A} & = & [{}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{B}}]_\text{A} & = & [{}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{A}}]_\text{A} & = & [{}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{B}}]_\text{A} \\[6pt] \text{or} ~~ {}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{A}} & =_\text{A} & {}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{B}} & =_\text{A} & {}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{A}} & =_\text{A} & {}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{B}} \end{array}\)

In comparison, the semiotic equivalence relation for interpreter \(\text{B}\!\) yields the following semiotic equations.

\(\begin{array}{lllllll} [{}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{A}}]_\text{B} & = & [{}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{B}}]_\text{B} & = & [{}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{A}}]_\text{B} & = & [{}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{B}}]_\text{B} \\[6pt] \text{or} ~~ {}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{A}} & =_\text{B} & {}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{B}} & =_\text{B} & {}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{A}} & =_\text{B} & {}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{B}} \end{array}\)

Consequently, the semiotic equivalence relations for \(\text{A}\!\) and \(\text{B}\!\) both induce the same semiotic partition on \(S,\!\) namely, the following.

\( \{ \{ {}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{A}}, {}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{B}}, {}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{A}}, {}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{B}} \}~,~\{ {}^{\backprime\backprime} \text{B} {}^{\prime\prime\text{A}}, {}^{\backprime\backprime} \text{B} {}^{\prime\prime\text{B}}, {}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{B}}, {}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{A}} \} \}.\! \)

By means of a simple attribution step, a certain level of congruity has been reached in the community of interpretation composed of \(\text{A}\!\) and \(\text{B}.\!\) This new-found agreement on what is abstractly a single semiotic equivalence relation means that its equivalence classes reconstruct the structure of the object domain within the parts of the corresponding semiotic partition. This allows a measure of objectivity or inter-subjectivity to be predicated of the sign relation's representation.
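
By way of illustration, here is a minimal computational sketch of the attribution step, written in Python with hypothetical names, and assuming the semiotic classes of the original sign relations, namely \([\text{A}]_\text{A} = \{ {}^{\backprime\backprime} \text{A} {}^{\prime\prime}, {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \},\) \([\text{B}]_\text{A} = \{ {}^{\backprime\backprime} \text{B} {}^{\prime\prime}, {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \},\) \([\text{A}]_\text{B} = \{ {}^{\backprime\backprime} \text{A} {}^{\prime\prime}, {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \},\) \([\text{B}]_\text{B} = \{ {}^{\backprime\backprime} \text{B} {}^{\prime\prime}, {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \}.\) It pools the attributed signs contributed by each interpreter for each object and confirms that both interpreters induce the semiotic partition displayed above.

<pre>
# Sketch only: the semiotic classes below are assumptions read off from
# the A and B example; all names are hypothetical.
classes = {
    "A": {"A": {'"A"', '"i"'}, "B": {'"B"', '"u"'}},   # interpreter A's classes
    "B": {"A": {'"A"', '"u"'}, "B": {'"B"', '"i"'}},   # interpreter B's classes
}

def attribute(sign, interpreter):
    """Form an attributed sign, e.g. '"A"' used by A becomes '"A"^A'."""
    return sign + "^" + interpreter

# For each object, pool the attributed signs contributed by every interpreter.
attributed_class = {
    obj: {attribute(s, y) for y in classes for s in classes[y][obj]}
    for obj in ("A", "B")
}

# The attributed sign relation At(A, B): each object is related to every
# pair of attributed signs drawn from its pooled class (cf. Table 76).
At = {(obj, s, i)
      for obj, cls in attributed_class.items()
      for s in cls for i in cls}

print(sorted(attributed_class["A"]))   # '"A"^A', '"A"^B', '"i"^A', '"u"^B'
print(sorted(attributed_class["B"]))   # '"B"^A', '"B"^B', '"i"^B', '"u"^A'
print(len(At))                         # 32 triples: 2 objects x 4 x 4
</pre>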

An instance of \(\text{Y}\!\) using \({}^{\backprime\backprime} \text{X} {}^{\prime\prime}\!\) is considered to be an objective event, the kind of happening to which all suitably placed observers can point, and adverting to an occurrence of \({}^{\backprime\backprime} \text{X} {}^{\prime\prime\text{Y}}\!\) is more specific and less vague than resorting to instances of \({}^{\backprime\backprime} \text{X} {}^{\prime\prime}\!\) as if they were issued by anonymous sources. The situated sign \({}^{\backprime\backprime} \text{X} {}^{\prime\prime\text{Y}}\!\) is a wider sign than \({}^{\backprime\backprime} \text{X} {}^{\prime\prime}\!\) in the sense that it takes in a broader field of view on the interpretive situation and provides more information about the context of use. As to the reception of attributed remarks, the interpreter that can recognize signs of the form \({}^{\backprime\backprime} \text{X} {}^{\prime\prime\text{Y}}\!\) is one that knows what it means to consider the source.

It is best to read the superscripts on attributed signs as accentuations and integral parts of the quotation marks, taking \({}^{\backprime\backprime} \ldots {}^{\prime\prime\text{A}}\!\) and \({}^{\backprime\backprime} \ldots {}^{\prime\prime\text{B}}\!\) as variant inflections of \({}^{\backprime\backprime} \ldots {}^{\prime\prime}.\!\) Thus, I can refer to the sign \({}^{\backprime\backprime} \text{X} {}^{\prime\prime\text{Y}}\!\) just as I would refer to the sign \({}^{\backprime\backprime} \text{X} {}^{\prime\prime}\!\) in the present informal context, without any additional marks of quotation.

Taking a cue from this usage, the ordinary quotes that I use to mark salient relationships of signs and expressions with respect to the informal context can now be regarded as quotes that I myself, operating as a casual interpreter, tacitly index. Even without knowing the complete sign relation that I have in mind, the one that I presumably use to conduct this discussion, the sign relation that \({}^{\backprime\backprime} \text{I} {}^{\prime\prime}\!\) represents can nevertheless be partially formalized by means of a certain functional equation, namely, the following equation between semantic functions:

\({}^{\backprime\backprime} \ldots {}^{\prime\prime} ~=~ {}^{\backprime\backprime} \ldots {}^{\prime\prime\text{I}}\!\)

By way of vocal expression, the attributed sign \({}^{\backprime\backprime} \text{X} {}^{\prime\prime\text{Y}}\!\) can be pronounced in any of the following ways.

\(\begin{array}{l} {}^{\backprime\backprime} \text{X} {}^{\prime\prime} ~\text{quoth}~ \text{Y} \\[4pt] {}^{\backprime\backprime} \text{X} {}^{\prime\prime} ~\text{said by}~ \text{Y} \\[4pt] {}^{\backprime\backprime} \text{X} {}^{\prime\prime} ~\text{used by}~ \text{Y} \end{array}\)

To facilitate visual imagery, each token of the type \({}^{\backprime\backprime} \text{X} {}^{\prime\prime\text{Y}}\!\) can be pictured as a specific occasion where the sign \({}^{\backprime\backprime} \text{X} {}^{\prime\prime}\!\) is being used or issued by the interpreter \(\text{Y}.\!\)

The construal of objects as classes of attributed signs leads to a measure of inter-subjective agreement between the interpreters \(\text{A}\!\) and \(\text{B}.\!\) Something like this must be the goal of any system of communication, and analogous forms of congruity and gregarity are likely to be found in any system for establishing mutually intelligible responses and maintaining socially coordinated practices.

Nevertheless, the particular types of “analytic” solutions that were proposed for resolving the conflict of interpretations between \(\text{A}\!\) and \(\text{B}\!\) are conceptually unsatisfactory in several ways. The constructions instituted retain the quality of hypotheses, especially due to the level of speculation about fundamental objects that is required to support them. There remains something fictional and imaginary about the nature of the object instances that are posited to form the ontological infrastructure, the supposedly more determinate strata of being that are presumed to anchor the initial objects of discussion.

Founding objects on a particular selection of object instances is always initially an arbitrary choice, a meet response to a judgment call and a responsibility that cannot be avoided, but still a bit of guesswork that needs to be tested for its reality in practice.

This means that the postulated objects of objects cannot have their reality probed and proved in detail but can be evaluated only in terms of their conceivable practical effects.

6.38.2. Augmented Signs

One synthetic method …

Suppose now that each of the agents \(\text{A}\!\) and \(\text{B}\!\) reflects on the situational context of their discussion and observes on every occasion of utterance exactly who is saying what. By this critically reflective operation of considering the source each interpreter is empowered to create, in effect, an extended token or situated sign out of each utterance by indexing it with the proper name of its utterer. Though it arises by reflection, the augmented sign is not a higher order of abstraction so much as a restoration or reconstitution of what was lost by abstracting the sign from the signer in the first instance.

In order to continue the development of this example, I need to employ a more precise system of marking quotations in order to keep track of who says what and in what kinds of context. To help with this, I use raised angle brackets \({}^\langle \ldots {}^\rangle\!\) on a par with ordinary quotation marks \({}^{\backprime\backprime} \ldots {}^{\prime\prime}\!\) to call attention to pieces of text as signs or expressions. The angle quotes are especially useful for embedded quotations and for text regarded as used or mentioned by interpreters other than myself, for instance, by the fictional characters \(\text{A}\!\) and \(\text{B}.\!\) Whenever possible, I save ordinary quotes for the outermost level, the one that interfaces with the context of informal discussion.

A notation like \({}^{\backprime\backprime ~ \langle\langle} \text{A} {}^\rangle, \text{B}, \text{C} {}^{\rangle ~ \prime\prime}\!\) is intended to indicate the construction of an extended (attributed, indexed, or situated) sign, in this case, by enclosing an initial sign \({}^{\backprime\backprime} \text{A} {}^{\prime\prime}\!\) in a contextual envelope \({}^{\backprime\backprime ~ \langle\langle} ~\underline{~}~ {}^\rangle, ~\underline{~}~, ~\underline{~}~ {}^{\rangle ~ \prime\prime}\!\) and inscribing it with relevant items of situational data, as represented by the signs \({}^{\backprime\backprime} \text{B} {}^{\prime\prime}\!\) and \({}^{\backprime\backprime} \text{C} {}^{\prime\prime}.\!\)

  1. When a salient component of the situational data represents an observation of the agent \(\text{B}\!\) communicating the sign \({}^{\backprime\backprime} \text{A} {}^{\prime\prime},\!\) then the compressed form \({}^{\backprime\backprime ~ \langle\langle} \text{A} {}^\rangle \text{B}, \text{C} {}^{\rangle ~ \prime\prime}\!\) can be used to mark that fact.
  2. When there is no additional contextual information beyond the marking of a sign's source, the form \({}^{\backprime\backprime ~ \langle\langle} \text{A} {}^\rangle \text{B} {}^{\rangle ~ \prime\prime}\!\) suffices to say that \(\text{B}\!\) said \({}^{\backprime\backprime} \text{A} {}^{\prime\prime}.\!\)

With this last modification, angle quotes become like ascribed quotes or attributed remarks, indexed with the name of the interpretive agent that issued the message in question. In sum, the notation \({}^{\backprime\backprime ~ \langle\langle} \text{A} {}^\rangle \text{B} {}^{\rangle ~ \prime\prime}\!\) is intended to situate the sign \({}^{\backprime\backprime} \text{A} {}^{\prime\prime}\!\) in the context of its contemplated use and to index the sign \({}^{\backprime\backprime} \text{A} {}^{\prime\prime}\!\) with the name of the interpreter that is considered to be using it on a given occasion.

The notation \({}^{\backprime\backprime ~ \langle\langle} \text{A} {}^\rangle \text{B} {}^{\rangle ~ \prime\prime},~\!\) read \({}^{\backprime\backprime ~ \langle} \text{A} {}^\rangle ~\text{quoth}~ \text{B} {}^{\prime\prime}\!\) or \({}^{\backprime\backprime ~ \langle} \text{A} {}^\rangle ~\text{used by}~ \text{B} {}^{\prime\prime},\!\) is an expression that indicates the use of the sign \({}^{\backprime\backprime} \text{A} {}^{\prime\prime}\!\) by the interpreter \(\text{B}.\!\) The expression inside the outer quotes is referred to as an indexed quotation, since it is indexed by the name of the interpreter to whom it is referred.

Since angle quotes with a blank index are equivalent to ordinary quotes, we have the following equivalence. [Not sure about this.]

\({}^{\backprime\backprime} ~ {}^\langle \text{A} {}^\rangle \text{B} ~ {}^{\prime\prime} ~=~ {}^{\langle\langle} \text{A} {}^\rangle \text{B} {}^\rangle\!\)

Enclosing a piece of text with raised angle brackets and following it with the name of an interpreter is intended to call to mind …

The augmentation of signs by the names of their interpreters preserves the original object domain but produces an extended syntactic domain. In our \(\text{A}\!\) and \(\text{B}\!\) example this gives the following domains.

\(\begin{array}{lll} O & = & \{ \text{A}, \text{B} \} \end{array}\)

\(\begin{array}{lllllll} S & = & \{ & {}^{\backprime\backprime} [ {}^\langle \text{A} {}^\rangle ]_\text{A} {}^{\prime\prime}, {}^{\backprime\backprime} [ {}^\langle \text{B} {}^\rangle ]_\text{A} {}^{\prime\prime}, {}^{\backprime\backprime} [ {}^\langle \text{i} {}^\rangle ]_\text{A} {}^{\prime\prime}, {}^{\backprime\backprime} [ {}^\langle \text{u} {}^\rangle ]_\text{A} {}^{\prime\prime}, & \\[4pt] & & & {}^{\backprime\backprime} [ {}^\langle \text{A} {}^\rangle ]_\text{B} {}^{\prime\prime}, {}^{\backprime\backprime} [ {}^\langle \text{B} {}^\rangle ]_\text{B} {}^{\prime\prime}, {}^{\backprime\backprime} [ {}^\langle \text{i} {}^\rangle ]_\text{B} {}^{\prime\prime}, {}^{\backprime\backprime} [ {}^\langle \text{u} {}^\rangle ]_\text{B} {}^{\prime\prime} & \} \\[10pt] I & = & \{ & {}^{\backprime\backprime} [ {}^\langle \text{A} {}^\rangle ]_\text{A} {}^{\prime\prime}, {}^{\backprime\backprime} [ {}^\langle \text{B} {}^\rangle ]_\text{A} {}^{\prime\prime}, {}^{\backprime\backprime} [ {}^\langle \text{i} {}^\rangle ]_\text{A} {}^{\prime\prime}, {}^{\backprime\backprime} [ {}^\langle \text{u} {}^\rangle ]_\text{A} {}^{\prime\prime}, & \\[4pt] & & & {}^{\backprime\backprime} [ {}^\langle \text{A} {}^\rangle ]_\text{B} {}^{\prime\prime}, {}^{\backprime\backprime} [ {}^\langle \text{B} {}^\rangle ]_\text{B} {}^{\prime\prime}, {}^{\backprime\backprime} [ {}^\langle \text{i} {}^\rangle ]_\text{B} {}^{\prime\prime}, {}^{\backprime\backprime} [ {}^\langle \text{u} {}^\rangle ]_\text{B} {}^{\prime\prime} & \} \end{array}\)

The situated sign or indexed expression \({}^{\backprime\backprime} [ {}^\langle \text{A} {}^\rangle ]_\text{B} {}^{\prime\prime}\!\) presents the sign or expression \({}^{\backprime\backprime} \text{A} {}^{\prime\prime}\!\) as used by the interpreter \(\text{B}.\!\) In other words, the sign is indexed by the name of an interpreter to indicate a use of that sign by that interpreter. Thus, \({}^{\backprime\backprime} [ {}^\langle \text{A} {}^\rangle ]_\text{B} {}^{\prime\prime}\!\) augments \({}^{\backprime\backprime} \text{A} {}^{\prime\prime}\!\) to form a new and more complete sign by including additional information about the context of its transmission, in particular, by the consideration of its source.


\(\text{Table 77.} ~~ \text{Augmented Sign Relation for Interpreters A and B}\!\)
\(\text{Object}\!\) \(\text{Sign}\!\) \(\text{Interpretant}\!\)

\(\begin{matrix} \text{A} \\ \text{A} \\ \text{A} \\ \text{A} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} [ {}^\langle \text{A} {}^\rangle ]_\text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{A} {}^\rangle ]_\text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{A} {}^\rangle ]_\text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{A} {}^\rangle ]_\text{A} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} [ {}^\langle \text{A} {}^\rangle ]_\text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{A} {}^\rangle ]_\text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{i} {}^\rangle ]_\text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{u} {}^\rangle ]_\text{B} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} \text{A} \\ \text{A} \\ \text{A} \\ \text{A} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} [ {}^\langle \text{A} {}^\rangle ]_\text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{A} {}^\rangle ]_\text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{A} {}^\rangle ]_\text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{A} {}^\rangle ]_\text{B} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} [ {}^\langle \text{A} {}^\rangle ]_\text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{A} {}^\rangle ]_\text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{i} {}^\rangle ]_\text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{u} {}^\rangle ]_\text{B} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} \text{A} \\ \text{A} \\ \text{A} \\ \text{A} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} [ {}^\langle \text{i} {}^\rangle ]_\text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{i} {}^\rangle ]_\text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{i} {}^\rangle ]_\text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{i} {}^\rangle ]_\text{A} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} [ {}^\langle \text{A} {}^\rangle ]_\text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{A} {}^\rangle ]_\text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{i} {}^\rangle ]_\text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{u} {}^\rangle ]_\text{B} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} \text{A} \\ \text{A} \\ \text{A} \\ \text{A} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} [ {}^\langle \text{u} {}^\rangle ]_\text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{u} {}^\rangle ]_\text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{u} {}^\rangle ]_\text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{u} {}^\rangle ]_\text{B} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} [ {}^\langle \text{A} {}^\rangle ]_\text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{A} {}^\rangle ]_\text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{i} {}^\rangle ]_\text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{u} {}^\rangle ]_\text{B} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} \text{B} \\ \text{B} \\ \text{B} \\ \text{B} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} [ {}^\langle \text{B} {}^\rangle ]_\text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{B} {}^\rangle ]_\text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{B} {}^\rangle ]_\text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{B} {}^\rangle ]_\text{A} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} [ {}^\langle \text{B} {}^\rangle ]_\text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{B} {}^\rangle ]_\text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{i} {}^\rangle ]_\text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{u} {}^\rangle ]_\text{A} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} \text{B} \\ \text{B} \\ \text{B} \\ \text{B} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} [ {}^\langle \text{B} {}^\rangle ]_\text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{B} {}^\rangle ]_\text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{B} {}^\rangle ]_\text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{B} {}^\rangle ]_\text{B} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} [ {}^\langle \text{B} {}^\rangle ]_\text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{B} {}^\rangle ]_\text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{i} {}^\rangle ]_\text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{u} {}^\rangle ]_\text{A} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} \text{B} \\ \text{B} \\ \text{B} \\ \text{B} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} [ {}^\langle \text{i} {}^\rangle ]_\text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{i} {}^\rangle ]_\text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{i} {}^\rangle ]_\text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{i} {}^\rangle ]_\text{B} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} [ {}^\langle \text{B} {}^\rangle ]_\text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{B} {}^\rangle ]_\text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{i} {}^\rangle ]_\text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{u} {}^\rangle ]_\text{A} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} \text{B} \\ \text{B} \\ \text{B} \\ \text{B} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} [ {}^\langle \text{u} {}^\rangle ]_\text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{u} {}^\rangle ]_\text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{u} {}^\rangle ]_\text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{u} {}^\rangle ]_\text{A} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} [ {}^\langle \text{B} {}^\rangle ]_\text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{B} {}^\rangle ]_\text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{i} {}^\rangle ]_\text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} [ {}^\langle \text{u} {}^\rangle ]_\text{A} {}^{\prime\prime} \end{matrix}\)


6.39. Prospective Indices : Pointers to Future Work

In the effort to unify dynamical, connectionist, and symbolic approaches to intelligent systems, indices supply important stepping stones between the sorts of signs that remain bound to circumscribed theaters of action and the kinds of signs that can function globally as generic symbols. Current technology presents an array of largely accidental discoveries that have been brought into being for implementing indexical systems. Bringing systematic study to bear on this variety of accessory devices and trying to discern within the wealth of incidental features their essential principles and effective ingredients could help to improve the traction this form of bridge affords.

In the points where this project addresses work on the indexical front, a primary task is to show how the actual connections promised by the definition of indexical signs can be translated into system-theoretic terms and implemented by means of the class of dynamic connections that can persist in realistic systems.

An offshoot of this investigation would be to explore how indices like pointer variables could be realized within “connectionist” systems. There is no reason in principle why this cannot be done, but I think that pragmatic reasons and practical success will force the contemplation of higher orders of connectivity than those currently fashioned in two-dimensional arrays of connections. To be specific, further advances will require the generative power of genuinely triadic relations to be exploited to the fullest possible degree.

To avert one potential misunderstanding of what this entails, computing with triadic relations is not really a live option unless the algebraic tools and logical calculi needed to do so are developed to greater levels of facility than they are at present. Merely officiating over the storage of “dead letters” in higher dimensional arrays will not do the trick. Turning static sign relations into the orders of dynamic sign processes that can support live inquiries will demand new means of representation and new methods of computation.

To fulfill their intended roles, a formal calculus for sign relations and the associated implementation must be able to address and restore the full dimensionalities of the existential and social matrices in which inquiry takes place. Informational constraints that define objective situations of interest need to be freed from the locally linear confines of the “dia-matrix” and reposted within the realm of the “tri-matrix”, that is, reconstituted in a manner that allows critical reflection on their form and content.

The descriptive and conceptual architectures needed to frame this task must allow space for interlacing forms of “open work”, projects that anticipate the desirability of higher order relations and build in the capability for higher order reflections at the very beginning, and do not merely hope against hope to arrange these capacities as afterthoughts.

6.40. Dynamic and Evaluative Frameworks

The sign relations \(L(\text{A})\!\) and \(L(\text{B})\!\) are lacking in several dimensions of realistic properties that would ordinarily be more fully developed in the kinds of sign relations that are found to be involved in inquiry. This section initiates a discussion of two such dimensions, the dynamic and the evaluative aspects of sign relations, and it treats the materials that are organized along these lines at two broad levels, either within or between particular examples of sign relations.

  1. The dynamic dimension deals with change. Thus, it details the forms of diversity that sign relations distribute in a temporal process. It is concerned with the transitions that take place from element to element within a sign relation and also with the changes that take place from one whole sign relation to another, thereby generating various types and levels of sign process.
  2. The evaluative dimension deals with goals. Thus, it details the forms of diversity that sign relations contribute to a definite purpose. It is concerned with the comparisons that can be made on a scale of values between the elements within a sign relation and also between whole sign relations themselves, with a view toward deciding which is better for a designated purpose.

At the primary level of analysis, one is concerned with the application of these two dimensions within particular sign relations. At every subsequent level of analysis, one deals with the dynamic transitions and evaluative comparisons that can be contemplated between particular sign relations. In order to cover all these dimensions, types, and levels of diversity in a unified way, there is need for a substantive term that can allow one to indicate any of the above objects of discussion and thought — including elements of sign relations, particular sign relations, and states of systems — and to regard it as an “object, sign, or state in a certain stage of construction”. I will use the word station for this purpose.

In order to organize the discussion of these two dimensions, both within and between particular sign relations, and to coordinate their ordinary relation to each other in practical situations, it pays to develop a combined form of dynamic evaluative framework (DEF), similar in design and utility to the objective frameworks set up earlier.

A dynamic evaluative framework (DEF) encompasses two dimensions of comparison between stations:

  1. A dynamic dimension, as swept out by a process of changing stations, permits comparison between stations in terms of before and after on a scale of temporal order.

    A terminal station on a dynamic dimension is called a stable station.

  2. An evaluative dimension permits comparison between stations on a scale of values.

    A terminal station on an evaluative dimension is called a canonical station or a standard station.

A station that is both stable and standard is called a normal station.

Consider the following analogies or correspondences that exist between different orders of sign relational structure:

  1. Just as a sign represents its object and becomes associated with more or less equivalent signs in the minds of interpretive agents, the corpus of signs that embodies a SOI represents in a collective way its own proper object, intended objective, or try at objectivity (TAO).
  2. Just as the relationship of a sign to its semantic objects and interpretive associates can be formalized within a single sign relation, the relation of a dynamically changing SOI to its reference environment, developmental goals, and desired characteristics of interpretive performance can be formalized by means of a higher order sign relation, one that further establishes grounds of comparison for relating the growing SOI, not only to its former and future selves, but also to a diverse company of other SOIs.

From an outside perspective the distinction between a sign and its object is usually regarded as obvious, though agents operating in the thick of a SOI often act as though they cannot see the difference. Nevertheless, as a rule in practice, a sign is not a good thing to be confused with its object. Even in the rare and usually controversial cases where an identity of substance is contemplated, usually only for the sake of argument, there is still a distinction of roles to be maintained between the sign and its object. Just so, …

Although there are aspects of inquiry processes that operate within a single sign relation, the characteristic features of inquiry do not come into full bloom until one considers the whole diversity of dynamically developing sign relations. Because it will be some time before this discussion acquires the formal power it needs to deal with higher order sign relations, these issues will need to be treated on an informal basis as they arise, and often in a cursory and ad hoc manner.

6.41. Elective and Motive Forces

The \(\text{A}\!\) and \(\text{B}\!\) example, in the fragmentary aspects of its sign relations presented so far, is unrealistic in its simplification of semantic issues, lacking a full development of many kinds of attributes that almost always become significant in situations of practical interest. Just to mention two related features of importance to inquiry that are missing from this example, there is no sense of directional process and no dimension of differential value defined either within or between the semantic equivalence classes.

When there is a clear sense of dynamic tendency or purposeful direction driving the passage from signs to interpretants in the connotative project of a sign relation, then the study moves from sign relations, statically viewed, to genuine sign processes. In the pragmatic theory of signs, such processes are usually dignified with the name semiosis and their systematic investigation is called semiotics.

Further, when this dynamism or purpose is consistent and confluent with a differential value system defined on the syntactic domain, then the sign process in question becomes a candidate for the kind of clarity-gaining, canon-seeking process, capable of supporting learning and reasoning, that I classify as an inquiry driven system.

There is a mathematical turn of thought that I will often take in discussing these kinds of issues. Instead of saying that a system has no attribute of a particular type, I will say that it has the attribute, but in a degenerate or trivial sense. This is merely a strategy of classification that allows one to include null cases in a taxonomy and to make use of continuity arguments in passing from case to case in a class of examples. Viewed in this way, each of the sign relations \(L(\text{A})\!\) and \(L(\text{B})\!\) can be taken to exhibit a trivial dynamic process and a trivial standard of value defined on the syntactic domain.

6.42. Sign Processes : A Start

To articulate the dynamic aspects of a sign relation, one can interpret it as determining a discrete or finite state transition system. In the usual ways of doing this, the states of the system are given by the elements of the syntactic domain, while the elements of the object domain correspond to input data or control parameters that affect transitions from signs to interpretant signs in the syntactic state space.

Working from these principles alone, there are numerous ways that a plausible dynamics can be invented for a given sign relation. I will concentrate on two principal forms of dynamic realization, or two ways of interpreting and augmenting sign relations as sign processes.

One form of realization lets each element of the object domain \(O\!\) correspond to the observed presence of an object in the environment of the systematic agent. In this interpretation, the object \(x\!\) acts as an input datum that causes the system \(Y\!\) to shift from whatever sign state it happens to occupy at a given moment to a random sign state in \([x]_Y.\!\) Expressed in a cognitive vein, \({}^{\backprime\backprime} Y ~\mathrm{notes}~ x {}^{\prime\prime}.\)

Another form of realization lets each element of the object domain \(O\!\) correspond to the autonomous intention of the systematic agent to denote an object, achieve an objective, or broadly speaking to accomplish any other purpose with respect to an object in its domain. In this interpretation, the object \(x\!\) is a control parameter that brings the system \(Y\!\) into line with realizing a target set \([x]_Y.\!\)

Tables 78 and 79 show how the sign relations for \(\text{A}\!\) and \(\text{B}\!\) can be filled out as finite state processes in conformity with the interpretive principles just described. Rather than letting the actions go undefined for some combinations of inputs in \(O\!\) and states in \(S,\!\) transitions have been added that take the interpreters from whatever else they might have been thinking about to the semantic equivalence classes of their objects. In either modality of realization, cognitive-oriented or control-oriented, the abstract structure of the resulting sign process is exactly the same.


\(\text{Table 78.} ~~ \text{Sign Process of Interpreter A}\!\)
\(\text{Object}\!\) \(\text{Sign}\!\) \(\text{Interpretant}\!\)

\(\begin{matrix} \text{A} \\ \text{A} \\ \text{A} \\ \text{A} \\ \text{A} \\ \text{A} \\ \text{A} \\ \text{A} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} \text{B} \\ \text{B} \\ \text{B} \\ \text{B} \\ \text{B} \\ \text{B} \\ \text{B} \\ \text{B} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \end{matrix}\)


\(\text{Table 79.} ~~ \text{Sign Process of Interpreter B}\!\)
\(\text{Object}\!\) \(\text{Sign}\!\) \(\text{Interpretant}\!\)

\(\begin{matrix} \text{A} \\ \text{A} \\ \text{A} \\ \text{A} \\ \text{A} \\ \text{A} \\ \text{A} \\ \text{A} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} \text{B} \\ \text{B} \\ \text{B} \\ \text{B} \\ \text{B} \\ \text{B} \\ \text{B} \\ \text{B} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{A} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \end{matrix}\)

\(\begin{matrix} {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{B} {}^{\prime\prime} \\ {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \end{matrix}\)


Treated in accord with these interpretations, the sign relations \(L(\text{A})\!\) and \(L(\text{B})\!\) constitute partially degenerate cases of dynamic processes, in which the transitions are totally non-deterministic within semantic equivalence classes but still manage to preserve those classes. Whether construed as present observation or projective speculation, the most significant feature to note about a sign process is how the contemplation of an object or objective leads the system from a less determined to a more determined condition.
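
To make the non-deterministic dynamics tangible, here is a minimal simulation sketch in Python (the names are hypothetical) of the sign process of interpreter \(\text{A}\!\) from Table 78: whatever the current sign state, an input object sends the system to a randomly chosen sign in that object's semiotic class, and the class, once entered, is never left.

<pre>
import random

# Sign process of interpreter A (cf. Table 78), sketched under assumed names:
# an input object sends any sign state to a random sign in that object's class.
semiotic_class_A = {
    "A": ['"A"', '"i"'],
    "B": ['"B"', '"u"'],
}

def step(obj, state):
    """One transition of A's sign process: the prior state is forgotten and
    the system lands on a randomly chosen sign in the class of the object."""
    return random.choice(semiotic_class_A[obj])

state = '"u"'                       # start from an arbitrary sign state
for _ in range(5):                  # contemplate the object A for a while
    state = step("A", state)
    assert state in semiotic_class_A["A"]   # the semiotic class is preserved
print("final sign state:", state)
</pre>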

On reflection, one observes that these processes are not completely trivial since they preserve the structure of their semantic partitions. In fact, each sign process preserves the entire topology — the family of sets closed under finite intersections and arbitrary unions — that is generated by its semantic equivalence classes. These topologies, \(\mathrm{Top}(\text{A})\!\) and \(\mathrm{Top}(\text{B}),\!\) can be viewed as partially ordered sets, \(\mathrm{Poset}(\text{A})\!\) and \(\mathrm{Poset}(\text{B}),\!\) by taking the inclusion ordering \((\subseteq)\!\) as \((\le).\!\) For each of the interpreters \(\text{A}\!\) and \(\text{B},\!\) as things stand in their respective orderings \(\mathrm{Poset}(\text{A})\!\) and \(\mathrm{Poset}(\text{B}),\!\) the semantic equivalence classes of \({}^{\backprime\backprime} \text{A} {}^{\prime\prime}\!\) and \({}^{\backprime\backprime} \text{B} {}^{\prime\prime}\!\) are situated as intermediate elements that are incomparable to each other.

\(\begin{array}{lllll} \mathrm{Top}(\text{A}) & = & \mathrm{Poset}(\text{A}) & = & \{ \varnothing, \{ {}^{\backprime\backprime} \text{A} {}^{\prime\prime}, {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \}, \{ {}^{\backprime\backprime} \text{B} {}^{\prime\prime}, {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \}, S \}. \\[6pt] \mathrm{Top}(\text{B}) & = & \mathrm{Poset}(\text{B}) & = & \{ \varnothing, \{ {}^{\backprime\backprime} \text{A} {}^{\prime\prime}, {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \}, \{ {}^{\backprime\backprime} \text{B} {}^{\prime\prime}, {}^{\backprime\backprime} \text{i} {}^{\prime\prime} \}, S \}. \end{array}\)
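
The claim that these families are topologies can be verified mechanically. The following sketch (Python, hypothetical names) closes the semiotic partition of \(\text{A}\!\) under unions and intersections and recovers exactly the four open sets of \(\mathrm{Top}(\text{A})\!\) listed above; on a finite carrier, closure under pairwise unions and intersections suffices.

<pre>
from itertools import combinations

# Sketch only; names are hypothetical.  Semiotic partition of interpreter A.
S = {'"A"', '"B"', '"i"', '"u"'}
partition_A = [{'"A"', '"i"'}, {'"B"', '"u"'}]

def generated_topology(universe, generators):
    """Close the generating sets, plus the empty set and the universe,
    under pairwise unions and intersections until nothing new appears."""
    opens = {frozenset(), frozenset(universe)} | {frozenset(g) for g in generators}
    changed = True
    while changed:
        changed = False
        for u, v in combinations(list(opens), 2):
            for w in (u | v, u & v):
                if w not in opens:
                    opens.add(w)
                    changed = True
    return opens

Top_A = generated_topology(S, partition_A)
print(sorted(map(sorted, Top_A)))
# [[], ['"A"', '"B"', '"i"', '"u"'], ['"A"', '"i"'], ['"B"', '"u"']]
</pre>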

In anticipation of things to come, these orderings are germinal versions of the kinds of semantic hierarchies that will be used in this project to define the ontologies, perspectives, or world views corresponding to individual interpreters.

When it comes to discussing the stability properties of dynamic systems, the sets that remain invariant under iterated applications of a process are called its attractors, and the sets of states that the process eventually draws into them are called their basins of attraction.

Note. More care needed here. Strongly and weakly connected components of digraphs?

The dynamic realizations of the sign relations \(L(\text{A})\!\) and \(L(\text{B})\!\) augment their semantic equivalence relations in an “attractive” way. To describe this additional structure, I introduce a set of graph-theoretical concepts and notations.

The attractor of \(x\!\) in \(Y.\!\)

\(Y ~\text{at}~ x ~=~ \mathrm{At}[x]_Y ~=~ [x]_Y \cup \{ \text{arcs into}~ [x]_Y \}.\)

In effect, this discussion of dynamic realizations of sign relations has advanced from considering semiotic partitions as partitioning the set of points in \(S\!\) to considering attractors as partitioning the set of arcs in \(S \times I = S \times S.\!\)
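
Read against Table 78, this passage admits a simple computational gloss, sketched below in Python with hypothetical names: the arcs into a class \([x]_\text{A}\!\) are precisely the transitions made under the input object \(x,\!\) so the attractors carve the arc set \(S \times S\!\) into disjoint and exhaustive blocks.

<pre>
# Sketch only; names are hypothetical.  Arcs of A's sign process (Table 78):
# under input object x, every sign state has an arc to each sign in [x]_A.
signs = ['"A"', '"B"', '"i"', '"u"']
class_A = {"A": ['"A"', '"i"'], "B": ['"B"', '"u"']}

def attractor(obj):
    """At[obj]_A: the points of [obj]_A together with the arcs into it."""
    points = set(class_A[obj])
    arcs = {(s, t) for s in signs for t in class_A[obj]}
    return points, arcs

_, arcs_A = attractor("A")
_, arcs_B = attractor("B")

# The two arc sets are disjoint and jointly exhaust S x S, so the attractors
# partition the set of arcs, as described in the text above.
assert arcs_A.isdisjoint(arcs_B)
assert len(arcs_A | arcs_B) == len(signs) ** 2
</pre>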

6.43. Reflective Extensions

This section takes up the topic of reflective extensions in a more systematic fashion, starting from the sign relations \(L(\text{A})\!\) and \(L(\text{B})\!\) once again and keeping its focus within their vicinity, but exploring the space of nearby extensions in greater detail.

Tables 80 and 81 show one way that the sign relations \(L(\text{A})\!\) and \(L(\text{B})\!\) can be extended in a reflective sense through the use of quotational devices, yielding the first order reflective extensions, \(\mathrm{Ref}^1 (\text{A})\!\) and \(\mathrm{Ref}^1 (\text{B}).\!\)


\({\text{Table 80.} ~~ \text{Reflective Extension} ~ \mathrm{Ref}^1 (\text{A})}\!\)
\(\text{Object}\!\) \(\text{Sign}\!\) \(\text{Interpretant}\!\)

\(\begin{matrix} \text{A} \\ \text{A} \\ \text{A} \\ \text{A} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} \text{B} \\ \text{B} \\ \text{B} \\ \text{B} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} {}^{\langle\langle} \text{A} {}^{\rangle\rangle} \\ {}^{\langle\langle} \text{B} {}^{\rangle\rangle} \\ {}^{\langle\langle} \text{i} {}^{\rangle\rangle} \\ {}^{\langle\langle} \text{u} {}^{\rangle\rangle} \end{matrix}\)

\(\begin{matrix} {}^{\langle\langle} \text{A} {}^{\rangle\rangle} \\ {}^{\langle\langle} \text{B} {}^{\rangle\rangle} \\ {}^{\langle\langle} \text{i} {}^{\rangle\rangle} \\ {}^{\langle\langle} \text{u} {}^{\rangle\rangle} \end{matrix}\)


\({\text{Table 81.} ~~ \text{Reflective Extension} ~ \mathrm{Ref}^1 (\text{B})}\!\)
\(\text{Object}\!\) \(\text{Sign}\!\) \(\text{Interpretant}\!\)

\(\begin{matrix} \text{A} \\ \text{A} \\ \text{A} \\ \text{A} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \\ {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} \text{B} \\ \text{B} \\ \text{B} \\ \text{B} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} {}^{\langle} \text{A} {}^{\rangle} \\ {}^{\langle} \text{B} {}^{\rangle} \\ {}^{\langle} \text{i} {}^{\rangle} \\ {}^{\langle} \text{u} {}^{\rangle} \end{matrix}\)

\(\begin{matrix} {}^{\langle\langle} \text{A} {}^{\rangle\rangle} \\ {}^{\langle\langle} \text{B} {}^{\rangle\rangle} \\ {}^{\langle\langle} \text{i} {}^{\rangle\rangle} \\ {}^{\langle\langle} \text{u} {}^{\rangle\rangle} \end{matrix}\)

\(\begin{matrix} {}^{\langle\langle} \text{A} {}^{\rangle\rangle} \\ {}^{\langle\langle} \text{B} {}^{\rangle\rangle} \\ {}^{\langle\langle} \text{i} {}^{\rangle\rangle} \\ {}^{\langle\langle} \text{u} {}^{\rangle\rangle} \end{matrix}\)


The common world \(W\!\) of the reflective extensions \(\mathrm{Ref}^1 (\text{A})\!\) and \(\mathrm{Ref}^1 (\text{B})\!\) is the totality of objects and signs they contain, namely, the following set of 10 elements.

\(W = \{ \text{A}, \text{B}, {}^{\langle} \text{A} {}^{\rangle}, {}^{\langle} \text{B} {}^{\rangle}, {}^{\langle} \text{i} {}^{\rangle}, {}^{\langle} \text{u} {}^{\rangle}, {}^{\langle\langle} \text{A} {}^{\rangle\rangle}, {}^{\langle\langle} \text{B} {}^{\rangle\rangle}, {}^{\langle\langle} \text{i} {}^{\rangle\rangle}, {}^{\langle\langle} \text{u} {}^{\rangle\rangle} \}.\)

Raised angle brackets or supercilia \(({}^{\langle} \ldots {}^{\rangle})\!\) are here being used on a par with ordinary quotation marks \(({}^{\backprime\backprime} \ldots {}^{\prime\prime})\!\) to construct a new sign whose object is precisely the sign they enclose.

Regarded as sign relations in their own right, \(\mathrm{Ref}^1 (\text{A})\!\) and \(\mathrm{Ref}^1 (\text{B})\!\) are formed on the following relational domains.

\(\begin{array}{ccccl} O & = & O^{(1)} \cup O^{(2)} & = & \{ \text{A}, \text{B} \} ~ \cup ~ \{ {}^{\langle} \text{A} {}^{\rangle}, {}^{\langle} \text{B} {}^{\rangle}, {}^{\langle} \text{i} {}^{\rangle}, {}^{\langle} \text{u} {}^{\rangle} \} \\[8pt] S & = & S^{(1)} \cup S^{(2)} & = & \{ {}^{\langle} \text{A} {}^{\rangle}, {}^{\langle} \text{B} {}^{\rangle}, {}^{\langle} \text{i} {}^{\rangle}, {}^{\langle} \text{u} {}^{\rangle} \} ~ \cup ~ \{ {}^{\langle\langle} \text{A} {}^{\rangle\rangle}, {}^{\langle\langle} \text{B} {}^{\rangle\rangle}, {}^{\langle\langle} \text{i} {}^{\rangle\rangle}, {}^{\langle\langle} \text{u} {}^{\rangle\rangle} \} \\[8pt] I & = & I^{(1)} \cup I^{(2)} & = & \{ {}^{\langle} \text{A} {}^{\rangle}, {}^{\langle} \text{B} {}^{\rangle}, {}^{\langle} \text{i} {}^{\rangle}, {}^{\langle} \text{u} {}^{\rangle} \} ~ \cup ~ \{ {}^{\langle\langle} \text{A} {}^{\rangle\rangle}, {}^{\langle\langle} \text{B} {}^{\rangle\rangle}, {}^{\langle\langle} \text{i} {}^{\rangle\rangle}, {}^{\langle\langle} \text{u} {}^{\rangle\rangle} \} \end{array}\)

It may be observed that \(S\!\) overlaps with \(O\!\) in the set of first-order signs or second-order objects, \(S^{(1)} = O^{(2)},\!\) exemplifying the extent to which signs have become objects in the new sign relations.
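
A minimal sketch of the construction, in Python with hypothetical names and with angle quotes rendered as plain brackets, reads the first-order triples off the upper half of Table 80, adjoins a reflective triple for each first-order sign, and checks the overlap \(S^{(1)} = O^{(2)}\!\) just noted.

<pre>
# Sketch only; names are hypothetical and angle quotes are written as <...>.
def quote(s):
    """Quote a token once more: 'A' -> '<A>', '<A>' -> '<<A>>'."""
    return "<" + s + ">"

# First-order triples of interpreter A, read off the upper half of Table 80.
L_A = {
    ("A", "<A>", "<A>"), ("A", "<A>", "<i>"), ("A", "<i>", "<A>"), ("A", "<i>", "<i>"),
    ("B", "<B>", "<B>"), ("B", "<B>", "<u>"), ("B", "<u>", "<B>"), ("B", "<u>", "<u>"),
}

# Reflective extension: for each first-order sign s, add the triple
# (s, quote(s), quote(s)), making that sign an object in its own right.
S1 = {s for (_, s, _) in L_A}
Ref1_A = L_A | {(s, quote(s), quote(s)) for s in S1}

O = {o for (o, _, _) in Ref1_A}
S = {s for (_, s, _) in Ref1_A}
print(sorted(O & S))   # ['<A>', '<B>', '<i>', '<u>'] : first-order signs as objects
print(len(Ref1_A))     # 12 triples, matching Table 80
</pre>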

To discuss how the denotative and connotative aspects of a sign relation are affected by its reflective extension, it is useful to introduce a few abbreviations. For each sign relation \(L\!\) in \(\{ L_\text{A}, L_\text{B} \}\!\) the following operations may be defined.

\(\begin{array}{lllll} \mathrm{Den}^1 (L) & = & (\mathrm{Ref}^1 (L))_{SO} & = & \mathrm{proj}_{SO} (\mathrm{Ref}^1 (L)) \\[6pt] \mathrm{Con}^1 (L) & = & (\mathrm{Ref}^1 (L))_{SI} & = & \mathrm{proj}_{SI} (\mathrm{Ref}^1 (L)) \end{array}\!\)
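
Computationally these are plain coordinate projections of the triples. The toy sketch below (Python, hypothetical names, and a two-triple relation chosen only for brevity) shows the operation: the pairs of \(\mathrm{Den}^1\!\) run from signs to objects and the pairs of \(\mathrm{Con}^1\!\) from signs to interpretants.

<pre>
# Sketch only: a two-triple toy relation, chosen for brevity, not Ref^1 itself.
L = {("A", "<A>", "<i>"), ("B", "<u>", "<B>")}   # (object, sign, interpretant)

Den = {(s, o) for (o, s, i) in L}   # Den^1: pairs running from signs to objects
Con = {(s, i) for (o, s, i) in L}   # Con^1: pairs running from signs to interpretants

print(sorted(Den))   # [('<A>', 'A'), ('<u>', 'B')]
print(sorted(Con))   # [('<A>', '<i>'), ('<u>', '<B>')]
</pre>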

The dyadic components of sign relations can be given graph-theoretic representations, namely, as digraphs (directed graphs), that provide concise pictures of their structural and potential dynamic properties. By way of terminology, a directed edge \((x, y)\!\) is called an arc from point \(x\!\) to point \(y,\!\) and a self-loop \((x, x)\!\) is called a sling at \(x.\!\)

The denotative components \(\mathrm{Den}^1 (L_\text{A})\!\) and \(\mathrm{Den}^1 (L_\text{B})\!\) can be viewed as digraphs on the 10 points of the world set \(W.\!\) The arcs of these digraphs are given as follows.

  1. \(\mathrm{Den}^1 (L_\text{A})\!\) has an arc from each point of \([\text{A}]_\text{A} = \{ {}^{\langle} \text{A} {}^{\rangle}, {}^{\langle} \text{i}{}^{\rangle} \}\!\) to \(\text{A}\!\) and from each point of \([\text{B}]_\text{A} = \{ {}^{\langle} \text{B} {}^{\rangle}, {}^{\langle} \text{u} {}^{\rangle} \}\!\) to \(\text{B}.\!\)
  2. \(\mathrm{Den}^1 (L_\text{B})\!\) has an arc from each point of \([\text{A}]_\text{B} = \{ {}^{\langle} \text{A} {}^{\rangle}, {}^{\langle} \text{u}{}^{\rangle} \}\!\) to \(\text{A}\!\) and from each point of \([\text{B}]_\text{B} = \{ {}^{\langle} \text{B} {}^{\rangle}, {}^{\langle} \text{i} {}^{\rangle} \}\!\) to \(\text{B}.\!\)
  3. In the parts added by reflective extension \(\mathrm{Den}^1 (L_\text{A})\!\) and \(\mathrm{Den}^1 (L_\text{B})\!\) both have arcs from \({}^{\langle} s {}^{\rangle}\!\) to \(s,\!\) for each \(s \in S^{(1)}.\!\)

Taken as transition digraphs, \(\mathrm{Den}^1 (L_\text{A})\!\) and \(\mathrm{Den}^1 (L_\text{B})\!\) summarize the upshots, end results, or effective steps of computation that are involved in the respective evaluations of signs in \(S\!\) by \(\mathrm{Ref}^1 (\text{A})\!\) and \(\mathrm{Ref}^1 (\text{B}).\!\)

The connotative components \(\mathrm{Con}^1 (L_\text{A})~\!\) and \(\mathrm{Con}^1 (L_\text{B})~\!\) can be viewed as digraphs on the eight points of the syntactic domain \(S.\!\) The arcs of these digraphs are given as follows.

  1. \(\mathrm{Con}^1 (L_\text{A})\!\) inherits from \(L_\text{A}\!\) the structure of a semiotic equivalence relation on \(S^{(1)},\!\) having a sling on each point of \(S^{(1)},\!\) arcs in both directions between \({}^{\langle} \text{A} {}^{\rangle}\!\) and \({}^{\langle} \text{i}{}^{\rangle},\!\) and arcs in both directions between \({}^{\langle} \text{B} {}^{\rangle}~\!\) and \({}^{\langle} \text{u}{}^{\rangle}.~\!\) The reflective extension \(\mathrm{Ref}^1 (L_\text{A})\!\) adds a sling on each point of \(S^{(2)},\!\) creating a semiotic equivalence relation on \(S.\!\)
  2. \(\mathrm{Con}^1 (L_\text{B})~\!\) inherits from \(L_\text{B}\!\) the structure of a semiotic equivalence relation on \(S^{(1)},\!\) having a sling on each point of \(S^{(1)},\!\) arcs in both directions between \({}^{\langle} \text{A} {}^{\rangle}\!\) and \({}^{\langle} \text{u}{}^{\rangle},\!\) and arcs in both directions between \({}^{\langle} \text{B} {}^{\rangle}~\!\) and \({}^{\langle} \text{i}{}^{\rangle}.~\!\) The reflective extension \(\mathrm{Ref}^1 (L_\text{B})\!\) adds a sling on each point of \(S^{(2)},\!\) creating a semiotic equivalence relation on \(S.\!\)

Taken as transition digraphs, \(\mathrm{Con}^1 (L_\text{A})~\!\) and \(\mathrm{Con}^1 (L_\text{B})~\!\) highlight the associations between signs in \(\mathrm{Ref}^1 (L_\text{A})\!\) and \(\mathrm{Ref}^1 (L_\text{B}),\!\) respectively.

The semiotic equivalence relation given by \(\mathrm{Con}^1 (L_\text{A})\!\) for interpreter \(\text{A}\!\) has the following semiotic equations.

\(\begin{array}{lllllll} [ {}^{\langle} \text{A} {}^{\rangle} ]_\text{A} & = & [ {}^{\langle} \text{i} {}^{\rangle} ]_\text{A} & \quad & [ {}^{\langle} \text{B} {}^{\rangle} ]_\text{A} & = & [ {}^{\langle} \text{u} {}^{\rangle} ]_\text{A} \\[6pt] \text{or} ~~ {}^{\langle} \text{A} {}^{\rangle} & =_\text{A} & {}^{\langle} \text{i} {}^{\rangle} & & {}^{\langle} \text{B} {}^{\rangle} & =_\text{A} & {}^{\langle} \text{u} {}^{\rangle} \end{array}\)

These equations induce the following semiotic partition.

\( \{ \{ {}^{\langle} \text{A} {}^{\rangle}, {}^{\langle} \text{i} {}^{\rangle} \}, \{ {}^{\langle} \text{B} {}^{\rangle}, {}^{\langle} \text{u} {}^{\rangle} \}, \{ {}^{\langle\langle} \text{A} {}^{\rangle\rangle} \}, \{ {}^{\langle\langle} \text{i} {}^{\rangle\rangle} \}, \{ {}^{\langle\langle} \text{B} {}^{\rangle\rangle} \}, \{ {}^{\langle\langle} \text{u} {}^{\rangle\rangle} \} \}.\! \)

The semiotic equivalence relation given by \(\mathrm{Con}^1 (L_\text{B})~\!\) for interpreter \(\text{B}\!\) has the following semiotic equations.

\(\begin{array}{llllllll} & [ {}^{\langle} \text{A} {}^{\rangle} ]_\text{B} & = & [ {}^{\langle} \text{u} {}^{\rangle} ]_\text{B}, & \quad & [ {}^{\langle} \text{B} {}^{\rangle} ]_\text{B} & = & [ {}^{\langle} \text{i} {}^{\rangle} ]_\text{B}, \\ \text{or} & {}^{\langle} \text{A} {}^{\rangle} & =_\text{B} & {}^{\langle} \text{u} {}^{\rangle}, & & {}^{\langle} \text{B} {}^{\rangle} & =_\text{B} & {}^{\langle} \text{i} {}^{\rangle}. \end{array}\)

These equations induce the following semiotic partition.

\( \{ \{ {}^{\langle} \text{A} {}^{\rangle}, {}^{\langle} \text{u} {}^{\rangle} \}, \{ {}^{\langle} \text{B} {}^{\rangle}, {}^{\langle} \text{i} {}^{\rangle} \}, \{ {}^{\langle\langle} \text{A} {}^{\rangle\rangle} \}, \{ {}^{\langle\langle} \text{i} {}^{\rangle\rangle} \}, \{ {}^{\langle\langle} \text{B} {}^{\rangle\rangle} \}, \{ {}^{\langle\langle} \text{u} {}^{\rangle\rangle} \} \}.\! \)
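The partitions themselves can be recovered mechanically from the semiotic equations. The following sketch again relies on the hypothetical angle-bracket encoding used above rather than anything in the text; it simply merges singleton classes according to the listed equations.

```python
# Minimal sketch (hypothetical encoding): derive a semiotic partition from a
# list of semiotic equations over the eight points of S.

S = ["<A>", "<B>", "<i>", "<u>", "<<A>>", "<<B>>", "<<i>>", "<<u>>"]

def semiotic_partition(points, equations):
    """Start from singleton classes and merge the classes of equated signs."""
    classes = [{p} for p in points]
    for a, b in equations:
        class_a = next(c for c in classes if a in c)
        class_b = next(c for c in classes if b in c)
        if class_a is not class_b:
            classes.remove(class_b)
            class_a |= class_b
    return classes

print(semiotic_partition(S, [("<A>", "<i>"), ("<B>", "<u>")]))   # interpreter A
print(semiotic_partition(S, [("<A>", "<u>"), ("<B>", "<i>")]))   # interpreter B
# Each call yields two doubleton classes and four singleton classes, matching
# the partitions displayed above.
```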

Notice that the semiotic equivalences of nouns and pronouns for each interpreter do not extend to equivalences of their second-order signs, exactly as demanded by the literal character of quotations. Moreover, the new sign relations for interpreters \(\text{A}\!\) and \(\text{B}\!\) coincide in their reflective parts, since exactly the same triples are added to each set.

There are many ways to extend sign relations in an effort to increase their reflective capacities. The implicit goal of a reflective project is to achieve reflective closure, \(S \subseteq O,\!\) where every sign is an object.

Considered as reflective extensions, there is nothing unique about the constructions of \(\mathrm{Ref}^1 (\text{A})\!\) and \(\mathrm{Ref}^1 (\text{B}),\!\) but their common pattern of development illustrates a typical approach toward reflective closure. In a sense it epitomizes the project of free, naive, or uncritical reflection, since continuing this mode of production to its closure would generate an infinite sign relation, passing through infinitely many higher orders of signs, but without critically examining to what purpose the effort is directed or evaluating alternative constraints that might be imposed on the initial generators toward that end.

At first sight it seems as though the imposition of reflective closure has multiplied a finite sign relation into an infinite profusion of highly distracting and largely redundant signs, all by itself and all in one step. But this explosion of orders happens only with the complicity of another requirement, that of deterministic interpretation.

There are two types of non-determinism, denotative and connotative, that can affect a sign relation.

  1. A sign relation \(L\!\) has a non-deterministic denotation if its dyadic component \(L_{SO}\!\) is not a function \(L_{SO} : S \to O,\!\) in other words, if there are signs in \(S\!\) with missing or multiple objects in \(O.\!\)
  2. A sign relation \(L\!\) has a non-deterministic connotation if its dyadic component \(L_{SI}\!\) is not a function \(L_{SI} : S \to I,\!\) in other words, if there are signs in \(S\!\) with missing or multiple interpretants in \(I.\!\) As a rule, sign relations are rife with this variety of non-determinism, but it is usually felt to be under control so long as \(L_{SI}\!\) remains close to being an equivalence relation.

Thus, it is really the denotative type of indeterminacy that is felt to be a problem in this context.
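As a rough illustration of the distinction, the next sketch tests whether a dyadic projection of a sign relation is a function on its syntactic domain; the triple format and the helper name are hypothetical, not taken from the text.

```python
# Minimal sketch (hypothetical encoding): a sign relation as a set of triples
# (object, sign, interpretant).  A projection is deterministic when every sign
# in the syntactic domain is assigned exactly one value, neither missing nor multiple.

def is_deterministic(triples, syntactic_domain, value_index):
    """value_index = 0 tests the denotative projection L_SO : S -> O,
    value_index = 2 tests the connotative projection L_SI : S -> I."""
    images = {s: set() for s in syntactic_domain}
    for triple in triples:
        images[triple[1]].add(triple[value_index])
    return all(len(values) == 1 for values in images.values())

# A sign with two objects makes the denotation non-deterministic:
L = {("A", "<i>", "<i>"), ("B", "<i>", "<i>")}
print(is_deterministic(L, ["<i>"], 0))   # False: "<i>" has two objects
print(is_deterministic(L, ["<i>"], 2))   # True: "<i>" has one interpretant
```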

The next two pairs of reflective extensions demonstrate that there are ways of achieving reflective closure that do not generate infinite sign relations.

As a flexible and fairly general strategy for describing reflective extensions, it is convenient to take the following tack. Given a syntactic domain \(S,\!\) there is an independent formal language \(F = F(S) = S \langle {}^{\langle\rangle} \rangle,\!\) called the free quotational extension of \(S,\!\) that can be generated from \(S\!\) by embedding each of its signs to any depth of quotation marks. Within \(F,\!\) the quoting operation can be regarded as a syntactic generator that is inherently free of constraining relations. In other words, for every \(s \in S,\!\) the sequence \(s, {}^{\langle} s {}^{\rangle}, {}^{\langle\langle} s {}^{\rangle\rangle}, \ldots\!\) contains nothing but pairwise distinct elements in \(F\!\) no matter how far it is produced. The set \(F(s) = s \langle {}^{\langle\rangle} \rangle \subseteq F\!\) that collects the elements of this sequence is called the subset of \(F\!\) generated from \(s\!\) by quotation.
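A small sketch of the free quotational extension, again using the hypothetical angle-bracket rendering of quotation marks: each application of the quoting operation yields a new and distinct string, so the orbit of any sign under quotation is free of relations.

```python
from itertools import islice

def quote(sign):
    """Embed a sign in one more layer of quotation marks."""
    return "<" + sign + ">"

def quotation_orbit(sign):
    """Generate the unending sequence  s, <s>, <<s>>, ...  collected in F(s)."""
    while True:
        yield sign
        sign = quote(sign)

first_five = list(islice(quotation_orbit("A"), 5))
assert first_five == ["A", "<A>", "<<A>>", "<<<A>>>", "<<<<A>>>>"]
assert len(set(first_five)) == 5      # pairwise distinct, no matter how far produced
```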

Against this background, other varieties of reflective extension can be specified by means of semantic equations that are considered to be imposed on the elements of \(F.\!\) Taking the reflective extensions \(\mathrm{Ref}^1 (\text{A})\!\) and \(\mathrm{Ref}^1 (\text{B})\!\) as the first orders of a “free” project toward reflective closure, variant extensions can be described by relating their entries with those of comparable members in the standard sequences \(\mathrm{Ref}^n (\text{A})\!\) and \(\mathrm{Ref}^n (\text{B}).\!\)

A variant pair of reflective extensions, \(\mathrm{Ref}^1 (\text{A} | E_1)\!\) and \(\mathrm{Ref}^1 (\text{B} | E_1),\!\) is presented in Tables 82 and 83, respectively. These are identical to the corresponding free variants, \(\mathrm{Ref}^1 (\text{A})~\!\) and \(\mathrm{Ref}^1 (\text{B}),~\!\) with the exception of those entries that are constrained by the following system of semantic equations.

\(\begin{matrix} E_1 : & {}^{\langle\langle} \text{A} {}^{\rangle\rangle} = {}^{\langle} \text{A} {}^{\rangle}, & {}^{\langle\langle} \text{B} {}^{\rangle\rangle} = {}^{\langle} \text{B} {}^{\rangle}, & {}^{\langle\langle} \text{i} {}^{\rangle\rangle} = {}^{\langle} \text{i} {}^{\rangle}, & {}^{\langle\langle} \text{u} {}^{\rangle\rangle} = {}^{\langle} \text{u} {}^{\rangle}. \end{matrix}\)

This has the effect of making all levels of quotation equivalent.
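Read as a rewrite rule, and assuming the equations are meant to propagate through further quotation, \(E_1\!\) collapses every positive depth of quotation to depth one. A hypothetical normal-form function in the angle-bracket encoding:

```python
def normalize_E1(sign):
    """E_1 normal form: any sign quoted one or more times reduces to its
    singly quoted form, e.g. <<A>> and <<<A>>> both reduce to <A>."""
    depth = 0
    while sign.startswith("<") and sign.endswith(">"):
        sign, depth = sign[1:-1], depth + 1
    return "<" + sign + ">" if depth > 0 else sign

assert normalize_E1("<<A>>") == "<A>"
assert normalize_E1("<A>") == "<A>"
assert normalize_E1("A") == "A"
```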


\(\text{Table 82.} ~~ \text{Reflective Extension} ~ \mathrm{Ref}^1 (\text{A} | E_1)\!\)

\(\begin{array}{ccc}
\text{Object} & \text{Sign} & \text{Interpretant} \\[6pt]
\text{A} & {}^{\langle} \text{A} {}^{\rangle} & {}^{\langle} \text{A} {}^{\rangle} \\
\text{A} & {}^{\langle} \text{A} {}^{\rangle} & {}^{\langle} \text{i} {}^{\rangle} \\
\text{A} & {}^{\langle} \text{i} {}^{\rangle} & {}^{\langle} \text{A} {}^{\rangle} \\
\text{A} & {}^{\langle} \text{i} {}^{\rangle} & {}^{\langle} \text{i} {}^{\rangle} \\[6pt]
\text{B} & {}^{\langle} \text{B} {}^{\rangle} & {}^{\langle} \text{B} {}^{\rangle} \\
\text{B} & {}^{\langle} \text{B} {}^{\rangle} & {}^{\langle} \text{u} {}^{\rangle} \\
\text{B} & {}^{\langle} \text{u} {}^{\rangle} & {}^{\langle} \text{B} {}^{\rangle} \\
\text{B} & {}^{\langle} \text{u} {}^{\rangle} & {}^{\langle} \text{u} {}^{\rangle} \\[6pt]
{}^{\langle} \text{A} {}^{\rangle} & {}^{\langle} \text{A} {}^{\rangle} & {}^{\langle} \text{A} {}^{\rangle} \\
{}^{\langle} \text{B} {}^{\rangle} & {}^{\langle} \text{B} {}^{\rangle} & {}^{\langle} \text{B} {}^{\rangle} \\
{}^{\langle} \text{i} {}^{\rangle} & {}^{\langle} \text{i} {}^{\rangle} & {}^{\langle} \text{i} {}^{\rangle} \\
{}^{\langle} \text{u} {}^{\rangle} & {}^{\langle} \text{u} {}^{\rangle} & {}^{\langle} \text{u} {}^{\rangle}
\end{array}\)


\(\text{Table 83.} ~~ \text{Reflective Extension} ~ \mathrm{Ref}^1 (\text{B} | E_1)\!\)

\(\begin{array}{ccc}
\text{Object} & \text{Sign} & \text{Interpretant} \\[6pt]
\text{A} & {}^{\langle} \text{A} {}^{\rangle} & {}^{\langle} \text{A} {}^{\rangle} \\
\text{A} & {}^{\langle} \text{A} {}^{\rangle} & {}^{\langle} \text{u} {}^{\rangle} \\
\text{A} & {}^{\langle} \text{u} {}^{\rangle} & {}^{\langle} \text{A} {}^{\rangle} \\
\text{A} & {}^{\langle} \text{u} {}^{\rangle} & {}^{\langle} \text{u} {}^{\rangle} \\[6pt]
\text{B} & {}^{\langle} \text{B} {}^{\rangle} & {}^{\langle} \text{B} {}^{\rangle} \\
\text{B} & {}^{\langle} \text{B} {}^{\rangle} & {}^{\langle} \text{i} {}^{\rangle} \\
\text{B} & {}^{\langle} \text{i} {}^{\rangle} & {}^{\langle} \text{B} {}^{\rangle} \\
\text{B} & {}^{\langle} \text{i} {}^{\rangle} & {}^{\langle} \text{i} {}^{\rangle} \\[6pt]
{}^{\langle} \text{A} {}^{\rangle} & {}^{\langle} \text{A} {}^{\rangle} & {}^{\langle} \text{A} {}^{\rangle} \\
{}^{\langle} \text{B} {}^{\rangle} & {}^{\langle} \text{B} {}^{\rangle} & {}^{\langle} \text{B} {}^{\rangle} \\
{}^{\langle} \text{i} {}^{\rangle} & {}^{\langle} \text{i} {}^{\rangle} & {}^{\langle} \text{i} {}^{\rangle} \\
{}^{\langle} \text{u} {}^{\rangle} & {}^{\langle} \text{u} {}^{\rangle} & {}^{\langle} \text{u} {}^{\rangle}
\end{array}\)


Another pair of reflective extensions, \(\mathrm{Ref}^1 (\text{A} | E_2)\!\) and \(\mathrm{Ref}^1 (\text{B} | E_2),\!\) is presented in Tables 84 and 85, respectively. These are identical to the corresponding free variants, \(\mathrm{Ref}^1 (\text{A})~\!\) and \(\mathrm{Ref}^1 (\text{B}),~\!\) except for the entries constrained by the following semantic equations.

\(\begin{matrix} E_2 : & {}^{\langle\langle} \text{A} {}^{\rangle\rangle} = \text{A}, & {}^{\langle\langle} \text{B} {}^{\rangle\rangle} = \text{B}, & {}^{\langle\langle} \text{i} {}^{\rangle\rangle} = \text{i}, & {}^{\langle\langle} \text{u} {}^{\rangle\rangle} = \text{u}. \end{matrix}\)
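By contrast with \(E_1,\!\) the equations \(E_2\!\) cancel a double layer of quotation, returning the second-order signs to the plain signs. Assuming, hypothetically, that the equations are meant to propagate through further quotation, only the parity of the quotation depth matters; a sketch in the same encoding:

```python
def normalize_E2(sign):
    """E_2 normal form: two layers of quotation cancel, so <<A>> reduces to A
    and only quotation depth 0 or 1 survives."""
    depth = 0
    while sign.startswith("<") and sign.endswith(">"):
        sign, depth = sign[1:-1], depth + 1
    return "<" + sign + ">" if depth % 2 else sign

assert normalize_E2("<<A>>") == "A"
assert normalize_E2("<A>") == "<A>"
```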


\(\text{Table 84.} ~~ \text{Reflective Extension} ~ \mathrm{Ref}^1 (\text{A} | E_2)\!\)

\(\begin{array}{ccc}
\text{Object} & \text{Sign} & \text{Interpretant} \\[6pt]
\text{A} & {}^{\langle} \text{A} {}^{\rangle} & {}^{\langle} \text{A} {}^{\rangle} \\
\text{A} & {}^{\langle} \text{A} {}^{\rangle} & {}^{\langle} \text{i} {}^{\rangle} \\
\text{A} & {}^{\langle} \text{i} {}^{\rangle} & {}^{\langle} \text{A} {}^{\rangle} \\
\text{A} & {}^{\langle} \text{i} {}^{\rangle} & {}^{\langle} \text{i} {}^{\rangle} \\[6pt]
\text{B} & {}^{\langle} \text{B} {}^{\rangle} & {}^{\langle} \text{B} {}^{\rangle} \\
\text{B} & {}^{\langle} \text{B} {}^{\rangle} & {}^{\langle} \text{u} {}^{\rangle} \\
\text{B} & {}^{\langle} \text{u} {}^{\rangle} & {}^{\langle} \text{B} {}^{\rangle} \\
\text{B} & {}^{\langle} \text{u} {}^{\rangle} & {}^{\langle} \text{u} {}^{\rangle} \\[6pt]
{}^{\langle} \text{A} {}^{\rangle} & \text{A} & \text{A} \\
{}^{\langle} \text{B} {}^{\rangle} & \text{B} & \text{B} \\
{}^{\langle} \text{i} {}^{\rangle} & \text{A} & \text{A} \\
{}^{\langle} \text{u} {}^{\rangle} & \text{B} & \text{B}
\end{array}\)


\(\text{Table 85.} ~~ \text{Reflective Extension} ~ \mathrm{Ref}^1 (\text{B} | E_2)\!\)

\(\begin{array}{ccc}
\text{Object} & \text{Sign} & \text{Interpretant} \\[6pt]
\text{A} & {}^{\langle} \text{A} {}^{\rangle} & {}^{\langle} \text{A} {}^{\rangle} \\
\text{A} & {}^{\langle} \text{A} {}^{\rangle} & {}^{\langle} \text{u} {}^{\rangle} \\
\text{A} & {}^{\langle} \text{u} {}^{\rangle} & {}^{\langle} \text{A} {}^{\rangle} \\
\text{A} & {}^{\langle} \text{u} {}^{\rangle} & {}^{\langle} \text{u} {}^{\rangle} \\[6pt]
\text{B} & {}^{\langle} \text{B} {}^{\rangle} & {}^{\langle} \text{B} {}^{\rangle} \\
\text{B} & {}^{\langle} \text{B} {}^{\rangle} & {}^{\langle} \text{i} {}^{\rangle} \\
\text{B} & {}^{\langle} \text{i} {}^{\rangle} & {}^{\langle} \text{B} {}^{\rangle} \\
\text{B} & {}^{\langle} \text{i} {}^{\rangle} & {}^{\langle} \text{i} {}^{\rangle} \\[6pt]
{}^{\langle} \text{A} {}^{\rangle} & \text{A} & \text{A} \\
{}^{\langle} \text{B} {}^{\rangle} & \text{B} & \text{B} \\
{}^{\langle} \text{i} {}^{\rangle} & \text{B} & \text{B} \\
{}^{\langle} \text{u} {}^{\rangle} & \text{A} & \text{A}
\end{array}\)


By calling attention to their intended status as semantic equations, meaning that the signs are set equal with respect to the semantic equivalence classes they inhabit or the objects they denote, I hope to emphasize that these equations are able to say something significant about objects.

Question. Redo \(F(S)\!\) over \(W\!\)? Use \(W_F = O \cup F\!\)?

6.44. Reflections on Closure

The previous section dealt with a formal operation that was dubbed reflection and found that it was closely associated with the device of quotation, which makes it possible to treat signs as objects by making or finding other signs that refer to them. Clearly, an ability to take signs as objects is one component of a cognitive capacity for reflection. But a genuine and less superficial species of reflection can do more than grasp just the isolated signs and the separate interpretants of the thinking process as objects: it can pause the fleeting procession of signs upon signs and seize their generic patterns of transition as valid objects of discussion. This involves the conception and composition of not just higher-order signs but also higher-type signs, orders of signs that aspire to catch whole sign relations up in one breath.

6.45. Intelligence ⇒ Critical Reflection

It is just at this point that the discussion of sign relations is forced to contemplate the prospects of intelligent interpretation. For starters, I consider an intelligent interpreter to be one that can pursue alternative interpretations of a sign or text and pick one that makes sense. If an interpreter can find all of the most sensible interpretations and rank them on a scale of meaningfulness, without losing the time required to act on their import, then so much the better.

Intelligent interpreters are a centrally important species of intelligent agents in general, since hardly any intelligent action at all can be taken without the ability to interpret signs and texts, even if read only in the sense of “the text of nature”. In other words, making sense of dubious signs is a central component of all sensible action.

Thus, I regard the determining trait of intelligent agency to be its response to non-deterministic situations. Agents that find themselves at junctures of unavoidable uncertainty are required by objective features of the situation to gather together the available options and select among the multitude of possibilities a few choices that further their active purposes.

Reflection enables an interpreter to stand back from signs and view them as objects, that is, as objective possibilities for choice to be followed up in a critical and experimental fashion rather than pursued as automatic reactions whose habitual connections cannot be questioned.

The mark of an intelligent interpreter that is relevant in this context is the ability to face (encounter, countenance) a non-deterministic juncture of choices in a sign relation and to respond to it as such with actions appropriate to the uncertain nature of the situation.

[Variants]

An intelligent interpreter is one that can follow up several different interpretations at once, experimenting with the denotations and connotations that are available in a non-deterministic sign relation, …

An intelligent interpreter is one that can face a situation of non-deterministic choice and choose an interpretation (denotation or connotation) that fits the objective and syntactic context.

An intelligent interpreter is one that can deal with non-deterministic situations, that is, one that can follow up several lines of possible meaning for signs and read between the lines to pick out meanings that are sensitive to both the objective situation and the syntactic context of interpretation.

An intelligent interpreter is one that can reflect critically on the process of interpretation. This involves a capacity for standing back from signs and interpretants and viewing them as objects, seeing their connections as objective possibilities for choice, to be compared with each other and tested against the objective and syntactic contexts, rather than taking the usual paths and responding in a reflexive manner with the …

To do this it is necessary to interrupt the customary connections and favored associations of signs and interpretants in a sign relation and to consider a plurality of interpretations, not merely to pursue many lines of meaning in a parallel or experimental fashion, but to question seriously whether anything at all is meant by a sign.

… follow up alternatives in an experimental fashion, evaluate choices with a sensitivity to both the objective and syntactic contexts.

The mark of intelligence that is relevant to this context is the ability to comprehend a non-deterministic situation of choice precisely as it is, …

If a species of determinism is nevertheless expected, then the extra measure of determination must be attributed to a worldly context of objects and signs extending beyond those taken into account by the sign relation in question, or else to powers of choice as yet unformalized in the character of interpreters.

This means that the recursions involved in the process of interpretation, besides having recourse to the inner resources of interpreters, will also recur to interfaces with objective situations and syntactic contexts. Interpretation, to be intelligent, must have the capacity to address the full scope of objects and signs and must be given the room to operate interactively with everything up to and including the undetermined horizons of the external world.

6.46. Looking Ahead

Throughout this project, the “meta” issue that has been raised here will be treated at three different levels of sophistication.

  1. The way I have chosen to deal with this issue in the present case is not by injecting more features of the informal discussion into the dialogue of \(\text{A}\!\) and \(\text{B},\!\) but by trying to imagine how agents like \(\text{A}\!\) and \(\text{B}\!\) might be enabled to reflect on these aspects of their own discussion.
  2. In the series of examples that I will use to develop further aspects of the \(\text{A}\!\) and \(\text{B}\!\) dialogue, several different ways of extending the sign relations for \(\text{A}\!\) and \(\text{B}\!\) will be explored. The most pressing task is to capture facts of the following sort.

    \(\text{A}\!\) knows that \(\text{B}\!\) uses \({}^{\backprime\backprime} \text{i} {}^{\prime\prime}\!\) to denote \(\text{B}\!\) and \({}^{\backprime\backprime} \text{u} {}^{\prime\prime}\!\) to denote \(\text{A}.\!\)
    \(\text{B}\!\) knows that \(\text{A}\!\) uses \({}^{\backprime\backprime} \text{i} {}^{\prime\prime}\!\) to denote \(\text{A}\!\) and \({}^{\backprime\backprime} \text{u} {}^{\prime\prime}\!\) to denote \(\text{B}.\!\)

    Toward this aim, I will present a variety of constructions for motivating extended, indexed, or situated sign relations, all designed to meet the following requirements.

    1. To incorporate higher components of “meta-knowledge” about language use as it works in a community of interpreters, which are in reality the most basic ingredients of pragmatic competence.
    2. To amalgamate the fragmentary sign relations of individual interpreters into “broader-minded” sign relations, in the use and understanding of which a plurality of agents can share.

    Work at this level of concrete investigation will proceed in an incremental fashion, augmenting the discussion of \(\text{A}\!\) and \(\text{B}\!\) with features of increasing interest and relevance to inquiry. The plan for this series of developments is as follows.

    1. I start by gathering materials and staking out intermediate goals for investigation. This involves making a tentative foray into ways that dimensions of directed change and motivated value can be added to the sign relations initially given for \(\text{A}\!\) and \(\text{B}.\!\)
    2. With this preparation, I return to the dialogue of \(\text{A}\!\) and \(\text{B}\!\) and pursue ways of integrating their independent selections of information into a unified system of interpretation.
      1. First, I employ the sign relations \(L_\text{A}\!\) and \(L_\text{B}\!\) to illustrate two basic kinds of set-theoretic merges, the ordinary or simple union and the indexed or situated union of extensional relations (a minimal sketch of the two merges appears after this list). On review, both forms of combination are observed to fall short of what is needed to constitute a shared sign relation with the desired characteristics.
      2. Next, I present two other ways of extending the sign relations \(L_\text{A}\!\) and \(L_\text{B}\!\) into a common system of interpretation. These extensions succeed in capturing further aspects of what interpreters know about their shared language use. Although motivated on different grounds, the alternative constructions that develop coincide in exactly the same abstract structure.
  3. As this project begins to take on sign relations that are complex enough to convey the impression of genuine inquiry processes, a fuller explication of this issue will become mandatory. Eventually, this will demand a concept of higher-order sign relations, whose objects, signs, and interpretants can all be complete sign relations in their own right.
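For the merges mentioned in the first item above, here is a minimal sketch under stated assumptions: the sign relations are cut down to a few illustrative triples in the hypothetical angle-bracket encoding used earlier, and the indexed or situated union is modeled simply by tagging each triple with its interpreter.

```python
# Minimal sketch (hypothetical encoding): simple union versus indexed union of
# two toy fragments of the sign relations for interpreters A and B.

frag_A = {("A", "<A>", "<A>"), ("A", "<i>", "<i>"),
          ("B", "<B>", "<B>"), ("B", "<u>", "<u>")}
frag_B = {("A", "<A>", "<A>"), ("A", "<u>", "<u>"),
          ("B", "<B>", "<B>"), ("B", "<i>", "<i>")}

# Simple union: the merged relation no longer records who uses which sign, so
# "<i>" and "<u>" each end up associated with both objects.
simple_union = frag_A | frag_B

# Indexed (situated) union: each triple is tagged with its interpreter, so the
# combined relation keeps track of whose usage underwrites each triple.
indexed_union = {(t, "A") for t in frag_A} | {(t, "B") for t in frag_B}

assert len(simple_union) == 6     # the two shared triples coincide
assert len(indexed_union) == 8    # the tags keep all eight contributions distinct
```

Even in this crude form the tagged version records the kind of provenance information a shared sign relation is meant to capture, though, as noted above, both merges still fall short of the full construction.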

In principle, the successive grades of complexity enumerated above could be ascended in a straightforward way, if only the steps did not go straight up the cliffs of abstraction. As always, the kinds of intentional objects that are the toughest to face are those whose realization is so distant that even the gear needed to approach their construction is not yet in existence.

6.50. Revisiting the Source

