Directory talk:Jon Awbrey/Papers/Differential Logic


Later Version

Note. Just now found this later version — file name: "Nexist -- Differential Logic", dated 31 Oct 2002 — with additional discussion, perhaps from the Sémiotique et Communication List. Will need to reconcile it with the other version.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Differential Logic

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Note 1

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

One of the first things that you can do, once you
have a really decent calculus for boolean functions
or propositional logic, whatever you want to call it,
is to compute the differentials of these functions or
propositions.

Now there are many ways to dance around this idea, and I
feel like I have tried them all before getting down to
acting on it, and there are many issues of interpretation
and justification that we will have to clear up after the
fact, that is, before we can be sure that it all really
makes any sense, but I think this time I'll just jump in
and show you the form in which this idea first came to me.

Start with a proposition of the form x & y, which
I graph as two labels attached to a root node, so:

o---------------------------------------o
|                                       |
|                  x y                  |
|                   @                   |
|                                       |
o---------------------------------------o
|                x and y                |
o---------------------------------------o

Written as a string, this is just the concatenation "x y".

The proposition xy may be taken as a boolean function f(x, y)
having the abstract type f : B x B -> B, where B = {0, 1} is
read in such a way that 0 means "false" and 1 means "true".
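
If it helps to see this in executable form, here is a minimal
sketch in Python (the names are my own, purely illustrative) of
the conjunction as a function on B x B:

    # B = {0, 1}, with 0 read as "false" and 1 as "true".
    B = (0, 1)

    def f(x, y):
        """The proposition xy, that is, the conjunction of x and y."""
        return x & y

    # Tabulate f over the four cells of the universe of discourse.
    for x in B:
        for y in B:
            print(f"f({x}, {y}) = {f(x, y)}")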

In this style of graphical representation,
the value "true" looks like a blank label
and the value "false" looks like an edge.

o---------------------------------------o
|                                       |
|                                       |
|                   @                   |
|                                       |
o---------------------------------------o
|                 true                  |
o---------------------------------------o

o---------------------------------------o
|                                       |
|                   o                   |
|                   |                   |
|                   @                   |
|                                       |
o---------------------------------------o
|                 false                 |
o---------------------------------------o

Back to the proposition xy.  Imagine yourself standing
in a fixed cell of the corresponding venn diagram, say,
the cell where the proposition xy is true, as pictured:

o---------------------------------------o
|                                       |
|                o     o                |
|               / \   / \               |
|              /   \ /   \              |
|             /     ·     \             |
|            /     /%\     \            |
|           /     /%%%\     \           |
|          /     /%%%%%\     \          |
|         /     /%%%%%%%\     \         |
|        /     /%%%%%%%%%\     \        |
|       o  x  o%%%%%%%%%%%o  y  o       |
|        \     \%%%%%%%%%/     /        |
|         \     \%%%%%%%/     /         |
|          \     \%%%%%/     /          |
|           \     \%%%/     /           |
|            \     \%/     /            |
|             \     ·     /             |
|              \   / \   /              |
|               \ /   \ /               |
|                o     o                |
|                                       |
o---------------------------------------o

Now ask yourself:  What is the value of the
proposition xy at a distance of dx and dy
from the cell xy where you are standing?

Don't think about it -- just compute:

o---------------------------------------o
|                                       |
|              dx o   o dy              |
|                / \ / \                |
|             x o---@---o y             |
|                                       |
o---------------------------------------o
|         (x + dx) and (y + dy)         |
o---------------------------------------o

To make future graphs easier to draw in Ascii land,
I will use devices like @=@=@ and o=o=o to identify
several nodes into one, as in this next redrawing:

o---------------------------------------o
|                                       |
|              x  dx y  dy              |
|              o---o o---o              |
|               \  | |  /               |
|                \ | | /                |
|                 \| |/                 |
|                  @=@                  |
|                                       |
o---------------------------------------o
|         (x + dx) and (y + dy)         |
o---------------------------------------o

However you draw it, these expressions follow because the
expression x + dx, in which the plus sign indicates addition
mod 2 in B and thus corresponds to exclusive-or in logic,
parses to a graph of the following form:

o---------------------------------------o
|                                       |
|                x    dx                |
|                 o---o                 |
|                  \ /                  |
|                   @                   |
|                                       |
o---------------------------------------o
|                 x + dx                |
o---------------------------------------o
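
In code, addition mod 2 over B is nothing more than the
exclusive-or of two bits; a quick sketch of that fact (again
with names of my own choosing):

    def add_mod2(a, b):
        """x + dx over B = {0, 1}: addition mod 2, i.e. exclusive-or."""
        return (a + b) % 2

    # Check that it agrees with the bitwise exclusive-or in all four cases.
    for a in (0, 1):
        for b in (0, 1):
            assert add_mod2(a, b) == (a ^ b)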

Next question:  What is the difference between
the value of the proposition xy "over there" and
the value of the proposition xy where you are, all
expressed as a general formula, of course?  Here 'tis:

o---------------------------------------o
|                                       |
|        x  dx y  dy                    |
|        o---o o---o                    |
|         \  | |  /                     |
|          \ | | /                      |
|           \| |/         x y           |
|            o=o-----------o            |
|             \           /             |
|              \         /              |
|               \       /               |
|                \     /                |
|                 \   /                 |
|                  \ /                  |
|                   @                   |
|                                       |
o---------------------------------------o
|      ((x + dx) & (y + dy)) - xy       |
o---------------------------------------o

Oh, I forgot to mention:  Computed over B,
plus and minus are the very same operation.
This will make the relationship between the
differential and the integral parts of the
resulting calculus slightly stranger than
usual, but never mind that now.

Last question, for now:  What is the value of this expression
from your current standpoint, that is, evaluated at the point
where xy is true?  Well, substituting 1 for x and 1 for y in
the graph amounts to the same thing as erasing those labels:

o---------------------------------------o
|                                       |
|           dx    dy                    |
|        o---o o---o                    |
|         \  | |  /                     |
|          \ | | /                      |
|           \| |/                       |
|            o=o-----------o            |
|             \           /             |
|              \         /              |
|               \       /               |
|                \     /                |
|                 \   /                 |
|                  \ /                  |
|                   @                   |
|                                       |
o---------------------------------------o
|      ((1 + dx) & (1 + dy)) - 1·1      |
o---------------------------------------o

And this is equivalent to the following graph:

o---------------------------------------o
|                                       |
|                dx   dy                |
|                 o   o                 |
|                  \ /                  |
|                   o                   |
|                   |                   |
|                   @                   |
|                                       |
o---------------------------------------o
|               dx or dy                |
o---------------------------------------o
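
As a hedge against slips of arithmetic, here is a brute-force
check in Python that the evaluated difference really does come
out as the inclusive disjunction dx or dy (the helper names are
mine):

    def conj(x, y):
        """The proposition xy."""
        return x & y

    def diff_at_xy(dx, dy):
        """((1 + dx)(1 + dy)) - 1·1, with + and - both taken mod 2."""
        return (conj((1 + dx) % 2, (1 + dy) % 2) - conj(1, 1)) % 2

    for dx in (0, 1):
        for dy in (0, 1):
            assert diff_at_xy(dx, dy) == (dx | dy)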

Enough for the moment.
Explanation to follow.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Note 2

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

We have just met with the fact that
the differential of the "and" is
the "or" of the differentials.

x and y  --Diff-->  dx or dy.

o---------------------------------------o
|                                       |
|                             dx   dy   |
|                              o   o    |
|                               \ /     |
|                                o      |
|      x y                       |      |
|       @       --Diff-->        @      |
|                                       |
o---------------------------------------o
|      x y      --Diff-->   ((dx)(dy))  |
o---------------------------------------o

It will be necessary to develop a more refined analysis of
this statement directly, but that is roughly the nub of it.

If the form of the above statement reminds you of DeMorgan's rule,
it is no accident, as differentiation and negation turn out to be
closely related operations.  Indeed, one can find discussions of
logical difference calculus in the Boole-DeMorgan correspondence,
and Peirce also made use of differential operators in a logical
context, but the exploration of these ideas has been hampered by
a number of factors, not the least of which being the lack of a
syntax adequate to handle the complexity of the expressions that
evolve.

For my part, it was definitely a case of the calculus being smarter
than the calculator thereof.  The graphical pictures were catalytic
in their power over my thinking process, leading me so quickly past
so many obstructions that I did not have time to think about all of
the difficulties that would otherwise have inhibited the derivation.
It did eventually become necessary to write all this up in a linear
script, and to deal with the various problems of interpretation and
justification that I could imagine, but that took another 120 pages,
and so, if you don't like this intuitive approach, then let that be
your sufficient notice.

Let us run through the initial example again, this time attempting
to interpret the formulas that develop at each stage along the way.

We begin with a proposition or a boolean function f(x, y) = xy.

o---------------------------------------o
|                                       |
|                o     o                |
|               / \   / \               |
|              /   \ /   \              |
|             /     ·     \             |
|            /     /`\     \            |
|           /     /```\     \           |
|          /     /`````\     \          |
|         /     /```````\     \         |
|        /     /`````````\     \        |
|       o  x  o`````f`````o  y  o       |
|        \     \`````````/     /        |
|         \     \```````/     /         |
|          \     \`````/     /          |
|           \     \```/     /           |
|            \     \`/     /            |
|             \     ·     /             |
|              \   / \   /              |
|               \ /   \ /               |
|                o     o                |
|                                       |
o---------------------------------------o
|                                       |
|                  x y                  |
|                   @                   |
|                                       |
o---------------------------------------o
| f =              x y                  |
o---------------------------------------o

A function like this has an abstract type and a concrete type.
The abstract type is what we invoke when we write things like
f : B x B -> B or f : B^2 -> B.  The concrete type takes into
account the qualitative dimensions or the "units" of the case,
which can be explained as follows.

1.  Let X be the set of values {(x), x} = {not x, x}.

2.  Let Y be the set of values {(y), y} = {not y, y}.

Then interpret the usual propositions about x, y
as functions of the concrete type f : X x Y -> B.

We are going to consider various "operators" on these functions.
Here, an operator F is a function that takes one function f into
another function Ff.

The first couple of operators that we need to consider are logical analogues
of those that occur in the classical "finite difference calculus", namely:

1.  The "difference" operator [capital Delta], written here as D.

2.  The "enlargement" operator [capital Epsilon], written here as E.

These days, E is more often called the "shift" operator.

In order to describe the universe in which these operators operate,
it will be necessary to enlarge our original universe of discourse.
We mount up from the space U = X x Y to its "differential extension",
EU = U x dU = X x Y x dX x dY, with dX = {(dx), dx} and dY = {(dy), dy}.
The interpretations of these new symbols can be diverse, but the easiest
for now is just to say that dx means "change x" and dy means "change y".
To draw the differential extension EU of our present universe U = X x Y
as a venn diagram, it would take us four logical dimensions X, Y, dX, dY,
but we can project a suggestion of what it's about on the universe X x Y
by drawing arrows that cross designated borders, labeling the arrows as
dx when crossing the border between x and (x) and as dy when crossing
the border between y and (y), in either direction, in either case.

o---------------------------------------o
|                                       |
|                o     o                |
|               / \   / \               |
|              /   \ /   \              |
|             /     ·     \             |
|            / dy  /`\  dx \            |
|           /   ^ /```\ ^   \           |
|          /     \`````/     \          |
|         /     /`\```/`\     \         |
|        /     /```\`/```\     \        |
|       o  x  o`````o`````o  y  o       |
|        \     \`````````/     /        |
|         \     \```````/     /         |
|          \     \`````/     /          |
|           \     \```/     /           |
|            \     \`/     /            |
|             \     ·     /             |
|              \   / \   /              |
|               \ /   \ /               |
|                o     o                |
|                                       |
o---------------------------------------o

We can form propositions from these differential variables in the same way
that we would any other logical variables, for instance, interpreting the
proposition (dx (dy)) to say "dx => dy", in other words, however you wish
to take it, whether indicatively or injunctively, as saying something to
the effect that there is "no change in x without a change in y".
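
To make the reading concrete, one might encode the proposition
(dx (dy)), read as dx => dy, and list its models over dU = dX x dY;
a small sketch along those lines (names mine):

    def implies(a, b):
        """(a (b)) in cactus syntax: a => b over B = {0, 1}."""
        return 1 - (a & (1 - b))

    # Models of "no change in x without a change in y" over dU.
    models = [(dx, dy) for dx in (0, 1) for dy in (0, 1) if implies(dx, dy)]
    print(models)   # [(0, 0), (0, 1), (1, 1)]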

Given the proposition f(x, y) in U = X x Y,
the (first order) 'enlargement' of f is the
proposition Ef in EU that is defined by the
formula Ef(x, y, dx, dy) = f(x + dx, y + dy).

In the example f(x, y) = xy, we obtain:

Ef(x, y, dx, dy)  =  (x + dx)(y + dy).

o---------------------------------------o
|                                       |
|              x  dx y  dy              |
|              o---o o---o              |
|               \  | |  /               |
|                \ | | /                |
|                 \| |/                 |
|                  @=@                  |
|                                       |
o---------------------------------------o
| Ef =       (x, dx) (y, dy)            |
o---------------------------------------o
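
The enlargement operator is easy to mechanize; here is a sketch
of E acting on our running example, with all the helper names
being my own invention:

    def enlarge(f):
        """Return Ef, where Ef(x, y, dx, dy) = f(x + dx, y + dy), addition mod 2."""
        return lambda x, y, dx, dy: f((x + dx) % 2, (y + dy) % 2)

    f  = lambda x, y: x & y      # f(x, y) = xy
    Ef = enlarge(f)

    # Sample value: at the cell xy, with both features changing, Ef comes out 0.
    print(Ef(1, 1, 1, 1))        # 0, since (1 + 1)(1 + 1) = 0·0 = 0 over B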

Given the proposition f(x, y) in U = X x Y,
the (first order) 'difference' of f is the
proposition Df in EU that is defined by the
formula Df = Ef - f, or, written out in full,
Df(x, y, dx, dy) = f(x + dx, y + dy) - f(x, y).

In the example f(x, y) = xy, the result is:

Df(x, y, dx, dy)  =  (x + dx)(y + dy) - xy.

o---------------------------------------o
|                                       |
|        x  dx y  dy                    |
|        o---o o---o                    |
|         \  | |  /                     |
|          \ | | /                      |
|           \| |/         x y           |
|            o=o-----------o            |
|             \           /             |
|              \         /              |
|               \       /               |
|                \     /                |
|                 \   /                 |
|                  \ /                  |
|                   @                   |
|                                       |
o---------------------------------------o
| Df =       ((x, dx)(y, dy), xy)       |
o---------------------------------------o
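
In the same style, the difference operator amounts to one more
line on top of E; a sketch under the same assumptions as before
(my own names throughout):

    def enlarge(f):
        """Ef(x, y, dx, dy) = f(x + dx, y + dy), addition mod 2."""
        return lambda x, y, dx, dy: f((x + dx) % 2, (y + dy) % 2)

    def difference(f):
        """Df = Ef - f, subtraction mod 2 (the same operation as addition over B)."""
        return lambda x, y, dx, dy: (enlarge(f)(x, y, dx, dy) - f(x, y)) % 2

    f  = lambda x, y: x & y      # f(x, y) = xy
    Df = difference(f)
    print(Df(1, 1, 0, 1))        # 1: changing y alone flips the value of xy at the cell xy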

We did not yet go through the trouble to interpret this (first order)
"difference of conjunction" fully, but were happy simply to evaluate
it with respect to a single location in the universe of discourse,
namely, at the point picked out by the singular proposition xy,
which is as much as to say, at the place where x = 1 and y = 1.
This evaluation is written in the form Df|xy or Df|<1, 1>,
and we arrived at the locally applicable law that states
that f = xy = x & y  =>  Df|xy = ((dx)(dy)) = dx or dy.

o---------------------------------------o
|                                       |
|                 dx dy                 |
|                   ^                   |
|                o  |  o                |
|               / \ | / \               |
|              /   \|/   \              |
|             /dy   |   dx\             |
|            /(dx) /|\ (dy)\            |
|           /   ^ /`|`\ ^   \           |
|          /     \``|``/     \          |
|         /     /`\`|`/`\     \         |
|        /     /```\|/```\     \        |
|       o  x  o`````o`````o  y  o       |
|        \     \`````````/     /        |
|         \     \```````/     /         |
|          \     \`````/     /          |
|           \     \```/     /           |
|            \     \`/     /            |
|             \     ·     /             |
|              \   / \   /              |
|               \ /   \ /               |
|                o     o                |
|                                       |
o---------------------------------------o
|                                       |
|                dx   dy                |
|                 o   o                 |
|                  \ /                  |
|                   o                   |
|                   |                   |
|                   @                   |
|                                       |
o---------------------------------------o
| Df|xy =       ((dx)(dy))              |
o---------------------------------------o

The picture illustrates the analysis of the inclusive disjunction ((dx)(dy))
into the exclusive disjunction:  dx(dy) + dy(dx) + dx dy, a proposition that
may be interpreted to say "change x or change y or both".  And this can be
recognized as just what you need to do if you happen to find yourself in
the center cell and desire a detailed description of ways to depart it.
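
Since the claim is purely finite, it can be checked by exhaustion;
a short Python sketch (names mine) confirming that the inclusive
disjunction ((dx)(dy)) coincides with the exclusive sum
dx(dy) + dy(dx) + dx dy:

    # Verify ((dx)(dy)) = dx(dy) + dy(dx) + dx·dy, the sum taken mod 2.
    for dx in (0, 1):
        for dy in (0, 1):
            inclusive = 1 - (1 - dx) * (1 - dy)                    # dx or dy
            exclusive = (dx*(1 - dy) + dy*(1 - dx) + dx*dy) % 2    # sum of the three disjoint cases
            assert inclusive == exclusive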

Jon Awbrey --

Formerly Of:
Center Cell,
Chateau Dif.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Note 3

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Last time we computed what will variously be called
the "difference map", the "difference proposition",
or the "local proposition" Df_p for the proposition
f(x, y) = xy at the point p where x = 1 and y = 1.

In the universe U = X x Y, the four propositions
xy, x(y), (x)y, (x)(y) that indicate the "cells",
or the smallest regions of the venn diagram, are
called "singular propositions".  These serve as
an alternative notation for naming the points
<1, 1>, <1, 0>, <0, 1>, <0, 0>, respectively.

Thus, we can write Df_p = Df|p = Df|<1, 1> = Df|xy,
so long as we know the frame of reference in force.

Sticking with the example f(x, y) = xy, let us compute the
value of the difference proposition Df at all of the points.

o---------------------------------------o
|                                       |
|        x  dx y  dy                    |
|        o---o o---o                    |
|         \  | |  /                     |
|          \ | | /                      |
|           \| |/         x y           |
|            o=o-----------o            |
|             \           /             |
|              \         /              |
|               \       /               |
|                \     /                |
|                 \   /                 |
|                  \ /                  |
|                   @                   |
|                                       |
o---------------------------------------o
| Df =      ((x, dx)(y, dy), xy)        |
o---------------------------------------o

o---------------------------------------o
|                                       |
|           dx    dy                    |
|        o---o o---o                    |
|         \  | |  /                     |
|          \ | | /                      |
|           \| |/                       |
|            o=o-----------o            |
|             \           /             |
|              \         /              |
|               \       /               |
|                \     /                |
|                 \   /                 |
|                  \ /                  |
|                   @                   |
|                                       |
o---------------------------------------o
| Df|xy =      ((dx)(dy))               |
o---------------------------------------o

o---------------------------------------o
|                                       |
|              o                        |
|           dx |  dy                    |
|        o---o o---o                    |
|         \  | |  /                     |
|          \ | | /         o            |
|           \| |/          |            |
|            o=o-----------o            |
|             \           /             |
|              \         /              |
|               \       /               |
|                \     /                |
|                 \   /                 |
|                  \ /                  |
|                   @                   |
|                                       |
o---------------------------------------o
| Df|x(y) =      (dx) dy                |
o---------------------------------------o

o---------------------------------------o
|                                       |
|        o                              |
|        |  dx    dy                    |
|        o---o o---o                    |
|         \  | |  /                     |
|          \ | | /         o            |
|           \| |/          |            |
|            o=o-----------o            |
|             \           /             |
|              \         /              |
|               \       /               |
|                \     /                |
|                 \   /                 |
|                  \ /                  |
|                   @                   |
|                                       |
o---------------------------------------o
| Df|(x)y =      dx (dy)                |
o---------------------------------------o

o---------------------------------------o
|                                       |
|        o     o                        |
|        |  dx |  dy                    |
|        o---o o---o                    |
|         \  | |  /                     |
|          \ | | /       o   o          |
|           \| |/         \ /           |
|            o=o-----------o            |
|             \           /             |
|              \         /              |
|               \       /               |
|                \     /                |
|                 \   /                 |
|                  \ /                  |
|                   @                   |
|                                       |
o---------------------------------------o
| Df|(x)(y) =     dx dy                 |
o---------------------------------------o

The easy way to visualize the values of these graphical
expressions is just to notice the following equivalents:

o---------------------------------------o
|                                       |
|  x                                    |
|  o-o-o-...-o-o-o                      |
|   \           /                       |
|    \         /                        |
|     \       /                         |
|      \     /                x         |
|       \   /                 o         |
|        \ /                  |         |
|         @         =         @         |
|                                       |
o---------------------------------------o
|  (x, , ... , , )  =        (x)        |
o---------------------------------------o

o---------------------------------------o
|                                       |
|                o                      |
| x_1 x_2   x_k  |                      |
|  o---o-...-o---o                      |
|   \           /                       |
|    \         /                        |
|     \       /                         |
|      \     /                          |
|       \   /                           |
|        \ /             x_1 ... x_k    |
|         @         =         @         |
|                                       |
o---------------------------------------o
| (x_1, ..., x_k, ()) = x_1 · ... · x_k |
o---------------------------------------o

Laying out the arrows on the augmented venn diagram,
one gets a picture of a "differential vector field".

o---------------------------------------o
|                                       |
|                 dx dy                 |
|                   ^                   |
|                o  |  o                |
|               / \ | / \               |
|              /   \|/   \              |
|             /dy   |   dx\             |
|            /(dx) /|\ (dy)\            |
|           /   ^ /`|`\ ^   \           |
|          /     \``|``/     \          |
|         /     /`\`|`/`\     \         |
|        /     /```\|/```\     \        |
|       o  x  o`````o`````o  y  o       |
|        \     \`````````/     /        |
|         \  o---->```<----o  /         |
|          \  dy \``^``/ dx  /          |
|           \(dx) \`|`/ (dy)/           |
|            \     \|/     /            |
|             \     |     /             |
|              \   /|\   /              |
|               \ / | \ /               |
|                o  |  o                |
|                   |                   |
|                dx | dy                |
|                   o                   |
|                                       |
o---------------------------------------o

This really just constitutes a depiction of
the interpretations in EU = X x Y x dX x dY
that satisfy the difference proposition Df,
namely, these:

1.   x  y  dx  dy
2.   x  y  dx (dy)
3.   x  y (dx) dy
4.   x (y)(dx) dy
5.  (x) y  dx (dy)
6.  (x)(y) dx  dy

By inspection, it is fairly easy to understand Df
as telling you what you have to do from each point
of U in order to change the value borne by f(x, y).
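
A brute-force enumeration bears out the listing above; the
following sketch (helper names mine) collects every interpretation
in EU that satisfies Df and counts them:

    def enlarge(f):
        return lambda x, y, dx, dy: f((x + dx) % 2, (y + dy) % 2)

    def difference(f):
        return lambda x, y, dx, dy: (enlarge(f)(x, y, dx, dy) - f(x, y)) % 2

    f  = lambda x, y: x & y
    Df = difference(f)

    models = [(x, y, dx, dy)
              for x in (0, 1) for y in (0, 1)
              for dx in (0, 1) for dy in (0, 1)
              if Df(x, y, dx, dy)]
    print(len(models))   # 6, matching the six interpretations listed above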

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Note 4

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

We have been studying the action of the difference operator D,
also known as the "localization operator", on the proposition
f : X x Y -> B that is commonly known as the conjunction x·y.
We described Df as a (first order) differential proposition,
that is, a proposition of the type Df : X x Y x dX x dY -> B.
Abstracting from the augmented venn diagram that illustrates
how the "models", or the "satisfying interpretations", of Df
distribute within the extended universe EU = X x Y x dX x dY,
we can depict Df in the form of a "digraph" or directed graph,
one whose points are labeled with the elements of  U =  X x Y
and whose arrows are labeled with the elements of dU = dX x dY.

o---------------------------------------o
|                                       |
|                 x · y                 |
|                                       |
|                   o                   |
|                  ^^^                  |
|                 / | \                 |
|      (dx)· dy  /  |  \  dx ·(dy)      |
|               /   |   \               |
|              /    |    \              |
|             v     |     v             |
|   x ·(y)   o      |      o   (x)· y   |
|                   |                   |
|                   |                   |
|                dx · dy                |
|                   |                   |
|                   |                   |
|                   v                   |
|                   o                   |
|                                       |
|                (x)·(y)                |
|                                       |
o---------------------------------------o
|                                       |
|  f    =     x  y                      |
|                                       |
| Df    =     x  y  · ((dx)(dy))        |
|                                       |
|       +     x (y) ·  (dx) dy          |
|                                       |
|       +    (x) y  ·   dx (dy)         |
|                                       |
|       +    (x)(y) ·   dx  dy          |
|                                       |
o---------------------------------------o

Any proposition worth its salt, as they say,
has many equivalent ways to look at it, any
of which may reveal some unsuspected aspect
of its meaning.  We will encounter more and
more of these alternative readings as we go.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Note 5

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

The enlargement operator E, also known as the "shift operator",
has many interesting and very useful properties in its own right,
so let us not fail to observe a few of the more salient features
that play out on the surface of our simple example, f(x, y) = xy.

Introduce a suitably generic definition of the extended universe of discourse:

Let U = X_1 x ... x X_k and EU = U x dU = X_1 x ... x X_k x dX_1 x ... x dX_k.

For a proposition f : X_1 x ... x X_k -> B,
the (first order) 'enlargement' of f is the
proposition Ef : EU -> B that is defined by:

Ef(x_1, ..., x_k, dx_1, ..., dx_k)  =  f(x_1 + dx_1, ..., x_k + dx_k).

It should be noted that the so-called "differential variables" dx_j
are really just the same kind of boolean variables as the other x_j.
It is conventional to give the additional variables these brands of
inflected names, but whatever extra connotations we might choose to
attach to these syntactic conveniences are wholly external to their
purely algebraic meanings.

For the example f(x, y) = xy, we obtain:

Ef(x, y, dx, dy)   =   (x + dx)(y + dy).

Given that this expression uses nothing more than the "boolean ring"
operations of addition (+) and multiplication (·), it is permissible
to "multiply things out" in the usual manner to arrive at the result:

Ef(x, y, dx, dy)   =   x·y  +  x·dy  +  y·dx  +  dx·dy.
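
Before passing to the logical reading, a quick mechanical check
that the multiplication really does come out as claimed; this is
only a verification sketch, with names of my own choosing:

    # Check (x + dx)(y + dy) = x·y + x·dy + y·dx + dx·dy over GF(2).
    for x in (0, 1):
        for y in (0, 1):
            for dx in (0, 1):
                for dy in (0, 1):
                    lhs = (((x + dx) % 2) * ((y + dy) % 2)) % 2
                    rhs = (x*y + x*dy + y*dx + dx*dy) % 2
                    assert lhs == rhs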

To understand what this means in logical terms, for instance, as expressed
in a boolean expansion or a "disjunctive normal form" (DNF), it is perhaps
a little better to go back and analyze the expression the same way that we
did for Df.  Thus, let us compute the value of the enlarged proposition Ef
at each of the points in the universe of discourse U = X x Y.

o---------------------------------------o
|                                       |
|              x  dx y  dy              |
|              o---o o---o              |
|               \  | |  /               |
|                \ | | /                |
|                 \| |/                 |
|                  @=@                  |
|                                       |
o---------------------------------------o
| Ef =       (x, dx)·(y, dy)            |
o---------------------------------------o

o---------------------------------------o
|                                       |
|                 dx    dy              |
|              o---o o---o              |
|               \  | |  /               |
|                \ | | /                |
|                 \| |/                 |
|                  @=@                  |
|                                       |
o---------------------------------------o
| Ef|xy =       (dx)·(dy)               |
o---------------------------------------o

o---------------------------------------o
|                                       |
|                    o                  |
|                 dx |  dy              |
|              o---o o---o              |
|               \  | |  /               |
|                \ | | /                |
|                 \| |/                 |
|                  @=@                  |
|                                       |
o---------------------------------------o
| Ef|x(y) =     (dx)· dy                |
o---------------------------------------o

o---------------------------------------o
|                                       |
|              o                        |
|              |  dx    dy              |
|              o---o o---o              |
|               \  | |  /               |
|                \ | | /                |
|                 \| |/                 |
|                  @=@                  |
|                                       |
o---------------------------------------o
| Ef|(x)y =      dx ·(dy)               |
o---------------------------------------o

o---------------------------------------o
|                                       |
|              o     o                  |
|              |  dx |  dy              |
|              o---o o---o              |
|               \  | |  /               |
|                \ | | /                |
|                 \| |/                 |
|                  @=@                  |
|                                       |
o---------------------------------------o
| Ef|(x)(y) =    dx · dy                |
o---------------------------------------o

Given the sort of data that arises from this form of analysis,
we can now fold the disjoined ingredients back into a boolean
expansion or a DNF that is equivalent to the proposition Ef.

Ef  =  xy · Ef_xy  +  x(y) · Ef_x(y)  +  (x)y · Ef_(x)y  +  (x)(y) · Ef_(x)(y).
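
For the skeptical, here is a sketch (my own names, with the
restrictions transcribed from the boxes above) checking that the
four disjoined pieces really do glue back together into Ef:

    def enlarge(f):
        return lambda x, y, dx, dy: f((x + dx) % 2, (y + dy) % 2)

    f  = lambda x, y: x & y
    Ef = enlarge(f)

    # The restrictions Ef|xy, Ef|x(y), Ef|(x)y, Ef|(x)(y), as read off above.
    pieces = {(1, 1): lambda dx, dy: (1 - dx) & (1 - dy),   # (dx)(dy)
              (1, 0): lambda dx, dy: (1 - dx) & dy,         # (dx) dy
              (0, 1): lambda dx, dy: dx & (1 - dy),         #  dx (dy)
              (0, 0): lambda dx, dy: dx & dy}               #  dx  dy

    for (x, y), g in pieces.items():
        for dx in (0, 1):
            for dy in (0, 1):
                assert Ef(x, y, dx, dy) == g(dx, dy)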

Here is a summary of the result, illustrated by means of a digraph picture,
where the "no change" element (dx)(dy) is drawn as a loop at the point x·y.

o---------------------------------------o
|                                       |
|                 x · y                 |
|               (dx)·(dy)               |
|                 -->--                 |
|                 \   /                 |
|                  \ /                  |
|                   o                   |
|                  ^^^                  |
|                 / | \                 |
|                /  |  \                |
|     (dx)· dy  /   |   \  dx ·(dy)     |
|              /    |    \              |
|             /     |     \             |
|   x ·(y)   o      |      o   (x)· y   |
|                   |                   |
|                   |                   |
|                dx · dy                |
|                   |                   |
|                   |                   |
|                   o                   |
|                                       |
|                (x)·(y)                |
|                                       |
o---------------------------------------o
|                                       |
|  f    =     x  y                      |
|                                       |
| Ef    =     x  y  · (dx)(dy)          |
|                                       |
|       +     x (y) · (dx) dy           |
|                                       |
|       +    (x) y  ·  dx (dy)          |
|                                       |
|       +    (x)(y) ·  dx  dy           |
|                                       |
o---------------------------------------o

We may understand the enlarged proposition Ef
as telling us all the different ways to reach
a model of f from any point of the universe U.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Note 6

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

To broaden our experience with simple examples, let us now contemplate the
sixteen functions of concrete type X x Y -> B and abstract type B x B -> B.
For future reference, I will set here a few tables that detail the actions
of E and D on each of these functions, allowing us to view the results
in several different ways.

By way of initial orientation, Table 0 lists equivalent expressions for the
sixteen functions in a number of different languages for zeroth order logic.

Table 0.  Propositional Forms On Two Variables
o---------o---------o---------o----------o------------------o----------o
| L_1     | L_2     | L_3     | L_4      | L_5              | L_6      |
|         |         |         |          |                  |          |
| Decimal | Binary  | Vector  | Cactus   | English          | Vulgate  |
o---------o---------o---------o----------o------------------o----------o
|         |       x = 1 1 0 0 |          |                  |          |
|         |       y = 1 0 1 0 |          |                  |          |
o---------o---------o---------o----------o------------------o----------o
|         |         |         |          |                  |          |
| f_0     | f_0000  | 0 0 0 0 |    ()    | false            |    0     |
|         |         |         |          |                  |          |
| f_1     | f_0001  | 0 0 0 1 |  (x)(y)  | neither x nor y  | ~x & ~y  |
|         |         |         |          |                  |          |
| f_2     | f_0010  | 0 0 1 0 |  (x) y   | y and not x      | ~x &  y  |
|         |         |         |          |                  |          |
| f_3     | f_0011  | 0 0 1 1 |  (x)     | not x            | ~x       |
|         |         |         |          |                  |          |
| f_4     | f_0100  | 0 1 0 0 |   x (y)  | x and not y      |  x & ~y  |
|         |         |         |          |                  |          |
| f_5     | f_0101  | 0 1 0 1 |     (y)  | not y            |      ~y  |
|         |         |         |          |                  |          |
| f_6     | f_0110  | 0 1 1 0 |  (x, y)  | x not equal to y |  x +  y  |
|         |         |         |          |                  |          |
| f_7     | f_0111  | 0 1 1 1 |  (x  y)  | not both x and y | ~x v ~y  |
|         |         |         |          |                  |          |
| f_8     | f_1000  | 1 0 0 0 |   x  y   | x and y          |  x &  y  |
|         |         |         |          |                  |          |
| f_9     | f_1001  | 1 0 0 1 | ((x, y)) | x equal to y     |  x =  y  |
|         |         |         |          |                  |          |
| f_10    | f_1010  | 1 0 1 0 |      y   | y                |       y  |
|         |         |         |          |                  |          |
| f_11    | f_1011  | 1 0 1 1 |  (x (y)) | not x without y  |  x => y  |
|         |         |         |          |                  |          |
| f_12    | f_1100  | 1 1 0 0 |   x      | x                |  x       |
|         |         |         |          |                  |          |
| f_13    | f_1101  | 1 1 0 1 | ((x) y)  | not y without x  |  x <= y  |
|         |         |         |          |                  |          |
| f_14    | f_1110  | 1 1 1 0 | ((x)(y)) | x or y           |  x v  y  |
|         |         |         |          |                  |          |
| f_15    | f_1111  | 1 1 1 1 |   (())   | true             |    1     |
|         |         |         |          |                  |          |
o---------o---------o---------o----------o------------------o----------o

The next four Tables expand the expressions of Ef and Df
in two different ways, for each of the sixteen functions.
Notice that the functions are given in a different order,
here being collected into a set of seven natural classes.

Table 1.  Ef Expanded Over Ordinary Features {x, y}
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
|      |     f      |   Ef|xy    |  Ef|x(y)   |  Ef|(x)y   | Ef|(x)(y)  |
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
| f_0  |     ()     |     ()     |     ()     |     ()     |     ()     |
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
| f_1  |   (x)(y)   |   dx  dy   |   dx (dy)  |  (dx) dy   |  (dx)(dy)  |
|      |            |            |            |            |            |
| f_2  |   (x) y    |   dx (dy)  |   dx  dy   |  (dx)(dy)  |  (dx) dy   |
|      |            |            |            |            |            |
| f_4  |    x (y)   |  (dx) dy   |  (dx)(dy)  |   dx  dy   |   dx (dy)  |
|      |            |            |            |            |            |
| f_8  |    x  y    |  (dx)(dy)  |  (dx) dy   |   dx (dy)  |   dx  dy   |
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
| f_3  |   (x)      |   dx       |   dx       |  (dx)      |  (dx)      |
|      |            |            |            |            |            |
| f_12 |    x       |  (dx)      |  (dx)      |   dx       |   dx       |
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
| f_6  |   (x, y)   |  (dx, dy)  | ((dx, dy)) | ((dx, dy)) |  (dx, dy)  |
|      |            |            |            |            |            |
| f_9  |  ((x, y))  | ((dx, dy)) |  (dx, dy)  |  (dx, dy)  | ((dx, dy)) |
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
| f_5  |      (y)   |       dy   |      (dy)  |       dy   |      (dy)  |
|      |            |            |            |            |            |
| f_10 |       y    |      (dy)  |       dy   |      (dy)  |       dy   |
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
| f_7  |   (x  y)   | ((dx)(dy)) | ((dx) dy)  |  (dx (dy)) |  (dx  dy)  |
|      |            |            |            |            |            |
| f_11 |   (x (y))  | ((dx) dy)  | ((dx)(dy)) |  (dx  dy)  |  (dx (dy)) |
|      |            |            |            |            |            |
| f_13 |  ((x) y)   |  (dx (dy)) |  (dx  dy)  | ((dx)(dy)) | ((dx) dy)  |
|      |            |            |            |            |            |
| f_14 |  ((x)(y))  |  (dx  dy)  |  (dx (dy)) | ((dx) dy)  | ((dx)(dy)) |
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
| f_15 |    (())    |    (())    |    (())    |    (())    |    (())    |
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o


Table 2.  Df Expanded Over Ordinary Features {x, y}
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
|      |     f      |   Df|xy    |  Df|x(y)   |  Df|(x)y   | Df|(x)(y)  |
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
| f_0  |     ()     |     ()     |     ()     |     ()     |     ()     |
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
| f_1  |   (x)(y)   |   dx  dy   |   dx (dy)  |  (dx) dy   | ((dx)(dy)) |
|      |            |            |            |            |            |
| f_2  |   (x) y    |   dx (dy)  |   dx  dy   | ((dx)(dy)) |  (dx) dy   |
|      |            |            |            |            |            |
| f_4  |    x (y)   |  (dx) dy   | ((dx)(dy)) |   dx  dy   |   dx (dy)  |
|      |            |            |            |            |            |
| f_8  |    x  y    | ((dx)(dy)) |  (dx) dy   |   dx (dy)  |   dx  dy   |
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
| f_3  |   (x)      |   dx       |   dx       |   dx       |   dx       |
|      |            |            |            |            |            |
| f_12 |    x       |   dx       |   dx       |   dx       |   dx       |
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
| f_6  |   (x, y)   |  (dx, dy)  |  (dx, dy)  |  (dx, dy)  |  (dx, dy)  |
|      |            |            |            |            |            |
| f_9  |  ((x, y))  |  (dx, dy)  |  (dx, dy)  |  (dx, dy)  |  (dx, dy)  |
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
| f_5  |      (y)   |       dy   |       dy   |       dy   |       dy   |
|      |            |            |            |            |            |
| f_10 |       y    |       dy   |       dy   |       dy   |       dy   |
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
| f_7  |   (x  y)   | ((dx)(dy)) |  (dx) dy   |   dx (dy)  |   dx  dy   |
|      |            |            |            |            |            |
| f_11 |   (x (y))  |  (dx) dy   | ((dx)(dy)) |   dx  dy   |   dx (dy)  |
|      |            |            |            |            |            |
| f_13 |  ((x) y)   |   dx (dy)  |   dx  dy   | ((dx)(dy)) |  (dx) dy   |
|      |            |            |            |            |            |
| f_14 |  ((x)(y))  |   dx  dy   |   dx (dy)  |  (dx) dy   | ((dx)(dy)) |
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
| f_15 |    (())    |     ()     |     ()     |     ()     |     ()     |
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o


Table 3.  Ef Expanded Over Differential Features {dx, dy}
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
|      |     f      |   T_11 f   |   T_10 f   |   T_01 f   |   T_00 f   |
|      |            |            |            |            |            |
|      |            | Ef| dx·dy  | Ef| dx(dy) | Ef| (dx)dy | Ef|(dx)(dy)|
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
| f_0  |     ()     |     ()     |     ()     |     ()     |     ()     |
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
| f_1  |   (x)(y)   |    x  y    |    x (y)   |   (x) y    |   (x)(y)   |
|      |            |            |            |            |            |
| f_2  |   (x) y    |    x (y)   |    x  y    |   (x)(y)   |   (x) y    |
|      |            |            |            |            |            |
| f_4  |    x (y)   |   (x) y    |   (x)(y)   |    x  y    |    x (y)   |
|      |            |            |            |            |            |
| f_8  |    x  y    |   (x)(y)   |   (x) y    |    x (y)   |    x  y    |
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
| f_3  |   (x)      |    x       |    x       |   (x)      |   (x)      |
|      |            |            |            |            |            |
| f_12 |    x       |   (x)      |   (x)      |    x       |    x       |
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
| f_6  |   (x, y)   |   (x, y)   |  ((x, y))  |  ((x, y))  |   (x, y)   |
|      |            |            |            |            |            |
| f_9  |  ((x, y))  |  ((x, y))  |   (x, y)   |   (x, y)   |  ((x, y))  |
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
| f_5  |      (y)   |       y    |      (y)   |       y    |      (y)   |
|      |            |            |            |            |            |
| f_10 |       y    |      (y)   |       y    |      (y)   |       y    |
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
| f_7  |   (x  y)   |  ((x)(y))  |  ((x) y)   |   (x (y))  |   (x  y)   |
|      |            |            |            |            |            |
| f_11 |   (x (y))  |  ((x) y)   |  ((x)(y))  |   (x  y)   |   (x (y))  |
|      |            |            |            |            |            |
| f_13 |  ((x) y)   |   (x (y))  |   (x  y)   |  ((x)(y))  |  ((x) y)   |
|      |            |            |            |            |            |
| f_14 |  ((x)(y))  |   (x  y)   |   (x (y))  |  ((x) y)   |  ((x)(y))  |
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
| f_15 |    (())    |    (())    |    (())    |    (())    |    (())    |
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
|                   |            |            |            |            |
| Fixed Point Total |      4     |      4     |      4     |     16     |
|                   |            |            |            |            |
o-------------------o------------o------------o------------o------------o
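
The "Fixed Point Total" row can be recomputed directly; here is a
sketch (all names mine) that runs each operator T_dxdy over the
sixteen functions and counts the ones it leaves unchanged:

    from itertools import product

    B = (0, 1)

    def make_f(n):
        """f_n : B x B -> B, with bit (2x + y) of n giving the value at the cell (x, y)."""
        return lambda x, y: (n >> (2 * x + y)) & 1

    def T(da, db):
        """The operator T_dadb, sending f to the map (x, y) |-> f(x + da, y + db) mod 2."""
        return lambda f: (lambda x, y: f((x + da) % 2, (y + db) % 2))

    for da, db in ((1, 1), (1, 0), (0, 1), (0, 0)):
        fixed = sum(1 for n in range(16)
                    if all(make_f(n)(x, y) == T(da, db)(make_f(n))(x, y)
                           for x, y in product(B, repeat=2)))
        print(f"T_{da}{db} fixes {fixed} of the 16 functions")   # 4, 4, 4, 16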


Table 4.  Df Expanded Over Differential Features {dx, dy}
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
|      |     f      | Df| dx·dy  | Df| dx(dy) | Df| (dx)dy | Df|(dx)(dy)|
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
| f_0  |     ()     |     ()     |     ()     |     ()     |     ()     |
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
| f_1  |   (x)(y)   |  ((x, y))  |    (y)     |    (x)     |     ()     |
|      |            |            |            |            |            |
| f_2  |   (x) y    |   (x, y)   |     y      |    (x)     |     ()     |
|      |            |            |            |            |            |
| f_4  |    x (y)   |   (x, y)   |    (y)     |     x      |     ()     |
|      |            |            |            |            |            |
| f_8  |    x  y    |  ((x, y))  |     y      |     x      |     ()     |
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
| f_3  |   (x)      |    (())    |    (())    |     ()     |     ()     |
|      |            |            |            |            |            |
| f_12 |    x       |    (())    |    (())    |     ()     |     ()     |
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
| f_6  |   (x, y)   |     ()     |    (())    |    (())    |     ()     |
|      |            |            |            |            |            |
| f_9  |  ((x, y))  |     ()     |    (())    |    (())    |     ()     |
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
| f_5  |      (y)   |    (())    |     ()     |    (())    |     ()     |
|      |            |            |            |            |            |
| f_10 |       y    |    (())    |     ()     |    (())    |     ()     |
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
| f_7  |   (x  y)   |  ((x, y))  |     y      |     x      |     ()     |
|      |            |            |            |            |            |
| f_11 |   (x (y))  |   (x, y)   |    (y)     |     x      |     ()     |
|      |            |            |            |            |            |
| f_13 |  ((x) y)   |   (x, y)   |     y      |    (x)     |     ()     |
|      |            |            |            |            |            |
| f_14 |  ((x)(y))  |  ((x, y))  |    (y)     |    (x)     |     ()     |
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
| f_15 |    (())    |     ()     |     ()     |     ()     |     ()     |
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
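
Readers who want to cross-check Tables 1 through 4 mechanically
can do so with a few lines of Python; the following sketch (names
mine, output left as raw truth vectors rather than cactus forms)
prints Ef and Df restricted to each cell of U for all sixteen
functions:

    from itertools import product

    B = (0, 1)

    def make_f(n):
        """f_n : B x B -> B, with bit (2x + y) of n giving the value at the cell (x, y)."""
        return lambda x, y: (n >> (2 * x + y)) & 1

    def E(f):
        """Ef(x, y, dx, dy) = f(x + dx, y + dy), addition mod 2."""
        return lambda x, y, dx, dy: f((x + dx) % 2, (y + dy) % 2)

    def D(f):
        """Df = Ef + f, since plus and minus are the same operation over B."""
        return lambda x, y, dx, dy: (E(f)(x, y, dx, dy) + f(x, y)) % 2

    for n in range(16):
        f = make_f(n)
        for x, y in product(B, repeat=2):                   # the four cells of U
            Ef_row = [E(f)(x, y, dx, dy) for dx, dy in product(B, repeat=2)]
            Df_row = [D(f)(x, y, dx, dy) for dx, dy in product(B, repeat=2)]
            print(f"f_{n:<2} at cell ({x},{y}):  Ef = {Ef_row}  Df = {Df_row}")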

If the medium is truly the message,
the blank slate is the innate idea.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Note 7

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

If you think that I linger in the realm of logical difference calculus
out of sheer vacillation about getting down to the differential proper,
it is probably out of a prior expectation that you derive from the art
or the long-engrained practice of real analysis.  But the fact is that
ordinary calculus only rushes on to the sundry orders of approximation
because the strain of comprehending the full import of E and D at once
overwhelms its discrete and finite powers to grasp them.  But here, in
the fully serene idylls of ZOL, we find ourselves fitted with a compass of
wit that is all we could ever wish for to explore their effects with care.

So let us do just that.

I will first rationalize the novel grouping of propositional forms
in the last set of Tables, as that will extend a gentle invitation
to the mathematical subject of "group theory", and demonstrate its
relevance to differential logic in a strikingly apt and useful way.
The data for that account is contained in Table 3.

Table 3.  Ef Expanded Over Differential Features {dx, dy}
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
|      |     f      |   T_11 f   |   T_10 f   |   T_01 f   |   T_00 f   |
|      |            |            |            |            |            |
|      |            | Ef| dx·dy  | Ef| dx(dy) | Ef| (dx)dy | Ef|(dx)(dy)|
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
| f_0  |     ()     |     ()     |     ()     |     ()     |     ()     |
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
| f_1  |   (x)(y)   |    x  y    |    x (y)   |   (x) y    |   (x)(y)   |
|      |            |            |            |            |            |
| f_2  |   (x) y    |    x (y)   |    x  y    |   (x)(y)   |   (x) y    |
|      |            |            |            |            |            |
| f_4  |    x (y)   |   (x) y    |   (x)(y)   |    x  y    |    x (y)   |
|      |            |            |            |            |            |
| f_8  |    x  y    |   (x)(y)   |   (x) y    |    x (y)   |    x  y    |
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
| f_3  |   (x)      |    x       |    x       |   (x)      |   (x)      |
|      |            |            |            |            |            |
| f_12 |    x       |   (x)      |   (x)      |    x       |    x       |
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
| f_6  |   (x, y)   |   (x, y)   |  ((x, y))  |  ((x, y))  |   (x, y)   |
|      |            |            |            |            |            |
| f_9  |  ((x, y))  |  ((x, y))  |   (x, y)   |   (x, y)   |  ((x, y))  |
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
| f_5  |      (y)   |       y    |      (y)   |       y    |      (y)   |
|      |            |            |            |            |            |
| f_10 |       y    |      (y)   |       y    |      (y)   |       y    |
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
| f_7  |   (x  y)   |  ((x)(y))  |  ((x) y)   |   (x (y))  |   (x  y)   |
|      |            |            |            |            |            |
| f_11 |   (x (y))  |  ((x) y)   |  ((x)(y))  |   (x  y)   |   (x (y))  |
|      |            |            |            |            |            |
| f_13 |  ((x) y)   |   (x (y))  |   (x  y)   |  ((x)(y))  |  ((x) y)   |
|      |            |            |            |            |            |
| f_14 |  ((x)(y))  |   (x  y)   |   (x (y))  |  ((x) y)   |  ((x)(y))  |
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
| f_15 |    (())    |    (())    |    (())    |    (())    |    (())    |
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
|                   |            |            |            |            |
| Fixed Point Total |      4     |      4     |      4     |     16     |
|                   |            |            |            |            |
o-------------------o------------o------------o------------o------------o

The shift operator E can be understood as enacting a substitution operation
on the proposition that is given as its argument.  In our immediate example,
we have the following data and definition:

E : (U -> B)  ->  (EU -> B),

E :  f(x, y)  ->   Ef(x, y, dx, dy),

Ef(x, y, dx, dy)  =  f(x + dx, y + dy).

Therefore, if we evaluate Ef at particular values of dx and dy,
for example, dx = i and dy = j, where i, j are in B, we obtain:

E_ij : (U -> B)  ->  (U -> B),

E_ij :    f      ->   E_ij f,

E_ij f  =  Ef | <dx = i, dy = j>  =  f(x + i, y + j).
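
Here is a minimal sketch, in Python rather than in cactus syntax, of how
that substitution can be carried out concretely, assuming that we model
a proposition f : B x B -> B as a function of two bits and read the sum
x + dx as addition mod 2, that is, as exclusive disjunction.  The helper
names are just for this sketch:

    # Model:  propositions are functions of (x, y) with values in {0, 1},
    # and the sum x + i is taken mod 2, per the definition of Ef above.
    def E_(i, j):
        """Return E_ij, taking f to the proposition f(x + i, y + j)."""
        return lambda f: (lambda x, y: f((x + i) % 2, (y + j) % 2))

    f8 = lambda x, y: x & y                  # the conjunction x y
    g = E_(1, 1)(f8)                         # E_11 f_8
    print([g(x, y) for x in (0, 1) for y in (0, 1)])
    # [1, 0, 0, 0] -- the truth table of (x)(y), as the T_11 column of Table 3 says.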

The notation is a little bit awkward, but the data of the Table should
make the sense clear.  The important thing to observe is that E_ij has
the effect of transforming each proposition f : U -> B into some other
proposition f' : U -> B.  As it happens, the action is one-to-one and
onto for each E_ij, so the gang of four operators {E_ij : i, j in B}
is an example of what is called a "transformation group" on the set
of sixteen propositions.  Bowing to a longstanding local and linear
tradition, I will therefore redub the four elements of this group
as T_00, T_01, T_10, T_11, to bear in mind their transformative
character, or nature, as the case may be.  Abstractly viewed,
this group of order four has the following operation table:

o----------o----------o----------o----------o----------o
|          %          |          |          |          |
|    ·     %   T_00   |   T_01   |   T_10   |   T_11   |
|          %          |          |          |          |
o==========o==========o==========o==========o==========o
|          %          |          |          |          |
|   T_00   %   T_00   |   T_01   |   T_10   |   T_11   |
|          %          |          |          |          |
o----------o----------o----------o----------o----------o
|          %          |          |          |          |
|   T_01   %   T_01   |   T_00   |   T_11   |   T_10   |
|          %          |          |          |          |
o----------o----------o----------o----------o----------o
|          %          |          |          |          |
|   T_10   %   T_10   |   T_11   |   T_00   |   T_01   |
|          %          |          |          |          |
o----------o----------o----------o----------o----------o
|          %          |          |          |          |
|   T_11   %   T_11   |   T_10   |   T_01   |   T_00   |
|          %          |          |          |          |
o----------o----------o----------o----------o----------o
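
Readers who like to verify such tables by machine can do so with a short
sketch like the following (Python, same modeling assumptions as before:
propositions are functions of two bits and sums are taken mod 2), which
composes the four shift operators and identifies each composite by its
effect on a sample proposition:

    from itertools import product

    B = (0, 1)
    cells = list(product(B, B))

    def T(i, j):
        """The shift operator T_ij = E_ij on propositions f : B x B -> B."""
        return lambda f: (lambda x, y: f((x + i) % 2, (y + j) % 2))

    def table(f):
        return tuple(f(x, y) for (x, y) in cells)

    probe = lambda x, y: x & (1 - y)         # a sample proposition, x (y)
    effect = {(i, j): table(T(i, j)(probe)) for i in B for j in B}

    for (i, j), (k, l) in product(effect, repeat=2):
        composite = table(T(i, j)(T(k, l)(probe)))
        m, n = next(key for key, val in effect.items() if val == composite)
        print(f"T_{i}{j} T_{k}{l}  =  T_{m}{n}")
    # Each line reads T_ij T_kl = T_(i+k)(j+l), sums mod 2:  the table above.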

It happens that, up to isomorphism, there are just two groups of 4 elements.
One is the cyclic group Z_4 (German "Zyklus"), which this is not.
The other is Klein's four-group V_4 (German "Vier"), which it is.

More concretely viewed, the group as a whole pushes the set
of sixteen propositions around in such a way that they fall
into seven natural classes, called "orbits".  One says that
the orbits are preserved by the action of the group.  There
is an "Orbit Lemma" of immense utility to "those who count"
which, depending on your upbringing, you may associate with
the names of Burnside, Cauchy, Frobenius, or some subset or
superset of these three, vouching that the number of orbits
is equal to the mean number of fixed points, in other words,
the total number of points (in our case, propositions) that
are left unmoved by the separate operations, divided by the
order of the group.  In this instance, T_00 operates as the
group identity, fixing all 16 propositions, while the other
three group elements fix 4 propositions each, and so we get:
Number of orbits  =  (4 + 4 + 4 + 16) / 4  =  7.  Amazing!
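
Those who count by machine can check the tally with a sketch like this
(Python again, with each proposition f : B^2 -> B encoded by its truth
table over the four cells of the universe):

    from itertools import product

    B = (0, 1)
    cells = list(product(B, B))
    props = list(product(B, repeat=4))       # the sixteen truth tables

    def act(i, j, t):
        """Truth table of E_ij f, where f has truth table t over the cells."""
        return tuple(t[cells.index(((x + i) % 2, (y + j) % 2))] for (x, y) in cells)

    fixed = {(i, j): sum(act(i, j, t) == t for t in props) for i in B for j in B}
    print(fixed)    # {(0, 0): 16, (0, 1): 4, (1, 0): 4, (1, 1): 4}

    orbits = {frozenset(act(i, j, t) for i in B for j in B) for t in props}
    print(len(orbits), sum(fixed.values()) // 4)    # 7 7, as the lemma promises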

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Note 8

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

We have been contemplating functions of the type f : U -> B
studying the action of the operators E and D on this family.
These functions, that we may identify for our present aims
with propositions, inasmuch as they capture their abstract
forms, are logical analogues of "scalar potential fields".
These are the sorts of fields that are so picturesquely
presented in elementary calculus and physics textbooks
by images of snow-covered hills and parties of skiers
who trek down their slopes like least action heroes.
The analogous scene in propositional logic presents
us with forms more reminiscent of plateaunic idylls,
being all plains at one of two levels, the mesas of
verity and falsity, as it were, with nary a niche
to inhabit between them, restricting our options
for a sporting gradient of downhill dynamics to
just one of two, standing still on level ground
or falling off a bluff.

We are still working well within the logical analogue of the
classical finite difference calculus, taking in the novelties
that the logical transmutation of familiar elements is able to
bring to light.  Soon we will take up several different notions
of approximation relationships that may be seen to organize the
space of propositions, and these will allow us to define several
different forms of differential analysis applying to propositions.
In time we will find reason to consider more general types of maps,
having concrete types of the form X_1 x ... x X_k -> Y_1 x ... x Y_n
and abstract types B^k -> B^n.  We will think of these mappings as
transforming universes of discourse into themselves or into others,
in short, as "transformations of discourse".

Before we continue with this itinerary, however, I would like to highlight
another sort of "differential aspect" that concerns the "boundary operator"
or the "marked connective" that serves as one of the two basic connectives
in the cactus language for ZOL.

For example, consider the proposition f of concrete type f : X x Y x Z -> B
and abstract type f : B^3 -> B that is written "(x, y, z)" in cactus syntax.
Taken as an assertion in what Peirce called the "existential interpretation",
(x, y, z) says that just one of x, y, z is false.  It is useful to consider
this assertion in relation to the conjunction xyz of the features that are
engaged as its arguments.  A venn diagram of (x, y, z) looks like this:

o-----------------------------------------------------------o
| U                                                         |
|                                                           |
|                      o-------------o                      |
|                     /               \                     |
|                    /                 \                    |
|                   /                   \                   |
|                  /                     \                  |
|                 /                       \                 |
|                o            x            o                |
|                |                         |                |
|                |                         |                |
|                |                         |                |
|                |                         |                |
|                |                         |                |
|             o--o----------o   o----------o--o             |
|            /    \%%%%%%%%%%\ /%%%%%%%%%%/    \            |
|           /      \%%%%%%%%%%o%%%%%%%%%%/      \           |
|          /        \%%%%%%%%/ \%%%%%%%%/        \          |
|         /          \%%%%%%/   \%%%%%%/          \         |
|        /            \%%%%/     \%%%%/            \        |
|       o              o--o-------o--o              o       |
|       |                 |%%%%%%%|                 |       |
|       |                 |%%%%%%%|                 |       |
|       |                 |%%%%%%%|                 |       |
|       |                 |%%%%%%%|                 |       |
|       |                 |%%%%%%%|                 |       |
|       o        y        o%%%%%%%o        z        o       |
|        \                 \%%%%%/                 /        |
|         \                 \%%%/                 /         |
|          \                 \%/                 /          |
|           \                 o                 /           |
|            \               / \               /            |
|             o-------------o   o-------------o             |
|                                                           |
|                                                           |
o-----------------------------------------------------------o

In relation to the center cell indicated by the conjunction xyz,
the region indicated by (x, y, z) consists of the "adjacent"
or the "bordering" cells.  Thus they are the cells that are just
across the boundary of the center cell, as if reached by way of
Leibniz's "minimal changes" from the point of origin, here, xyz.

The same sort of boundary relationship holds for any cell of origin that
one might elect to indicate, say, by means of the conjunction of positive
or negative basis features u_1 · ... · u_k, with u_j = x_j or u_j = (x_j),
for j = 1 to k.  The proposition (u_1, ..., u_k) indicates the disjunctive
region consisting of the cells that are just next door to u_1 · ... · u_k.
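
The "minimal changes" reading is easy to check by machine.  The sketch
below (Python) reads the form (x, y, z), per the existential interpretation
above, as saying that exactly one of x, y, z is false, and confirms that
the cells it indicates are exactly the cells that lie at unit Hamming
distance from the cell of origin xyz:

    from itertools import product

    def just_one_false(*args):
        """The cactus form (u_1, ..., u_k):  exactly one argument is false."""
        return sum(1 - u for u in args) == 1

    origin = (1, 1, 1)                       # the cell indicated by x y z
    border = {c for c in product((0, 1), repeat=3) if just_one_false(*c)}
    nextdoor = {c for c in product((0, 1), repeat=3)
                if sum(a != b for a, b in zip(c, origin)) == 1}
    print(border == nextdoor)                # True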

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Note 9

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| Consider what effects that might conceivably have
| practical bearings you conceive the objects of your
| conception to have.  Then, your conception of those
| effects is the whole of your conception of the object.
|
| Charles Sanders Peirce, "The Maxim of Pragmatism, CP 5.438.

One other subject that it would be opportune to mention at this point,
while we have an object example of a mathematical group fresh in mind,
is the relationship between the pragmatic maxim and what are commonly
known in mathematics as "representation principles".  As it turns out,
with regard to its formal characteristics, the pragmatic maxim unites
the aspects of a representation principle with the attributes of what
would ordinarily be known as a "closure principle".  We will consider
the form of closure that is invoked by the pragmatic maxim on another
occasion, focusing here and now on the topic of group representations.

Let us return to the example of the so-called "four-group" V_4.
We encountered this group in one of its concrete representations,
namely, as a "transformation group" that acts on a set of objects,
in this particular case a set of sixteen functions or propositions.
Forgetting about the set of objects that the group transforms among
themselves, we may take the abstract view of the group's operational
structure, say, in the form of the group operation table copied here:

o---------o---------o---------o---------o---------o
|         %         |         |         |         |
|    ·    %    e    |    f    |    g    |    h    |
|         %         |         |         |         |
o=========o=========o=========o=========o=========o
|         %         |         |         |         |
|    e    %    e    |    f    |    g    |    h    |
|         %         |         |         |         |
o---------o---------o---------o---------o---------o
|         %         |         |         |         |
|    f    %    f    |    e    |    h    |    g    |
|         %         |         |         |         |
o---------o---------o---------o---------o---------o
|         %         |         |         |         |
|    g    %    g    |    h    |    e    |    f    |
|         %         |         |         |         |
o---------o---------o---------o---------o---------o
|         %         |         |         |         |
|    h    %    h    |    g    |    f    |    e    |
|         %         |         |         |         |
o---------o---------o---------o---------o---------o

This table is abstractly the same as, or isomorphic to, the versions with
the E_ij operators and the T_ij transformations that we discussed earlier.
That is to say, the story is the same -- only the names have been changed.
An abstract group can have a multitude of significantly and superficially
different representations.  Even after we have long forgotten the details
of the particular representation that we may have come in with, there are
species of concrete representations, called the "regular representations",
that are always readily available, as they can be generated from the mere
data of the abstract operation table itself.

For example, select a group element from the top margin of the Table,
and "consider its effects" on each of the group elements as they are
listed along the left margin.  We may record these effects as Peirce
usually did, as a logical "aggregate" of elementary dyadic relatives,
that is to say, a disjunction or a logical sum whose terms represent
the ordered pairs of <input : output> transactions that are produced
by each group element in turn.  This yields what is usually known as
one of the "regular representations" of the group, specifically, the
"first", the "post-", or the "right" regular representation.  It has
long been conventional to organize the terms in the form of a matrix:

Reading "+" as a logical disjunction:

G  =  e  +  f  +  g  +  h,

And so, by expanding effects, we get:

G  =  e:e  +  f:f  +  g:g  +  h:h

   +  e:f  +  f:e  +  g:h  +  h:g

   +  e:g  +  f:h  +  g:e  +  h:f

   +  e:h  +  f:g  +  g:f  +  h:e

More on the pragmatic maxim as a representation principle later.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Note 10

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| Consider what effects that might conceivably have
| practical bearings you conceive the objects of your
| conception to have.  Then, your conception of those
| effects is the whole of your conception of the object.
|
| Charles Sanders Peirce, "The Maxim of Pragmatism, CP 5.438.

The genealogy of this conception of pragmatic representation is very intricate.
I will delineate some details that I presently fancy I remember clearly enough,
subject to later correction.  Without checking historical accounts, I will not
be able to pin down anything like a real chronology, but most of these notions
were standard furnishings of the 19th Century mathematical study, and only the
last few items date as late as the 1920's.

The idea about the regular representations of a group is universally known
as "Cayley's Theorem", usually in the form:  "Every group is isomorphic to
a subgroup of Aut(S), the group of automorphisms of an appropriate set S".
There is a considerable generalization of these regular representations to
a broad class of relational algebraic systems in Peirce's earliest papers.
The crux of the whole idea is this:

| Consider the effects of the symbol, whose meaning you wish to investigate,
| as they play out on "all" of the different stages of context on which you
| can imagine that symbol playing a role.

This idea of contextual definition is basically the same as Jeremy Bentham's
notion of "paraphrasis", a "method of accounting for fictions by explaining
various purported terms away" (Quine, in Van Heijenoort, page 216).  Today
we'd call these constructions "term models".  This, again, is the big idea
behind Schönfinkel's combinators {S, K, I}, and hence of lambda calculus,
and I reckon you know where that leads.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Note 11

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| Consider what effects that might 'conceivably'
| have practical bearings you 'conceive' the
| objects of your 'conception' to have.  Then,
| your 'conception' of those effects is the
| whole of your 'conception' of the object.
|
| Charles Sanders Peirce,
| "Maxim of Pragmaticism", CP 5.438.

Continuing to draw on the reduced example of group representations,
I would like to draw out a few of the finer points and problems of
regarding the maxim of pragmatism as a principle of representation.

Let us revisit the example of an abstract group that we had befour:

Table 1.  Klein Four-Group V_4
o---------o---------o---------o---------o---------o
|         %         |         |         |         |
|    ·    %    e    |    f    |    g    |    h    |
|         %         |         |         |         |
o=========o=========o=========o=========o=========o
|         %         |         |         |         |
|    e    %    e    |    f    |    g    |    h    |
|         %         |         |         |         |
o---------o---------o---------o---------o---------o
|         %         |         |         |         |
|    f    %    f    |    e    |    h    |    g    |
|         %         |         |         |         |
o---------o---------o---------o---------o---------o
|         %         |         |         |         |
|    g    %    g    |    h    |    e    |    f    |
|         %         |         |         |         |
o---------o---------o---------o---------o---------o
|         %         |         |         |         |
|    h    %    h    |    g    |    f    |    e    |
|         %         |         |         |         |
o---------o---------o---------o---------o---------o

I presented the regular post-representation
of the four-group V_4 in the following form:

Reading "+" as a logical disjunction:

   G  =  e  +  f  +  g  +  h

And so, by expanding effects, we get:

   G  =  e:e  +  f:f  +  g:g  +  h:h

      +  e:f  +  f:e  +  g:h  +  h:g

      +  e:g  +  f:h  +  g:e  +  h:f

      +  e:h  +  f:g  +  g:f  +  h:e

This presents the group in one big bunch,
and there are occasions when one regards
it this way, but that is not the typical
form of presentation that we'd encounter.
More likely, the story would go a little
bit like this:

I cannot remember any of my math teachers
ever invoking the pragmatic maxim by name,
but it would be a very regular occurrence
for such mentors and tutors to set up the
subject in this wise:  Suppose you forget
what a given abstract group element means,
that is, in effect, 'what it is'.  Then a
sure way to jog your sense of 'what it is'
is to build a regular representation from
the formal materials that are necessarily
left lying about on that abstraction site.

Working through the construction for each
one of the four group elements, we arrive
at the following exegeses of their senses,
giving their regular post-representations:

   e  =  e:e  +  f:f  +  g:g  +  h:h

   f  =  e:f  +  f:e  +  g:h  +  h:g

   g  =  e:g  +  f:h  +  g:e  +  h:f

   h  =  e:h  +  f:g  +  g:f  +  h:e

So if somebody asks you, say, "What is g?",
you can say, "I don't know for certain but
in practice its effects go a bit like this:
Converting e to g, f to h, g to e, h to f".

I will have to check this out later on, but my impression is
that Peirce tended to lean toward the other brand of regular,
the "second", the "left", or the "ante-representation" of the
groups that he treated in his earliest manuscripts and papers.
I believe that this was because he thought of the actions on
the pattern of dyadic relative terms like the "aftermath of".

Working through this alternative for each
one of the four group elements, we arrive
at the following exegeses of their senses,
giving their regular ante-representations:

   e  =  e:e  +  f:f  +  g:g  +  h:h

   f  =  f:e  +  e:f  +  h:g  +  g:h

   g  =  g:e  +  h:f  +  e:g  +  f:h

   h  =  h:e  +  g:f  +  f:g  +  e:h

Your paraphrastic interpretation of what this all
means would come out precisely the same as before.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Note 12

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Erratum

Oops!  I think that I have just confounded two entirely different issues:
1.  The substantial difference between right and left regular representations.
2.  The inessential difference between two conventions of presenting matrices.
I will sort this out and correct it later, as need be.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Note 13

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| Consider what effects that might 'conceivably'
| have practical bearings you 'conceive' the
| objects of your 'conception' to have.  Then,
| your 'conception' of those effects is the
| whole of your 'conception' of the object.
|
| Charles Sanders Peirce,
| "Maxim of Pragmaticism", CP 5.438.

Let me return to Peirce's early papers on the algebra of relatives
to pick up the conventions that he used there, and then rewrite my
account of regular representations in a way that conforms to those.

Peirce expresses the action of an "elementary dual relative" like so:

| [Let] A:B be taken to denote
| the elementary relative which
| multiplied into B gives A.
|
| Peirce, 'Collected Papers', CP 3.123.

And though he is well aware that it is not at all necessary to arrange
elementary relatives into arrays, matrices, or tables, when he does so
he tends to prefer organizing dyadic relations in the following manner:

|  A:A   A:B   A:C  |
|                   |
|  B:A   B:B   B:C  |
|                   |
|  C:A   C:B   C:C  |

That conforms to the way that the last school of thought
I matriculated into stipulated that we tabulate material:

|  e_11  e_12  e_13  |
|                    |
|  e_21  e_22  e_23  |
|                    |
|  e_31  e_32  e_33  |

So, for example, let us suppose that we have the small universe {A, B, C},
and the 2-adic relation m = "mover of" that is represented by this matrix:

m  =

|  m_AA (A:A)   m_AB (A:B)   m_AC (A:C)  |
|                                        |
|  m_BA (B:A)   m_BB (B:B)   m_BC (B:C)  |
|                                        |
|  m_CA (C:A)   m_CB (C:B)   m_CC (C:C)  |

Also, let m be such that
A is a mover of A and B,
B is a mover of B and C,
C is a mover of C and A.

In sum:

m  =

|  1 · (A:A)   1 · (A:B)   0 · (A:C)  |
|                                     |
|  0 · (B:A)   1 · (B:B)   1 · (B:C)  |
|                                     |
|  1 · (C:A)   0 · (C:B)   1 · (C:C)  |

For the sake of orientation and motivation,
compare with Peirce's notation in CP 3.329.

I think that will serve to fix notation
and set up the remainder of the account.
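
To fix the notation in executable form as well, here is a small sketch
(Python) that turns the set of elementary relatives making up m into its
0-1 coefficient matrix, under the convention stipulated above that the
relates index the rows and the correlates index the columns:

    U = ["A", "B", "C"]                      # the small universe
    m = {("A", "A"), ("A", "B"),             # A is a mover of A and B
         ("B", "B"), ("B", "C"),             # B is a mover of B and C
         ("C", "C"), ("C", "A")}             # C is a mover of C and A

    # The coefficient m_IJ is 1 exactly when the elementary relative I:J is in m.
    matrix = [[int((I, J) in m) for J in U] for I in U]
    for row in matrix:
        print(row)
    # [1, 1, 0]
    # [0, 1, 1]
    # [1, 0, 1]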

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Note 14

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| Consider what effects that might 'conceivably'
| have practical bearings you 'conceive' the
| objects of your 'conception' to have.  Then,
| your 'conception' of those effects is the
| whole of your 'conception' of the object.
|
| Charles Sanders Peirce,
| "Maxim of Pragmaticism", CP 5.438.

I am beginning to see how I got confused.
It is common in algebra to switch around
between different conventions of display,
as the momentary fancy happens to strike,
and I see that Peirce is no different in
this sort of shiftiness than anyone else.
A changeover appears to occur especially
whenever he shifts from logical contexts
to algebraic contexts of application.

In the paper "On the Relative Forms of Quaternions" (CP 3.323),
we observe Peirce providing the following sorts of explanation:

| If X, Y, Z denote the three rectangular components of a vector, and W denote
| numerical unity (or a fourth rectangular component, involving space of four
| dimensions), and (Y:Z) denote the operation of converting the Y component
| of a vector into its Z component, then
|
|     1  =  (W:W) + (X:X) + (Y:Y) + (Z:Z)
|
|     i  =  (X:W) - (W:X) - (Y:Z) + (Z:Y)
|
|     j  =  (Y:W) - (W:Y) - (Z:X) + (X:Z)
|
|     k  =  (Z:W) - (W:Z) - (X:Y) + (Y:X)
|
| In the language of logic (Y:Z) is a relative term whose relate is
| a Y component, and whose correlate is a Z component.  The law of
| multiplication is plainly (Y:Z)(Z:X) = (Y:X), (Y:Z)(X:W) = 0,
| and the application of these rules to the above values of
| 1, i, j, k gives the quaternion relations
|
|     i^2  =  j^2  =  k^2  =  -1,
|
|     ijk  =  -1,
|
|     etc.
|
| The symbol a(Y:Z) denotes the changing of Y to Z and the
| multiplication of the result by 'a'.  If the relatives be
| arranged in a block
|
|     W:W     W:X     W:Y     W:Z
|
|     X:W     X:X     X:Y     X:Z
|
|     Y:W     Y:X     Y:Y     Y:Z
|
|     Z:W     Z:X     Z:Y     Z:Z
|
| then the quaternion w + xi + yj + zk
| is represented by the matrix of numbers
|
|     w       -x      -y      -z
|
|     x        w      -z       y
|
|     y        z       w      -x
|
|     z       -y       x       w
|
| The multiplication of such matrices follows the same laws as the
| multiplication of quaternions.  The determinant of the matrix =
| the fourth power of the tensor of the quaternion.
|
| The imaginary x + y(-1)^(1/2) may likewise be represented by the matrix
|
|      x      y
|
|     -y      x
|
| and the determinant of the matrix = the square of the modulus.
|
| Charles Sanders Peirce, 'Collected Papers', CP 3.323.
|'Johns Hopkins University Circulars', No. 13, p. 179, 1882.

This way of talking is the mark of a person who opts
to multiply his matrices "on the right", as they say.
Yet Peirce still continues to call the first element
of the ordered pair (I:J) its "relate" while calling
the second element of the pair (I:J) its "correlate".
That doesn't comport very well, so far as I can tell,
with his customary reading of relative terms, suited
more to the multiplication of matrices "on the left".

So I still have a few wrinkles to iron out before
I can give this story a smooth enough consistency.
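
In the meantime, the multiplication law that Peirce states is purely
combinatorial, so at least that much can be checked by machine.  The
sketch below (Python, and of course nothing Peirce himself wrote down)
encodes a formal sum of elementary relatives as a dictionary that maps
ordered pairs to integer coefficients, multiplies sums by the law
(Y:Z)(Z:X) = (Y:X) with (Y:Z)(X:W) = 0, and confirms the quaternion
relations i^2 = j^2 = k^2 = ijk = -1:

    from collections import defaultdict

    def rel(*terms):
        """A formal sum of elementary relatives, given as (coeff, relate, correlate)."""
        s = defaultdict(int)
        for c, a, b in terms:
            s[(a, b)] += c
        return dict(s)

    def mul(p, q):
        """Multiply by the law (Y:Z)(Z:X) = (Y:X), with (Y:Z)(X:W) = 0 otherwise."""
        s = defaultdict(int)
        for (a, b), c in p.items():
            for (d, e), w in q.items():
                if b == d:
                    s[(a, e)] += c * w
        return {key: val for key, val in s.items() if val != 0}

    one = rel((1, 'W', 'W'), (1, 'X', 'X'), (1, 'Y', 'Y'), (1, 'Z', 'Z'))
    i   = rel((1, 'X', 'W'), (-1, 'W', 'X'), (-1, 'Y', 'Z'), (1, 'Z', 'Y'))
    j   = rel((1, 'Y', 'W'), (-1, 'W', 'Y'), (-1, 'Z', 'X'), (1, 'X', 'Z'))
    k   = rel((1, 'Z', 'W'), (-1, 'W', 'Z'), (-1, 'X', 'Y'), (1, 'Y', 'X'))

    minus_one = {key: -val for key, val in one.items()}
    print(mul(i, i) == minus_one)            # True
    print(mul(j, j) == minus_one)            # True
    print(mul(k, k) == minus_one)            # True
    print(mul(mul(i, j), k) == minus_one)    # True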

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Note 15

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| Consider what effects that might 'conceivably'
| have practical bearings you 'conceive' the
| objects of your 'conception' to have.  Then,
| your 'conception' of those effects is the
| whole of your 'conception' of the object.
|
| Charles Sanders Peirce,
| "Maxim of Pragmaticism", CP 5.438.

I have been planning for quite some time now to make my return to Peirce's
skyshaking "Description of a Notation for the Logic of Relatives" (1870),
and I can see that it's just about time to get down tuit, so let this
current bit of rambling inquiry function as the preamble to that.
All we need at the present, though, is a modus vivendi/operandi
for telling what is substantial from what is inessential in
the brook between symbolic conceits and dramatic actions
that we find afforded by means of the pragmatic maxim.

Back to our "subinstance", the example in support of our first example.
I will now reconstruct it in a way that may prove to be less confusing.

Let us make up the model universe $1$ = A + B + C and the 2-adic relation
n = "noder of", as when "X is a data record that contains a pointer to Y".
That interpretation is not important, it's just for the sake of intuition.
In general terms, the 2-adic relation n can be represented by this matrix:

n  =

|  n_AA (A:A)   n_AB (A:B)   n_AC (A:C)  |
|                                        |
|  n_BA (B:A)   n_BB (B:B)   n_BC (B:C)  |
|                                        |
|  n_CA (C:A)   n_CB (C:B)   n_CC (C:C)  |

Also, let n be such that
A is a noder of A and B,
B is a noder of B and C,
C is a noder of C and A.

Filling in the instantial values of the "coefficients" n_ij,
as the indices i and j range over the universe of discourse:

n  =

|  1 · (A:A)   1 · (A:B)   0 · (A:C)  |
|                                     |
|  0 · (B:A)   1 · (B:B)   1 · (B:C)  |
|                                     |
|  1 · (C:A)   0 · (C:B)   1 · (C:C)  |

In Peirce's time, and even in some circles of mathematics today,
the information indicated by the elementary relatives (I:J), as
I, J range over the universe of discourse, would be referred to
as the "umbral elements" of the algebraic operation represented
by the matrix, though I seem to recall that Peirce preferred to
call these terms the "ingredients".  When this ordered basis is
understood well enough, one will tend to drop any mention of it
from the matrix itself, leaving us nothing but these bare bones:

n  =

|  1  1  0  |
|           |
|  0  1  1  |
|           |
|  1  0  1  |

However the specification may come to be written, this
is all just convenient schematics for stipulating that:

n  =  A:A  +  B:B  +  C:C  +  A:B  +  B:C  +  C:A

Recognizing !1! = A:A + B:B + C:C to be the identity transformation,
the 2-adic relation n = "noder of" may be represented by an element
!1! + A:B + B:C + C:A of the so-called "group ring", all of which
just makes this element a special sort of linear transformation.

Up to this point, we are still reading the elementary relatives of
the form I:J in the way that Peirce reads them in logical contexts:
I is the relate, J is the correlate, and in our current example we
read I:J, or more exactly, n_ij = 1, to say that I is a noder of J.
This is the mode of reading that we call "multiplying on the left".

In the algebraic, permutational, or transformational contexts of
application, however, Peirce converts to the alternative mode of
reading, although still calling I the relate and J the correlate,
the elementary relative I:J now means that I gets changed into J.
In this scheme of reading, the transformation A:B + B:C + C:A is
a permutation of the aggregate $1$ = A + B + C, or what we would
now call the set {A, B, C}, in particular, it is the permutation
that is otherwise notated as:

( A B C )
<       >
( B C A )

This is consistent with the convention that Peirce uses in
the paper "On a Class of Multiple Algebras" (CP 3.324-327).

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Note 16

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| Consider what effects that might 'conceivably'
| have practical bearings you 'conceive' the
| objects of your 'conception' to have.  Then,
| your 'conception' of those effects is the
| whole of your 'conception' of the object.
|
| Charles Sanders Peirce,
| "Maxim of Pragmaticism", CP 5.438.

We have been contemplating the virtues and the utilities of
the pragmatic maxim as a hermeneutic heuristic, specifically,
as a principle of interpretation that guides us in finding a
clarifying representation for a problematic corpus of symbols
in terms of their actions on other symbols or their effects on
the syntactic contexts in which we conceive to distribute them.
I started off considering the regular representations of groups
as constituting what appears to be one of the simplest possible
applications of this overall principle of representation.

There are a few problems of implementation that have to be worked out
in practice, most of which are cleared up by keeping in mind which of
several possible conventions we have chosen to follow at a given time.
But there does appear to remain this rather more substantial question:

Are the effects we seek relates or correlates, or does it even matter?

I will have to leave that question as it is for now,
in hopes that a solution will evolve itself in time.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Note 17

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| Consider what effects that might 'conceivably'
| have practical bearings you 'conceive' the
| objects of your 'conception' to have.  Then,
| your 'conception' of those effects is the
| whole of your 'conception' of the object.
|
| Charles Sanders Peirce,
| "Maxim of Pragmaticism", CP 5.438.

There are big reasons and little reasons for caring about this humble example.
The little reasons we find all under our feet.  One big reason I can now
quite blazonly enounce in the fashion of this not so subtle subtitle:

Obstacles to Applying the Pragmatic Maxim

No sooner do you get a good idea and try to apply it
than you find that a motley array of obstacles arise.

It seems as if I am constantly lamenting the fact these days that people,
and even admitted Peircean persons, do not in practice more consistently
apply the maxim of pragmatism to the purpose for which it is purportedly
intended by its author.  That would be the clarification of concepts, or
intellectual symbols, to the point where their inherent senses, or their
lacks thereof, would be rendered manifest to all and sundry interpreters.

There are big obstacles and little obstacles to applying the pragmatic maxim.
In good subgoaling fashion, I will merely mention a few of the bigger blocks,
as if in passing, and then get down to the devilish details that immediately
obstruct our way.

Obstacle 1.  People do not always read the instructions very carefully.
There is a tendency in readers of particular prior persuasions to blow
the problem all out of proportion, to think that the maxim is meant to
reveal the absolutely positive and the totally unique meaning of every
preconception to which they might deign or elect to apply it.  Reading
the maxim with even minimal attention, you can see that it promises
no such finality of unindexed sense, but ties what you conceive to you.
I have lately come to wonder at the tenacity of this misinterpretation.
Perhaps people reckon that nothing less would be worth their attention.
I am not sure.  I can only say the achievement of more modest goals is
the sort of thing on which our daily life depends, and there can be no
final end to inquiry nor any ultimate community without a continuation
of life, and that means life on a day to day basis.  All of which only
brings me back to the point of persisting with local meantime examples,
because if we can't apply the maxim there, we can't apply it anywhere.

And now I need to go out of doors and weed my garden for a time ...

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Note 18

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| Consider what effects that might 'conceivably'
| have practical bearings you 'conceive' the
| objects of your 'conception' to have.  Then,
| your 'conception' of those effects is the
| whole of your 'conception' of the object.
|
| Charles Sanders Peirce,
| "Maxim of Pragmaticism", CP 5.438.

Obstacles to Applying the Pragmatic Maxim

Obstacle 2.  Applying the pragmatic maxim, even with a moderate aim, can be hard.
I think that my present example, deliberately impoverished as it is, affords us
an embarrassing richness of evidence of just how complex the simple can be.

All the better reason for me to see if I can finish it up before moving on.

Expressed most simply, the idea is to replace the question of "what it is",
which modest people know is far too difficult for them to answer right off,
with the question of "what it does", which most of us know a modicum about.

In the case of regular representations of groups we found
a non-plussing surplus of answers to sort our way through.
So let us track back one more time to see if we can learn
any lessons that might carry over to more realistic cases.

Here is the operation table of V_4 once again:

Table 1.  Klein Four-Group V_4
o---------o---------o---------o---------o---------o
|         %         |         |         |         |
|    ·    %    e    |    f    |    g    |    h    |
|         %         |         |         |         |
o=========o=========o=========o=========o=========o
|         %         |         |         |         |
|    e    %    e    |    f    |    g    |    h    |
|         %         |         |         |         |
o---------o---------o---------o---------o---------o
|         %         |         |         |         |
|    f    %    f    |    e    |    h    |    g    |
|         %         |         |         |         |
o---------o---------o---------o---------o---------o
|         %         |         |         |         |
|    g    %    g    |    h    |    e    |    f    |
|         %         |         |         |         |
o---------o---------o---------o---------o---------o
|         %         |         |         |         |
|    h    %    h    |    g    |    f    |    e    |
|         %         |         |         |         |
o---------o---------o---------o---------o---------o

A group operation table is really just a device for
recording a certain 3-adic relation, to be specific,
the set of triples of the form <x, y, z> satisfying
the equation x·y = z where · is the group operation.

In the case of V_4 = (G, ·), where G is the "underlying set"
{e, f, g, h}, we have the 3-adic relation L(V_4) c G x G x G
whose triples are listed below:

|   <e, e, e>
|   <e, f, f>
|   <e, g, g>
|   <e, h, h>
|
|   <f, e, f>
|   <f, f, e>
|   <f, g, h>
|   <f, h, g>
|
|   <g, e, g>
|   <g, f, h>
|   <g, g, e>
|   <g, h, f>
|
|   <h, e, h>
|   <h, f, g>
|   <h, g, f>
|   <h, h, e>

It is part of the definition of a group that the 3-adic
relation L c G^3 is actually a function L : G x G -> G.
It is from this functional perspective that we can see
an easy way to derive the two regular representations.
Since we have a function of the type L : G x G -> G,
we can define a couple of substitution operators:

1.  Sub(x, <_, y>) puts any specified x into
    the empty slot of the rheme <_, y>, with
    the effect of producing the saturated
    rheme <x, y> that evaluates to x·y.

2.  Sub(x, <y, _>) puts any specified x into
    the empty slot of the rheme <y, _>, with
    the effect of producing the saturated
    rheme <y, x> that evaluates to y·x.

In (1), we consider the effects of each x in its
practical bearing on contexts of the form <_, y>,
as y ranges over G, and the effects are such that
x takes <_, y> into x·y, for y in G, all of which
is summarily notated as x = {(y : x·y) : y in G}.
The pairs (y : x·y) can be found by picking an x
from the left margin of the group operation table
and considering its effects on each y in turn as
these run across the top margin.  This aspect of
pragmatic definition we recognize as the regular
ante-representation:

    e  =  e:e  +  f:f  +  g:g  +  h:h

    f  =  e:f  +  f:e  +  g:h  +  h:g

    g  =  e:g  +  f:h  +  g:e  +  h:f

    h  =  e:h  +  f:g  +  g:f  +  h:e

In (2), we consider the effects of each x in its
practical bearing on contexts of the form <y, _>,
as y ranges over G, and the effects are such that
x takes <y, _> into y·x, for y in G, all of which
is summarily notated as x = {(y : y·x) : y in G}.
The pairs (y : y·x) can be found by picking an x
from the top margin of the group operation table
and considering its effects on each y in turn as
these run down the left margin.  This aspect of
pragmatic definition we recognize as the regular
post-representation:

    e  =  e:e  +  f:f  +  g:g  +  h:h

    f  =  e:f  +  f:e  +  g:h  +  h:g

    g  =  e:g  +  f:h  +  g:e  +  h:f

    h  =  e:h  +  f:g  +  g:f  +  h:e

If the ante-rep looks the same as the post-rep,
now that I'm writing them in the same dialect,
that is because V_4 is abelian (commutative),
and so the two representations have the very
same effects on each point of their bearing.
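
The two substitution recipes are easy enough to mechanize.  The sketch
below (Python) reads the operation table of V_4 into a function of the
type L : G x G -> G and derives both regular representations as sets of
<input : output> pairs, confirming that they coincide for this abelian
group:

    G = ["e", "f", "g", "h"]

    # The operation table of V_4:  row x, column y gives x·y.
    rows = {"e": "efgh", "f": "fehg", "g": "ghef", "h": "hgfe"}
    L = {(x, y): rows[x][G.index(y)] for x in G for y in G}

    # Recipe 1:  x acts on contexts <_, y>, yielding the pairs (y : x·y).
    ante = {x: {(y, L[(x, y)]) for y in G} for x in G}

    # Recipe 2:  x acts on contexts <y, _>, yielding the pairs (y : y·x).
    post = {x: {(y, L[(y, x)]) for y in G} for x in G}

    print(ante == post)          # True, since V_4 is abelian
    print(sorted(post["g"]))     # [('e', 'g'), ('f', 'h'), ('g', 'e'), ('h', 'f')]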

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Note 19

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| Consider what effects that might 'conceivably'
| have practical bearings you 'conceive' the
| objects of your 'conception' to have.  Then,
| your 'conception' of those effects is the
| whole of your 'conception' of the object.
|
| Charles Sanders Peirce,
| "Maxim of Pragmaticism", CP 5.438.

So long as we're in the neighborhood, we might as well take in
some more of the sights, for instance, the smallest example of
a non-abelian (non-commutative) group.  This is a group of six
elements, say, G = {e, f, g, h, i, j}, with no relation to any
other employment of these six symbols being implied, of course,
and it can most easily be represented as the permutation group
on a set of three letters, say, X = {A, B, C}, usually notated
as G = Sym(X) or more abstractly and briefly, as Sym(3) or S_3.
Here are the permutation (= substitution) operations in Sym(X):

Table 1.  Permutations or Substitutions in Sym_{A, B, C}
o---------o---------o---------o---------o---------o---------o
|         |         |         |         |         |         |
|    e    |    f    |    g    |    h    |    i    |    j    |
|         |         |         |         |         |         |
o=========o=========o=========o=========o=========o=========o
|         |         |         |         |         |         |
|  A B C  |  A B C  |  A B C  |  A B C  |  A B C  |  A B C  |
|         |         |         |         |         |         |
|  | | |  |  | | |  |  | | |  |  | | |  |  | | |  |  | | |  |
|  v v v  |  v v v  |  v v v  |  v v v  |  v v v  |  v v v  |
|         |         |         |         |         |         |
|  A B C  |  C A B  |  B C A  |  A C B  |  C B A  |  B A C  |
|         |         |         |         |         |         |
o---------o---------o---------o---------o---------o---------o

Here is the operation table for S_3, given in abstract fashion:

Table 2.  Symmetric Group S_3

|                        ^
|                     e / \ e
|                      /   \
|                     /  e  \
|                  f / \   / \ f
|                   /   \ /   \
|                  /  f  \  f  \
|               g / \   / \   / \ g
|                /   \ /   \ /   \
|               /  g  \  g  \  g  \
|            h / \   / \   / \   / \ h
|             /   \ /   \ /   \ /   \
|            /  h  \  e  \  e  \  h  \
|         i / \   / \   / \   / \   / \ i
|          /   \ /   \ /   \ /   \ /   \
|         /  i  \  i  \  f  \  j  \  i  \
|      j / \   / \   / \   / \   / \   / \ j
|       /   \ /   \ /   \ /   \ /   \ /   \
|      (  j  \  j  \  j  \  i  \  h  \  j  )
|       \   / \   / \   / \   / \   / \   /
|        \ /   \ /   \ /   \ /   \ /   \ /
|         \  h  \  h  \  e  \  j  \  i  /
|          \   / \   / \   / \   / \   /
|           \ /   \ /   \ /   \ /   \ /
|            \  i  \  g  \  f  \  h  /
|             \   / \   / \   / \   /
|              \ /   \ /   \ /   \ /
|               \  f  \  e  \  g  /
|                \   / \   / \   /
|                 \ /   \ /   \ /
|                  \  g  \  f  /
|                   \   / \   /
|                    \ /   \ /
|                     \  e  /
|                      \   /
|                       \ /
|                        v
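
The same table can be generated from the permutations in Table 1 by a
short sketch like the following (Python), reading the product x·y as the
result of doing x and then y, a convention that agrees with the regular
representations worked out in the notes to follow:

    X = "ABC"
    # The permutations of Table 1, with perms[x] sending each letter to its image.
    perms = {"e": dict(zip(X, "ABC")), "f": dict(zip(X, "CAB")),
             "g": dict(zip(X, "BCA")), "h": dict(zip(X, "ACB")),
             "i": dict(zip(X, "CBA")), "j": dict(zip(X, "BAC"))}

    def compose(x, y):
        """The product x·y, read as:  do x first, then y."""
        image = {a: perms[y][perms[x][a]] for a in X}
        return next(name for name, p in perms.items() if p == image)

    for x in "efghij":
        print(x, ":", " ".join(compose(x, y) for y in "efghij"))
    # e : e f g h i j
    # f : f g e j h i
    # g : g e f i j h
    # h : h i j e f g
    # i : i j h g e f
    # j : j h i f g e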

By the way, we will meet with the symmetric group S_3 again
when we return to take up the study of Peirce's early paper
"On a Class of Multiple Algebras" (CP 3.324-327), and also
his late unpublished work "The Simplest Mathematics" (1902)
(CP 4.227-323), with particular reference to the section
that treats of "Trichotomic Mathematics" (CP 4.307-323).

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Note 20

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| Consider what effects that might 'conceivably'
| have practical bearings you 'conceive' the
| objects of your 'conception' to have.  Then,
| your 'conception' of those effects is the
| whole of your 'conception' of the object.
|
| Charles Sanders Peirce,
| "Maxim of Pragmaticism", CP 5.438.

By way of collecting a short-term pay-off for all the work --
not to mention the peirce-spiration -- that we sweated out
over the regular representations of the Klein 4-group V_4,
let us write out as quickly as possible in "relative form"
a minimal budget of representations of the symmetric group
on three letters, S_3 = Sym(3).  After doing the usual bit
of compare and contrast among these divers representations,
we will have enough concrete material beneath our abstract
belts to tackle a few of the presently obscur'd details of
Peirce's early "Algebra + Logic" papers.

Table 1.  Permutations or Substitutions in Sym {A, B, C}
o---------o---------o---------o---------o---------o---------o
|         |         |         |         |         |         |
|    e    |    f    |    g    |    h    |    i    |    j    |
|         |         |         |         |         |         |
o=========o=========o=========o=========o=========o=========o
|         |         |         |         |         |         |
|  A B C  |  A B C  |  A B C  |  A B C  |  A B C  |  A B C  |
|         |         |         |         |         |         |
|  | | |  |  | | |  |  | | |  |  | | |  |  | | |  |  | | |  |
|  v v v  |  v v v  |  v v v  |  v v v  |  v v v  |  v v v  |
|         |         |         |         |         |         |
|  A B C  |  C A B  |  B C A  |  A C B  |  C B A  |  B A C  |
|         |         |         |         |         |         |
o---------o---------o---------o---------o---------o---------o

Writing this table in relative form generates
the following "natural representation" of S_3.

    e  =  A:A + B:B + C:C

    f  =  A:C + B:A + C:B

    g  =  A:B + B:C + C:A

    h  =  A:A + B:C + C:B

    i  =  A:C + B:B + C:A

    j  =  A:B + B:A + C:C

I have without stopping to think about it written out this natural
representation of S_3 in the style that comes most naturally to me,
to wit, the "right" way, whereby an ordered pair configured as X:Y
constitutes the turning of X into Y.  It is possible that the next
time we check in with CSP that we will have to adjust our sense of
direction, but that will be an easy enough bridge to cross when we
come to it.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Note 21

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| Consider what effects that might 'conceivably'
| have practical bearings you 'conceive' the
| objects of your 'conception' to have.  Then,
| your 'conception' of those effects is the
| whole of your 'conception' of the object.
|
| Charles Sanders Peirce,
| "Maxim of Pragmaticism", CP 5.438.

To construct the regular representations of S_3,
we pick up from the data of its operation table:

Table 1.  Symmetric Group S_3

|                        ^
|                     e / \ e
|                      /   \
|                     /  e  \
|                  f / \   / \ f
|                   /   \ /   \
|                  /  f  \  f  \
|               g / \   / \   / \ g
|                /   \ /   \ /   \
|               /  g  \  g  \  g  \
|            h / \   / \   / \   / \ h
|             /   \ /   \ /   \ /   \
|            /  h  \  e  \  e  \  h  \
|         i / \   / \   / \   / \   / \ i
|          /   \ /   \ /   \ /   \ /   \
|         /  i  \  i  \  f  \  j  \  i  \
|      j / \   / \   / \   / \   / \   / \ j
|       /   \ /   \ /   \ /   \ /   \ /   \
|      (  j  \  j  \  j  \  i  \  h  \  j  )
|       \   / \   / \   / \   / \   / \   /
|        \ /   \ /   \ /   \ /   \ /   \ /
|         \  h  \  h  \  e  \  j  \  i  /
|          \   / \   / \   / \   / \   /
|           \ /   \ /   \ /   \ /   \ /
|            \  i  \  g  \  f  \  h  /
|             \   / \   / \   / \   /
|              \ /   \ /   \ /   \ /
|               \  f  \  e  \  g  /
|                \   / \   / \   /
|                 \ /   \ /   \ /
|                  \  g  \  f  /
|                   \   / \   /
|                    \ /   \ /
|                     \  e  /
|                      \   /
|                       \ /
|                        v

Just by way of staying clear about what we are doing,
let's return to the recipe that we worked out before:

It is part of the definition of a group that the 3-adic
relation L c G^3 is actually a function L : G x G -> G.
It is from this functional perspective that we can see
an easy way to derive the two regular representations.

Since we have a function of the type L : G x G -> G,
we can define a couple of substitution operators:

1.  Sub(x, <_, y>) puts any specified x into
    the empty slot of the rheme <_, y>, with
    the effect of producing the saturated
    rheme <x, y> that evaluates to x·y.

2.  Sub(x, <y, _>) puts any specified x into
    the empty slot of the rheme <y, _>, with
    the effect of producing the saturated
    rheme <y, x> that evaluates to y·x.

In (1), we consider the effects of each x in its
practical bearing on contexts of the form <_, y>,
as y ranges over G, and the effects are such that
x takes <_, y> into x·y, for y in G, all of which
is summarily notated as x = {(y : x·y) : y in G}.
The pairs (y : x·y) can be found by picking an x
from the left margin of the group operation table
and considering its effects on each y in turn as
these run along the right margin.  This produces
the regular ante-representation of S_3, like so:

e   =   e:e  +  f:f  +  g:g  +  h:h  +  i:i  +  j:j

f   =   e:f  +  f:g  +  g:e  +  h:j  +  i:h  +  j:i

g   =   e:g  +  f:e  +  g:f  +  h:i  +  i:j  +  j:h

h   =   e:h  +  f:i  +  g:j  +  h:e  +  i:f  +  j:g

i   =   e:i  +  f:j  +  g:h  +  h:g  +  i:e  +  j:f

j   =   e:j  +  f:h  +  g:i  +  h:f  +  i:g  +  j:e

In (2), we consider the effects of each x in its
practical bearing on contexts of the form <y, _>,
as y ranges over G, and the effects are such that
x takes <y, _> into y·x, for y in G, all of which
is summarily notated as x = {(y : y·x) : y in G}.
The pairs (y : y·x) can be found by picking an x
on the right margin of the group operation table
and considering its effects on each y in turn as
these run along the left margin.  This generates
the regular post-representation of S_3, like so:

e   =   e:e  +  f:f  +  g:g  +  h:h  +  i:i  +  j:j

f   =   e:f  +  f:g  +  g:e  +  h:i  +  i:j  +  j:h

g   =   e:g  +  f:e  +  g:f  +  h:j  +  i:h  +  j:i

h   =   e:h  +  f:j  +  g:i  +  h:e  +  i:g  +  j:f

i   =   e:i  +  f:h  +  g:j  +  h:f  +  i:e  +  j:g

j   =   e:j  +  f:i  +  g:h  +  h:g  +  i:f  +  j:e

If the ante-rep looks different from the post-rep,
it is just as it should be, as S_3 is non-abelian
(non-commutative), and so the two representations
differ in the details of their practical effects,
though, of course, being representations of the
same abstract group, they must be isomorphic.
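
For the record, here is a sketch (Python) that rebuilds both regular
representations of S_3 from the permutation data of Table 1, so that
the listings above can be checked mechanically:

    X = "ABC"
    perms = {"e": dict(zip(X, "ABC")), "f": dict(zip(X, "CAB")),
             "g": dict(zip(X, "BCA")), "h": dict(zip(X, "ACB")),
             "i": dict(zip(X, "CBA")), "j": dict(zip(X, "BAC"))}

    def compose(x, y):
        """The product x·y:  do x first, then y."""
        image = {a: perms[y][perms[x][a]] for a in X}
        return next(name for name, p in perms.items() if p == image)

    G = "efghij"
    ante = {x: [(y, compose(x, y)) for y in G] for x in G}    # pairs (y : x·y)
    post = {x: [(y, compose(y, x)) for y in G] for x in G}    # pairs (y : y·x)

    print(ante["h"])      # the pairs of h  =  e:h + f:i + g:j + h:e + i:f + j:g
    print(post["h"])      # the pairs of h  =  e:h + f:j + g:i + h:e + i:g + j:f
    print(ante == post)   # False, as S_3 is non-abelian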

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Note 22

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| the way of heaven and earth
| is to be long continued
| in their operation
| without stopping
|
| i ching, hexagram 32

You may be wondering what happened to the announced subject
of "Differential Logic", and if you think that we have been
taking a slight "excursion" -- to use my favorite euphemism
for "digression" -- my reply to the charge of a scenic rout
would need to be both "yes and no".  What happened was this.
At the sign-post marked by Sigil 7, we made the observation
that the shift operators E_ij form a transformation group
that acts on the propositions of the form f : B^2 -> B.
Now group theory is a very attractive subject, but it
did not really have the effect of drawing us so far
off our initial course as you may at first think.
For one thing, groups, in particular, the groups
that have come to be named after the Norwegian
mathematician Marius Sophus Lie, have turned
out to be of critical utility in the solution
of differential equations.  For another thing,
group operations afford us examples of triadic
relations that have been extremely well-studied
over the years, and this provides us with quite
a bit of guidance in the study of sign relations,
another class of triadic relations of significance
for logical studies, in our brief acquaintance with
which we have scarcely even started to break the ice.
Finally, I could hardly avoid taking up the connection
between group representations, a very generic class of
logical models, and the all-important pragmatic maxim.

Biographical Data for Marius Sophus Lie (1842-1899):

http://www-groups.dcs.st-and.ac.uk/~history/Mathematicians/Lie.html

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Note 23

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| Bein' on the twenty-third of June,
|      As I sat weaving all at my loom,
| Bein' on the twenty-third of June,
|      As I sat weaving all at my loom,
| I heard a thrush, singing on yon bush,
|      And the song she sang was The Jug of Punch.

We've seen a couple of groups, V_4 and S_3, represented in various ways, and
we've seen their representations presented in a variety of different manners.
Let us look at one other stylistic variant for presenting a representation
that is frequently seen, the so-called "matrix representation" of a group.

Recalling the manner of our acquaintance with the symmetric group S_3,
we began with the "bigraph" (bipartite graph) picture of its natural
representation as the set of all permutations or substitutions on
the set X = {A, B, C}.

Table 1.  Permutations or Substitutions in Sym {A, B, C}
o---------o---------o---------o---------o---------o---------o
|         |         |         |         |         |         |
|    e    |    f    |    g    |    h    |    i    |    j    |
|         |         |         |         |         |         |
o=========o=========o=========o=========o=========o=========o
|         |         |         |         |         |         |
|  A B C  |  A B C  |  A B C  |  A B C  |  A B C  |  A B C  |
|         |         |         |         |         |         |
|  | | |  |  | | |  |  | | |  |  | | |  |  | | |  |  | | |  |
|  v v v  |  v v v  |  v v v  |  v v v  |  v v v  |  v v v  |
|         |         |         |         |         |         |
|  A B C  |  C A B  |  B C A  |  A C B  |  C B A  |  B A C  |
|         |         |         |         |         |         |
o---------o---------o---------o---------o---------o---------o

Then we rewrote these permutations -- being functions f : X --> X
they can also be recognized as being 2-adic relations f c X  x  X --
in "relative form", in effect, in the manner to which Peirce would
have made us accustomed had he been given a relative half-a-chance:

    e  =  A:A + B:B + C:C

    f  =  A:C + B:A + C:B

    g  =  A:B + B:C + C:A

    h  =  A:A + B:C + C:B

    i  =  A:C + B:B + C:A

    j  =  A:B + B:A + C:C
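
Read from left to right, these relative forms compose by "relative
multiplication":  a pair X:Y of the first form links with a pair Y:Z
of the second to yield X:Z.  The following Python fragment, a sketch
of mine rather than anything in the original notes, composes h and i
in that order and recovers f, in agreement with the pair h:f that
appears in the line for i of the post-representation given earlier:

h = {('A', 'A'), ('B', 'C'), ('C', 'B')}     # h  =  A:A + B:C + C:B
i = {('A', 'C'), ('B', 'B'), ('C', 'A')}     # i  =  A:C + B:B + C:A

def relative_product(p, q):
    # all pairs a:c such that some a:b is in p and b:c is in q
    return {(a, c) for (a, b1) in p for (b2, c) in q if b1 == b2}

print(sorted(relative_product(h, i)))
# [('A', 'C'), ('B', 'A'), ('C', 'B')]  --  that is,  A:C + B:A + C:B  =  f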

These days one is much more likely to encounter the natural representation
of S_3 in the form of a "linear representation", that is, as a family of
linear transformations that map the elements of a suitable vector space
into each other, all of which would in turn usually be represented by
a set of matrices like these:

Table 2.  Matrix Representations of Permutations in Sym(3)
o---------o---------o---------o---------o---------o---------o
|         |         |         |         |         |         |
|    e    |    f    |    g    |    h    |    i    |    j    |
|         |         |         |         |         |         |
o=========o=========o=========o=========o=========o=========o
|         |         |         |         |         |         |
|  1 0 0  |  0 0 1  |  0 1 0  |  1 0 0  |  0 0 1  |  0 1 0  |
|  0 1 0  |  1 0 0  |  0 0 1  |  0 0 1  |  0 1 0  |  1 0 0  |
|  0 0 1  |  0 1 0  |  1 0 0  |  0 1 0  |  1 0 0  |  0 0 1  |
|         |         |         |         |         |         |
o---------o---------o---------o---------o---------o---------o

The key to the mysteries of these matrices is revealed by noting that their
coefficient entries are arrayed and overlaid on a place mat marked like so:

    | A:A  A:B  A:C |
    | B:A  B:B  B:C |
    | C:A  C:B  C:C |

Of course, the place-settings of convenience at different symposia may vary.
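
To make the overlay concrete, here is a short Python sketch, again
only by way of illustration, that lays each relative form out on the
place mat just described and writes a 1 wherever a pair of the form
lands, reproducing the matrices of Table 2:

letters = 'ABC'

relative_forms = {
    'e': ['A:A', 'B:B', 'C:C'],   'f': ['A:C', 'B:A', 'C:B'],
    'g': ['A:B', 'B:C', 'C:A'],   'h': ['A:A', 'B:C', 'C:B'],
    'i': ['A:C', 'B:B', 'C:A'],   'j': ['A:B', 'B:A', 'C:C'],
}

def matrix(name):
    # start from an all-zero place mat and mark the pairs of the form
    m = [[0] * 3 for _ in letters]
    for pair in relative_forms[name]:
        row, col = pair.split(':')
        m[letters.index(row)][letters.index(col)] = 1
    return m

for row in matrix('f'):
    print(' '.join(str(entry) for entry in row))
# 0 0 1
# 1 0 0
# 0 1 0    which is the matrix for f in Table 2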

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Note 24

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| In the beginning was the three-pointed star,
| One smile of light across the empty face;
| One bough of bone across the rooting air,
| The substance forked that marrowed the first sun;
| And, burning ciphers on the round of space,
| Heaven and hell mixed as they spun.
|
| Dylan Thomas, "In The Beginning", Verse 1

I'm afrayed that this thread is just bound to keep
encountering its manifold of tensuous distractions,
but I'd like to try and return now to the topic of
inquiry, espectrally viewed in differential aspect.

Here's one picture of how it begins,
one angle on the point of departure:

o-----------------------------------------------------------o
|                                                           |
|                                                           |
|                      o-------------o                      |
|                     /               \                     |
|                    /                 \                    |
|                   /                   \                   |
|                  /                     \                  |
|                 /                       \                 |
|                o                         o                |
|                |                         |                |
|                |                         |                |
|                |       Observation       |                |
|                |                         |                |
|                |                         |                |
|             o--o----------o   o----------o--o             |
|            /    \          \ /          /    \            |
|           /      \   d_I ^  o  ^ d_E   /      \           |
|          /        \       \/ \/       /        \          |
|         /          \      /\ /\      /          \         |
|        /            \    /  @  \    /            \        |
|       o              o--o---|---o--o              o       |
|       |                 |   |   |                 |       |
|       |                 |   v   |                 |       |
|       |   Expectation   |  d_O  |    Intention    |       |
|       |                 |       |                 |       |
|       |                 |       |                 |       |
|       o                 o       o                 o       |
|        \                 \     /                 /        |
|         \                 \   /                 /         |
|          \                 \ /                 /          |
|           \                 o                 /           |
|            \               / \               /            |
|             o-------------o   o-------------o             |
|                                                           |
|                                                           |
o-----------------------------------------------------------o

From what we must assume was a state of "Unconscious Nirvana" (UN),
since we do not acutely become conscious until after we are exiled
from that garden of our blissful innocence, where our Expectations,
our Intentions, our Observations all subsist in a state of perfect
harmony, one with every barely perceived other, something intrudes
on that scene of paradise to knock us out of that blessed isle and
to trouble our countenance forever after at the retrospect thereof.

The least disturbance, it being provident and prudent both to take
that first up, will arise in just one of three ways, in accord with
the mode of discord that importunes on our equanimity, whether it is
Expectation, Intention, Observation that incipiently incites the riot,
departing as it will from congruence with the other two modes of being.

In short, we cross just one of the three lines that border on the center,
or perhaps it is better to say that the objective situation transits one
of the chordal bounds of harmony, for the moment marked as d_E, d_I, d_O
to note the fact that one's Expectation, Intention, Observation, respectively,
is the mode that we duly indite as the one that's sounding the sour note.

A difference between Expectation and Observation is experienced
as a "Surprise", a phenomenon that cries out for an Explanation.

A discrepancy between Intention and Observation is experienced
as a "Problem", of the species that calls for a Plan of Action.

I can remember that I once thought up what I thought was an apt
name for a gap between Expectation and Intention, but I cannot
recall what it was, nor yet find the notes where I recorded it.

At any rate, the modes of experiencing a surprising phenomenon
or a problematic situation, as described just now, are already
complex modalities, and will need to be analyzed further if we
want to relate them to the minimal changes d_E, d_I, d_O.  Let
me think about that for a little while and see what transpires.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Note 25

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| In the beginning was the pale signature,
| Three-syllabled and starry as the smile;
| And after came the imprints on the water,
| Stamp of the minted face upon the moon;
| The blood that touched the crosstree and the grail
| Touched the first cloud and left a sign.
|
| Dylan Thomas, "In The Beginning", Verse 2



o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Note 26

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| In the beginning was the mounting fire
| That set alight the weathers from a spark,
| A three-eyed, red-eyed spark, blunt as a flower;
| Life rose and spouted from the rolling seas,
| Burst in the roots, pumped from the earth and rock
| The secret oils that drive the grass.
|
| Dylan Thomas, "In The Beginning", Verse 3

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

>>>>>

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Work Area

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

problem about writing

   e  =  e:e  +  f:f  +  g:g  +  h:h

no recursion intended
need for a work-around
ways of explaining it away

action on signs not objects

math def of rep

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Differential Discussions

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Note 1

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

BU = Benjamin Udell

BU: Your exposition of differential logic is over my head, YET --

Apologies to all for posting so many notes at once,
but I've found that it's best to break this stuff
up into easy pieces, and I wanted to get to the
part about the pragmatic maxim before everyone
lapsed into a coma.  Too late, most likely.

I just thought that it was about time that I supply a concrete example
in support of all those wild claims I've been making about how crucial
Peirce's mathematical way of looking at logic is to the future of both
subjects.  From my perspective, his logic is not some museum curiosity,
but a living force and a working tool, a resource whose full potential
is yet to be explored.  By way of illustrating the power of this
approach, I will exposit here the subject of differential logic along
lines that a slight extension of Peirce's Alpha Graphs makes possible.
The basic idea of differential logic was hinted at by Leibniz, exists
in explicit form as far back as the Boole-De Morgan correspondence, was
familiar to Babbage, and is well known to circuit engineers today,
but its full development has been hobbled by the recalcitrant calculus
with which today's logic teachers still shackle today's logic students.

BU: I'm wondering whether you could do me (or maybe a few of us) the
    favor of temporarily morphing into E.T. Bell & explaining to a
    mathematically ill educated person like me, what differential
    logic involves.  (E.g., does this have something to do with
    1st- vs. 2nd-order logic?) I also mean analogously as in
    the following examples:

Oh gee, could I play John Taine instead?
Bell was a bit notorious for tailoring
the facts as befit the better story.

We are building the differential extension of "Zeroth Order Logic" (ZOL),
that is to say, starting with propositional calculus or sentential logic.

BU: Ex.: Measure theory is used for probability theory.  The basic thing is
    to find the relative sizes of different portions of the area under the
    curve (the total area is usually set at unity).  (If I've got that right!)
    This is finding the definite integrals representing the portions.
    (Actually I've probably got this wrong.)

This is square measure theory in a venn diagram world.
You may find it useful to stroll through this gallery:

http://suo.ieee.org/ontology/msg03585.html

BU: Ex.: In optimization sometimes one looks for the minimum or maximum
    of a curve.  This amounts to finding the point(s) of the curve where
    the slope is zero.  Sometimes one wants to find the intersections of
    various curves;  in any case sometimes one seeks to find points on
    curves, points which have certain specified properties in terms
    of the curve, such as being a minimum, a maximum, a point of
    intersection, etc.

The slope is a derivative, df/dx, which is a number in the relevant field,
being the coefficient that sits next to the differential factor dx in the
appropriate differential expansion.  It turns out to be a bit more useful
to preserve the whole differential term.  Since our field is B = GF(2),
the derivative is either 0 or 1, so the term dx is either there or not.
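
If the derivative with respect to x is taken as the finite difference
f(x+1, y) + f(x, y) over GF(2) -- which is the reading I have in mind
here -- then a small Python sketch of the point, with f = xy as the
running example, looks like this:

def f(x, y):
    # running example:  f(x, y)  =  x and y
    return x & y

def df_dx(x, y):
    # finite difference of f in the x direction, computed over GF(2)
    return f(x ^ 1, y) ^ f(x, y)

for x in (0, 1):
    for y in (0, 1):
        print('x=%d y=%d   df/dx=%d' % (x, y, df_dx(x, y)))

# For f = xy the x-derivative works out to y, so the term dx is
# present in just those cells of the universe where y is true.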

BU: I do see you mentioning finding of differentials, but I don't know
    whether that's the basic point.  Also, I'm a little confused in my
    ignorance, since I thought that if you're talking about discrete
    objects (statements), you'd be talking about differences rather
    than differentials.  In any case I'm not sure how to think about
    a differential or a difference between two statements.

We are starting with the logical analogue of "finite difference calculus",
and will work up to the logical analogue of true differentials bit by bit.

The definition that you want to keep in mind is the concept of
a differential as a locally linear approximation to a function.
This is a notion that can very often make sense even when all
of the familiar formulas for it fail to carry over by means
of the usual brands of automatic analogues.

Think of a proposition, a shaded region in a venn diagram,
as if the shaded region were a mesa of height 1, and view
that as a potential function or a probability density on
the universe of discourse.  Then think about gradients.
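
In the finite difference spirit, and borrowing the letter E from the
shift operators mentioned back in Note 22 -- though the code below is
only my own quick sketch -- the mesa picture for f = xy can be probed
like this:

def f(x, y):
    return x & y                      # the mesa is the single cell (1, 1)

def Ef(x, y, dx, dy):
    # value of f at the point displaced by (dx, dy), over GF(2)
    return f(x ^ dx, y ^ dy)

def Df(x, y, dx, dy):
    # the change in f occasioned by the displacement (dx, dy)
    return Ef(x, y, dx, dy) ^ f(x, y)

# standing at the cell x = y = 1, which displacements topple us off the mesa?
for dx in (0, 1):
    for dy in (0, 1):
        print('dx=%d dy=%d   Df=%d' % (dx, dy, Df(1, 1, dx, dy)))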

To be continuous ---
If not exactly
Uniformly ...

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Note 2

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

BU = Benjamin Udell
JA = Jon Awbrey

BU: I found this at Semeion, Research Center of Sciences of Communication:

http://www.semeion.it/GLOSSTH1.htm

| Differential Logic:  is a different logic to build up
| complex systems.  Its inspiration is biology.  According
| to the differential logic, a unit develops dividing itself
| into more units and, in doing so, radically changes the state
| of its information.  This logic is not tautological, because
| during the process the systems increases its quantity of
| organization.

| Differential System:  is a system whose development happens in the same
| way as biological systems; that is, through differentiation of its units.

BU: Is this the same differential logic that you're talking about?

I think that they are speaking of "differentiation" in the sense
of embryology or developmental biology.  That happens to be a big
interest of mine in a remotely related way -- the data structures,
one of whose alternate nicknames is "conifers", that I use in my
"learning and reasoning" program, were partly influenced by the
way that so-called "growth cones" ramify throughout the nervous
system in the development of neural tissue during neurogenesis
and epigenetic learning.  Other than that, there's no terribly
close conscious connection with what I'm doing with diff log
at the moment.

JA: We are building the differential extension of "Zeroth Order Logic" (ZOL),
    that is to say, starting with propositional calculus or sentential logic.

BU: Ex.: Measure theory is used for probability theory.
    The basic thing is to find the relative sizes of
    different portions of the area under the curve
    (the total area is usually set at unity).
    (If I've got that right!)  This is finding
    the definite integrals representing the
    portions.  (Actually  I've probably got
    this wrong.)

JA: This is square measure theory in a venn diagram world.
    You may find it useful to stroll through this gallery:

JA: http://suo.ieee.org/ontology/msg03585.html

BU: If it's square measure theory, ultimately the interest will be
    in some kind of logical analog of mathematical integration?

I just mean that propositions are (modeled as, regarded as) step-functions,
functions having the form f : X -> B, where X is the universe of discourse
and B = {0, 1}.  If B is regarded as a "field", a space with some analogue
of the usual four functions (add, subtract, multiply, divide), then it is
called the "Galois field of order 2" and notated as GF(2).  In set theory
these are called "characteristic functions" and in statistics they are
known as "indicator functions" because they characterize or indicate
the subset of X where f evaluates to 1.  This subset is the inverse
image of 1 under f, horribly notated in Asciiland as f^(-1)(1) c X,
and various other folks call it the "antecedent", the "fiber", or
the "pre-image" of 1 under f.  I tend to use the "fiber" language,
and also make use of the "fiber bars" [|...|] that allow of the
more succinct form [| f |] = f^(-1)(1) = {x in X : f(x) = 1}.

  B
  ^
1 +    ******    ***********
  |    *    *    *         *   
  |    *    *    *         *
0 o*****----******---------***********> X
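
In code -- again only an illustration, with a toy universe X = B^2 --
the fiber is just a comprehension:

def f(x, y):
    # example proposition:  f(x, y)  =  x and y
    return x & y

universe = [(x, y) for x in (0, 1) for y in (0, 1)]      # X  =  B^2

fiber = [point for point in universe if f(*point) == 1]  # [| f |]  =  f^(-1)(1)

print(fiber)
# [(1, 1)]  --  the single cell of the universe where xy holds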

BU: Ex.: In optimization sometimes one looks for the minimum or maximum of
    a curve.  This amounts to finding the point(s) of the curve where the
    slope is zero.  Sometimes one wants to find the intersections of various
    curves;  in any case sometimes one seeks to find points on curves, points
    which have certain specified properties in terms of the curve, such as being
    a minimum, a maximum, a point of intersection, etc.

In mathematics one tends to take spaces and the functions on spaces in tandem,
considering ordered pairs like (X, X -> K), where X is the space of interest,
K is a space with a special relation to X, typically its "field of scalars",
and (X -> K) is the set of all pertinent functions from X to K.

In differential logic, we try to exploit what analogies
we can find between real settings like (X, X -> R) and
boolean settings like (Y, Y -> B), where R is the set
of real numbers and B = {0, 1}.  At the entry level
of generality, standard tricks of the trade permit
us to "coordinate" X as a k-dimensional real space
R^k and Y as a k-dimensional boolean space B^k,
and so we begin by cranking the analogy mill
forth and back between (R^k, R^k -> R) and
(B^k, B^k -> B).
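
One small illustration of why the boolean side of the analogy is so
tractable -- a throwaway Python fragment, nothing more -- is that for
k = 2 the whole function space (B^2 -> B) can be enumerated outright:

from itertools import product

points = list(product((0, 1), repeat=2))                 # the space B^2
functions = list(product((0, 1), repeat=len(points)))    # all value tables

print(len(points), 'points in B^2 and', len(functions), 'functions B^2 -> B')
# 4 points in B^2 and 16 functions B^2 -> B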

Starting to nod off ...
will have to get to
the rest tomorrow.

JA: The slope is a derivative, df/dx, which is a number in the relevant field,
    being the coefficient that sits next to the differential factor dx in the
    appropriate differential expansion.  It turns out to be a bit more useful
    to preserve the whole differential term.  Since our field is B = GF(2),
    the derivative is either 0 or 1, so the term dx is either there or not.

BU: Huh?

BU: I do see you mentioning finding of differentials,
    but I don't know whether that's the basic point.
    Also, I'm a little confused in my ignorance,
    since I thought that if you're talking about
    discrete objects (statements), you'd be talking
    about differences rather than differentials.
    In any case I'm not sure how to think about
    a differential or a difference between two
    statements.

JA: We are starting with the logical analogue of "finite difference calculus",
    and will work up to the logical analogue of true differentials bit by bit.

JA: The definition that you want to keep in mind is the concept of
    a differential as a locally linear approximation to a function.
    This is a notion that can very often make sense even when all
    of the familiar formulas for it fail to carry over by means of
    the usual brands of automatic analogues.

JA: Think of a proposition, a shaded region in a venn diagram,
    as if the shaded region were a mesa of height 1, and view
    that as a potential function or a probability density on
    the universe of discourse.  Then think about gradients.

BU: potential function?  gradients?

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Note 3

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

IS = Inna Semetsky
JA = Jon Awbrey

IS: You mentioned circuit engineers in one of your posts.  Computer technology
    is based on designing circuits aiming at information processing.  With this
    in mind, how then Peirce's philosophy differ from the so called computational
    brand of contemporary cognitive science who equate "mind" with the information
    processing device, and posit that there is nothing else to it.

That discussion was rendered a hopeless muddle by the fact that
cognitive science folks never read anything beyond a ten-year
window on their own literature, if that much, and so they
fell into using the term "functionalism" in a way that
was almost exactly the opposite of the way that it
had always been used before.

At any rate, the interesting part of the Whole Idea
goes back to Aristotle's dictum that "soul is form".
In that form it might be something worth discussing.

IS: Indeed difference may be considered as an "error"
    between input and output, and manipulated upon by
    further differentiations to feed into "the process"
    again and again.  I was very impressed with your posts
    on differential logic (I admit that I just skimmed them)
    but couldn't help thinking that all this "and", "or",
    "if ... then", and other functions of Boolean algebra
    indeed can be, and are being, constructed electronically.
    Yet I would hate to think that what cognitivists are doing --
    even unknowingly -- is employing Peirce's semiotics.  They
    use Boolean logic alright.  Is it all that is there in Peirce?

There is a differential aspect to inquiry.  Inquiry begins with uncertainty,
a condition of high cognitive entropy, if you will.  Differences generalize
to distributions.  The more uniform the distribution the higher the entropy.
Uncertainties are commonly associated with several categories of difference:

1.  A difference between expectation and observation is called a "surprise".
2.  A difference between  intention  and observation is called a "problem".
3.  A difference between expectation and  intention  is called a (I forget).

The cybernetic notion of an error-controlled regulator is a special case of this.
These are some of the main reasons that I thought a differential logic was needed.
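
If one wanted a toy model of those differences over B -- and this is
only a toy of my own devising, with made-up bit patterns -- then the
surprise and problem registers come out as exclusive-or differences,
which is also the sort of error signal that an error-controlled
regulator acts on:

expectation = 0b1010      # made-up feature bits, one per feature of interest
intention   = 0b1001
observation = 0b1100

surprise = expectation ^ observation     # expectation vs observation
problem  = intention   ^ observation     # intention   vs observation

print(format(surprise, '04b'), format(problem, '04b'))
# 0110 0101  --  the features on which each pair of modes disagrees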

IS: While on the subject:  I mentioned not once that part of my research is a
    peculiar connection between Deleuze philosophy and american pragmatism,
    not the least of which is the notion of difference.  Deleuze has been
    designated as "difference engineer" and his major opus is called
    "Difference and Repetition".

Five or six years ago, while taking a bit of a break from my normal routine,
I'd started on a collection of readings along these very lines, mostly just
picking them out by free association:  Deleuze, 'Difference and Repetition',
'The Fold';  Derrida, 'Writing and Difference';  Lyotard, 'The Differend';
Giroux, 'Border Crossings', and so on.  But I have no really clear sense of
what it was all about any more.  A lot of this writing always strikes me as
very insightful and intuitive, while I am reading it, and then the next one
says something radically different, that also strikes me as very insightful
and intuitive, so after a while I tend to become just a little indifferent.
But I see that I have long passages marked in the margins of the 'The Fold',
so perhaps the Leibniz link is something that I will have recourse to again.
Of course, 'Timaeus' and Kierkegaard's 'On Repetition' are eternal favorites.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Differential Logic

01.  http://suo.ieee.org/ontology/msg04040.html
02.  http://suo.ieee.org/ontology/msg04041.html
03.  http://suo.ieee.org/ontology/msg04045.html
04.  http://suo.ieee.org/ontology/msg04046.html
05.  http://suo.ieee.org/ontology/msg04047.html
06.  http://suo.ieee.org/ontology/msg04048.html
07.  http://suo.ieee.org/ontology/msg04052.html
08.  http://suo.ieee.org/ontology/msg04054.html
09.  http://suo.ieee.org/ontology/msg04055.html
10.  http://suo.ieee.org/ontology/msg04067.html
11.  http://suo.ieee.org/ontology/msg04068.html
12.  http://suo.ieee.org/ontology/msg04069.html
13.  http://suo.ieee.org/ontology/msg04070.html
14.  http://suo.ieee.org/ontology/msg04072.html
15.  http://suo.ieee.org/ontology/msg04073.html
16.  http://suo.ieee.org/ontology/msg04074.html
17.  http://suo.ieee.org/ontology/msg04077.html
18.  http://suo.ieee.org/ontology/msg04079.html
19.  http://suo.ieee.org/ontology/msg04080.html
20.  http://suo.ieee.org/ontology/msg04268.html
21.  http://suo.ieee.org/ontology/msg04269.html
22.  http://suo.ieee.org/ontology/msg04272.html
23.  http://suo.ieee.org/ontology/msg04273.html
24.  http://suo.ieee.org/ontology/msg04290.html
25.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o