Preamble

In my last post I gave the philosophical prerequisites needed to understand my approach to logic and mathematics. Before getting into the subject at hand, here's a clarifying remark inspired by some comments on my last post. When I say that mathematics is a social activity, I mean exactly that: an activity carried out socially, in the same sense that politics is. This means that saying, for example, that mathematics is incomplete or inconsistent (in the way a formal logic might be) makes as much sense as saying that politics is incomplete or inconsistent. It's very common for someone to mention mathematics having some property (e.g. Gödelian incompleteness) when, in fact, it's a specific logic (such as ZFC, or the totality of formal logic) which has this property. This odd synecdoche causes people to frequently confuse a part of mathematics with the whole. No one logic or collection of logics constitutes all of mathematics. The intuitions used to justify an axiom are part of mathematics, but not part of any axiom system.

With that out of the way, the point of this post is to give explicit meanings to the most basic logical connectives.

Logical Theories

A logical theory is a sort of collection of precommitments with two main kinds of judgments: $P\ \mathrm{prop}$ (read "$P$ is a proposition") and $P\ \mathrm{true}$ (read "$P$ is true"). Within a logical theory, all but one of our precommitments will pertain to $\mathrm{true}$. Note that any given precommitment will be about one particular thing, as previously explained. The propositional connectives we will initially deal with will be $\bot$ (falsum), $\top$ (verum), $\wedge$ (and), $\vee$ (or), and $\supset$ (implies). Each will have a precommitment telling us precisely what's required for making a declaration of truth pertaining to that connective, and nowhere else will there be rules for making such declarations. Our one remaining precommitment will be the acting definition of $\mathrm{prop}$. Briefly, the precommitment for $\mathrm{prop}$ says that $P$ is a proposition when we've said what's required for declaring $P\ \mathrm{true}$.

These precommitments will be variously referred to as "meaning explanations", descending from the usage of Martin-Löf, which explain the rules for reasoning about a judgment. For a judgment $J$, a meaning explanation should be a series of statements of the form:

We may declare $J$ when ...

The meaning of the judgment $P\ \mathrm{prop}$ is, then, as follows:

We may declare $P\ \mathrm{prop}$ when we understand the requirements for declaring $P\ \mathrm{true}$.

So, assuming that we follow our precommitments, we will only ever declare $P\ \mathrm{true}$ when we already have prescribed the rules by which we reason about $P$. This is part of the definition of $\mathrm{prop}$. Now, the judgment $P\ \mathrm{true}$ can only be declared in the case that we know the requirements for making such a declaration, meaning we should always be able to declare $P\ \mathrm{prop}$ in such a circumstance.

The judgments $\mathrm{true}$ and $\mathrm{prop}$ are called "categorical", signifying that they lack assumptions or generality. For the sake of addressing assumptions, we introduce a new form of judgment, which we call a "hypothetical judgment". A hypothetical judgment is made under a hypothesis. Our primary hypothetical judgment will be $J_2\ (J_1)$, pronounced "$J_2$ under the assumption $J_1$". Its meaning explanation is as follows:

We may declare $J_2\ (J_1)$ when we are allowed to declare $J_2$ after declaring $J_1$.

Hypothetical judgments may be iterated, and $J_3\ (J_1, J_2)$ will be used as notation for $(J_3\ (J_2))\ (J_1)$.

We introduce the notion of a "general judgment", which is judgment with respect to a variable: $J \mid x$, pronounced "for an arbitrary $x$, $J$". The meaning explanation for general judgments is as follows:

We may declare $J \mid x$ when we are allowed to declare $[E/x]J$ (i.e. the substitution of $E$ for $x$ in the expression $J$) for any expression $E$.

We assume that the bar symbol binds as loosely as possible. Additionally, when making multiple general judgments, we'll use the notation $J \mid x, y$ for $(J \mid y) \mid x$.

Note that this judgment relies essentially on the notion of substitution and variable binding. This, along with a full exposition on tokens and expressions, will be addressed in my next post.

One thing worth clarifying is the difference between $J$, where $J$ mentions a variable $x$, and $J \mid x$. The first is more like a scheme for a judgment. It's not meant to be declared on its own. Rather, $x$ has to be replaced with a concrete expression by the person asserting the judgment. The second is a complete judgment which can be made as-is.

One principle worth noting is the following: in order to have declared $P\ \mathrm{true}$, we must have already declared the conditions under which such a declaration could be made. This means we must be able to declare $P\ \mathrm{prop}$. This allows us to validate $P\ \mathrm{prop}\ (P\ \mathrm{true})$.

I would like to digress at this point to critique the knowledge-theoretic interpretation of judgments. In Type Theory and its Meaning Explanations, for example, we read the following;

To know $P\ \mathrm{prop}$ is to know that $P$ is a proposition, which is to know what would count as a direct verification of $P$.

Though it may not be obvious before it's pointed out, this description is simply untrue. We may know what counts as a direct verification of $P$ without knowing that $P$ is a proposition. There is an extra step missing: the act of realization. In order to know $P\ \mathrm{prop}$, we must first realize it. Let me demonstrate the importance of this distinction with an example. Here is a knowledge-theoretic meaning explanation of sorts for prime factors;

To know that $p$ and $q$ are prime factors of $n$ is to know that $p$ is prime, that $q$ is prime, and that $p \times q = n$.

Based on that, let me ask you this: what are the prime factors of $391$? If, by chance, you know a few arithmetic tricks, then replace this with a number obtained by multiplying larger primes. Without looking it up, very few people would be able to answer this quickly. Now, what's $17 \times 23$? Most people should be able to answer fairly quickly, as well as recognize that $17$ and $23$ are prime numbers. Now, I'll ask again: what are the prime factors of $391$?

Notice what knowledge was present. Knowledge that $17$ and $23$ were prime and that $17 \times 23 = 391$ was present, but knowledge that $17$ and $23$ are prime factors of $391$ wasn't. In this case, the realization required computational effort. For more sophisticated mathematical notions, that computational effort may even be insurmountable.
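To make the realization step concrete in code, here's a minimal Haskell sketch (my own illustration, not part of the theory being built, and using the example numbers above): the knowledge-theoretic account glosses over exactly the work this program does.

```haskell
-- A sketch of "realization" as literal computation: naive trial division,
-- the same work a person does when factoring 391 by hand.
primeFactors :: Integer -> [Integer]
primeFactors = go 2
  where
    go d n
      | n <= 1         = []
      | d * d > n      = [n]             -- the remaining n is itself prime
      | n `mod` d == 0 = d : go d (n `div` d)
      | otherwise      = go (d + 1) n

main :: IO ()
main = print (primeFactors 391)  -- [17,23], but only after doing the work
```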

It is not generally the case that knowing $A$ and that $A$ implies $B$ allows us to know $B$. That would take extra effort in realization, and so the knowledge-theoretic account of meaning is flawed. I've replaced references to knowledge with an appeal to precommitments and allowances, and that seems to eliminate this problem while making the account clearer and closer to practice. Additionally, we side-step issues of computer knowledge. It will become clear that the system described in this work is implementable on a computer, but then we would need to address the baggage of the knowledge-theoretic account. We would be forced to answer for the assertion that a computer apparently knows something, bringing into question the meaning of knowledge itself. Thankfully, this problem evaporates along with the original references to knowledge.

Basic Propositions

Now that the most commonly used judgments are out of the way, we can start populating our logical theories with specific examples of propositions.

Perhaps the thematically first proposition is trivial truth, $\top$. It has a simple precommitment with only a single rule;

We may declare $\top\ \mathrm{true}$.

In other words, we can assert $\top\ \mathrm{true}$ regardless of circumstance. We might say that the judgment of $\top\ \mathrm{true}$ is trivial. Since we have a precommitment telling us the requirements for declaring $\top\ \mathrm{true}$, we are free to declare $\top\ \mathrm{prop}$.

The simplest proposition that we may consider is falsum, $\bot$. To define this as a proposition, we must specify the requirements for declaring $\bot\ \mathrm{true}$: there are none. In other words, we use the empty precommitment as part of our definition of $\bot$. Since we have a precommitment which, trivially, tells us the requirements for making these declarations, we are free to assert $\bot\ \mathrm{prop}$.

Next, let's define conjunction. This will be the first time we rely on a hypothetical judgment in making a precommitment. Our goal is to declare the conditions under which we can declare $A \wedge B\ \mathrm{true}$. Our full precommitment looks like;

We may declare $A \wedge B\ \mathrm{true}$ when we've declared $A\ \mathrm{true}$ and $B\ \mathrm{true}$.

Consider what's necessary in order to follow this precommitment: we will need to assume that we've previously stated the conditions for declaring $A\ \mathrm{true}$ and $B\ \mathrm{true}$; that is, we must already be allowed to declare $A\ \mathrm{prop}$ and $B\ \mathrm{prop}$. Upon realizing this, we verify the following judgment;

$$A \wedge B\ \mathrm{prop}\ (A\ \mathrm{prop},\ B\ \mathrm{prop})$$

We can also declare the following, which summarizes the truth-theoretic content of our precommitment;

$$A \wedge B\ \mathrm{true}\ (A\ \mathrm{true},\ B\ \mathrm{true})$$

We can even characterize our precommitment as the smallest one validating this judgment.

I would also like to emphasize that the above precommitment is the entire definition of the proposition $A \wedge B$. We have introduced syntax and have specified the canonical methods for introduction. There is nothing more to be done.

It is more common in works of logic to "define" a piece of the theory in terms of some collection of inference rules which are themselves taken as an unjustified foundation. On a meaning-explanation account, such rules must be validated; they are taken as derived, as something that must be realized, not given.

Throughout this work, I will make reference to a judgment being validated. A validation is not necessarily part of our logic proper. Rather, it is an account of how we may come to realize that a judgment can be made. We may employ rather sophisticated reasoning in making a realization, in validating a judgment.

Moving on to disjunction, we can state the precommitment for $A \vee B$ as;

We may declare $A \vee B\ \mathrm{true}$ when we've declared $A\ \mathrm{true}$.
We may declare $A \vee B\ \mathrm{true}$ when we've declared $B\ \mathrm{true}$.

As with $\wedge$, in order for this precommitment to be followed, we must have already declared what's required for declaring $A\ \mathrm{true}$ and $B\ \mathrm{true}$. That is, we must already be allowed to declare $A\ \mathrm{prop}$ and $B\ \mathrm{prop}$. Upon realizing this, we verify the following declaration;

$$A \vee B\ \mathrm{prop}\ (A\ \mathrm{prop},\ B\ \mathrm{prop})$$

The precommitment pertaining to $\vee$ can be characterized as the smallest precommitment validating the following two judgments;

$$A \vee B\ \mathrm{true}\ (A\ \mathrm{true})$$

$$A \vee B\ \mathrm{true}\ (B\ \mathrm{true})$$

To end this section, we should address implication and define the circumstances under which $A \supset B$ is true.

We may declare $A \supset B\ \mathrm{true}$ when we've declared $B\ \mathrm{true}\ (A\ \mathrm{true})$.

It's worthwhile to ponder what exactly is required to follow this precommitment, as it's more complicated than the previous ones. Consider the requirements for declaring $B\ \mathrm{true}\ (A\ \mathrm{true})$. In order to state this, we must have stated two previous things. Firstly, we must have said what's required for declaring $A\ \mathrm{true}$; that is, we must be able to declare $A\ \mathrm{prop}$. Secondly, under the assumption of $A\ \mathrm{true}$, we must have stated the requirements for declaring $B\ \mathrm{true}$; that is, we must be able to declare $B\ \mathrm{prop}$ when we've already declared $A\ \mathrm{true}$: we must be able to declare $B\ \mathrm{prop}\ (A\ \mathrm{true})$. From this reasoning, we can validate the following judgment;

$$A \supset B\ \mathrm{prop}\ (A\ \mathrm{prop},\ B\ \mathrm{prop}\ (A\ \mathrm{true}))$$

Upon declaring the precommitment for $\supset$, we can make the following judgment, characterizing the truth-theoretic content of our precommitment;

$$A \supset B\ \mathrm{true}\ (B\ \mathrm{true}\ (A\ \mathrm{true}))$$

Witnesses

So far we have taken the Martin-Löf approach of prescribing the meaning of a proposition by specifying the conditions under which it can be introduced. This has been called the verificationist approach. In particular, it should be clear that, assuming we've accomplished the requirements for declaring $P\ \mathrm{prop}$, in declaring $P\ \mathrm{prop}$ we are saying that we have specified precisely what $P\ \mathrm{true}$ means.

One question that lingers is the notion of proof. I'll say variously that we can prove something, but I've not said what a proof actually is. That is, we've given meanings to the propositions themselves but not to their proofs. Fundamentally, the difference between logic and mathematics in general is that logic cares if something is true while mathematics cares why something is true. This may seem confusing at first, but we will see by the end of the fourth post of this series that the ability to do mathematics emerges from specifying and differentiating between proofs.

Before jumping in, we should discuss the notion of an elimination rule. To start, let's look at conjunction. There are two methods for eliminating $\wedge$. Firstly,

$$A\ \mathrm{true}\ (A \wedge B\ \mathrm{true})$$

How might one verify this? Consider the meaning of $A \wedge B\ \mathrm{true}$. In order to make that conclusion, it must have already been the case that we can conclude $A\ \mathrm{true}$. This is similarly the case with $B\ \mathrm{true}$, allowing us to verify the second elimination rule;

$$B\ \mathrm{true}\ (A \wedge B\ \mathrm{true})$$

There is a dual perspective to verificationism called pragmatism. In pragmatism, instead of prescribing the meaning of something in terms of how it's introduced, we prescribe meaning by how it's eliminated. So we'd have a precommitment for $\wedge$ which looks something like;

We may declare $A\ \mathrm{true}$ when we may declare $A \wedge B\ \mathrm{true}$.
We may declare $B\ \mathrm{true}$ when we may declare $A \wedge B\ \mathrm{true}$.

And it is precisely the elimination rules of $\wedge$ that contain the truth-theoretic content of the pragmatist precommitment.

Remember, a precommitment is only a list of rules for making judgments. We do intend to make cheap the ontological commitment to the ideas whose meaning is encoded in these precommitments, and to that end the verificationist approach seems, prima facie, more palatable. However, as we will see in a much later post on codata, the pragmatist approach is necessary to make sense of, for example, languages with infinite sentences. For now, we will focus on how these dual perspectives interact.

One thing that we can see by using both the introduction and elimination rules is that information is always traceable through the rules. Consider the derivation;

Assume we have declared $A\ \mathrm{true}$ and $B\ \mathrm{true}$. We can conclude $A \wedge B\ \mathrm{true}$. After that, we know that, in order to declare $A \wedge B\ \mathrm{true}$, we must have been able to declare $A\ \mathrm{true}$, and so we do: we declare $A\ \mathrm{true}$.

For any given declaration, there must be some justification for making it. Typically, such justifications boil down to appeals to some precommitment. It's obvious that the justifications used to justify both instances of $A\ \mathrm{true}$ are, on some level, the same. We can see the traceability of this justification through the derivation: the only information present at the conclusion of the derivation was already present at the outset. This property is called soundness, and essentially acts as our justification for calling the idea of $\wedge$ coherent: we cannot come to know something which we did not already know purely by reasoning about $\wedge$, and so is the case with all logical and mathematical operators.

We can express this traceability of information with a reduction of sorts: the above derivation should reduce, in some sense, to

Assume we have declared $A\ \mathrm{true}$.

This, fundamentally, is the motivation for type theory. We can represent the rules of a precommitment themselves with symbols, in the same way we have done for the propositions. This also allows us to be precise with our claims of a validated derivation. We may say that "$a$ is a derivation validated by the precommitment for $A$", and be sure that this is true based on the form of $a$. We will denote this relation between $a$ and $A$ as $a : A$, pronounced "$a$ witnesses $A$". When such a judgment can be made, we call $A$ a type, and refer to our broader theory as a type theory rather than a logical theory.
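For readers who know a functional language, here is a hedged preview of where this is going, phrased in Haskell (an approximation I'm choosing for illustration; Haskell's types are not exactly the propositions defined here). A proposition corresponds to a type, and declaring $a : A$ corresponds to exhibiting a term of that type:

```haskell
-- Propositions as types, witnesses as terms (an approximation):
type And a b = (a, b)      -- a witness of A ∧ B is a pair of witnesses
type Implies a b = a -> b  -- a witness of A ⊃ B maps witnesses of A to witnesses of B

-- The earlier derivation, traced at the level of witnesses:
-- introduce ∧ from the two assumptions, then immediately eliminate it.
roundTrip :: a -> b -> a
roundTrip x y = fst (x, y)
```

That `roundTrip` behaves exactly like `\x y -> x` is the same traceability-of-information observation made above.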

We could try specifying a meaning like this;

We may declare $a : A$ when $a$ is the name of a rule in the precommitment for $A$.

But, as stated in the introduction, this is insufficient. We may have, instead of a rule directly, a plan for deriving a rule or series of rules for witnessing $A$: an algorithm which, when run, will give a canonical witness, in the same way $2 + 2$ is a natural number, but not a canonical one. Reduction to a canonical form will be represented by a judgment $a \Rightarrow b$, pronounced "$a$ evaluates to $b$".

We may declare $a \Rightarrow b$ when, in any declared judgment, $a$ may be replaced with the expression $b$, creating a new judgment which can still be declared.

The meaning explanation for $\mathrm{prop}$ must be accordingly modified to take into account the computational behavior of expressions. We may now have non-canonical propositions, that is, expressions that evaluate to a proposition but don't themselves have associated precommitments. We will call this judgment "$\mathrm{prop}$", reserving "$\mathrm{Prop}$" for canonical propositions. Furthermore, we need a notion of canonical truth. Within our precommitments defining canonical propositions, we will use the judgment $\mathrm{True}$, and outside we will have $\mathrm{true}$ for non-canonical propositions. So we now have the three meaning explanations;

For Prop:

We may declare $P\ \mathrm{Prop}$ when we understand what's required to declare $P\ \mathrm{True}$.

For true:

We may declare $P\ \mathrm{true}$ when there is a $Q$ such that we may declare $P \Rightarrow Q$ and $Q\ \mathrm{True}$.

For prop:

We may declare $P\ \mathrm{prop}$ when there is a $Q$ such that we may declare $P \Rightarrow Q$ and $Q\ \mathrm{Prop}$.

As an example, we will update the justification for the following derivation:

$$A \supset B\ \mathrm{prop}\ (A\ \mathrm{prop},\ B\ \mathrm{prop}\ (A\ \mathrm{true}))$$

The verification of this rule must change to correspond to our new understanding of $\mathrm{prop}$. $A \supset B$ is already in a canonical form. That is, we can declare $A \supset B\ \mathrm{prop}$ so long as we can declare $A \supset B\ \mathrm{Prop}$. From there, we need to establish what we need in order to declare $B\ \mathrm{true}\ (A\ \mathrm{true})$. In order to state this, we must have stated two previous things. Firstly, we must have said what's required for declaring $A\ \mathrm{true}$, where we may declare that $A \Rightarrow A'$ for some canonical $A'$; that is, we must be able to declare $A\ \mathrm{prop}$. Secondly, under the assumption of $A\ \mathrm{true}$, we must have stated the requirements for declaring $B\ \mathrm{true}$, where we may declare that $B \Rightarrow B'$ for some canonical $B'$; that is, we must be able to declare $B\ \mathrm{prop}$ when we've already declared $A\ \mathrm{true}$: we must be able to declare $B\ \mathrm{prop}\ (A\ \mathrm{true})$.

We will also modify previous meanings to make them more flexible. For example;

We may declare $A \wedge B\ \mathrm{True}$ when we've declared $A\ \mathrm{true}$ and $B\ \mathrm{true}$.

This allows our connectives to operate on non-canonical propositions.

We will make a new judgment, $A\ \mathrm{type}$, which modifies our $\mathrm{prop}$ judgment. The only real distinction is that types have names for each precommitment rule. We have two precommitments, for the canonical and noncanonical forms.

For Type:

We may declare $A\ \mathrm{Type}$ when we understand the requirements for declaring $a : A$ for all applicable expressions $a$.

For type:

We may declare $A\ \mathrm{type}$ when there is a $B$ such that we may declare $A \Rightarrow B$ and $B\ \mathrm{Type}$.

When a name appears within a precommitment, we call it a canonical witness of the type. Here, $:$ is used to denote that something is a canonical witness. The $:$ token, however, is intended to relate non-canonical expressions as well, so we have the further meaning explanation in the case that $a$ is non-canonical;

We may declare $a : A$ when we have a $b$ and a $B$ such that we may declare $a \Rightarrow b$, $A \Rightarrow B$, and $b : B$.

We may now modify our meaning of $\wedge$ with a named witness;

We may declare $(a, b) : A \wedge B$ when we may declare $a : A$ and $b : B$.

Note that $a$ and $b$ do not have to be canonical. Even if $a$ and $b$ are non-canonical, $(a, b)$ is canonical, as it's subject to the rules of the precommitment.
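Haskell's lazy pairs give a convenient analogy for this point (an analogy only; Haskell is not the theory being built here): a pair is already in canonical form even when its components have not been evaluated, or cannot be.

```haskell
-- The pair below is canonical (in weak head normal form) even though its
-- first component is an unevaluated computation and its second is undefined.
canonicalPair :: (Integer, Integer)
canonicalPair = (2 + 2, error "never demanded")

main :: IO ()
main = print (fst canonicalPair)  -- prints 4; the second component is never forced
```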

It's useful to start declaring non-canonical witnesses at this point. We do this simply by declaring a sequence of tokens, along with reduction rules. For example, we have;

$$\pi_1(a, b) \Rightarrow a$$

$$\pi_2(a, b) \Rightarrow b$$

Note the new kind of meaning, signified by $\Rightarrow$. These reduction rules aren't the meanings themselves. Instead, we have a meaning like;

In any declared judgment, $\pi_1(a, b)$ may be replaced with $a$, creating a new judgment which can still be declared.

This models the meaning of $\pi_1$, and allows us to validate $\pi_1(a, b) \Rightarrow a$. However, that's quite long-winded, so I'll just state the reduction rules themselves.

Each reduction rule constitutes a new precommitment. We ought to be concerned by the ontological cost of such a commitment. We can, at least, see that so long as our source expression is not in any kind of canonical form, we're neither introducing a new idea nor modifying an old one. If all of our reduction rules are of this form, the precommitment is free of cost: it doesn't make our ideas incoherent. We may think of the reduction rules as the precommitment defining the meaning of the operators they reduce. If we were to assert a reduction rule for an operator which already has a meaning explanation, such as the pairing constructor $(a, b)$, for instance, then we'd be breaking the precommitment which is its meaning explanation.
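To make the reduction judgment concrete, here is a toy evaluator for the fragment built so far (my own sketch, with `Lit` as a hypothetical stand-in for other canonical witnesses). The function `eval` is one way of realizing $a \Rightarrow b$: it reduces an expression to canonical form.

```haskell
-- Expressions: pairing is canonical; the projections are non-canonical.
data Expr
  = Pair Expr Expr  -- canonical: (a, b)
  | Fst Expr        -- non-canonical: π₁(e)
  | Snd Expr        -- non-canonical: π₂(e)
  | Lit Integer     -- canonical: a stand-in for other canonical witnesses
  deriving Show

-- Reduce an expression to canonical form using the two reduction rules.
eval :: Expr -> Expr
eval (Fst e) = case eval e of
  Pair a _ -> eval a
  _        -> error "π₁ applied to a non-pair"
eval (Snd e) = case eval e of
  Pair _ b -> eval b
  _        -> error "π₂ applied to a non-pair"
eval e = e  -- Pair and Lit are already canonical

-- eval (Fst (Pair (Lit 1) (Lit 2)))  evaluates to  Lit 1
```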

As long as they do not conflict with any previous precommitment, we may freely assert any reductions we want, including looping and exploding expressions. These declarations, however, may not interface well with our previous ones; they do have to answer to the meaning explanations of our theory in order to be useful. For example, we may want to validate a derivation such as this;

$$\pi_1(p) : A\ (p : A \wedge B)$$

To validate this rule, we observe that, in order for $p : A \wedge B$ to have been concluded, $p$ must reduce to a canonical witness of $A \wedge B$ (by the meaning of $:$ on non-canonical terms). The only canonical witnesses of $A \wedge B$ are of the form $(a, b)$, where $a : A$ and $b : B$. This brings our target judgment to;

$$\pi_1(a, b) : A\ (a : A,\ b : B)$$

Since we've declared that $\pi_1(a, b) \Rightarrow a$, we may alter our goal to

$$a : A\ (a : A,\ b : B)$$

which is easily declarable.

We use the expression $\pi_1(p)$ to stand for the proof tree expressed by the reasoning in the earlier derivation. The previous validation acts as a precise restatement of that derivation, where our justifications were named by the witnesses. The reduction rule gives a precise description for how to extract the assumption we initially made.

A First Pass at Lambda Expressions

In the case of implication, we can easily transform the previous meaning explanation for the sake of making a type;

We may declare $\lambda e.b : A \supset B$ when we may declare $b : B\ (e : A)$.

For example, we may validate the following judgment;

$$\lambda p.\,\pi_1(p) : (A \wedge B) \supset A$$

To do so, we must be able to state

$$\pi_1(p) : A\ (p : A \wedge B)$$

which we, in fact, have already done. However, there's an extra consideration which must be made. For an expression $e$, it may be the case that $e$ is a non-canonical witness of $A$ and that $b$ actually mentions $e$. Consider that, in order for $\lambda e.b$ to truly be a proof of $A \supset B$, we must be able to construct a witness of $B$ out of $e$, assuming only that $e$ is a witness of $A$. That is to say, if we know more about the specific structure of $e$, then it may not be fully general; it may be creating a witness of $B$ out of a special case of $A$ rather than $A$s in general. As a consequence, the meaning explanation above fails to capture what we intend. To demonstrate this, consider the following meaning for a type $O$;

We may declare $\mathrm{none} : O$.
We may declare $\mathrm{some}(a) : O$ when we may declare $a : A$.

We'll consider these kinds of constructs more deeply later on, but for now, consider that a $c$ such that $c : O$ could have been declared in two canonical ways; one which contains a witness to $A$, and one which doesn't. Now, consider the judgment;

$$\lambda\,\mathrm{some}(a).\,a : O \supset A$$

Assuming our previous meaning explanation was correct, it should be the case that $\lambda\,\mathrm{some}(a).\,a : O \supset A$, but this obviously shouldn't hold in general. Lambda expressions should only be able to bind variables, treated as witnesses of unknown structure, but not expressions containing constants. As a consequence, the previously stated meaning explanation is insufficient for characterizing functionality; the type-theoretic generalization of implication will need something more powerful than hypothetical judgment. A simple solution presents itself. We may use a generalized judgment;

We may declare $\lambda x.b : A \supset B$ when we've declared $(b : B\ (x : A)) \mid x$.

where the $\mid x$ signifies that $x$ may be mentioned in $b$, but not in $A$ or $B$. This kind of abstract binding will be dealt with in the next post, and functional contexts will be defined in the post after that.

In the case of propositional implication, we may validate the following elimination rule;

$$B\ \mathrm{true}\ (A \supset B\ \mathrm{true},\ A\ \mathrm{true})$$

Consider that, in order to have concluded $A \supset B\ \mathrm{true}$, we must have been able to conclude $B\ \mathrm{true}\ (A\ \mathrm{true})$, rendering our goal;

$$B\ \mathrm{true}\ (B\ \mathrm{true}\ (A\ \mathrm{true}),\ A\ \mathrm{true})$$

which is easily declarable.

Using this reasoning, we can make the following deduction;

Assume we have declared $A\ \mathrm{true}$ and may declare $B\ \mathrm{true}\ (A\ \mathrm{true})$ by some means. We can, by the meaning of $\supset$, conclude $A \supset B\ \mathrm{true}$. From the elimination rule, we may conclude $B\ \mathrm{true}$.

It's clear that $A \supset B$ is simply an internalization of the judgment $B\ \mathrm{true}\ (A\ \mathrm{true})$. We may conclude that the above deduction should, in some sense, be equivalent to;

Assume we have declared $A\ \mathrm{true}$ and may declare $B\ \mathrm{true}\ (A\ \mathrm{true})$ by some means. By the meaning of hypothetical judgments, this means precisely that we may declare $B\ \mathrm{true}$ when we've declared $A\ \mathrm{true}$, and so we do: we declare $B\ \mathrm{true}$.

In the expression $\lambda x.b$, we have that $b$ is a recipe of sorts for building our conclusion out of our assumption $x$. We define a new operation, application, written $f(a)$, along with the following reduction rule;

$$(\lambda x.b)(a) \Rightarrow [a/x]b$$

We may validate the following judgment;

$$f(a) : B\ (f : A \supset B,\ a : A)$$

To do so, we note that we need to be able to state $f(a) : B$ in the case that we've stated $f : A \supset B$ and $a : A$, by the meaning of hypothetical judgments. In order to have stated $f : A \supset B$, we must have some $g$ such that $f \Rightarrow g$ and $g : A \supset B$. The only canonical witness of $A \supset B$ is of the form $\lambda x.b$, so our goal becomes $(\lambda x.b)(a) : B$.

We may reduce our goal to $[a/x]b : B$, and also point out that, in order to have concluded $\lambda x.b : A \supset B$, we must be able to declare $(b : B\ (x : A)) \mid x$.

In order to declare $(b : B\ (x : A)) \mid x$, we must have been able to declare $[E/x]b : B\ (E : A)$ for any expression $E$. In particular, this allows us to use $a$ in place of $x$, allowing us to judge $[a/x]b : B\ (a : A)$. This makes our goal

$$[a/x]b : B\ (a : A)$$

This is easily declarable.

Consider that, if we stuck with our previous meaning, which made use of hypothetical judgment rather than generalized judgment, we would eventually get stuck on;

$$[a/x]b : B\ (b : B\ (x : A),\ a : A)$$

We've solved that major problem with the meaning explanation, but we still owe an account of our specification of variable usage. This can be clarified by an understanding of free variables, as described in the next post.

If we take $\lambda$ and application on their own, ignoring any idea that they have anything to do with logical formulas, we get a self-contained model of computation called the lambda calculus. It is, on its own, Turing complete. It was the second such system discovered to have this property, developed all the way back in the 1930s by Alonzo Church. At that time, the connection to logic that I showed here was not known; it was only described in 1969 by William Alvin Howard. In many ways the lambda calculus can be thought of as the computational component of deduction, describing precisely, through substitution alone, how justifications in deductive inferences flow through an argument.
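As a sketch of how little machinery this takes, here is a minimal untyped lambda calculus in Haskell (my own illustration; for brevity it ignores variable capture, so it is only safe on terms whose bound variables are distinct):

```haskell
data Term
  = Var String
  | Lam String Term  -- λx. b
  | App Term Term    -- f(a)
  deriving Show

-- [a/x]b : substitute a for x in b (capture naively ignored)
subst :: String -> Term -> Term -> Term
subst x a (Var y)   = if x == y then a else Var y
subst x a (Lam y b) = if x == y then Lam y b else Lam y (subst x a b)
subst x a (App f e) = App (subst x a f) (subst x a e)

-- Reduce to canonical (weak head normal) form by beta reduction alone.
eval :: Term -> Term
eval (App f e) = case eval f of
  Lam x b -> eval (subst x e b)  -- (λx. b)(a) ⇒ [a/x]b
  g       -> App g e
eval t = t

-- eval (App (Lam "x" (Var "x")) (Lam "y" (Var "y")))  ⇒  Lam "y" (Var "y")
```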

Disjunction and Generalized Eliminators

We may straightforwardly alter the meaning precommitment for $\vee$ to incorporate witnesses;

We may declare $\mathrm{inl}(a) : A \vee B$ when we've declared $a : A$.
We may declare $\mathrm{inr}(b) : A \vee B$ when we've declared $b : B$.

Characterizing the elimination rule for $\vee$ is a bit more complicated. Consider the following judgment;

$$C\ \mathrm{true}\ (A \vee B\ \mathrm{true},\ C\ \mathrm{true}\ (A\ \mathrm{true}),\ C\ \mathrm{true}\ (B\ \mathrm{true}))$$

To validate it, we consider what requirements are necessary to conclude $A \vee B\ \mathrm{true}$. There are only two: either we declared $A\ \mathrm{true}$ or we declared $B\ \mathrm{true}$. This splits our goal into two separate cases which we must validate individually;

$$C\ \mathrm{true}\ (A\ \mathrm{true},\ C\ \mathrm{true}\ (A\ \mathrm{true}),\ C\ \mathrm{true}\ (B\ \mathrm{true}))$$

$$C\ \mathrm{true}\ (B\ \mathrm{true},\ C\ \mathrm{true}\ (A\ \mathrm{true}),\ C\ \mathrm{true}\ (B\ \mathrm{true}))$$

Both are easily declarable. Skipping some intuition, we declare the following;

$$\mathrm{case}(\mathrm{inl}(a);\ x.c_1;\ y.c_2) \Rightarrow [a/x]c_1$$

$$\mathrm{case}(\mathrm{inr}(b);\ x.c_1;\ y.c_2) \Rightarrow [b/y]c_2$$

$\mathrm{case}$ acts as a non-canonical witness for $\vee$ elimination. To see why the computation rules hold, consider the following judgment;

$$\mathrm{case}(e;\ x.c_1;\ y.c_2) : C\ (e : A \vee B,\ (c_1 : C\ (x : A)) \mid x,\ (c_2 : C\ (y : B)) \mid y)$$

This should remind you of the introduction rule for implication. We first note that, in order to declare $e : A \vee B$, we must already have a $d$ where $e \Rightarrow d$. To validate our goal, we do a similar case split on $d$, which can only be of the form $\mathrm{inl}(a)$ where $a : A$, or $\mathrm{inr}(b)$ where $b : B$. We'll focus on the first of these cases, since the other is more-or-less the same. We have our new goal;

$$\mathrm{case}(\mathrm{inl}(a);\ x.c_1;\ y.c_2) : C\ (a : A,\ (c_1 : C\ (x : A)) \mid x,\ (c_2 : C\ (y : B)) \mid y)$$

By the reduction rule for $\mathrm{case}$, the conclusion reduces to $[a/x]c_1$. We can see from the assumption $(c_1 : C\ (x : A)) \mid x$ that we may conclude $[a/x]c_1 : C$, which allows us to trivially finish validating our goal.

Note the $x.c_1$ used in the syntax for $\mathrm{case}$, similar to $\lambda x.b$. This will be our standard notation for variable binding outside of contexts and generalized judgments. This will be discussed in detail in the next post.

We may alter our elimination implementation to rely on implication. We define a modified $\mathrm{case}$;

$$\mathrm{case}'(\mathrm{inl}(a);\ f;\ g) \Rightarrow f(a)$$

$$\mathrm{case}'(\mathrm{inr}(b);\ f;\ g) \Rightarrow g(b)$$

We may make a similar judgment through the fact that we can internalize generalized judgments using $\lambda$;

$$\mathrm{case}'(e;\ f;\ g) : C\ (e : A \vee B,\ f : A \supset C,\ g : B \supset C)$$
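For comparison, Haskell ships exactly this implication-based eliminator for its disjunction type `Either` as the standard function `either`, whose behavior matches the reduction rules above:

```haskell
-- either :: (a -> c) -> (b -> c) -> Either a b -> c
--   either f g (Left a)  reduces to  f a   (like case'(inl(a); f; g) ⇒ f(a))
--   either f g (Right b) reduces to  g b   (like case'(inr(b); f; g) ⇒ g(b))

example :: Either Integer Bool -> String
example = either show (\b -> if b then "yes" else "no")
-- example (Left 3)     == "3"
-- example (Right True) == "yes"
```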

We can employ a similar trick to unify our two rules for $\wedge$ into a single generalized elimination rule. For $\vee$, we needed two functions to account for all possibilities. The issue with the $\wedge$ eliminators is not that they don't account for all possibilities; after all, there is only one, the pair. The issue is that neither one on its own accounts for all data. If we look at $\pi_1$;

$$\pi_1(a, b) \Rightarrow a$$

we notice that it throws away half of our pair. Likewise with $\pi_2$. We need to have functions which account for all possibilities and all data if we want to have a unified eliminator.

Generically, we need a function which potentially operates on all internal data of our type constructor. For $\wedge$, we only have the two types it has as input. So, to eliminate an $A \wedge B$ in general, we need a single function $f : A \supset B \supset C$. To figure out what the return value will be, we just issue the components of the pair to $f$. So we have;

$$\mathrm{rec}_\wedge(f;\ (a, b)) \Rightarrow f(a)(b)$$

From which we can verify;

$$\mathrm{rec}_\wedge(f;\ p) : C\ (f : A \supset B \supset C,\ p : A \wedge B)$$

We can recover the original eliminators by setting $f$ to be $\lambda x.\lambda y.x$ and $\lambda x.\lambda y.y$, since $\mathrm{rec}_\wedge(\lambda x.\lambda y.x;\ (a, b)) \Rightarrow a$ and $\mathrm{rec}_\wedge(\lambda x.\lambda y.y;\ (a, b)) \Rightarrow b$.

Furthermore, we can obtain $\mathrm{rec}_\wedge$ from our old eliminators by defining $\mathrm{rec}_\wedge(f;\ p)$ as $f(\pi_1(p))(\pi_2(p))$. This is just a different form of an eliminator we already had, not a more powerful principle.
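In Haskell, this generalized $\wedge$ eliminator is the standard `uncurry`, and the two translations just described can be written out directly:

```haskell
-- uncurry f (a, b) = f a b   -- the generalized eliminator

-- Recovering the projections by choosing f:
fst' :: (a, b) -> a
fst' = uncurry (\x _ -> x)

snd' :: (a, b) -> b
snd' = uncurry (\_ y -> y)

-- Conversely, the generalized eliminator from the old projections:
uncurry' :: (a -> b -> c) -> (a, b) -> c
uncurry' f p = f (fst p) (snd p)
```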

We could also make a generalized eliminator like so;

$$\mathrm{split}((a, b);\ x.y.c) \Rightarrow [a/x][b/y]c$$

It's this sort of eliminator which we'll make most often as we move on to mathematics in general.

Verum and Falsum

It's far easier to deal with verum and falsum when it comes to witnesses. The meaning explanation for trivial truth is simply;

We may declare $\star : \top$.

and the meaning explanation for trivial falsehood is still the empty precommitment.

For the elimination of $\top$, we consider the following;

$$C\ \mathrm{true}\ (\top\ \mathrm{true},\ C\ \mathrm{true}\ (\top\ \mathrm{true}))$$

Consider what's necessary to declare $C\ \mathrm{true}\ (\top\ \mathrm{true})$. We would need to be able to declare $C\ \mathrm{true}$ whenever we can declare $\top\ \mathrm{true}$. By the meaning of $\top$, we may always declare $\top\ \mathrm{true}$, meaning that we must always be able to declare $C\ \mathrm{true}$. We reduce our goal to;

$$C\ \mathrm{true}\ (\top\ \mathrm{true},\ C\ \mathrm{true})$$

which is trivially verifiable. Using this, we can construct the following;

$$\mathrm{rec}_\top(\star;\ c) \Rightarrow c$$

Using a similar strategy to before, we may verify;

$$\mathrm{rec}_\top(e;\ c) : C\ (e : \top,\ c : C)$$

Of course, this is somewhat trivial, as the judgment which we're assuming already covers $c : C$. All this leads us to conclude that we can't really eliminate out of $\top$, since it doesn't contain any information.

Regarding $\bot$, we have something more interesting. Consider the following;

$$C\ \mathrm{true}\ (\bot\ \mathrm{true})$$

How might this be verified? Consider all the cases from which we can conclude $\bot\ \mathrm{true}$. Well, there are none; we're done. We've verified this judgment for all the cases we may consider.

Some readers unfamiliar with logic may find this reasoning a bit difficult to wrap their heads around, but it's called the principle of explosion, and it is completely consistent with our previous forms of reasoning. For $\vee$ we needed to verify our elimination on two different cases. For $\top$, it was one case. For $\bot$, it's zero, allowing us to trivially finish verification.

For the lack of witnesses to $\bot$, we may verify, through similar reasoning as before, the following;

$$\mathrm{abort}(e) : C\ (e : \bot)$$
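Haskell makes the same move with the empty type `Void` and its eliminator `absurd` from `Data.Void`, which validates its result type by covering zero cases:

```haskell
import Data.Void (Void, absurd)

-- absurd :: Void -> a  plays the role of abort: from a (non-existent)
-- witness of falsum, produce a witness of anything.
explode :: Void -> Integer
explode v = absurd v  -- no cases to consider; we're done
```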

Defined Connectives

There are a handful of connectives which I've neglected to give meaning explanations to. Most significant are negation, $\neg A$, and the biconditional, $A \Leftrightarrow B$. They are defined in terms of what we have already defined. We simply assert the following;

$$\neg A := A \supset \bot$$

$$A \Leftrightarrow B := (A \supset B) \wedge (B \supset A)$$

Starting with the biconditional, we can validate the following judgments;

$$B\ \mathrm{true}\ (A \Leftrightarrow B\ \mathrm{true},\ A\ \mathrm{true})$$

$$A\ \mathrm{true}\ (A \Leftrightarrow B\ \mathrm{true},\ B\ \mathrm{true})$$

which sums up most use cases for $\Leftrightarrow$.
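Transcribed into Haskell types (a sketch, with `Not` and `Iff` as names I'm choosing here for illustration), the definitions and the two validated judgments look like this:

```haskell
import Data.Void (Void)

type Not a = a -> Void           -- ¬A  :=  A ⊃ ⊥
type Iff a b = (a -> b, b -> a)  -- A ⇔ B  :=  (A ⊃ B) ∧ (B ⊃ A)

-- The two validated judgments for ⇔, at the level of witnesses:
forward :: Iff a b -> a -> b
forward (f, _) = f

backward :: Iff a b -> b -> a
backward (_, g) = g
```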

$\neg$ is more interesting. We can investigate it by considering how excluded middle interacts with the logic I've so far presented. Excluded middle states that every proposition is either true or false: that $A \vee \neg A\ \mathrm{true}$ for any proposition $A$. This is NOT declarable in the logic I set out. Consider how this might interact with our proof theory. If it were declarable, then there should be some uniform algorithm which, given any arbitrary proposition $A$, would return either $\mathrm{inl}(a)$, where $a : A$, or $\mathrm{inr}(f)$, where $f : \neg A$. There are, indeed, some logics which have this property, which will be discussed in a later post. For now, we can prove $\neg\neg(A \vee \neg A)$. How would we verify this judgment?

We can reduce $\neg\neg(A \vee \neg A)$ to $\neg(A \vee \neg A) \supset \bot$. To prove this, we start with a lambda term $\lambda n.b$, where $n : \neg(A \vee \neg A)$, and we need to find a $b : \bot$. Well, if we constructed a proof of $A \vee \neg A$, then we could apply $n$ to it to get our $\bot$. The only way we can get a proof of an $A \vee \neg A$ is to use either $\mathrm{inl}$ or $\mathrm{inr}$. Off the bat, there's no clear way to give a proof of $A$, but maybe there's a way to prove $\neg A$, so let's go with $\mathrm{inr}$.

$\mathrm{inr}$ wants a proof of $\neg A$, that is, of $A \supset \bot$, so we start with a lambda term $\lambda a.b'$, where $a : A$, and we need to find a $b' : \bot$. Hey, now we have a proof of $A$, so we have $\mathrm{inl}(a) : A \vee \neg A$. We can issue that into $n$ to get $n(\mathrm{inl}(a)) : \bot$, which is our $b'$.

Rolling that into our lambda term, we have $\lambda a.\,n(\mathrm{inl}(a)) : \neg A$. We then have $\mathrm{inr}(\lambda a.\,n(\mathrm{inl}(a))) : A \vee \neg A$. We can issue this to $n$, obtaining $n(\mathrm{inr}(\lambda a.\,n(\mathrm{inl}(a)))) : \bot$, which is our goal. Rolling this into our outermost lambda term, we finally arrive at the following judgment;

$$\lambda n.\,n(\mathrm{inr}(\lambda a.\,n(\mathrm{inl}(a)))) : \neg\neg(A \vee \neg A)$$

which contains all the information necessary to recover our reasoning, compressed into a single line. More importantly, that expression can itself be reasoned about. It's not just an elegant presentation of our justification, it's data as well.
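Indeed, the final term can be transcribed directly into Haskell and type-checked, which is one concrete way of reasoning about the expression itself (reusing the hypothetical `Not` synonym from the earlier sketch):

```haskell
import Data.Void (Void)

type Not a = a -> Void

-- λn. n(inr(λa. n(inl(a)))) : ¬¬(A ∨ ¬A), transcribed term for term.
notNotLEM :: Not (Not (Either a (Not a)))
notNotLEM n = n (Right (\a -> n (Left a)))
```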

Comments

This is a fine introduction to constructive logic. And, indeed, I suspect that constructive logic could be popular in this community, if it was better known.

Still, I don't really understand what the purpose of this series is. Your first post made some bold claims regarding what mathematics is (and the title hints at something like that too), which I don't think were sufficiently explained, and it's strange to see none of that in part 2. Was that central to your goal or just a curiosity? Are we coming back to that in later parts?

My goal is mostly to collect my understanding of mathematics into some coherent collection that people can read. Mostly, I just want to expose people to an alternative way of thinking, as I've seen people from this community say things that rely on a narrow view of foundations. My plan for the series is not to be especially philosophical. The first post consists only of things which I think are obvious. As far as I can tell, the main reason people thought it might have been controversial is that they didn't understand what I was getting at in the first place.

The content of this post is actually the meat of my series. I don't think the philosophical discussion has much practical significance beyond settling one's mind. I'm far more interested in doing mathematics than talking about the doing of mathematics.

I will eventually write a post pertaining to truth-maker semantics, and I'll revisit some of the content from my first post then, but I wouldn't consider the content of the first post central, or even all that necessary, for the series.

And does "mathematics" mean "constructive logic"? You criticise people's narrow view of foundations, but aren't you just replacing that with a different but equally narrow view? I think some philosophical discussion is required.