Mathematical Certainty, Its Basic Assumptions and the Truth-Claim of Modern Science

The pitfalls of a pseudo-mathematical method, which can make no progress except by making everything a function of a single variable and assuming that all the partial differentials vanish, could not be better illustrated. For it is no good to admit later on that there are in fact other variables, and yet to proceed without re-writing everything that has been written up to that point.

Keynes, John Maynard (2010-12-30). The General Theory of Employment, Interest and Money (p. 232). Signalman Publishing.


This is a devastating idea from the perspective of modern (i.e. Cartesian) science. If true, modern science as a whole becomes no more than a failed experiment in demonstrating the truth of its initial assumptions, along with their necessity. We will need to determine the truth or falsity of Keynes’s claim later, though, since we first need to know what those assumptions were, and in what manner and to what extent they have determined both the content and method of science since the time of Descartes. The validity of the assumptions that underlie the Cartesian claim to the certainty of mathematical science is in fact the basis of science’s claim to truth, and that claim remains as dependent on those assumptions today as it was in the 17th century.

The quote doesn’t, though, talk about modern mathematical scientific method the way it is usually described, raising the question as to why it should be so devastating. The manner in which scientific method is usually described appears so self-evidently valid that the notion that intelligent men such as Aristotle, Augustine, Aquinas, Bacon, and Ockham, the last two of whom are credited with much of its invention, would in fact have disputed its validity seems difficult to understand. The sequence of observation, hypothesis generation, refinement into theory and testing of the theory via repeated experiment seems an obvious way to determine truth, and therefore develop knowledge. That it hinges on assumptions that would have been contradicted even by those to whom its development has been largely attributed is initially difficult to comprehend. To make the further claim that the rationale of Cartesian science is itself irrational appears itself evidently ridiculous, since that very rationale is credited with originating plenty of knowledge, and as knowledge it is evidently true, since false knowledge cannot be knowledge at all.

It’s questionable, though, whether the knowledge we evidently do have in fact originated via the posited method. It’s questionable as well whether the majority of what we term ‘research’ uses the posited method at all. Look at the field of medicine, for instance, without the presumption that scientific method is in use, and you will find three main methodologies actually being followed:

1. The method of looking for similarities in apparently disparate phenomena.

2. The method of testing similar phenomena to see if similar actions produce similar results.

3. Trial and error, often in the form of dumb luck.

For example, take the means by which researchers attempt to find medications to treat specific conditions. Initially a clue is taken from a similarity between phenomena that had appeared to be disparate: for instance, it is noted that someone who walks barefoot in an area that had been cleared of mushrooms by the application of the extract of a poisonous plant appears to have less discoloration on their feet, a discoloration whose origin hadn’t been known.

Early attempts to apply the plant extract directly to others who have the same discoloration produce results that are better than nothing (testing vs. placebo), but not enough better for it to be given out, since problems occur in the test patients that are as bad as or worse than the discoloration, for instance breathing difficulties, rashes, etc.

Samples are taken, not of the poisonous extract as it was when removed from the original plant, but as it is after being applied to an area of soil and left for a few days. These samples are similar to the original plant extract, but not exactly the same, since some type of process has affected the extract during its mingling with the soil. These samples are of no direct use, however, since the amount of sampled residue is tiny compared to the amount of plant extract used to produce it. Some small tests are done with the residue, though, and by volume it not only appears more effective on the discoloration than the original extract; in as much quantity as can reasonably be produced and tested, there are also no signs of the rashes or breathing difficulties among the test subjects.

By whatever means available, attempts are made to produce in the laboratory a solution similar to the residue. Various notions, or ‘theories’, as to what affects what might be used to determine where to look first, but the methodology can hardly be referred to as observation-hypothesis-theory-experiment-repeat experiment-validate. It’s more along the lines of observation-test-guess at a similar substance-test-repeat until the problem is solved.

Once the problem is solved relatively well, the theory emerges that the discoloration originated from a species of life similar in some sense to mushrooms; in time, perhaps, the available technology might allow researchers to actually see and identify such species of what are now well known as fungi. The idea that the ‘theory of fungi’ produced the wonderful medicinal innovation of getting rid of athlete’s foot simply and effectively, though, is a retrospectively made assumption that is more mythical than actual.

Historically, the discovery of medicines from lithium as a means of controlling manic depression to anti-depressants didn’t even follow that much logic; they were discovered by the even more ancient methodology of dumb luck. Lithium was given to manic-depressive patients simply as a means of calming down the more agitated. The doctor involved happened to notice that it calmed manics down far more than its effect as a tranquillizer on other people would have suggested, but also that it had the opposite effect on manic depressives in the depressive phase. Anti-depressants were discovered when testing antihistamines on a (conveniently available) population of patients hospitalized for depression: certain antihistamines that didn’t affect their allergies all that much did have a beneficial effect on their depression.


Nuclear fission, the atomic bomb, is often pointed to as ‘proof’ of atomic theory, even of quantum theory. The reality, though, is that we can only produce the desired effect with specific isotopes of an unusual material, uranium. Looked at historically, the behaviour of those isotopes of uranium was the specific origin of modern atomic radiation theory. Within modern radiation theory, though, there is no demonstrated reason why specific isotopes of uranium work and nothing else we can find does. The theory does provide a model that allows for specific isotopes of an element to be intrinsically unstable, but provides no specific reason that they should be, other than the evidence that was present before any theorizing started. Nor does it provide any reason why other isotopes of uranium are not nearly as unstable, nor why radioactive isotopes of other elements are either phenomenally rare or nonexistent. With an unprejudiced eye, the ‘validation’ of atomic theory via nuclear fission looks like an almost ludicrous example of confirmation bias.


If the ‘proofs’ that are often cited first for the validity of scientific method are not altogether convincing when looked at more closely, then we have some reason to apply Descartes’ own procedure of doubt, and apply it specifically to the assumptions on which he based his certainty. A prior question, though, is what modern science actually does that is so specifically different from, for instance, the methods used by mankind until the doubting of Descartes doubted all other methods away. The mythical Aristotle who never bothered looking at evidence can be safely assumed to be precisely that, and any scientist or believer in modern science looks rather a fool when claiming that the father of all the sciences was such a simpleton. That a well-respected 20th-century philosopher and co-author of the Principia Mathematica, a certain Alfred North Whitehead, could with justification characterize the whole European philosophical tradition as a series of footnotes to Plato should annoy those who belittle Aristotle and other brilliant minds that happened to live prior to the Cartesian ‘revolution’ (or the more mythical Galilean revolution) sufficiently that they actually look at the history of science and draw some relevant conclusions. In any case, in terms of whether the rationale of science is rational, whether the method of modern science is valid, we need to look at what it actually does, and whether the assumptions that led it to do what it actually does are rational assumptions.


Stephen Hawking’s epistemology of ‘model-dependent realism’, while not sufficient to do the job he requires of it (since science has apparently obviated philosophy, there were no philosophers available to point out its insufficiency, sad really), can be of more use in giving us a clue as to what the methodology of modern science in fact is, rather than the mythical hypothesis-theory-validation sequence that really applies to virtually all human investigation of reality since such investigations began. Modern, Cartesian science builds models that are supposed to ‘stand in’ for reality, specifically mathematical models. Those models should (in theory) predict the outcome of very controlled and isolated tests – the repeated experiments researchers are so fond of but never actually do – and if the predictions are validated by the repeated controlled and isolated tests, the model is deemed sufficiently accurate to stand, at least until more accurate tests can be devised. The difficulty, though, is that a predictively accurate model, even if one can be devised and appropriate tests performed, doesn’t demonstrate that nature really is that way, only that when tested in specific ways it behaves that way. Newton’s models, for instance, held up to observation for over two centuries, at which point more accurate observations on larger and smaller scales than Newton could observe showed increasing discrepancies with the predictions of his models, and newer models had to be devised. One of the fundamental issues, therefore, with the truth-claim of modern science is that it very rarely makes any.
Since two models can (and in many cases do) have equivalent predictive accuracy, the preference for the simpler model is completely arbitrary; if neither has any ontological verifiability, the choice of the simpler model could even be seen as contrary to the evidence of the history of science itself, where the majority of accepted theories have eventually failed precisely due to being too simple. There is a further problem, the problem the quote I began with presented: a sufficiently complex model of a sufficiently complex system must, in many cases, account for all the variables in the system being modeled. The only thing that can generally do so, however, is the original system itself. Mathematics, in particular, is radically unsuited to the task, because what mathematics considers ‘complex’ doesn’t even qualify as simple in comparison with most real-world systems. ‘Complex’ mathematics can handle such complex systems as 8 bits with 1 parity bit. From the mathematical point of view that is a complex system, complex enough in fact to already demonstrate strong emergence, something biologists are still arguing about the possibility of. From the standpoint of the software developer, it barely registers as complex enough to pay attention to.
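To make concrete just how small the 8-bits-with-1-parity-bit system is from a software developer’s standpoint, here is a minimal sketch (in Python, purely illustrative; the function names are my own, not drawn from any source). The whole system – encoding, checking, and its well-known blind spot – fits in a handful of lines:

```python
def parity(bits):
    """Even-parity bit for a sequence of 0/1 data bits."""
    return sum(bits) % 2

def encode(data):
    """Append the parity bit to 8 data bits, giving a 9-bit codeword."""
    return data + [parity(data)]

def check(word):
    """A valid even-parity codeword contains an even number of 1s."""
    return sum(word) % 2 == 0

data = [1, 0, 1, 1, 0, 0, 1, 0]
word = encode(data)
assert check(word)

# Flipping any single bit (data or parity) is always detected...
for i in range(len(word)):
    corrupted = word.copy()
    corrupted[i] ^= 1
    assert not check(corrupted)

# ...but any two flips cancel out and pass the check undetected.
corrupted = word.copy()
corrupted[0] ^= 1
corrupted[1] ^= 1
assert check(corrupted)
```

The one behaviour here that is not obvious from the parts in isolation – single-bit errors always caught, double-bit errors always missed – is the sort of property that counts as non-trivial in the mathematical treatment, while a working programmer absorbs it in a sentence.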


When the methodology of modern science is accurately presented, it appears far more questionable as a means of studying reality in general (and not simply its behaviour under very specific conditions). Now the question of how modern science came to operate in such a manner, and what assumptions underlay the belief of the founders of modern science that what they were doing was inherently valid as a basis for knowledge, can properly be raised. Since the rationality of science is dependent on the logicality of its logic, how that logic developed and the assumptions it contains need first to be examined.


Logic, up until the 13th century, had not substantially changed since it was first written up by Aristotle. This logic, commonly known as deductive logic, had a specific and somewhat peculiar purpose – winning the then popular game of question/answer dialectics. Aristotle’s contribution to logic, the notion that essentially joined together in a coherent framework what had previously been disparate elements, centered on the syllogism, a way of treating a series of statements to demonstrate whether they were consistent or not (in the game, proving inconsistency was the primary method of winning).


Aristotle’s syllogism and the deductive logic built around it was so successful that it actually made the game it was intended for much less amusing, and the game of question/answer dialectics subsequently fell out of fashion, but the logic remained unchanged as the main arbiter of correctness of argument until the 13th century, and the contribution of a Franciscan friar by the name of Roger Bacon (not his more famous namesake, Francis).


Roger Bacon saw that the limitation of deductive logic was that there was no reference outside the statements made. Aristotle’s logic certainly demonstrated whether a series of statements was logically consistent, but a series of logically consistent statements can still be logically and consistently wrong. Bacon’s contribution was known as inductive logic, which essentially creates a syllogism with reference to a demonstrated fact.


This was the germination of modern scientific method. The reference to fact or evidence had in reality already been made by Aristotle when he considered matters of concrete science, but it wasn’t codified as a logic. There is still a major difference, though, between experiential science and experimental science, and the validity of experimental science rests on a foundation that would not have been accepted by Aristotle.


We now need to see how the application of inductive logic led to a certainty of the valid basis of experimental science. The next figure in this history was another Franciscan friar, by the name of William of Ockham (sometimes spelled Occam), most famous for ‘Ockham’s Razor’ but actually far more influential than merely being the originator of a tool for choosing between two equally predictive theories.


Ockham’s contribution to philosophy is generally known as nominalism. However, we will go through Ockham’s story a little to see how his history and beliefs shaped what became known as nominalism.


Ockham had studied theology at Oxford, to the level of becoming a Venerabilis Inceptor (Venerable Beginner), which was often his nickname (he never did earn the equivalent of a Master’s in theology, which would have allowed him to teach; a contemporary better known at the time, Meister Eckhart, had the German title of Master precisely due to having earned a Master’s-level degree in theology). Ockham’s early writings on the value of poverty earned him an investigation on the charge of heresy, although charges were never actually laid, and he had to spend time in Avignon while under investigation. At this time the valid Papacy was generally considered to be the French Papacy (there was a political argument underway over whether the French Papacy could legitimately lay claim to apostolic succession), and the current French Pope, John XXII, did charge the Franciscan Michael of Cesena with preaching similar views to those Ockham had argued for in his early writings. Asked to look at the matter, Ockham determined that not only was Michael not guilty of heresy, but that Pope John XXII himself was guilty not of simple heresy (an honest mistake) but of stubborn heresy, i.e. living and teaching heresy that he knew was heresy, and had therefore effectively abdicated the Papal office. Unsurprisingly, Avignon became a hostile environment for Ockham. He escaped with Michael of Cesena and a few others of like mind, first to Italy, where the Holy Roman Emperor Louis was in a dispute with the French Papacy, and eventually travelled with Louis to Munich in Bavaria. Ockham was excommunicated by Pope John XXII for leaving Avignon without permission, and spent the rest of his life in Munich, engaged primarily in political writing.


To get back to what in Ockham’s teachings affected the foundation of Cartesian science: it was of course not the advocacy of mendicant poverty for which he was accused of heresy. Although Ockham is known as a theologian, philosopher and logician, his logic was simultaneously derivative and idiosyncratic. His philosophy and theology inherited the idiosyncrasies of his logic, while adding more.


The figure, aside from Aristotle, who had the most influence on Ockham’s logic was Porphyry, who, while arguably a greater logician than Ockham, is for this purpose largely relevant as an intermediate figure between Aristotle on the one hand and Roger Bacon and Ockham on the other. Ockham’s Summa of Logic was largely based on a somewhat idiosyncratic interpretation of Aristotle, combined with a somewhat idiosyncratic interpretation of Porphyry, together with a development and clarification of Bacon’s work on inductive logic. Important elements of Ockham’s logic can also be found in certain other works; the most notable, the Treatise on Predestination, is important to the development of Cartesian science in a number of ways that go beyond simply the method of his logic.


The Summa of Logic, while treating both deductive and inductive logic, treats logic in the same order and under the same major headings as the deductive logic of Aristotle (since inductive logic essentially introduces observable evidence as a term in the syllogism, from the formal perspective they are largely the same). The following excerpt from the Stanford Encyclopedia of Philosophy is sufficiently accurate to describe the main points in Ockham’s logic.


3.2 Signification, Connotation, Supposition


Part I of the Summa of Logic also introduces a number of semantic notions that play an important role throughout much of Ockham’s philosophy. None of these notions is original with Ockham, although he develops them with great sophistication and employs them with skill.


The most basic such notion is “signification.” For the Middle Ages, a term “signifies” what it makes us think of. This notion of signification was unanimously accepted; although there was great dispute over what terms signified, there was agreement over the criterion. Ockham, unlike many (but by no means all) other medieval logicians, held that terms do not in general signify thought, but can signify anything at all (including things not presently existing). The function of language, therefore, is not so much to communicate thoughts from one mind to another, but to convey information about the world.


In Summa of Logic I.33, Ockham acknowledges four different kinds of signification, although the third and fourth kinds are not clearly distinguished. In his first sense, a term signifies whatever things it is truly predicable of by means of a present-tensed, assertoric copula. That is, a term t signifies a thing x if and only if ‘This is a t’ is true, pointing to x. In the second sense, t signifies x if and only if ‘This is (or was, or will be, or can be) a t’ is true, pointing to x. These first two senses of signification are together called “primary” signification.


In the third and fourth senses, terms can also be said to signify certain things they are not truly predicable of, no matter the tense or modality of the copula. For instance, the word ‘brave’ not only makes us think of brave people (whether presently existing or not); it also makes us think of the bravery in virtue of which we call them “brave.” Thus, ‘brave’ signifies and is truly predicable of brave people, but also signifies bravery, even though it is not truly predicable of bravery. (Bravery is not brave.) This kind of signification is called “secondary” signification. To a first approximation, then, we can say that what a term secondarily signifies is exactly what it signifies but does not primarily signify. Again to a first approximation, we can say that a “connotative” term is just a term that has a secondary signification, and that such a connotative term “connotes” exactly what it secondarily signifies; in short, connotation is just secondary signification.


The theory of supposition was the centerpiece of late medieval semantic theory. Supposition is not the same as signification. First of all, terms signify wherever we encounter them, whereas they have supposition only in the context of a proposition. But the differences go beyond that. Whereas signification is a psychological, cognitive relation, the theory of supposition is, at least in part, a theory of reference. For Ockham, there are three main kinds of supposition:


Personal supposition, in which a term supposits for (refers to) what it signifies (in either of the first two senses of signification described above). For example, in ‘Every dog is a mammal’, both ‘dog’ and ‘mammal’ have personal supposition.

Simple supposition, in which a term supposits for a concept it does not signify. Thus, in ‘Dog is a species’ or ‘Dog is a universal’, the subject ‘dog’ has simple supposition. For Ockham the nominalist, the only real universals are universal concepts in the mind and, derivatively, universal spoken or written terms expressing those concepts.

Material supposition, in which a term supposits for a spoken or written expression it does not signify. Thus, in ‘Dog has three letters’, the subject ‘dog’ has material supposition.


Personal supposition, which was the main focus, was divided into various subkinds, distinguished in terms of a theory of “descent to singulars” and “ascent from singulars.” A quick example will give the flavor: In ‘Every dog is a mammal’, ‘dog’ is said to have “confused and distributive” personal supposition insofar as


It is possible to “descend to singulars” as follows: “Every dog is a mammal; therefore, Fido is a mammal, and Rover is a mammal, and Bowser is a mammal …,” and so on for all dogs.

It is not possible to “ascend from any one singular” as follows: “Fido is a mammal; therefore, every dog is a mammal.”


Although the mechanics of this part of supposition theory are well understood, in Ockham and in other authors, its exact purpose remains an open question. Although at first the theory looks like an account of truth conditions for quantified propositions, it will not work for that purpose. And although the theory was sometimes used as an aid to spotting and analyzing fallacies, this was never done systematically and the theory is in any event ill suited for that purpose.


3.3 Mental Language, Connotation and Definitions


Ockham was the first philosopher to develop in some detail the notion of “mental language” and to put it to work for him. Aristotle, Boethius and several others had mentioned it before, but Ockham’s innovation was to systematically transpose to the fine-grained analysis of human thought both the grammatical categories of his time, such as those of noun, verb, adverb, singular, plural and so on, and — even more importantly — the central semantical ideas of signification, connotation and supposition introduced in the previous section. Written words for him are “subordinated” to spoken words, and spoken words in turn are “subordinated” to mental units called “concepts”, which can be combined into syntactically structured mental propositions, just as spoken and written words can be combined into audible or visible sentences.


Whereas the signification of terms in spoken and written language is purely conventional and can be changed by mutual agreement (hence English speakers say ‘dog’ whereas in French it is chien), the signification of mental terms is established by nature, according to Ockham, and cannot be changed at will. Concepts, in other words, are natural signs: my concept of dog naturally signifies dogs. How this “natural signification” is to be accounted for in the final analysis for Ockham is not entirely clear, but it seems to be based both on the fact that simple concepts are normally caused within the mind by their objects (my simple concept of dog originated in me as an effect of my perceptual encounter with dogs), and on the fact that concepts are in some way “naturally similar” to their objects.


This arrangement provides an account of synonymy and equivocation in spoken and written language. Two simple terms (whether from the same or different spoken or written languages) are synonymous if they are ultimately subordinated to the same concept; a single given term of spoken or written language is equivocal if it is ultimately subordinated to more than one concept.


This raises an obvious question: Is there synonymy or equivocation in mental language itself? (If there is, it will obviously have to be accounted for in some other way than for spoken/written language.) A great deal of modern secondary literature has been devoted to this question. Trentman [1970] was the first to argue that no, there is no synonymy or equivocation in mental language. On the contrary, mental language for Ockham is a kind of lean, stripped down, “canonical” language with no frills or inessentials, a little like the “ideal languages” postulated by logical atomists in the first part of the twentieth century. Spade [1980] likewise argued in greater detail, on both theoretical and textual grounds, that there is no synonymy or equivocation in mental language. More recently, Panaccio [1990, 2004], Tweedale [1992] (both on largely textual grounds), and Chalmers [1999] (on mainly theoretical grounds) have argued for a different interpretation, which now tends to be more widely accepted. What comes out at this point is that Ockham’s mental language is not to be seen as a logically ideal language and that it does incorporate both some redundancies and some ambiguities.


The question is complicated, but it goes to the heart of much of what Ockham is up to. In order to see why, let us return briefly to the theory of connotation. Connotation was described above in terms of primary and secondary signification. But in Summa of Logic I.10, Ockham himself draws the distinction between absolute and connotative terms by means of the theory of definition.


For Ockham, there are two kinds of definitions: real definitions and nominal definitions. A real definition is somehow supposed to reveal the essential metaphysical structure of what it defines; nominal definitions do not do that. As Ockham sets it up, all connotative terms have nominal definitions, never real definitions, and absolute terms (although not all of them) have real definitions, never nominal definitions. (Some absolute terms have no definitions at all.)


As an example of a real definition, consider: ‘Man is a rational animal’ or ‘Man is a substance composed of a body and an intellective soul’. Each of these traditional definitions is correct, and each in its own way expresses the essential metaphysical structure of a human being. But notice: the two definitions do not signify (make us think of) exactly the same things. The first one makes us think of all rational things (in virtue of the first word of the definiens) plus all animals (whether rational or not, in virtue of the second word of the definiens). The second definition makes us think of, among other things, all substances (in virtue of the word ‘substance’ in the definiens), whereas the first one does not. It follows therefore that an absolute term can have several distinct real definitions that don’t always signify exactly the same things. They will primarily signify—be truly predicable of—exactly the same things, since they will primarily signify just what the term they define primarily signifies. But they can also (secondarily) signify other things as well.


Nominal definitions, Ockham says, are different: There is one and only one nominal definition for any given connotative term. While a real definition is expected to provide a structural description of certain things (which can be done in various ways, as we just saw), a nominal definition, by contrast, is supposed to unfold in a precise way the signification of the connotative term it serves to define, and this can only be done, Ockham thinks, by explicitly mentioning, in the right order and with the right connections, which kind of things are primarily signified by this term and which are secondarily signified. The nominal definition of the connotative term “brave”, to take a simple example, is “a living being endowed with bravery”; this reveals that “brave” primarily signifies certain living beings (referred to by the first part of the definition) and that it secondarily signifies — or connotes — singular qualities of bravery (referred to by the last part of the definition). Any non-equivalent nominal definition is bound to indicate a different signification and would, consequently, be unsuitable if the original one was correct.

There are a number of innovations in Ockham’s work that are crucial to the development of Cartesian certainty and mathematical science. While not completely original, the notion of a ‘mental language’ that precedes verbal and written language was first developed into a theory of cognition itself by Ockham. It is currently generally accepted that Ockham’s notion of a mental language incorporates artifacts of verbal and written languages such as synonymy and equivocation, and is not an ‘ideal’ language that would thereby be free of redundancy and ambiguity. While redundancy is not a major issue for scientific certainty, ambiguity has been the enemy of scientific epistemology from the middle ages to the present, and the implication that mental language is free of ambiguity became explicit in thinkers following on from his work. Ockham himself didn’t engage in the types of reduction that eliminate ‘irrelevant’ particulars, found in the work of the ‘nominalists’ that followed.


A second innovation was the notion of ‘real’ and ‘nominal’ definitions. Since only ‘absolute’ terms could have the former, it became important to distinguish between absolute and non-absolute terms. The key here is the term ‘real’, which is used in the same sense as Kant uses the term, and not as a synonym for ‘actual’ or ‘non-imaginary’. ‘Real’ for Ockham is predicated on a substantialist metaphysics, just as it is for Kant. A famous statement by Kant indicates precisely what is meant by the term. The occasion for Kant’s statement, ironically as it turns out, is his refutation of Descartes’ ontological proof of God’s existence. Before getting to Kant’s statement, we need to look at the proof itself, and in what way Kant’s statement uses this precise metaphysical notion of ‘real’ to refute the proof.


Descartes’ proof is one of a number of attempts at an ontological proof of God’s existence, the earliest usually ascribed to St. Anselm in the 11th century. A simple version goes something like this:


There is no less contradiction in conceiving a supremely perfect being who lacks existence than there is in conceiving a triangle whose interior angles do not sum to 180 degrees. Hence, he supposes, since we do conceive a supremely perfect being—we do have the idea of a supremely perfect being—we must conclude that a supremely perfect being exists.


Kant’s famous refutation, then, runs as follows:


Being is not a real predicate. Being indicates absolute position.


Without getting into the question of whether Kant’s refutation is itself valid, and whether it also applies to Anselm’s original proof, of which Descartes’ version is a (possibly invalid) reduction, we need to understand what Kant’s refutation is actually saying, and why it therefore is (or is not) considered a valid refutation to begin with. As Russell famously commented, it is much easier to be persuaded that ontological arguments are no good than it is to say exactly what is wrong with them. The question of whether Descartes’ reduction is valid is in itself an interesting one in terms of what types of reduction are valid and what are not, which will be of importance a little later on in this history.


Kant’s argument is that being, or existence (which are not differentiated in this particular quote), is not a ‘real’ predicate; instead he refers to it as indicating ‘absolute position’. Thought in terms of the notion to which he is referring, what he is saying is not particularly obvious given the current meanings of ‘real’, ‘absolute’ and ‘position’. Kant is denying a specific ontological proof, but in doing so intends the denial to be applicable to ontological proofs in general. An ontological proof attempts to demonstrate that the matter to be proven contains a syllogism that renders the matter necessarily true: its lack of truth would create a set of inconsistent statements. If none of the posited statements can be denied, and denial of the conclusion would require such a denial, then the conclusion must in fact be the case. This may seem a purely deductive proof, and in general ontological proofs are purely deductive, simply because inductive proofs introduce contingency that would defeat the intent of demonstrating a truth not simply as true, but as necessarily true. The correctness of the syllogism remains intact, however, if one of the terms is evidentially true rather than necessarily true; the conclusion simply becomes a contingent truth. In attempting to refute not only Descartes, but the very possibility of ontological proofs relating to the existence of something, Kant has to avoid the common refutation of Descartes’ proof, which is that we do not truly conceive of a perfect being, we only think of a concept of a perfect being. Whether we do or not then becomes a contingent element that has to be demonstrated as true or false in some evidential manner, or else one must tackle the problem of whether there is a meaningful difference between conceiving and thinking of a concept.


Kant’s move is firstly to remain within Ockham’s logic of ‘real’ and ‘nominal’ definitions. Since God is conceptually absolute, God must have a ‘real’ definition, and not merely a nominal one. Since ‘perfect’ is part of the ‘real’ definition of the concept of God, and the concept is absolute, it must contain every perfection. (This latter step is itself predicated on the idea that there can be no contradictory perfections, an assumption that was in fact demonstrated by Leibniz.) Since, conceptually, something that does not exist is not perfect, a perfect being must necessarily exist; and since we could have no name for an absolutely perfect being but God, God must necessarily exist. Kant appears initially to beg the question: since in common parlance today ‘real’ and ‘existing’ are often interchangeable, how can existence not itself be real? And how can an infinite being have an absolute position?
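The skeleton of the argument, and the exact point at which Kant’s objection bites, can be displayed in a conventional first-order sketch. This notation is a modern interpretive aid, found in neither Descartes nor Kant, and it deliberately treats existence as a predicate E in order to expose the step Kant rejects:

```latex
\begin{align*}
\text{P1:}\quad & \forall x\,\big(Gx \rightarrow \forall P\,(\mathrm{Perf}(P) \rightarrow Px)\big)
  && \text{(whatever is God has every perfection)}\\
\text{P2:}\quad & \mathrm{Perf}(E)
  && \text{(existence is a perfection)}\\
\text{C:}\quad  & \forall x\,(Gx \rightarrow Ex)
  && \text{(whatever is God exists)}
\end{align*}
```

The deduction from P1 and P2 to C is valid; Kant’s refutation amounts to denying that E is a legitimate substitution instance for P in P1. If existence is not a ‘real’ predicate, it cannot appear among the perfections contained in a ‘real’ definition, and P2 is not so much false as ill-formed.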


The problem for modern readers of Kant is that terms such as ‘real’ and ‘position’ have changed their ordinary meaning through usage. ‘Real’ here refers to a substance, the notion of which is itself dependent on a substantialist metaphysics. ‘Position’ refers to ‘being posited’, i.e. being stated as so, which is itself dependent on a subjective, substantialist metaphysics. And in fact via Descartes himself substantialist metaphysics had become necessarily subjective. By this point I’m sure you’re starting to agree with Russell on the difficulty of actually refuting ontological proofs.


Substance is in many ways a difficult concept, one which originated with Aristotle. The concept arises from the notion of change itself, and since for Aristotle all change is a form of motion, it is a concept of Aristotelian physics. Substance for Aristotle, though, is not a physicalist notion, as many have come to see it. In any change to a given thing, we take it for granted, at least to a point, that the thing itself is still the same thing; otherwise to say it experienced change would be nonsensical, since it would no longer exist. For instance, in the change that occurs when ice melts into water, we have the prior thing, ice, and the thing it has changed into, water. While the observable physical properties of ice and water are significantly dissimilar, since we can revert the change, and since each melting and freezing, as long as we perform the changes carefully, results in the same quantity of ice or water, we make an implicit assumption that there is something consistent (self-same, more precisely) underlying the different appearances of ice and water. This ‘something’ is what Aristotle referred to as substance. The physicalist misinterpretation holds that Aristotle intended substance to be always and only a physical entity in the sense that we understand ‘physical’ in common usage. This oversimplifies the concept, since for Aristotle non-physical (in our sense) things can change, and therefore have movement, and are thus part of physics itself. For Aristotle, non-physical would have to refer to something that is necessary, and necessarily exactly as it is, such that it cannot move (change) in any manner whatsoever, and that not by a contingent situation, but as a necessary aspect of being that which it is.


We can see, furthermore, that even current physics, despite the many odd implications of quantum mechanics, maintains the basic substantialist posit. Particle physics is a model precisely of substance in physical entities. That on the most basic imaginable level the building blocks of matter in current particle physics have neither mass nor location doesn’t in itself damage Aristotle’s idea, merely the physicalist misinterpretation.


In any event, substance by the middle ages had been determined as to its required and optional attributes, which included extension, duration, etc. Substance in Latin is ‘res’, hence ‘res extensa’, ‘res tempora’, and even ‘res cogitans’ – the substance of thinking, or, after Descartes, the subject. Res was translated into English as ‘real’. Since the definition of any ‘real’ in substantial terms involves its attributes, any ‘real’ thing must be definable in terms of ‘real’ attributes, or predicates. To go back to Ockham, universals, abstractions, and connotative terms cannot primarily refer to a ‘real’. In a more modern parlance, only the particular is ‘real’, although that phrase usually also implies ‘actual’; if we avoid that implication, we can get a sense of what was intended by the term from Aristotle all the way to Kant.


What Kant is therefore saying is that with a conceptual ‘res’, or ‘real’ thing, all of the ‘real’ predicates are valid even if the thing involved is imaginary and does not have existence; adding existence to its list of predicates merely demonstrates that you have posited it as absolutely being-so, and therefore existing. Our usual method of positing the absolute being-so of anything is some type of verifiable evidence, which destroys the deductive necessity of an ontological proof – it becomes dependent on the contingent results of a verifiable test. Already in Kant’s refutation the necessity of the experiment, an inductive notion, is used to counter a deductive argument. A counter-argument to Kant, therefore, could be based on the invalidity of using inductive logic within a purely deductive framework. A more serious counter-argument might center on Kant’s reliance on the Cartesian placement of the “I” as the certain subject. Subject, or in Latin subiectum, was generally a more technical synonym for ‘res’, technical specifically in grammar, where the subiectum of a proposition is the ‘real’, or substance, and what is predicated of the substance a determination of or about it. Transposed into modern elementary school grammar, in the propositional sentence “The dog is red.”, dog is the subject of the sentence and the colour ‘red’ is predicated of it. That Kant’s metaphysics must be a subjective substantialist metaphysics for his refutation not to become questionable appears, from this definition of subject, as either redundant or nonsensical. The term subject, and therefore subjective, undergoes a fundamental change of meaning through Descartes’ work, which has since affected many ways in which we use the term, as can be seen clearly in the phrase ‘merely subjective’, as of perception, feeling, etc. This signification of the term is radically different from simply being the adjectival form of subject in the sense that grammar retains.


The last major innovation in Ockham’s work with which we need to deal is the notion that was properly introduced in his Treatise on Predestination, and not made part of his logical foundation. That notion is referred to as his theory of future contingents. The details of the theory are not, themselves, of real importance here; however, their association with what came to be known as nominalism is a key to understanding Cartesian science and follow-ons such as determinism.




Nominalism can refer to a number of things within philosophy. For Ockham, it was fundamentally a theory of knowledge and propositions. Nominalism is defined in different ways, however, and its different aspects are not necessarily interdependent. Some common definitions of nominalism include:



  1. A denial of metaphysical universals. Ockham was emphatically a nominalist in this sense. 
  2. An emphasis on reducing one’s ontology to a bare minimum, on paring down the supply of fundamental ontological categories. Ockham was likewise a nominalist in this sense. 
  3. A denial of “abstract” entities. Depending on what one means, Ockham was or was not a nominalist in this sense. He believed in “abstractions” such as whiteness and humanity, for instance, although he did not believe they were universals. (On the contrary, there are at least as many distinct whitenesses as there are white things.) He certainly believed in immaterial entities such as God and angels. He did not believe in mathematical (“quantitative”) entities of any kind.
  4. Nominalism can be used in the sense it has as one of the three terms of the ‘unholy trinity’ of terms that underlie most scientific atheism, whether admitted to or not. The ‘unholy trinity’ consists of naturalism, nominalism, and reductionism. As we shall see, nominalism in this sense is at least as questionable as the other two terms when utilized as a scientific metaphysics. In relation to Ockham, of course, as a deeply religious friar, he is in no way a nominalist in this last sense.


The following quote sums up quite nicely both the usage and problematic of the ‘unholy trinity’. It also makes a good short break from discussion of logical subtleties, and has the third virtue of pointing forward so that the reader has some idea of what this history is tending towards.


Belief in the “unholy trinity” of reductionism, nominalism, and naturalism is at the root of much anti-religious thought, whether consciously or not. Taken together, these doctrines, in the extreme form in which they are usually held, preclude any belief in the spiritual, and thus any type of theistic interpretation of science, such as theistic evolution. There are two basic approaches to resolving the science-religion conflict posed by the unholy trinity. The first involves rejection of a branch or conclusion of science, as is done by Creationists. The second is to deny the scope implicitly assumed for science by the unholy trinity. This is done at the direct observational level by those such as the Intelligent Design school, and at a deeper, more indirect level by most advocates of theistic evolution. But the unholy trinity itself has many serious problems, both with respect to science and philosophy. It tends to channel scientific thought and procedures into certain directions, and keep them from others, quite independently of empirical evidence, thus imposing an intolerable burden on science, which can operate quite well on much weaker metaphysical assumptions. The unholy trinity also rests on erroneous assumptions about the nature of the real, about epistemology, and about metaphysics. Utilizing the philosophy of Xavier Zubiri, it is possible to clarify the nature of those assumptions, and why they are wrong.


Reductionism, Naturalism, and Nominalism: the “Unholy Trinity” and its Explanation in Zubiri’s Philosophy, Thomas B. Fowler.


What Ockham is most famous for, “Ockham’s Razor”, is actually an interpretation of the second definition, and one that goes far beyond the definition that Ockham, like most philosophers, would agree with. In ontology it is a seemingly ‘natural ideology’ that ontologies should not be needlessly expanded. What this means in practice is that there should not be a proliferation of unnecessary categories of knowable entities, whatever the generic level of categorization. As an example, at the species level of zoological ontology, I may require two categories to make sense of the differences between dogs and cats. At the breed level I may need many more to make sense of the differences between poodles, terriers, rottweilers, Abyssinian blues, Russian blues and tabby cats. Needlessly complicating categories at any level, however, for instance by distinguishing between the various patterns and colours of tabby cats, is unnecessary, since anyone familiar with tabby cats knows that individual tabbies have innumerable variations on a similar set of patterns. The usual formulation of Ockham’s Razor, that the simplest explanation is the best, is nowhere found in Ockham and in fact brings in a concept foreign to his logic, which is that of explanation itself. Ockham was dealing with description, not explanation; categorization, not causality.


Where Ockham does approach the notion of explanation, and therefore causality, is in the notion of future contingents; however, in that notion Ockham deals precisely and only with contingent phenomena. How this functions in terms of the notion of predestination is a complex topic, but the physicalist interpretation as determinism that reached its peak in the 19th century roughly states that if one knew the exact state of the beginning of reality, everything from that point on could be predicted accurately to the end. This kind of determinism would have been ludicrous to Ockham, resting on impossible definitions of both God and necessity. For Ockham nothing in reality is necessary; the only necessary thing is God, and everything else is radically, even absolutely, contingent.




Causality from Aristotle to Lakoff


The conflation of description and explanation began early on in the history of metaphysics, with the Stoics and the Neo-Platonists. It took Augustine decades to distinguish the related conflation of things and bodies. Ockham would still not have made the error of introducing causality into either ontology (first philosophy) or even metaphysics (considerations after the physics). Aristotle’s doctrine of the four causes is to be found specifically within the physics, where it is combined with the four types of movement. Determinism has a theological basis, just as does predestination, but both the character of the Theos and the nature of causality are different in each. Determinism, as it has existed since the 18th century, would have been unthinkable to Ockham, since it both denies contingent entities and events, and posits all causality as temporally prior to effect. Ontological priority cannot be devolved to temporal priority without a basic misunderstanding of ontology. As descriptive, ontological priority indicates what grounds what, not what causes what in the modern sense of a temporal sequence of events, the first as cause and the second as effect. This notion arose first through Machiavelli’s criticism of the notion of first and formal causes from Aristotle’s doctrine (itself already misunderstood in terms of temporality via the move from ‘final cause’ as the fourth, ontologically first but temporally last, cause, to its partial redefinition as ‘first cause’ in Aquinas, where it still functions ontologically as Aristotle’s ‘final cause’, but also as the first ‘efficient cause’, in terms of the Theos as the originary ground). ‘First’ in Aquinas’ sense still indicates ontological, and not temporal, priority, since for Aquinas there could be no demonstrable beginning or end to reality as the expression of an eternal being.
The introduction of a kind of efficient causality into the notion of final or telic causality, however, overweighted the doctrine of causes, putting the emphasis on the temporally successive efficient cause rather than the ontological priority of the telic or final cause. The error of viewing Aristotle’s final cause (even in Aquinas’ guise) as temporally prior was carried over into science and even some later philosophy. Science understands causality much as it was defined by Hume, who mistook telic causality for merely the initial efficient cause, something he was at pains to deny. In an ironic twist, Protestant notions of causality generally derive from a misunderstanding of theological causality, a misunderstanding moreover most fully defined by a confirmed atheist. Hume’s formulation runs as follows:


  1. “The cause and effect must be contiguous in space and time.”
  2. “The cause must be prior to the effect.”
  3. “There must be a constant union betwixt the cause and effect. ‘Tis chiefly this quality, that constitutes the relation.”


While Hume’s formulation codified the effective notion of causality within both science and technology, and through the latter formed the effective notion of causality for modern common sense, postmodern physics was in fact forced to abandon all three of Hume’s determining features. Hume’s formulation can be described as ‘direct efficient causality’, and direct causality had already been rendered problematic by Leibniz. The incompatibility of Hume’s formulation with quantum mechanics, and the incompatibility of 2. with initial relativity theory, since extended to 1. and 3. due to the loss of spatial along with temporal simultaneity, is prefigured in Leibniz’s abandonment of direct causality. While direct causality is incompatible in every way with current physics, a clear notion of indirect causality has not been forthcoming; although systemic causality depends on indirect causality, it still operates through networks of direct causality.


In terms of the proper method of modern science, the development of models of reality, a fairly recent determination of the types of causality possible within a model demonstrates the continuing reliance on direct causality:


There are six different kinds of causality within a model.


1. A direct causal relationship is one in which a variable, X, is a direct cause of another variable, Y (i.e. it is the immediate determinant of Y within the context of the theoretical system).


2. An indirect causal relationship is one in which X exerts a causal impact on Y, but only through its impact on a third variable, Z.


3. A spurious relationship is one in which X and Y are related, but only because of a common cause, Z. There is no formal causal link between X and Y.


4. A bi-directional or reciprocal causal relationship is one in which X has a causal influence on Y, which in turn, has a causal impact on X.


5. An unanalyzed relationship is one in which X and Y are related, but the source of the relationship is unspecified.


6. A moderated causal relationship is one in which the relationship between X and Y is moderated by a third variable. In other words, the nature of the relationship between X and Y varies, depending on the value of Z.


From Jaccard, J., & Turrisi, R. (2003). Interaction Effects in Multiple Regression. Thousand Oaks, CA: Sage.
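The difference between a direct, an indirect, and a spurious relationship can be made concrete with a small simulation of linear structural equations. This is only a sketch: the coefficients are invented, and the variable names X, Y, Z follow the taxonomy above. The spurious case is the instructive one, since X never appears in the equation for Y, yet the two end up strongly correlated through their common cause Z.

```python
import random

random.seed(0)

def corr(a, b):
    """Pearson correlation computed directly from the samples."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    va = sum((u - ma) ** 2 for u in a)
    vb = sum((v - mb) ** 2 for v in b)
    return cov / (va * vb) ** 0.5

n = 20_000
xs, ys = [], []
for _ in range(n):
    z = random.gauss(0, 1)                     # common cause Z
    x = 0.8 * z + 0.2 * random.gauss(0, 1)     # Z causes X
    y = 0.8 * z + 0.2 * random.gauss(0, 1)     # Z causes Y; X never enters
    xs.append(x)
    ys.append(y)

# X and Y correlate strongly despite having no causal link at all
print(f"corr(X, Y) with no causal link: {corr(xs, ys):.2f}")
```

The same scaffold covers the indirect case by routing X through Z into Y. The point is simply that correlation alone cannot distinguish definitions 1, 2, and 3; only the structural equations, i.e. the model, do that.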


That direct causality has no potential mechanism remains problematic, primarily in physics, due to the extremes of scale on which its models operate. The other sciences generally use direct causality as though it were unproblematic. Systemic causality, while still viewed as operating effectively ‘through’ direct causality (via causal networks), posits indirect causality as effectively determinative: viewed at the level of individual ‘nodes’ in a causal network, direct causality can only appear as random, and so what we experience as causal relations are due primarily to indirect causality.


Lakoff describes systemic causality in the following manner:


Systemic causation, because it is less obvious, is more important to understand. A systemic cause may be one of a number of multiple causes. It may require some special conditions. It may be indirect, working through a network of more direct causes. It may be probabilistic, occurring with a significantly high probability. It may require a feedback mechanism. In general, causation in ecosystems, biological systems, economic systems, and social systems tends not to be direct, but is no less causal. And because it is not direct causation, it requires all the greater attention if it is to be understood and its negative effects controlled. Above all, it requires a name: systemic causation.


While Lakoff’s description remains dependent on the problematic notion of direct causality, systemic causation can be a useful notion for dealing with indirect causation, primarily because anything, even the simplest particle with mass, can be modeled as a system. This would essentially reverse Lakoff’s description, insofar as ‘direct’ causes are themselves only macro effects of indirect causes at a lower scale.


Systems and Top-Down Causality


Physics for the last 400 years or so has privileged bottom-up causality, to such a degree that it has become accepted as the norm in most other fields as well. However, it is not without its critics. The major criticism of bottom-up causality concerns the behaviour of systems. It is not difficult to show, in any reasonably complex system, that top-down causality is determinative (often intentionally so) of behaviour at lower levels, while from the perspective available at the lower systemic level such intentional, determinative causality can only appear as arbitrary and random. The equivalent is not true bottom-up: causes on a lower generic scale do have effects on higher scales, but those effects are not determinative, nor can they be made intentional; they are unpredictable not simply in practice but in theory. Since, as we have noted, physics models reality, ideally in a predictive manner, the inherent unpredictability of bottom-up causality is a significant problem for physics, precisely because its models are systemic, and therefore follow this pattern of causality in which the uppermost systemic level is determinative. Since, in terms of temporal development, the uppermost level is also the last to develop, the usual temporal sequence of cause and effect is, in a number of key situations, also reversed. The manner in which Hegel reinstated telic causality (precisely as retrospective) is possibly the only philosophical account of causality that is in a practical sense compatible with the models of current physics.
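The asymmetry can be sketched with a toy thermostat, with all numbers invented for illustration. The system-level setpoint determines the heater’s behaviour top-down; observed only at the component level, the heater’s switching, driven by noisy temperature drift, shows no discernible rationale of its own.

```python
import random

random.seed(1)

SETPOINT = 20.0   # system-level determination: the only 'intention' in the model
temp = 15.0       # starting room temperature
switches = 0
prev = None
for _ in range(500):
    heater_on = temp < SETPOINT      # top-down: the setpoint decides
    if prev is not None and heater_on != prev:
        switches += 1
    prev = heater_on
    # bottom-up physics: heating when on, constant heat loss, random disturbance
    temp += (0.5 if heater_on else 0.0) - 0.3 + random.gauss(0, 0.1)

print(f"final temperature: {temp:.1f}, heater switchings: {switches}")
```

From data on the heater alone, the on/off sequence looks arbitrary; only at the level of the whole system, room plus setpoint, is the behaviour determinate, which is the pattern of top-down determination described above.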


Ontology and Onto-Theology


The discussion of causality, while something of an excursion, was necessary in terms of dealing with the difference between ontology proper and what some would term onto-theology. As we noted earlier, ontology as primary logic is concerned with description and classification. Aristotle introduces causality, which is of course intimately bound up with explanation, within physics (although he uses it freely in his discussions of other positive sciences such as biology and economics). His logic does not consider causality, or by extension explanation, at all. Both description and explanation are commonly thought of as part of understanding. The claim has been made recently that there is no such thing as understanding in science outside explanation. At first sight this appears to be a strange claim, since understanding is generally considered to primarily involve using concepts in order to be able to adequately deal with a given thing. While explanation may be involved in those concepts, explanation on its own seems to be missing the fundamental notions that are precisely captured in logic and ontology. Put simply, explanation may determine the cause or origin of a thing, but cannot determine what thing it in fact is; nor can explanation determine the correctness of a definition of a thing, or its function within a logical syllogism. Not only can explanation never alone determine the meaning of a thing, but by focusing the view away from the thing to its causes, which always implies continuing to the origin, it has a tendency to ‘explain away’ the thing insofar as it in fact is. That a thinker on science could intelligently make the claim that scientific understanding is purely explanatory itself requires an explanation.


We noted earlier that Kant’s refutation of ontological proofs depended not merely on a substantialist metaphysics (all science can quite easily be demonstrated to depend on a substantialist metaphysics), but on a subjective substantialist metaphysics, and that the subjectivity of substantialist metaphysics was fundamental to the Cartesian certainty that founds modern science itself. Kant himself is well known for his use of the principle of sufficient reason, which tacitly or explicitly is accepted by virtually all modern science. However, if we go back to Ockham (and other thinkers of and before Ockham’s time), the principle of sufficient reason would not be in any way convincing. To see how this could be the case we need to look at what modern science in fact observes as the first step in its process of model building.


Ockham’s ontology deals explicitly with ‘entities’; this rather abstract term, however, indicates that the matter under discussion is in being. Our usual terms for ‘entities’ as beings comprise things and bodies. In his major work Insight, Bernard Lonergan discusses the difference between things and bodies.


Let us now characterize a ‘body’ as an ‘already out there now real.’ ‘Already’ refers to the orientation and dynamic anticipation of biological consciousness; such consciousness does not create but finds its environment; it finds it as already constituted, already offering opportunities, already issuing challenges. ‘Out’ refers to the extroversion of a consciousness that is aware, not of its own ground, but of objects distinct from itself. ‘There’ and ‘now’ indicate the spatial and temporal determinations of extroverted consciousness. ‘Real,’ finally, is a subdivision within the field of the ‘already out there now’: part of that is mere appearance; but part is real; and its reality consists in its relevance to biological success or failure, pleasure or pain. As the reader will have surmised, the terms ‘body,’ ‘already,’ ‘out,’ ‘there,’ ‘now,’ ‘real’ stand for concepts uttered by an intelligence that is grasping, not intelligent procedure, but a merely biological and non-intelligent response to stimulus.


Lonergan, Bernard (1992-04-06). Insight: A Study of Human Understanding, Volume 3: 003 (Collected Works of Bernard Lonergan) (Kindle Locations 5838-5842). University of Toronto Press. Kindle Edition.


By a thing is meant an intelligible concrete unity. As differentiated by experiential conjugates and commonsense expectations, it is a thing for us, a thing as described. As differentiated by explanatory conjugates and scientifically determined probabilities, it is a thing itself, a thing as explained.


Lonergan, Bernard (1992-04-06). Insight: A Study of Human Understanding, Volume 3: 003 (Collected Works of Bernard Lonergan) (Kindle Locations 5877-5880). University of Toronto Press. Kindle Edition.


In Lonergan’s usage, we can take ‘real’ as equivalent to ‘actual’ in the ordinary sense of the term; he is no longer using it in the technical sense of a ‘res’. However, things for Lonergan are not bodies, insofar as we experience the latter only in our biological experience of reality, and not in other experiential modes, which include scientific, dramatic, and common sense modes of experience. Lonergan explicitly notes that scientific things, as opposed to things for us, or common sense things, are differentiated by explanatory conjugates rather than descriptive conjugates. What does this mean in terms of what science takes as an observable entity?


The answer is related to our posit of the Cartesian revolution as a basic change from a substantialist metaphysics to a subjective, substantialist metaphysics. Insofar as Descartes wanted to demonstrate a certainty in our knowing, he intended from the beginning that reality could be dealt with in a specific way, mathematically. The notion of reality as mathematical did not of course originate with Descartes; Galileo before him famously said that the book of nature is written in the language of mathematics. Not only are scientific things composed of explanatory conjugates rather than descriptive ones, they are described mathematically. What is fundamentally taken as an observable, a thing, for modern science, in line with its basis in a subjective substantialist metaphysics, is a thing viewed as an object.


Traditional grammar defines object as the thing that is acted upon by the subject. As with a number of other grammatical terms, this definition was fundamentally reversed by the Cartesian revolution, although in a sense both the original meaning and its reversal are simultaneously operative in the scientific notion of an object.


Within a specific research situation, the scientific object remains that which is acted upon by the scientific subject, which after Descartes always primarily means the “I” that is engaged in the research. However, prior to this, a thing must have in some way become an object for science.


The process of taking a ‘thing’ and making it an object for science is in fact the process of research itself: insofar as investigation into a thing becomes research, the thing is isolated as much as possible from contingency, and becomes for science a necessary thing, an object. The contingencies from which it is isolated include other things, but what effectively must be removed to eliminate contingency itself are the thing’s contextual relations, its meaning. Only insofar as it is necessary can any particular research activity claim to be generalizable. An example of the difference between a thing for us and a scientific object may help in understanding this process.


A table has various properties that we can describe, many of which depend on the material and efficient causes that brought that particular table into existence. If a particular table becomes an object for science, those properties become explanatory and not descriptive conjugates. Research can tell us many interesting and occasionally useful things about the table: it can be analyzed to determine its age, the materials used, various aspects of the construction techniques involved. It can be modeled in various ways: as a geometrical structure; as a chemical composition; as a set of atomic or sub-atomic systems; as a thermodynamic process. All of these can tell us interesting things about the table, in particular how it might respond to particular events, which can be extremely useful. However, the researcher cannot tell us what it is. Ontology, as the activity which determines what a thing is, is not part of the researcher’s basic arsenal, precisely because it involves not explanatory conjugates, but descriptive ones. As a scientific ‘thing’, which always means an object, the table has been stripped of the contextual relations that are inherent in determining what a thing is ontologically. None of the various models, then, can claim ontological relevancy, because they have obviated ontology from the beginning. For a scientist (or a mereological nihilist) to make the claim that a table is ‘really’ an aggregation of subatomic particles is irrational. Firstly, the model is built on a thing already extracted from its ontological determinants. Secondly, the model is an unverifiable image. Scientists, though, are also animals, and as animals want ‘their’ things to be bodies, ‘already out there now reals’ in Lonergan’s formulation. For animals a scientific model is useless: it can’t be eaten, procreated with, or, in this specific instance, eaten off.
It is the researcher’s own biological dissatisfaction with scientific objects that leads researchers to try to convince others that the scientific object, which has not only no relevance but precisely no existence outside the lab, is relevant to someone not involved in the research in question.


The question remains, though: what makes particular things appropriate to be utilized in this process and others not? This is precisely where the reversal in Descartes comes into play. A thing becomes a possible object for science precisely insofar as it drives research. A thing, then, can only properly become an object for modern mathematical science to the degree that it can be measured. Something that can in no way be measured cannot exist as a scientific object, which is where we get the nonsense from researchers that things which cannot be measured do not exist, even while we experience plenty of things that cannot be measured every day. The height of ridiculousness is reached when, in a misguided attempt at claiming ‘scientific authority’, someone studying something that can precisely never become an object for science attempts to make it one, and then arrives at the startling conclusion that it doesn’t exist, since nothing of the original thing survived its objectification.


This may seem strange at first, the notion that scientific objects have no being, in the strictest sense do not exist (whether existing in the researcher’s mind as a model constitutes a type of existence is a more difficult question). However, when we consider the being of a being, not necessarily in Heidegger’s full sense but in the metaphysical sense of the beingness of a being, what makes it the being it is and not another, any kind of ontological analysis must deal with the meaning of the being, which must in turn be determined by its contextual relations. The most basic of those contextual relations, that which is closest to non-scientific modes of experience, is its functional relations. While we tend to think of a thing in terms of its ‘look’ (the Greek Eidos, or Idea, originated precisely from this), it is easily demonstrated that nothing about a thing’s look is necessary for it to be the thing it is. A table may be made of wood or steel or both, or glass. It may be manufactured in an industrial complex, crafted by a carpenter, or simply found. It may be square, rectangular or round, or an undefined shape. It requires no particular colour, and may even be transparent. What determines a table as a table is that it is used for placing things on, and thereby holding them up. Certain low-growing trees with a relatively flat top are often used as tables in areas where they are common. The situation is similar with a chair: it may be made of any material whatsoever; it may not even be made at all, as anyone who’s found a comfortable nook in a rocky landscape to rest for a moment will attest.
The basic flaw affecting most ontologies that simultaneously attempt to be ‘scientific’, and especially those that attempt to be so in a mathematical way, is that the most important aspect of an ontological description, the function, is always lost when the thing is treated scientifically; when something is described mathematically, its function appears extrinsic and is therefore ignored. The most accurate and simple ontological definition of a chair is “for sitting”. Web Ontology Language, to take a recent example, is a total failure, because it is neither a language nor does it define anything within it in a properly ontological manner.


To understand what is meant by the Cartesian revolution, the founding of modern science on mathematical certainty, requires that it be looked at in its most developed form, in the philosophy of Immanuel Kant. Almost by definition, anyone working in the sciences is a Neo-Kantian in their basic outlook, which is to say that they subscribe to a subjective, substantialist metaphysics. However, the very comprehensiveness of Kant’s treatment makes it a difficult vantage point from which to determine precisely what the underlying assumptions of such a metaphysics are, much less to determine whether they are in fact valid assumptions. We have seen that the truth-claim made by modern science is problematic in a number of ways, not the least of which is that things, as examined by modern science, do not in fact exist, except in some sense in the mind of the researcher. If the truth-claim is that problematic, particularly when its foundation is a claim to the certainty of its metaphysics and of the mathematical formulations in which it is expressed, then the assumptions underlying it are at the very least questionable. When, in addition, the thinkers to whom its development leading up to Descartes is attributed would in no way themselves have supported Descartes’ conclusions, we need to look at how the posits of those thinkers changed between their originators’ works and the form they take in Descartes, and at their further development up to and including their most sustained exposition in Kant.


We began with William of Ockham, generally credited as one of the main originators of the nominalism named by many as one of the ‘unholy trinity’ of beliefs that underlie atheistic modern science in particular, but that I would argue underlie all of modern science, whether the trinity is interpreted as implying atheism or its opposite. However, the nominalism specified in the ‘unholy trinity’ is not at all locatable in Ockham’s own works, and he would have expressly disagreed with the main thrust of such a nominalism. How then did ‘nominalism’ come to name something so far removed from Ockham’s own works and their guiding assumptions?


Nominalism in its extreme form “is the theory (or belief) that only concrete things exist; abstract entities such as species do not.” (Fowler, 2007). As we have seen, for Ockham abstractions certainly do exist, although universals generally do not. As Fowler notes, this is immediately appealing to the types of people who tend to gravitate to the sciences. The mindset of researchers is, in a general way, one of the basic social contexts in which beliefs that are on examination extremely questionable, if not completely problematic, become accepted and even treated as self-evident. In the case of nominalism in its extreme form, while many would agree with the notion that species is an ontological classification that exists only in the mind and not in reality, this is, as we have seen, equally true of the very objects that researchers consider empirical and base their evidence on.


However, as we have noted, species wasn’t a difficult concept for Ockham himself, nor has it been for most metaphysicians before or since. A nominalism that moves as far from substantialist metaphysical realism as possible is thus held simultaneously within an area of inquiry based on a specifically subjective substantialist metaphysics, which creates a very tenuous position for the reality all metaphysics assumes as underlying appearance.


Ockham’s notion of a ‘mental language’ is one of the key notions that, modified in various ways in the period between Ockham and Descartes, led to the decision by Descartes to base all certainty on the mathematical. I refer to Descartes’ ‘decision’, against the claimed methodology of Descartes’ writings, simply because his method is applied so selectively, in some areas of thought and not others, that only an a priori decision as to the outcome of the method can sufficiently account for both the apparent arbitrariness in the choice of what to apply his method to and the consistent avoidance of topics that, were his method applied, would call his results into question. There are a number of assumptions stated in his Discourses that are crucial to the believability of his major result, that mathematical certainty is the truest basis for knowledge in general, but that are not only questionable in general; they directly contradict other assumptions of those who most doggedly affirm his result:


it follows, that the light of nature, or faculty of knowledge given us by God, can never compass any object which is not true, in as far as it attains to a knowledge of it, that is, in as far as the object is clearly and distinctly apprehended. For God would have merited the appellation of a deceiver if he had given us this faculty perverted, and such as might lead us to take falsity for truth [when we used it aright]. Thus the highest doubt is removed, which arose from our ignorance on the point as to whether perhaps our nature was such that we might be deceived even in those things that appear to us the most evident. The same principle ought also to be of avail against all the other grounds of doubting that have been already enumerated. For mathematical truths ought now to be above suspicion, since these are of the clearest. And if we perceive anything by our senses, whether while awake or asleep, we will easily discover the truth provided we separate what there is of clear and distinct in the knowledge from what is obscure and confused.


Descartes, René (2011-03-24). The Selections from the Principles of Philosophy (Kindle Locations 448-455).



Mathematical formulations of knowledge are described by Descartes as “above suspicion”, not via any complex deductive argument, but by an exceedingly simple syllogism, which can be expressed as follows:


  1. [The faculty of] knowledge cannot encompass falsity if its object is clearly and distinctly apprehended, since that would imply that God is a deceiver.
  2. Mathematical truths are most above suspicion since they are the clearest.

Now, the first part of this syllogism, while believable within a theistic context, is problematic specifically for the Cartesian context. Descartes’ method proceeds in the first place by doubting whatever can be made questionable. In positing the subiectum (the “I” or subject that is the subject of any particular act of thinking) as the only complete certainty, Descartes begins by doubting the external world itself. In order to do so he expressly posits a deceptive demon that might be responsible for his apprehension of the world. Yet in his conclusion the final basis for belief in mathematical truth is that God is not such a demonic, deceptive being. If the latter is true, then the ‘doubt’ in the external world cannot in fact be a real doubt, but only a sophistic ploy.


As for the second part, that mathematics is clear and distinct, we can look at the three main substantialist metaphysical positions, to some variation of which every scientist subscribes:


  1. The realist position – first espoused by Plato, that the forms of things, universals, in fact exist, and particulars are never more than inadequate copies.
  2. The moderate position – this comes in a number of guises from the partial nominalism of Ockham to the conceptualist position (that universals are ‘mental’ things) first espoused by St. Augustine.
  3. The extreme nominalist position: that no abstractions or universals exist.


Since these positions have been argued in various forms from Plato to the present day, we can assume that in natural language the truth is neither clear nor distinct in Descartes’ terms.


Can we formulate the argument in mathematical terms? As it happens, we don’t need to. Post-Cantor, mathematics has founded itself on set theory, the most common formulations being variations on the Zermelo-Fraenkel axioms. For the extreme nominalist this creates a problem: sets themselves are universals. The following excerpt summarizes the argument that has ensued between extreme nominalists and other mathematicians:


A notion that philosophy, especially ontology and the philosophy of mathematics, should abstain from set theory owes much to the writings of Nelson Goodman (see especially Goodman 1977), who argued that concrete and abstract entities having no parts, called individuals, exist. Collections of individuals likewise exist, but two collections having the same individuals are the same collection.


The principle of extensionality in set theory assures us that any matching pair of curly braces enclosing one or more instances of the same individuals denote the same set. Hence {a, b}, {b, a}, {a, b, a, b} are all the same set. For Goodman and other nominalists, {a, b} is also identical to {a, {b} }, {b, {a, b} }, and any combination of matching curly braces and one or more instances of a and b, as long as a and b are names of individuals and not of collections of individuals. Goodman, Richard Milton Martin, and Willard Quine all advocated reasoning about collectivities by means of a theory of virtual sets (see especially Quine 1969), one making possible all elementary operations on sets except that the universe of a quantified variable cannot contain any virtual sets.


In the foundation of mathematics, nominalism has come to mean doing mathematics without assuming that sets in the mathematical sense exist. In practice, this means that quantified variables may range over universes of numbers, points, primitive ordered pairs, and other abstract ontological primitives, but not over sets whose members are such individuals. To date, only a small fraction of the corpus of modern mathematics can be rederived in a nominalistic fashion.


The result so far has been extremely similar to the result of trying to rederive scientific theories on nominalist principles: very little of it works.


From the perspective of the certainty of mathematical formulations in terms of scientific expression, a more important result is that mathematics is as unable to clarify the argument as natural language, contradicting Descartes’ fundamental posit.
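The behaviour the excerpt describes can be checked directly in Python, whose built-in set type implements extensionality; the nested collections that the nominalist identifies with their flattened counterparts remain, for set theory, distinct. A minimal sketch, where the individuals a and b are of course merely illustrative strings:

```python
# Extensionality: a set is determined solely by its members,
# so order and repetition in the notation are irrelevant.
a, b = "a", "b"
assert {a, b} == {b, a} == {a, b, a, b}

# For Goodman's nominalist, {a, {b}} is "the same collection" as
# {a, b}, since both are built from the same individuals. Set
# theory disagrees: nesting (via frozenset, since Python sets
# must contain hashable members) produces a distinct set.
assert {a, frozenset({b})} != {a, b}
assert {b, frozenset({a, b})} != {a, b}
```

The asymmetry the two halves exhibit is exactly the point of contention: extensionality collapses notation but not structure, while the nominalist wants structure collapsed as well.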


At this point I want to return to the quote with which I began the exposition; I will requote it here for convenience.


The pitfalls of a pseudo-mathematical method, which can make no progress except by making everything a function of a single variable and assuming that all the partial differentials vanish, could not be better illustrated. For it is no good to admit later on that there are in fact other variables, and yet to proceed without re-writing everything that has been written up to that point.


Keynes, John Maynard (2010-12-30). The General Theory of Employment, Interest and Money (p. 232). Signalman Publishing.



This quote, as I noted in the beginning, is problematic for the entire enterprise of modern Cartesian science, because its methodology is not the simplistic sequence of observation-hypothesis-theory-experiment-result, but is in fact the creation of models that ought to predict experimental results. Since these models are generally expressed in mathematical terms, the truth-claim is in reality no different from that of Descartes: that mathematics expresses things clearly and without ambiguity. Given the final justification for this truth-claim, it appears to have no credible basis.


Can this situation be saved in some way? It would seem, from the quote, that an overeager reductionism is the key to the problem. If we avoided the assumption that the partial differentials vanish, we would have an accurate model. The problem is that there is only one situation in which we do not intentionally eliminate partial differentials: that of experiencing reality itself. If the scientific model is identical to reality, it accomplishes very little that cannot be accomplished by simply going back to reality itself. But can we in fact create a mathematical model of the requisite complexity at all? The answer to that question is, simply, no. Our mathematics is far too simplistic to even begin the attempt. It fails already in trying to model three ideal objects in an ideal space affected by only one force.
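The three-object case mentioned here is the classical three-body problem: three point masses under mutual gravitation admit no general closed-form solution, so even this maximally idealized system can only be approximated step by step, with error accumulating at every step. A minimal sketch under assumed illustrative parameters (unit masses, G = 1, two dimensions, naive Euler integration):

```python
# Three unit masses under mutual gravitation (G = 1), stepped
# forward by Euler integration. The initial conditions are
# arbitrary illustrative values; there is no closed-form
# solution to compare the numerical trajectory against.
positions = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
velocities = [[0.0, 0.0], [0.0, 0.5], [-0.5, 0.0]]
dt = 0.001

def accelerations(pos):
    """Pairwise inverse-square attraction on each body."""
    acc = [[0.0, 0.0] for _ in pos]
    for i in range(3):
        for j in range(3):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            acc[i][0] += dx / r3
            acc[i][1] += dy / r3
    return acc

for _ in range(1000):  # one time unit of simulated motion
    acc = accelerations(positions)
    for i in range(3):
        velocities[i][0] += acc[i][0] * dt
        velocities[i][1] += acc[i][1] * dt
        positions[i][0] += velocities[i][0] * dt
        positions[i][1] += velocities[i][1] * dt
```

Everything here is approximation: the time step, the integration scheme, the finite arithmetic. The model never coincides with the system it models, even for three ideal points and a single force.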


Where mathematics has had its greatest successes are those systems that behave in a complex (at least by mathematical standards) manner, but can be modeled in fairly simple equations. The problem even here is that while the model will predict a pattern of behaviour over a period, the next node in the pattern is theoretically impossible to predict. These systems are known as chaotic, and while they do exist in reality, they by no means comprise the majority of living and non-living systems that science would like to predictively model. Beyond the complexity of chaotic systems, even a perfectly accurate and complete mathematical model, were that possible either in terms of gathering all the requisite information or in terms of using mathematics to model that many parameters, would be as useless at predicting future behaviour as our original experience is.
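The simplest standard example of such a system is the logistic map, x_{n+1} = r·x_n·(1 − x_n), in its chaotic regime: the overall pattern (bounded, recurrent) is predictable, but any tiny uncertainty in the starting point grows until the individual values far enough ahead are not. A minimal sketch, with the starting value and parameter chosen purely for illustration:

```python
# The logistic map at r = 4: fully deterministic, yet
# sensitively dependent on initial conditions.
def logistic_trajectory(x0, r=4.0, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-9)  # differs by one billionth

# Early on the two trajectories are indistinguishable...
assert abs(a[5] - b[5]) < 1e-5
# ...but the tiny difference grows roughly geometrically, and
# after enough iterations the trajectories have completely
# decorrelated: the "next node" is unrecoverable in practice.
assert max(abs(x - y) for x, y in zip(a[40:], b[40:])) > 0.1
```

Since no measurement fixes an initial condition to infinite precision, the determinism of the equation buys no long-range prediction of individual states, only of the pattern.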


The fundamental commonality between the various metaphysics that underlie modern science is that reality itself, in the way it is represented, is a fundamentally invalid assumption. I’m not advocating a solipsistic view in which each individual’s reality has nothing in common with another’s, but going back to Lonergan’s discussion of things in different modalities of experience, the reality those things are embedded in also differs between modalities. There are ‘real’ bodies, ‘out there now reals’ in Lonergan’s somewhat convoluted formula, but in and of themselves they do not form the contextual world that we refer to as reality. Reality is a construction of those things and how they are configured, a construction that is embedded in that reality itself, although experienced with the complexity made possible by linguistic context. It is not the case that reality is completely constructed by the mind; that falls into the same idealist trap of assuming the mind is outside reality. Reality constructs itself, in various modalities and at various levels of complexity of awareness, and we are one of those levels and modalities of that construction. Ontologically, each change must retroactively posit its presuppositions, those by which new relations are ‘found’ and a new reality constructed.


We should introduce here a precise distinction between the presupposed or shadowy part of what appear as ontic objects and the ontological horizon of their appearing. On the one hand, as was brilliantly developed by Husserl in his phenomenological analysis of perception, every perception of even an ordinary object involves a series of assumptions about its unseen back-side, as well as about its background; on the other hand, an object always appears within a certain horizon of hermeneutic “prejudices” which provide an a priori frame within which we locate the object and which thus make it intelligible— to observe reality “without prejudices” means to understand nothing.


This same dialectic of “positing the presuppositions” plays a crucial role in our understanding of history: “just as we always posit the anteriority of a nameless object along with the name or idea we have just articulated, so also in the matter of historical temporality we always posit the pre-existence of a formless object which is the raw material of our emergent social or historical articulation.”


This “formlessness” should also be understood as a violent erasure of (previous) forms: whenever a certain act is “posited” as a founding one, as a historical cut or the beginning of a new era, the previous social reality is as a rule reduced to a chaotic “ahistorical” conundrum— say, when the Western colonialists “discovered” black Africa, this discovery was read as the first contact of “pre-historical” primitives with civilized history proper, and their previous history basically blurred into a “formless matter.” It is in this sense that the notion of “positing the presuppositions” is “not only a solution to the problems posed by critical resistance to mythic narratives of origin … it is also one in which the emergence of a specific historical form retroactively calls into existence the existence of the hitherto formless matter from which it has been fashioned.”


This last claim should be qualified, or, rather, corrected: what is retroactively called into existence is not the “hitherto formless matter” but, precisely, matter which was well articulated before the rise of the new, and whose contours were only blurred, or became invisible, from the horizon of the new historical form— with the rise of the new form, the previous form is (mis) perceived as “hitherto formless matter,” that is, the “formlessness” itself is a retroactive effect, a violent erasure of the previous form.


If one misses the retroactivity of such positing of presuppositions, one finds oneself in the ideological universe of evolutionary teleology: an ideological narrative thus emerges in which previous epochs are conceived as progressive stages or steps towards the present “civilized” epoch. This is why the retroactive positing of presuppositions is the materialist “substitute for that ‘teleology’ for which [Hegel] is ordinarily indicted.” What this means is that, although presuppositions are (retroactively) posited, the conclusion to be drawn is not that we are forever caught in this circle of retroactivity, so that every attempt to reconstruct the rise of the New out of the Old is nothing but an ideological narrative. Hegel’s dialectic itself is not yet another grand teleological narrative, but precisely an effort to avoid the narrative illusion of a continuous process of organic growth of the New out of the Old; the historical forms which follow one another are not successive figures within the same teleological frame, but successive re-totalizations, each of them creating (“positing”) its own past (as well as projecting its own future).


In other words, Hegel’s dialectic is the science of the gap between the Old and the New, of accounting for this gap; more precisely, its true topic is not directly the gap between the Old and the New, but its self-reflective redoubling— when it describes the cut between the Old and the New, it simultaneously describes the gap, within the Old itself, between the Old “in-itself” (as it was before the New) and the Old retroactively posited by the New. It is because of this redoubled gap that every new form arises as a creation ex nihilo: the Nothingness out of which the New arises is the very gap between the Old-in-itself and the Old-for-the-New, the gap which makes impossible any account of the rise of the New in terms of a continuous narrative.


We should add a further qualification here: what escapes our grasp is not the way things were before the arrival of the New, but the very birth of the New, the New as it was “in itself,” from the perspective of the Old, before it managed to “posit its presuppositions.” This is why fantasy, the fantasmatic narrative, always involves an impossible gaze, the gaze by means of which the subject is already present at the scene of its own absence —the illusion is here the same as that of “alternate reality” whose otherness is also “posited” by the actual totality, which is why it remains within the coordinates of the actual totality.


Zizek, Slavoj (2012-04-30). Less Than Nothing: Hegel and the Shadow of Dialectical Materialism (pp. 271-273). Norton. Kindle Edition.



The problem, then, isn’t in the necessity with which we posit the presuppositions, but that as re-totalizations we posit them as universally necessary in-themselves, not simply necessary for-us. The radical contingency of the new in the ‘birth of the new’ cannot be part of the causal narrative, nor can the old as it was in-itself, nor can the new ‘as it was for the old’. All of these can only continue insofar as they are for-us; i.e. Eliot’s ‘ideal order’ of the historical as it still is has to be modified by the new’s positing of its presuppositions so as to remain an ideal order despite the newness of the new. The perfection of the ‘past perfect’ consists precisely in this retroactive arranging of what, as past, remains in the present.

What Zizek implies but fails to make particularly clear is that the ‘redoubling’ of the gap is possible precisely because the gap exists in the old in-itself, and only insofar as it did exist could the new ‘exploit’ that gap as part of its birth.


This same positing of the presuppositions, of course, occurs in the birth of ‘modern’ science. Modern science had to posit its precursors as simultaneously its presuppositions and ridiculous or absurd. The redoubling of the gap inherent in the old is thus also redoubled in the new, where there is not simply a contradiction in the new itself, but the new is necessarily founded on a contradiction.


The contradictions in the founding of modern Cartesian science are much easier to reconstruct than the contradictions in an event such as the colonization of Africa, because the topics are much simpler and the historiography much more accurate and complete; i.e. we know much more of the actual past, and therefore the contradictions inherent in the posited presuppositions are much easier to uncover.


That Descartes’ own primary works are full of contradictions, not the least of which is that the final proof contradicts the terms of the initial doubt, is a primary indicator of the fundamental contradictions of the new position. However there are other telling signs of the intentional blurring of the old in the birth of the new.


The mythical figures of Aristotle and the thinkers of the middle ages (usually conveniently left unnamed in the mythos of modern science) did arise from an accurate description of a real person. The irony of the way contradictions inspire mythos is that the person in question is precisely the person credited with banishing “theories not tested by experiment”: Galileo. The surprising thing about the real, as opposed to the mythical, Galileo is not that he insisted on experiment to demonstrate his theories, in opposition to his times, but that his failure to do so was the origin of most of his arguments with his contemporaries, as well as of most of his arguments with the work of the real Aristotle. Aristotle’s thinking, as is obvious to anyone actually familiar with his work, is based on keen observation and strict attention to observational detail, something that Galileo, brilliant as his ideas at times were, couldn’t be bothered with actually doing.


This is not to say there was no innovation in Galileo, or that his innovation wasn’t first substantially codified by Descartes. The myth of modern science is that Galileo’s innovation was the experiment in the modern sense; the reality is that Galileo’s innovation was the validity of abstracting to a degree unthinkable to someone like Aristotle. The validity of the experiment itself rests on this validity of extreme abstraction, which is then generally expressed in the most abstract language, that of mathematics. Descartes’ determination to base certainty on mathematics was due to its level of abstraction, not a supposed clarity or lack of ambiguity.


Looked at this way, the jump from Aristotelian experiential science to modern experimental science is a difficult jump to accept. Aristotle made concrete observations of things in their concrete circumstances; modern science abstracts all possible contingency away, and therefore all possible circumstances, by isolating the object in the research lab. Its consequent generalization of the result to all things under any circumstances appears, put in that context, ‘imaginative’ at the very least, if not completely far-fetched. However, the Galilean / Cartesian revolution was relatively easily accepted. This ease of acceptance indicates that it corresponded to assumptions common in society in general, which of course it did. The simple assumption of the almost infinite efficacy of abstraction originated precisely with the scholastics. It was during the middle ages that Aristotle’s concrete experiential science, insofar as it had to at least appear to correspond with Church dogma, was abstracted further than would have been plausible to men of earlier ages, and this plausibility of extreme abstraction (and extreme reduction) became an acceptable process of reason through the work of the very scholastics the mythos of modern science uses as its foil, depicting them as ‘fools’ possible only under the ‘bad old’ way of interpreting reality.


Metaphysical realism is almost a ‘spontaneous’ ideology for a creature that has reflexively generated a measuring facility (in the widest sense of ‘measure’, as in the phrase ‘measured response’). That it is compatible with our biological understanding of the ‘real’ makes it immediately more persuasive than it would otherwise be. The original variants of metaphysical realism, those of Plato and Aristotle, were subverted by the interpolation (by the Stoics and Neo-Platonists) of a creator-being, specifically a creator-being with self-aware intentionality. For Plato the Ideas simply were. For Aristotle, whatever substance might be, it could not be defined any further than as what remains the same through change, that which denotes something as the same thing. Nor could form be thought any further than as the origin of appearing itself as any specific appearance. The ‘conceptualist’ move of placing form, and later substance, into the mind, originating with Augustine and surviving until Hegel, reaching its apogee in Kant, could only have occurred within the context of reality viewed as creation, a context which began with the Stoics and was fully defined by the thinkers Augustine generically refers to as ‘platonists’, and of course by Augustine himself. The difficulty, after Augustine, is that the Platonic Ideas and Aristotelian forms are no longer persuasive; they are no longer easily believable in our context.


Even with the nominalism of Ockham and the subjective turn of Descartes, whose implications were taken to their extreme by Hume and Kant, there is a basic coherence between the biological mode of experience we all share and the conceptual apparatus that configures ‘reality’ in a scientific mode of experience. The assumptions that underlie the latter lead to the Kantian impasse, which is itself repeated in multiple ways in modern particle physics, cosmology, psychology and cognitive science. Hegel’s solution to the impasse, along with later refinements, does not share this coherence with the biological mode of experience, and simultaneously does not share the assumptions on which all previous and most subsequent science is based. For Hegel, that implied that speculative philosophy, insofar as it overcame the Kantian dilemma, was The science, since the others had determined themselves to be invalid.


Most researchers today remain predominantly Kantian in their assumptions and in defense of their sciences. Unsurprisingly they continue to arrive at the same impasse, the only oddity being the amount of time and research it takes them to do so. While quantum mechanics as understood by Bohr, for instance, has dropped many invalid assumptions, the attempt to re-normalize his interpretation within the metaphysical context of other science is nothing more than an apologetic.

The question, then, becomes whether modern science’s modeling has any ontological validity. The issue in answering this is that it doesn’t attempt to have any. By removing ontological determinants in re-determining a thing as a scientific object, it loses the ability even to claim it is discussing anything with ontological validity. In terms of the validity of experiment, then, the main valid experiment of modern science is the experiment that it itself is: a theological experiment to validate its initial posits. As such, it has accomplished one task, which is to demonstrate the lack of validity of the theology on which it is based.


What has been attributed to modern science in terms of ‘discovery’ is simply that which technology has revealed and modern science failed to account for. The failure of science’s truth-claim will inevitably lead to the failure of science as a community, but as far as society goes, it’s simply another priesthood divested of its robes.


















