“Either intelligence anticipates the discovery of functional relations on which relations between measurements will converge, or else it anticipates the discovery of probabilities from which relative actual frequencies may diverge though only at random. The latter alternative has a fairly clear claim to the name ‘statistical.’ The former alternative is not limited to Newtonian mechanics, and in the opinion of many does not regard quantum mechanics. It is a mode of inquiry common to Galileo, Newton, Clerk Maxwell, and Einstein; it is as familiar to the chemist as to the physicist; it long was considered the unique mode of scientific investigation; it has been the principal source of the high repute of science.”
Lonergan, Bernard (1992-04-06). Insight: A Study of Human Understanding, Volume 3: 003 (Collected Works of Bernard Lonergan) (Kindle Locations 2209-2215). University of Toronto Press. Kindle Edition.
This differentiation between ‘classical’ modern scientific method and what might be termed ‘postmodern’ method appears simple enough; however, it has a major impact on the ‘certainty’ of science as a whole, since ‘classical’ science can no longer be regarded as anything more than a convenient approximation of what is factically statistical. Further, the maintenance of the appearance of scientific certainty rests on a fundamental equivocation: both probability and certainty have a primary and a secondary meaning in statistical science, and the conflation of these two meanings oversimplifies a complex question, one which as a result has barely been asked, much less answered.
“…If events are probable, they do not diverge systematically from their probabilities. But if they occur neither sooner nor later, then there is empirical evidence for the intervention of some systematic factor. However, if with the mathematicians one envisages an infinity of occasions, then the qualifying phrase ‘neither sooner nor later’ admits so broad a meaning that empirical evidence for a systematic factor never can be reached. A common solution to this antinomy is to say that very small probabilities are to be neglected, and this, I believe, can be defended by granting mathematical and denying empirical existence to the assumed infinity of occasions.”
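Lonergan's point that probable events "do not diverge systematically from their probabilities" can be given a minimal numerical sketch (my illustration, not anything in the text): the relative actual frequency of a chance event wanders around its probability, and the divergence, being random rather than systematic, tends to shrink as occasions accumulate.

```python
import random

def relative_frequency(p: float, trials: int, rng: random.Random) -> float:
    """Relative actual frequency of a 'success' across `trials` Bernoulli events."""
    successes = sum(rng.random() < p for _ in range(trials))
    return successes / trials

# With p = 0.5, the observed frequency hovers near the probability; the
# deviation is random, not systematic, and tends to shrink with more trials.
rng = random.Random(42)
for n in (100, 10_000, 1_000_000):
    freq = relative_frequency(0.5, n, rng)
    print(f"{n:>9} trials: frequency {freq:.4f}, deviation {abs(freq - 0.5):.4f}")
```

Note that this sketch also exhibits Lonergan's antinomy: no finite run of trials, however long, can by itself rule out a systematic factor, since any finite divergence is compatible with "neither sooner nor later" over an infinity of occasions.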
“…if probabilities must be verified, it also is true that there is a probability of verifications. But it is of no little importance to grasp that this second probability shares the name but not the nature of the first. For the first probability, apart from random differences, corresponds to the relative actual frequency of events. It is the regularity in the frequencies, and it is to be known by a leap of constructive intelligence that grasps the regularity by abstracting from the randomness. In contrast, the second probability is not some fraction that, apart from random differences, corresponds to the relative actual frequency of verifications. A preponderance of favorable tests does not make a conclusion almost certain; indeed, a very few contrary tests suffice to make it highly improbable. More fundamentally, the second probability is not known by a leap of constructive intelligence that abstracts from random differences, for such leaps never yield anything but hypotheses.”
“…the second probability is known through acts of reflective understanding and judgment; it means that an affirmation or negation leads towards the unconditioned; and it is estimated, not by counting verifications and abstracting from random differences, but by criticizing verifications and by taking everything relevant into account. For these reasons, then, we distinguish sharply between ‘probably occurring’ and ‘probably true.’ For the same reasons we refuse to identify ‘certainty’ in the sense of unit probability with ‘certainty’ in the sense of ‘certainly verified.’ It follows that we find it meaningless to represent by a fraction the probability of a verification. Similarly, we find it fallacious to argue that probable events are not certain events because probable judgments are not certain judgments.”
Lonergan, Bernard (1992-04-06). Insight: A Study of Human Understanding, Volume 3: 003 (Collected Works of Bernard Lonergan) (Kindle Locations 2188-2194). University of Toronto Press. Kindle Edition.
The equivocation noted here, when unraveled, results in the conclusion that the kind of verification science in particular requires, namely empirical verification, can never itself be certain. If the verification is not certain, then not only is the theory not certain, which would be no major issue, but the verification of any theory depends on the unconditioned, which itself cannot be verified. The probability that a given model is correct is therefore undefinable, and no certainty is possible at all. The re-founding of science on mathematical certainty is thus exposed as a basic fallacy.
The further difficulty for many modern scientists is precisely that they do not simply accept that the unconditioned is itself unverifiable; they vigorously deny its very possibility. If the unconditioned is possible, then a scientific theory may have a possibility of truth; if the unconditioned is impossible, then so is any truth in science. The results of this impasse can be seen in various paradoxical situations in the current sciences, from cosmology to string theory to cognitive science: the theories themselves undermine the validity of their subject matter, leading to the inverted question that problematizes their very existence, “why do we find nothing whenever we study anything deeply enough?”
This is expressed by Kant as the impossibility of knowing anything in-itself and the illusory nature of the knower. However, just as it took modern science half a millennium to distinguish ‘things’ from ‘bodies’, it has taken modern science nearly as long to realize through the practice of research what Kant had already worked out in thought.
There is, however, an inherent paradox (and therefore an invalid assumption) here: if both the knower and its knowledge are illusory, what is experiencing the illusion, and whence does the illusion arise? There is also an inherent paradox in any study that negates its own topic. It might seem that the latter is the more general paradox, but in fact they are the same, since we have an immediate experience of that particular experience, i.e. that of knowing itself in the most general sense. The attempted rehabilitation of substantialist realism as subjective substantialist realism is a failure, and modern science is a massively elongated experiment that has finally demonstrated its own failure, although it has yet to admit it.
The uncertainty inherent in modern science has its foundation in the positing of things as mathematical entities with explanatory conjugates. Unsurprisingly the greatest heresy in science is identical to the greatest heresy in the theology in which it originated: that explanation, and most particularly explanation from origin, constitutes neither knowledge nor understanding. Further, what does constitute understanding, without which there is no knowledge, is always a priori to rationality itself, and unquestionable by rationality precisely because it is dependent on a specific understanding in general that is already operative.
Put in mathematical terms, while a theory based on probability can tell you the probability of a given event, the probability that the theory is verifiable is indeterminate. We can say that probability is 1 − n, where n lies between 0 and 1, but there is no way to determine what n is. Intuitively, though, we can say that n grows in correlation with the complexity of the phenomenon, which makes any theory (or model) less verifiable as the phenomena studied grow in complexity. Given a model of any reasonable complexity, n is liable to be rather large, making the verification of that model highly unlikely. Since models of complex systems have very little (if any) predictive capability, the verifiability becomes so improbable that the value of the model is extremely close to zero. And since ‘classical’ theories are at best approximations to statistical theories, the ‘high repute’ of science turns out to be completely baseless.
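The intuition that n grows with complexity can be sketched with a toy calculation. The numbers and the independence assumption below are entirely hypothetical, chosen only for illustration (and the independence assumption is itself, on the argument above, unverifiable): if each component of a model carries even a small irreducible doubt d, the chance of the model as a whole surviving verification decays geometrically with the number of components k, as (1 − d)^k.

```python
def joint_verification_bound(components: int, doubt_per_component: float) -> float:
    """(1 - d)^k: the joint chance that all k components survive verification,
    under a (hypothetical, itself unverifiable) independence assumption."""
    return (1.0 - doubt_per_component) ** components

# With an illustrative 5% doubt per component, verifiability collapses
# toward zero well before a model reaches any realistic complexity.
for k in (1, 10, 100, 1000):
    print(f"{k:>5} components: {joint_verification_bound(k, 0.05):.6f}")
```

On this sketch, n (the indeterminate shortfall from certainty) is 1 − (1 − d)^k, which approaches 1 as k grows, matching the claim that verification of a complex model becomes highly unlikely even though no actual value of d, and hence of n, can ever be determined.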