The question came out of a correspondence I was involved in. Suffice it to say, the underlying question of the email was whether my disagreement with science concerned simply the way it is conceptualized, or something more intrinsic. The person who asked is himself struggling with the difficulty of justifying science given its inadequacy in studying complexity.
I suppose I’m beginning from the perspective of critiquing the supposition that ‘nature is mathematical’, which is not radically different from mistaking geometry, a more or less useful approximation of reality, for reality itself, so that actuality becomes a more or less accurate approximation of geometry. While to some degree experimental physics in the 20th century, with Heisenberg, Bohr and others, dispensed with the conflation of scientific ‘things’ with bodies ‘out there and extant’, i.e. what is real in a biological mode of experience, it took modern science 500 years to accomplish that distinction. Of course Augustine and Aquinas were brilliant men, and it did take Augustine a couple of decades to distinguish ‘things’ in other modes from biological ‘bodies’, and Aquinas a few years even with the help of Augustine’s writings. But the wholesale dismissal of pre-Cartesian thinking has merely led to invalid assumptions being reintroduced and left unquestioned, and to the repetition of what has already been more than adequately examined, with the same issues and impasses. (As an example, consider cognitive science’s arrival over the past ten years, from the same basic Kantian assumptions, at the same paradoxical conclusion: that the knower cannot know the real in-itself, and that the knower itself is an illusion. The unanswerable question inherent in the paradox is whose illusion the knower could possibly be.)
(As an aside, probably the best treatment of the distinction of ‘things’ in different modes of experience, and of the notion of genera as scalar systemic levels, was written by a former professor at your college, Bernard Lonergan, in his 1957 work ‘Insight’.)
On a pragmatic level, there is only so much complexity that the I-Subject, and the sciences based on its understanding, can grasp, precisely because the I-Subject is too simple. It’s necessarily so: if we had to take into account all the parameters that, as a self, we know in any situation, we couldn’t make a timely judgment. But given this simplicity of judgment, we need to recognize that ALL judgment is only provisional.
That the limitations of the I-Subject are not simultaneously the limitations of the self is a major theme of the (unfortunately rather long) paper I’ve been writing called Horizons of Identity. When dealing with complexity, we can’t begin with a simple model and then somehow ‘complexify’ it; we have to start with the complexity of the real itself. As has been observed by a number of people (particularly in the rather muddy area of ‘systems thinking’, which despite its overall muddiness contains flashes of brilliant insight), we intuitively have a very good grasp of complex systems with which we are experienced. What we need to do first, then, is try to clarify these intuitive understandings and make them explicit without oversimplifying. Simultaneously we have to recognize that although our intuition understands complexity, for the most part we don’t reflexively understand our understanding (or have insight into our insight, in Lonergan’s formulation). Hegel’s absolute knowing, for me, is precisely this reflexive understanding-of-understanding, which as absolute can only occur when we give up the idea of totalizing knowledge. One significant reason we tend to run into the same problems consistently when trying to modify any complex system, from the environment to the poverty cycle, is that although our intuitions are largely correct and we know where the ‘lever’ is that can change things, the necessary change is counter-intuitive, and we inevitably push or pull the lever the wrong way. It’s not all that difficult to realize that were the effective action not counter-intuitive, it wouldn’t have become a repeated problem, because in most systems we would have solved it before we even thought to examine the thing in a more detailed way.
Where Zizek and I part ways, if I understand him correctly (and some of his statements suggest his understanding is closer to the path I have started down than to the one I’ve generally understood him to be on), is that Hegel’s work is not conceptualist. While it is, on the one hand, ‘merely’ a change in perspective, in the full sense of a change in perspective all of reality changes, and Hegel doesn’t jump to the conclusion that this is ‘merely’ a cognitive act, but holds that reality itself (in vastly simpler ways that we for the most part don’t understand) is always already perceptive and cognitional. That this also helps in modeling indirect causality is a further advantage, particularly as we have no proper operative means by which direct causality might work.
The worst invalid assumption in dealing with complexity is misunderstanding the effective path of causality, which is part of my reason for spending as much time as I do on discussing more complex notions of causality than the inadequate mechanistic notion most people work with (fine for most common sense ‘things’, but not for complex social, political, or even scientific ‘things’). Modeled as systems, causality within a given system is determinative only top down, which simultaneously implies a certain retroactivity, since systems develop in a temporal sense bottom up. Bottom-up causality, while it certainly has effects, is inherently unpredictable. When dealing with systems complex enough that the patterns of the system on a large scale create further patterns that themselves form an emergent system, at any given point of generic development the highest scale becomes determinative and simultaneously maintains the scale from which it originated. As a concrete example, the mind appears, at least to the unprejudiced eye, to be an emergent system of the body. While we can certainly produce mental effects by inducing physical or neurochemical changes, the effects are unpredictable. Although grief, as a process of the mind, will in a determinative, predictable manner lower the amount of serotonin and norepinephrine in the nervous system, increasing the levels of those neurotransmitters will have ‘some’ effect, but an unpredictable one. When we model on a particular generic scale, though, we can make valid reductions, rather than invalid ones, that can help in the process of clarifying what in a sense we already understand, but pre-conceptually. The highest generic scale we can model on is, against the assumptions of modern science, the most useful.
As to the question of what ontology can do, I think the first and most important thing it can do is rechannel the effort from ‘explaining’, especially ‘explaining from origin’, to ‘describing and understanding’ how things are. Looked at properly, explanation is not the business of ontology or even of philosophy as a whole, since it has no necessary impact on understanding or the lack thereof. Explanation is where ontology becomes onto-theology, and it’s precisely in this move that modern science, as explanatory, becomes nothing more than a subset of (a particular) theology. A second thing ontology can do is revitalize the notion of ‘measure’ in its widest sense (as in a measured approach or measured response) as the other fundamental use of the I-Subject, along with provisional judgment. Even causality, as explanatory, received its best known early treatment in Aristotle’s Physics, and not in first philosophy, as one might initially expect. The Cartesian revolution, though, and particularly its furthest development in Hume and Kant, made a change that few people appear to have noticed. In the experiential science of Aristotle and others, the requirement of ontological validity made physics the basic science. Initially by not requiring ontological validity to be demonstrated but merely assumed, and later by not even requiring it to be assumed, the basic science on which all the others are founded became accounting. Accounting-for, in a generally inadequate manner, what has always already been revealed in some other way (primarily via technology) is all that modern experimental science accomplishes. Looked at critically, the ‘successes’ generally claimed by or for modern science are not scientific at all, but technological, and scientific method is worse than irrelevant in technology: it’s a known and avoided hindrance.
There’s a simple but accurate saying that can be useful in prodding people to think further than their initial assumptions about the usefulness of science as it is today, and in sparking the notion that science must overcome the basic assumptions that limit its usefulness to such a radical degree: “Measurement is not prophecy.” It’s an adage used in software development as a caution against extrapolating too much from any given benchmark, but it applies to all manner of overly simplistic thinking, including that of modern science itself.
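The software sense of the adage can be made concrete with a minimal sketch (the routine and its names are purely illustrative, not from any real benchmark): a measurement taken at one scale, extrapolated linearly, badly misestimates a workload whose cost grows quadratically.

```python
def count_ops(n):
    """Count the basic operations of a hypothetical quadratic routine.

    Operation counts stand in for wall-clock time so the example is
    deterministic; the point is the shape of the growth, not the units.
    """
    ops = 0
    for i in range(n):
        for j in range(i):
            ops += 1  # one unit of work
    return ops

# "Measure" at a small size, then extrapolate linearly, as naive
# benchmark-based forecasting often does.
small_n = 100
per_item = count_ops(small_n) / small_n   # apparent cost per item

big_n = 10_000
predicted = per_item * big_n              # the linear "prophecy"
actual = count_ops(big_n)                 # what actually happens

print(actual / predicted)                 # roughly 100x the forecast
```

The measurement at `small_n` was perfectly accurate; it simply licensed no prediction about `big_n`, which is the adage’s whole point.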
Finally, what I call the ‘two-headed grotesque’ of scientific atheism and religious fundamentalism (two-headed because although they argue all the time, they share the same basic body of assumptions and so depend on each other) is for me a great example of Marx’s (semi) humourous maxim that history repeats itself twice: first as tragedy, then as farce. The period of human development we know as mythical was repeated first as tragedy in the dark ages and, to some degree, the middle ages, and repeats a second time as farce in the two-headed monster. I’d love to do a vaudeville with the main character as a two-headed grotesque, the heads in question being Richard Dawkins and Pat Robertson. I think it might be as revealing as it would be ridiculous.