The Need to Rethink Economics – Thomas Piketty’s “Capital in the Twenty-First Century”

In a way, we are in the same position at the beginning of the twenty-first century as our forebears were in the early nineteenth century: we are witnessing impressive changes in economies around the world, and it is very difficult to know how extensive they will turn out to be or what the global distribution of wealth, both within and between countries, will look like several decades from now. The economists of the nineteenth century deserve immense credit for placing the distributional question at the heart of economic analysis and for seeking to study long-term trends. Their answers were not always satisfactory, but at least they were asking the right questions. There is no fundamental reason why we should believe that growth is automatically balanced. It is long since past the time when we should have put the question of inequality back at the center of economic analysis and begun asking questions first raised in the nineteenth century. For far too long, economists have neglected the distribution of wealth, partly because of Kuznets’s optimistic conclusions and partly because of the profession’s undue enthusiasm for simplistic mathematical models based on so-called representative agents.

Piketty, Thomas. Capital in the Twenty-First Century (Kindle Locations 389-394). Harvard University Press. Kindle Edition.

The Problem With Trying to do Science with Common Sense ‘Things’ – A Response to Max Tegmark’s Recent Article on Consciousness (New Scientist, April issue)


A Commentary on the Desire for a Physical Account of Consciousness



In an older article I discussed the difference between bodies, which are ‘things’ in the biological realm of experience, and ‘things’ in other areas. Of course, the problematic ambiguities are not simply between ‘things’ in scientific areas of experience and activity and ‘bodies’ in animal experience, nor in the tendency of scientists to want their ‘things’ to be as satisfyingly real to themselves as ‘bodies’ are to their animal experience. The more complex forms of nonsense appear when ‘things’ in one area, such as common sense experience, are used to ‘explain’ other ‘things’ that also have an ambiguous being as both ‘things’ within the sciences and ‘things’ to pragmatic common sense.


A handy example is the following article in the latest edition of New Scientist, written by Max Tegmark:


The fourth state of matter: Consciousness


Tegmark’s basic desire (it really doesn’t even qualify as a hypothesis, much less a theory, because what’s being thought is too indeterminate to have specific implications) is to create a scientific notion of consciousness, something that has not been developed in any reasonable way within the sciences, and to define it physically, since that’s what science does, or at least it must seem that way to someone as unacquainted with science as Tegmark obviously is. He has to assume that such a scientific notion is at least possible; otherwise writing about it in a supposedly scientific publication would be ridiculous in the first place, and worse, probably wouldn’t be paid for. The attempt at a ‘scientific’ notion of consciousness, then, begins with an unwarranted reduction to matter, because, obviously, everything physical is material, and vice versa.


Tegmark begins with the most obvious characterizations of matter as common sense uses the term, i.e. that it can exist in different ‘states’, such as solid, liquid and gas. In a pragmatic way, we posit some sort of unchangeable substrate such that, while a given material may have no apparent properties in common with itself in a different ‘state’, it somehow remains indescribably describable as the same material.


Never mind that philosophical attempts over thousands of years have tried to come up with a physicalist notion of consciousness without accomplishing anything other than making otherwise accomplished thinkers look ridiculous (a prime example is the Cartesian notion that consciousness, as the anima or soul, is attached to the body at the pineal gland). Never mind, either, that ‘spirit’, another word for what Tegmark is trying to define, already means breath, and therefore implies a ‘state’ of matter already contained among the three of common sense usage, i.e. a gas. Jokes about the gaseous invertebrate aside, is the common sense notion of material ‘states’ relevant to the scientific notion of matter at all?


In chemistry, or at least in high school level chemistry, the notion of material states is to a large degree not hugely different from the common sense notion. While a chemist has a different understanding than the common sense understanding of what a ‘state’ might mean, for practical purposes of manipulating chemicals the common sense notion is close enough. An advanced chemist, however, working with molecular bonds would have to switch contexts for ‘material state’ to mean anything resembling the common sense notion. In physics there is no ambiguity at all: what common sense calls ‘material states’ has nothing whatsoever to do with what a physicist means by ‘material states’; in fact, what common sense intends by ‘matter’ is largely meaningless in a physicist’s context.


What is matter to a physicist? The answer isn’t an easy thing to convey, partly because physicists do without a notion of matter that can be employed imaginatively with ease, working instead with a mathematical notion that most, if pressed to express it in natural language, would have difficulty articulating, and that wouldn’t make sense at all to common sense. Heisenberg and Wheeler did attempt to draw out the ontological implications (what quantum mechanics says about what and how matter is) of quantum mechanics and of the standard model of particle physics developed out of quantum mechanics, respectively. The core of each is found in the following two statements.



“In the end Plato was right, the most basic forms of matter are not anything we would consider ‘material’, they’re ideas.” – Heisenberg


“It from bit, as in a computer bit of information, binary on or off. Matter at its most basic is information and nothing else.” – Wheeler


While these statements have an obvious similarity in a certain sense, they do not correspond in any apparent way to the common sense notion of matter. The similarity is that both descriptions appear to describe, specifically, not what we think of as matter, as “stuff” “out there”, but what we would expect to find not “out there in reality” but within the common sense notion of consciousness. ‘Ideas’ and ‘information’ are, for common sense at least, elements of thought, and thought is the distinguishing ability of consciousness. How can consciousness be defined in terms of matter, if matter is defined in terms of abilities that appear to be abilities of consciousness?


What ‘state’ implies in either notion is also completely non-obvious. If we go back to the common sense meaning and pick one of the material states, say liquid, we find that some ‘thing’ (by which we generally mean a body “out there now real”) is experienced as a type of thing that behaves in particular ways, such as self-leveling in a still container, feeling ‘wet’ to the touch, etc., behaviors common to any material that is in a liquid ‘state’. Bits and ideas aren’t wet, nor can we fill a jug with them and pour them back out later. But then neither are they solid; we don’t bump into or trip over them (except in the metaphorical sense, which is the bulk of Tegmark’s actual thought activity in his article). Maybe they seem more like gases? After all, they’re a bit ethereal … But then ether turned out not to exist. Describing something as “like something that doesn’t exist” doesn’t give us much to go on.


So according to Tegmark there’s some kind of ‘fourth state’, and consciousness is matter in that fourth state. Physics, for what it’s worth, already posits a fourth state, plasma, which throws a bit of a wrench into Tegmark’s numbering; but what makes the idea intrinsically a non-starter is that no science posits material ‘states’ as saying anything meaningful about matter in the first place. Tegmark is trying to create a scientific notion of consciousness (after all, we all have an implicit common sense understanding of it), but he is attempting to do so by defining it in terms that have no scientific relevance. The resulting confusion, not surprisingly, is similar to Descartes’s confusion when he tried to account for consciousness as though it were a “body out there” in some way. The difficulty is not that physicalist descriptions don’t work because the “soul” is somehow “higher” and more “mysterious” than that; it’s that physicalist explanations have no relevance in a scientific context. Physicalism, like every other -ism, is a belief system. While plenty of belief systems do get mixed up in actual scientific activity, they are never anything but a hindrance to the sciences. They are part of the complex interplay of ideologies that gives our shared social and political reality a tenuous scaffolding by which we interpret phenomenally complex topics, such as human and societal behavior, without any need for science.


So, after this detour, what is Tegmark actually doing?


Consciousness is a “hard problem”, as he quotes ad nauseam, only because he’s expecting a type of solution that cannot, by definition, exist for this type of problem. If we look at the type of solution Tegmark is looking for (the means by which he flails around trying to find it is not really all that important), we can see that it has certain features:


  • it’s explanatory, which simultaneously means explaining from origin, and explaining via temporally sequenced causality, i.e., cause then effect

  • it accounts-for consciousness in an exact way, which makes it measurable

  • it’s reductive, i.e. the unique aspects of consciousness can be accounted for without requiring anything not required to account-for the very simplest ‘bodies’ ‘out there’


The problem with the notion of explanation is that temporally sequenced cause-then-effect only applies in discussing efficient causality. Explanation from origin can only make use of efficient causality if a causa efficiens is posited. Since any such being would then be the creator of consciousness, we’re no longer dealing with scientific accounting-for, but religious accounting-for. Origin always means telos, or goal, no matter how much reductionists may try to hide the fact. The myth that Darwin did away with teleology is just that, a myth: by his own statements he found one means by which teleology works, which, far from doing away with teleology, both assumes it and strengthens its claims. (This is the reason Darwin’s book was called “On the Origin of Species”, not ‘On the Forerunners of Particular Species’ or ‘On the Trigger of Species Differentiation’, although those two titles, along with ‘On the Cause of Species Extinction’, would constitute the sum of Neo-Darwinism; his subject was the origin, or goal, of species itself, the reason nature differentiates itself from itself through its history.)


The common sense notion of consciousness, which is still all we have to work with despite Tegmark’s article, would probably, if gone into at all, somewhere note that consciousness and its contents are not exact, nor are they measurable in any meaningful sense of the term. Thoughts, memories, perceptions, any of the contents of consciousness are subject to misremembering, dissembling, inaccuracy, etc. In certain cases it might be very useful to be able to measure the contents, for instance, to know how much pain someone is in and what is therefore an appropriate treatment; however, there is no way to do so, not simply within the constraints of current practice, but due to the nature of consciousness itself.


For a “hard problem”, a simple solution is unlikely to suffice. The history of science is for the most part a history of simple ideas failing to account for apparently simple phenomena. Occasionally there is a successful simple way of understanding something that initially seemed complex; we have a term for those things, chaotic systems, and they exhibit specific characteristics, none of which are associated with consciousness.


We did note though, just a moment ago, that we know certain things can or cannot be posited of consciousness due to its nature. But by the word “nature”, we are referring to at least the entry point to what Tegmark is seeking. Before we can proffer an explanation of anything, we have to have a reasonable description of it, to know and be able to share what it is we’re attempting to explain in the first place.


We have plenty of descriptions, in fact, to work with. The history of literature, philosophy, and science itself as intellectual, conscious activity is part of that description. The difficulty is not obtaining detailed description, nor is the problem, in a general way, validating what is contained in those descriptions and the observations that gave rise to them. Of course, a person’s consciousness isn’t observable to another in the way that a ‘body’ is, not even indirectly, the way the behavior of ‘things’ in particle physics is validated by confirming a predicted effect on an observable body. But we don’t need that kind of external verification, because there is nothing mediating our observation that we need to ensure doesn’t affect the observation. Consciousness, in the sense Tegmark is using it, is not even ‘simple’ consciousness, but the specifically reflexive, self-aware consciousness of an individual living in contemporary society. We all have valid experience of that, or nothing on these pages would have meant anything, and being unable to find any bodily use for it, we would have moved on long ago.


Of course, none of the vaunted techniques of the natural sciences appears to be particularly useful in looking at the proper topic: the reflexive, self-aware consciousness that makes ethical judgment and abstract thought, and therefore science itself, possible for us as beings. We can describe the basic necessary features for consciousness to be an ethical, philosophic or scientific consciousness, to be a social and political consciousness (which other higher animals also possess). We can look historically and anthropologically at the necessary features of society, without initiation into which many of those features would not come into being in each individual, and at the traits of those individuals necessary to create and maintain such a society. The interplay of the social construct of a ‘person’, together with the way that various aspects of that society depend on specifics of that construct, and how each both sustains and simultaneously must change the other, is a fascinating topic, but not one suited for a chemistry lab. Tegmark’s problem, aside from the lack of rigor in his thinking, is that he is looking for a scientific solution to a problem science can’t ask, because all of science is itself only a small part of the question.






The Neo-Darwinist Myth of Darwin as an Anti-Teleologist



Asa Gray noted “…Darwin’s great service to Natural Science in bringing back to it Teleology: so that instead of Morphology versus Teleology, we shall have Morphology wedded to Teleology” (Gray 1963).

Darwin quickly responded: “What you say about Teleology pleases me especially and I do not think anyone else has ever noticed the point.” (Charles Darwin, letter to Asa Gray)


The problem with materialism, including neo-Darwinist materialism, aside from its being an undemonstrated assumption carried into science from theology, is that like the theistic creationism it decries, it fails to understand the actual problem and merely explains it away.


A good example is the various ideas of the ‘unit’ of evolution; these range from Dawkins’ ‘selfish gene’, to theories that go beyond the organism to include the various microbes necessary to an organism in a symbiotic fashion, to the species as the basic unit. The problem is not simply that none of the arguments put forward is decisive, but that the question itself is nonsensical. When nature is viewed historically, it appears evolutionary. Evolution, therefore, is a description of the apparent behaviour of nature as historical. There is no ‘unit’ of evolution; in fact, there is no theory of evolution whatsoever, there are only theories of the history of nature. The ‘unit’ of the history of nature can only be nature itself viewed as a whole. This can easily be demonstrated by the obvious fact that individuals do not evolve, whether or not their necessary microbes are included, although there is diversity within species. Nor does a species evolve, for the simple reason that any substantial change results not in an evolved species but in a new species. Nor is there any means by which we can draw a line between what is part of a given organism, species, or genus and what is not, i.e. what becomes an environment precisely through the coming into being of that organism, species, or genus.


Darwin was attempting to answer one of the two basic questions the evidential history of nature raises, and his answer is not fundamentally different from that of Empedocles: the survivability of a given variation depends on its relation to its environment. The other basic question is why nature, viewed historically, appears evolutionary in the first place. Positing some sort of ‘drive’ is merely begging the question, because ‘drive’ is merely another unexplained word for the missing answer. ‘Evolutionary’ doesn’t mean random change, but precisely means that nature as historical appears to have an inbuilt tendency, albeit at times interrupted by various catastrophic events, to become more diverse and more complex. It is the reason for this ‘more’ that neither materialism nor theism even attempts to understand; instead both merely explain it away.


Darwin was not a materialist by his own statements, despite the largely mythical claims of Neo-Darwinists, but a teleologist in Aristotle’s sense, modified by the speculative philosophy of his time, particularly that of Hegel. Origin, in both Hegel and Aristotle, is not prior temporally, the misapprehension both theists and materialists are under, but temporally a posteriori. The fundamental cause remains the telos or goal, which to Aristotle and Hegel, as with most reasonable men, is temporally after the development of a given being, but for a teleologist metaphysically prior. This wasn’t a difficulty for Aristotle, because the temporal cause-then-effect sequence wasn’t yet accepted as obvious. It wasn’t a difficulty for Hegel, because, as Kant had clearly shown, that sequence creates paradoxes that cannot be solved without abandoning the assumption that the temporal sequence of cause and effect is the only type of causality. How the origin accomplishes what it does evidently accomplish is not answered by Darwin, although he mentions it in his major work as an unanswered problem.


The laws governing inheritance are quite unknown; no one can say why the same peculiarity in different individuals of the same species, and in individuals of different species, is sometimes inherited and sometimes not so; why the child often reverts in certain characters to its grandfather or grandmother or other much more remote ancestor; why a peculiarity is often transmitted from one sex to both sexes or to one sex alone, more commonly but not exclusively to the like sex.


Darwin, Charles (2012-05-16). On the Origin of Species (p. 13). Kindle Edition.


Naturalists continually refer to external conditions, such as climate, food, etc., as the only possible cause of variation. In one very limited sense, as we shall hereafter see, this may be true; but it is preposterous to attribute to mere external conditions, the structure, for instance, of the woodpecker, with its feet, tail, beak, and tongue, so admirably adapted to catch insects under the bark of trees. In the case of the misseltoe, which draws its nourishment from certain trees, which has seeds that must be transported by certain birds, and which has flowers with separate sexes absolutely requiring the agency of certain insects to bring pollen from one flower to the other, it is equally preposterous to account for the structure of this parasite, with its relations to several distinct organic beings, by the effects of external conditions, or of habit, or of the volition of the plant itself.

Darwin, Charles (2012-05-16). On the Origin of Species (p. 3). Kindle Edition. (Italics mine)



Empirical Data and Unquestioned Assumptions

In an interesting twist, the early thinkers (in a modern sense) within both natural and economic history worked with a relative paucity of empirical data and a relative inability to process what data they had, yet their awareness of the fundamental issues at work was more acute than that of later researchers. By contrast, the more recent researchers in both fields, with far more empirical data to work with and better tools with which to analyze the data, have proven far more beholden to untested assumptions, and far more likely to allow those untested assumptions to overdetermine their results, even to the point of manipulating the data they had to skew the results in favor of their preconceptions.

Many of the best-known 20th-century economists, such as Kuznets and Friedman, have tended to justify untested assumptions by an overfocus on data concerning systemic behavior during aberrant periods, those that reflect the impact of catastrophic events, rather than on data compiled during periods where events followed their more usual course, because data compiled during the more usual course of events tells strongly against their assumptions. The opposite is true of the Neo-Darwinists, who have underemphasized the importance of systemic behavior and change during periods of catastrophe, precisely because that data undermines their preconceptions.

In both cases it’s demonstrable that empirical data, far from altering a given researcher’s preconceptions, is often manipulated specifically to justify those preconceptions. It’s also demonstrable that the way in which data is manipulated is determined not by an a priori misunderstanding of the relative importance of various data, but by the method most conducive to maintaining assumptions that produce desired results. Worse, from the perspective of empirical method, the very existence of data and the ability to analyze it allow researchers greater freedom to manipulate data to suit their prejudgments, and that freedom has been exploited to its utmost extent. In a situation of scarce data, or an inability to process available data, researchers forced back onto their own ability to think and question tended to question their own presumptions as well, largely because they had no manipulable data with which to justify them against anticipated objections.
In other words, the more empirical research has become, the less it feels the need to question its preconceptions, and simultaneously the greater the ability to manipulate data to confirm those preconceptions, the greater the tendency to do so.  The more available empirical data is, in other words, the greater the tendency to confirmation bias justified by manipulation of that data.
When you consider the further change over the past century and more from exact data to statistical data, the ability to manipulate data has obviously increased, but this has had precisely the opposite effect to what might have been anticipated.  Since statistical data is inherently more suspect, the change has not led to an increase in the manipulation of empirical data, but instead led to a renewed awareness of its relative inability to substantiate thoughtless assumptions, and a renewed tendency, at least in the top researchers, to anticipate objections to their work by spending more time questioning the assumptions underlying it.
The belief in empirical data as indisputable ‘hard facts’, a myth that had its greatest effect within actual science in the 19th century and the first two-thirds of the 20th, is the underlying issue. Although science itself is in the process of abandoning this particularly noxious superstition, for the majority of the general public, empirical data as indisputable fact is the predominant impression of the nature of science itself, and the justification of its privileged status. However, what is not maintainable within science itself will soon prove unmaintainable to the general public. Since the majority of that public will not immediately understand that this critique of a superstition about science is actually a positive correction to the notion of what science is and is not, the result will be, at least initially, a disillusion with science more profound than any religious disillusion has been. The extremism likely to accompany this disillusion arises from three factors: the suddenness with which it tends to occur, the rate at which such understanding is shared, and the absoluteness of pronouncements made by scientists themselves and by laymen basing their pronouncements on those of scientists.
The most extreme of those who are anti-religious are almost invariably those who had the most belief in religion, hence the extremism of atheism in former fundamentalists. That extremism is liable to seem tame compared to the extremism when science as absolute truth is forced to admit not simply individual failures, but a general failure of scientific method made possible by science’s own faith that its methods were infallible.


New Scientist Headline: “Why do we exist? Is there a god? What is life? Get New Scientist: The Collection free this week”

Seeing this headline in my email in-box, I immediately felt let down by one of the few popularizers of scientific insight that had not generally dropped below a kindergarten level. None of the listed questions have anything to do with ‘science’ since no empirical, observational, or experimental means exist with which to analyse the questions or test any potential answers, and none of the things mentioned has been revealed in a technological manner, which in terms of specifically modern science is a requisite for any thing to become an object for science in the first place.

Science for many has unfortunately turned into a religion, and as such its priests are no better than those of any other religion ever were, and in many cases they are becoming worse: specializing in the same absolutism and obscurantism as the most duplicitous examples of the self-professedly religious, and professing a belief system more rigid, more self-righteous and more convinced that it is the only way to truth than the worst fundamentalism.

It’s particularly sad to see the New Scientist stooping lower than Discover magazine in order to win readers who don’t understand what science is, where it is useful, and where it has become nothing more than a hypocritical group of two-bit magicians.

I initially had a twinge of sympathy when I started seeing researchers who excel at using scientific method losing funding in favour of ‘knowledge workers’ who have no interest in it, now that both private and public sources of funding have lost interest in paying the additional expense, and waiting for the slower results, incurred by adherence to scientific method. That sympathy was not because scientific method is the only way, the best way, or even a way at all to truth, but because as a shared praxis it grounds a feeling of community, something lost more and more throughout society as capitalism destroys any potential source of opposition to its unfettered freedom to manipulate and control the majority of the populace. Science is not a way to truth at all; a scientific statement can only be considered correct or incorrect, as validated by experiments that themselves must be judged on their validity. Truth, on the other hand, has to involve meaning, something impossible for a praxis that of necessity strips away a thing’s context and meaning in order to first transform it into an object for science. The questions listed in the New Scientist title are not meaningless, as the logical positivists claimed, having completely failed to understand the only thinker associated with them with any aptitude, namely Wittgenstein. They are, however, questions that are inaccessible to science as such. Only insofar as science abandons the area of intellectual activity in which it has validity can those questions be posited as ‘scientific’ in any meaningful sense of the term.

I’ll admit that I have fully lost whatever sympathy I did have, given the hate-filled vitriol and egocentric ignorance of supposed scientists criticizing, and at times calling for the forceful extermination of, communities who happen to disagree with science’s ideas, particularly those ideas that science itself has failed utterly to clarify properly. While in many cases those communities’ shared praxes may be based on inherited superstitions and so forth, the meaning and worth of those praxes for members of those communities reside in the feeling of community they engender; the actual beliefs held are largely as irrelevant to most members of those communities as their neighbour’s choice of car.

Aside from the few whack-jobs who feel some bizarre need to be antagonistic and hurtful to anyone who happens not to believe what they do, the basis of the shared praxes of those communities is for the most part not significantly different from some of the unquestioned assumptions on which the shared praxis of scientific method is based. Looked at without an already developed belief in those methods, their basis is precisely a very similar, and similarly naïve, theology that a competent theologian would refer to as nothing more than a children’s story. And the so-called history of modern science taught in western schools is as ridiculous and mythical as the children’s stories of any of the major religions.

Considering, as well, the behaviour of certain particularly loud and bigoted scientists who have promoted pseudo-sciences that only existed to help maintain the privilege of the wealthy, together with the fact that they are defended, by other scientists who are fully aware of how indefensible their bigotry is, so as to avoid losing their own prestige, I can’t at this point differentiate the behaviour of many in the scientific community from that of some of the most bigoted right-wing fundamentalist whack-jobs. Their behaviour, as well as the underlying intent, is eerily similar.

An apparently ‘human’ breed displaying a voracious appetite for fleecing those with the least to spare, based on the highly credible tradition of cretins such as P.T. Barnum, and displaying a fondness for evangelizing unverifiable and often directly bigoted beliefs, descending (in every sense) from the type of whom Pat Robertson is a particularly odious example, has unfortunately hijacked the place formerly occupied by men of science such as Darwin, Heisenberg, Bohr, Einstein, Wheeler and others. These ignorant, bigoted cretins, whose only credibility rests on their entertainment value to a bored public, have become the archetype of the purveyor of supposedly current 21st-century science, notwithstanding that most have little to no understanding of actual science beyond the 18th century.

Comparing the yearly debates of Heisenberg and Bohr versus Einstein, Bell and others, carried out with phenomenally intricate and difficult thought-experiments through the 1920s and ’30s, with the level of the supposed debates between Dawkins and whichever of his straw-man ‘opponents’ needs extra cash this week is particularly telling in terms of the state of science’s current public persona, not to mention scientists’ view of the capability of the members of said public. The lack of any noticeable opposition to such charlatans from scientists themselves is telling as to how much stock the scientific community puts into its prestige versus how much it puts into promoting and enabling real research.

That the money donated to Dawkins’ misinformation group known as the “Foundation for Science” has gone to creating bigoted propaganda about religion and to lobbying for the re-privatization of the British university system, with none of it used for actual research on any topic whatsoever, says all that needs to be said about the real intent: diverting the public’s attention from the shady dealings its paymasters are engaged in. The mealy-mouthed excuse that the public ‘can’t understand hard science’, usually illustrated with an example who is a half-wit even by fundamentalist standards, is a particularly condescending claim, considering that the much more intricate and complex debates of the 1920s and ’30s were popular enough with a public, most of which had far less education than the average person today, that they were front-page news for the days-long duration of each debate.

Which brings me to finish with a note regarding a recent quote from Neil deGrasse Tyson. Although I believe Mr. Tyson is generally sincere in his attempts to popularize something he personally finds fascinating, the sheer ignorance and massive egocentrism underlying a black American’s saying “Republicans are doomed to poverty because they’re ‘born into’ ignorance” are sad, ludicrous and embarrassing all at once. That it was published without so much as a blush in a pro-Democrat publication, one that continually criticizes Republican bigotry, with no one pointing out that it is as bigoted a statement as any of those they attribute to ‘right-wing nuts’, demonstrates how shallow their populism is: like most liberals, their implicit motto is that everyone should be treated fairly as long as they agree with us.

Writing off an entire group of people ‘from birth’, due to your opinion of particular beliefs that some members of that group hold, is screwed up from the start no matter who it originates from. That a member of a group that has endured far more than most, at least as far as recent history goes, from exactly this kind of bigoted prejudgement, one that doesn’t even wait for a child to develop into a human being before writing them off as worthless and ‘doomed’, should indulge in it both shocks and amazes me. That anyone with a reasonable education continues to side precisely with the group he is criticizing in their equation of ‘moral goodness’ and ‘wealth’, with no apparent awareness of the depth of irony his flawed assumptions create, is even more astounding. That someone who has, from birth, been a member of a group prejudged as ignorant, worthless and poor due to supposed moral failures rather than systemic bigotry can, without personal embarrassment or public ridicule, assume the holier-than-thou, absolutist beliefs scientists hold about their ‘principles’ and ‘methods’, to the degree that these overcome what ought to be a powerful suspicion of that kind of ignorance, demonstrates, if demonstration were needed at this point, that even for people who have done actual research, the boorish, bigoted, superior attitude of people like Dawkins is only an exaggeration of, and not a fundamentally different view of the rest of humanity from, that held by the community of scientists in general.

Tyson’s remake of Cosmos, originally done by Carl Sagan, suffers from the same intrinsic flaw. While Tyson, like Sagan, is sincerely fascinated by scientific insights, he is basically ignorant of the nature of scientific insight, which must include both its unique abilities, when contrasted with other human intellectual activities, and its boundaries, its limits. Any ‘thing’ is defined by its boundary, its limits, and not understanding the limits of science, an understanding that is by definition itself outside scientific understanding, means that you don’t understand science itself, only particular scientific statements. However exact Tyson’s knowledge of given scientific statements may be, his thought lacks the rigour necessary to delineate where science is relevant and where it is not: what lies within its purview, and what must remain outside it for it to remain scientific. That philosophy is by definition outside the purview of science is not a temporary limitation that will be overcome; in fact it isn’t a flaw in science at all, but the source of its strengths when used appropriately. That being a scientist has no positive correlation with having an acute understanding of social and political reality (if anything the correlation tends to be negative) is not surprising in the least, since the methods of each, and the abilities required to become adept in each, are so different as to often appear opposed. Polymaths do exist who can apply their intelligence to many areas of intellectual activity, but if you read a biography of any historically noted polymath (often referred to as a ‘renaissance man’), the ground of their ability to ask intelligent questions in a large number of fields, in many cases questions that had not occurred to specialists who had spent a lifetime working in those fields, is a rigour of thought that is wholly other than a simple penchant for exactness of measurement.
This rigour of thought is found most often in the best philosophy, simply because philosophy is nothing other than thinking well, and thinking well implies a rigour that goes beyond the exactness of mathematical projections, so far beyond that the two lack any meaningful basis for comparison. Only by thinking well can embarrassing slides into bigotry such as that noted above be stopped before they start. Only by thinking well can the limits of science, and thereby the nature of science, be appropriately determined. Only by thinking well can technology be understood in its being as a work and not simply a thing, in the same sense that we call something made by an artist a work, while we do not refer to a tree as a work. Only by thinking well can we avoid the temptation to treat human beings as things, or worse, reduce them from things that at least have context, meaning and their own unique character to interchangeable, and therefore easily exchangeable, measurable resources. Only by thinking well can we understand the inherent limitations artificially imposed by a purely technical view of reality, and realize that a technical view of reality is necessarily a creationist, theistic view, no matter how ardently its holder believes he is an atheist. Finally, only by thinking well can we expose the invalid assumptions underlying the various sciences, inherited from the milieu in which those sciences arose and matured, and show how the apparent paradoxes on which various specific sciences are currently foundering are produced, as all paradoxes are produced: by holding invalid assumptions as obvious fact, and by failing to question beyond the particular results of a given experiment designed to test a specific theory, where every aspect of the process works under the same invalid assumption and is, as a result, invalid from initial perception to final interpretation of the experimental results.

Why We Should Support a Guaranteed Annual Income and not just an Increase in the Minimum Wage

On the Minimum Wage


While Reich is spot on in his reasoning for raising the minimum wage more substantially than the ~$10/hr being proposed, there are a number of reasons why that wage increase, while helping a huge percentage of Americans, will also hurt a smaller but still significant percentage, and potentially increase the concentration of wealth in the top 1%. I’m not arguing against the idea of raising the minimum wage; in fact I would go beyond the minimum wage for those working, and give at least the amount he proposes as a guaranteed annual income for all Americans. However, without massive changes in the workings of the financial industry, the dangerous concentration of capital in a tiny percentage of individuals and corporations will continue to create instability dangerous enough to potentially destroy the economy to the degree that currency would become essentially worthless.


Raising the minimum wage to ~$15/hr (which would still likely be too low to keep a significant percentage of workers above a properly revised poverty line, and do nothing for those who for various reasons, often very good reasons, cannot work, at least in the current way in which employers expect people to work) would raise a good percentage of low income earners to just above a realistic poverty line. These people are, as a rule, those who have to spend every bit of their income and still must do without in terms of things we have been raised to expect. Ethically, and economics is really a branch of ethics, it is the right thing to do, and as such it also creates certain economic benefits. Since Reich has already listed the most important of these, I won’t bother to redo the list. I will, however, list the advantages of a GAI that would not be assisted by simply a raise in the minimum wage.


A guaranteed annual income has four significant benefits over a higher minimum wage alone:

    1. Those who for various reasons cannot work under current working conditions remain shut out of the economy both as producers (due to demands on workers that often clash with personal and family needs) and as consumers. This is one of the reasons countries with better unemployment fall-backs do better on average, and far better during economic slowdowns, than countries without. Taking away most of a worker’s income when they become unemployed hurts the economy doubly, as both their productive and their consumptive economic involvement suffer hugely. If unemployment goes up temporarily in a country without a reasonable UI system, the effect tends to spiral in a way it doesn’t in a country with such a program, because the loss of consumptive involvement puts the need for other workers at risk. Unemployment insurance, though, only mitigates the problem to a degree and for a limited period; any longer downturn will decrease its benefit to the overall economy. A guaranteed annual income addresses this issue to a much greater degree, and the improvement is independent of the length of particular economic downturns.

    2. Since the GAI goes to everyone regardless of need, the cost overhead of such a program is significantly lower than that of the complex set of assistance programs that each have their own eligibility rules. Since this overhead is a cost that assists neither productively nor, to any great degree, consumptively, lowering it will produce a net benefit to the economy as a whole.

    3. Since it is need-independent, it doesn’t penalize the majority if they in fact work for additional income. The few who are penalized are for the most part those who currently get the most benefit when all social programs, including tax breaks, are taken into account: the top 1%. This is primarily because the government could no longer afford to give its friends the tax breaks it currently does. The top 1% also has the highest percentage, of any income group, of members who do no work whatsoever, which makes them the biggest ‘idle’ welfare recipients in the country.

    4. Cutting tax breaks to the wealthiest individuals and corporations would implicitly assist in lowering the dangerous concentration of wealth. The reason an increased minimum wage would not have an equivalent effect has to do with the way the wealthy are able to maintain (and usually increase) that wealth year over year. The primary vehicle is interest. In order to ‘save’ or ‘invest’ money, someone wealthy implicitly requires persons or companies to spend that money via borrowing. The problem specific to an increase in the minimum wage is that it actually enhances the ability to borrow while not significantly reducing the desire to do so. If most people put their financial status ahead of everything else in their lives, then aside from a huge number of lives becoming absolutely miserable, the economy would disintegrate; and even prior to that disintegration, only if the government were willing to increase its borrowing to match the decrease in borrowing by individuals and private companies could those living on interest or investment income continue to do so. Money ‘saved’, if not borrowed by others, is simply money lost, and the money borrowed by those who have both the means to borrow and the need to do so in order to live in a reasonable manner is necessary to keep economic demand sufficient to make investment anything other than another way to lose money. A raise in the minimum wage alone, without an increase for those who cannot work, and without a change in the financial system that would cut down on the ability to live off others’ labor via investment and interest, would not significantly raise the income of those who currently have both the means and the need to borrow in order to maintain a decent lifestyle (covering such things as the costs of children and elderly family members, not particularly adding luxuries), but it would increase the number of those with the means, as well as the need, to borrow.
This increase in the demand for both goods and credit leads to a greater concentration of wealth in the top 1%. While the top 1% is likely to see some reduction in social assistance via tax breaks, the increase in the size of the pie, and in the percentage of the pie used up paying interest and taking profits, would more than make up for the lost tax breaks.
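The spiraling effect of lost consumer spending described in point 1 above can be made concrete with a toy sketch. Every number here is a purely illustrative assumption, not an empirical claim: each period, households spend a fixed fraction of their income, that spending becomes the next period’s income, and an income floor (UI or a GAI) limits how far the spiral can fall.

```python
# Toy demand-spiral sketch: all numbers are illustrative assumptions.
# Each period, households spend a fixed fraction of their income, and that
# spending becomes the next period's aggregate income. An income floor
# (unemployment insurance or a GAI) limits how far the spiral can fall.

def spiral(initial_income, propensity_to_consume, income_floor, periods=20):
    """Return the path of aggregate income after a demand shock."""
    income = initial_income
    path = [income]
    for _ in range(periods):
        income = max(income * propensity_to_consume, income_floor)
        path.append(income)
    return path

# A 100-unit economy where households spend 90% of what they receive:
no_floor = spiral(100, 0.9, income_floor=0)     # spiral decays toward zero
with_floor = spiral(100, 0.9, income_floor=60)  # decay stops at the floor

print(round(no_floor[-1], 1))    # prints 12.2
print(round(with_floor[-1], 1))  # prints 60
```

The point of the sketch is only the qualitative shape: without a floor, the loss of consumer spending compounds period after period, while a floor on income halts the compounding by construction, regardless of how long the downturn lasts.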


Those who have the most to lose in this situation are those who currently earn enough to pay taxes and get by on their income, but do not earn enough to pay for things they might not even think of going without, or could not go without while keeping their income (such as children, a house, a car). With a greater number of people having the means, and similar reasons, to borrow, the increase in demand will almost inevitably raise interest rates, and the increase in spending, while increasing the earnings of companies, will also increase the cost of doing business in terms of accounts-receivable cash-flow loans and profit-taking to maintain investment levels. A GAI, since it also increases, though by a smaller percentage, the basic incomes of those just above the level of ‘getting by’, will by a small margin decrease the pressure to raise interest rates, the increase in the cost of doing business, and the pressure to increase profit-taking in order to maintain investment levels.


The final point in favor of a GAI is that, like universal health care in the countries that have it, the universality of the program gives the vast majority of citizens an incentive to maintain it. Since only a small minority would not gain from the GAI, only a very small percentage of the population has an inherent desire to dismantle the system by electing a government that proposes to do so. A higher minimum wage, by contrast, benefits a large but significantly smaller percentage of the populace, and, as importantly, the percentage with the least political leverage, while hurting a much larger percentage (due to increased interest rates and prices, and increased pressure to keep non-minimum-wage salaries down to minimize the increase in payroll costs). This combination makes it much easier for a group interested in revoking the increase to gain sufficient public support to do so.


Obviously, since the problems that an increased minimum wage can cause are only partially mitigated by the alternative of a GAI, other things need to change simultaneously.


  1. The right-wing attacks on only one segment of those who do not work, those who are also poor, need to be eliminated by pointing out, and continuing to drive home until the majority get it: the reality that the majority of money given to those who are idle goes to the very wealthy, not to the poor; the reality that many of those who are poor and do not work cannot work precisely because they can’t afford to, either because paying for childcare or care of the elderly would cost more than they can earn, or because working would impact their health too greatly; and the reality that the majority of those who do work (and ironically, the higher the income, the higher the percentage that fall into this bracket) do not do anything useful. Well over half of American workers do nothing useful. In many cases the appearance of usefulness is simply created by an inverse (and equivalently useless) job role elsewhere that, since it exists, has to be countered. Eliminating both, via intelligent legislation that removes the apparent need for those workers, frees everybody from the perceived need to work as many hours as they do (which lowers productivity even further) and would force those who want ‘more’ than the universal income to find something useful to do in order to obtain that ‘more’. Those who would want more, whatever the actual level of the GAI, would include the vast majority of those who work now, because most people work, and work as hard as they do, in order to feel productive and to maintain their social status and the respect of their peers, more than simply to cover the cost of living. The attacks on those most in need of assistance are only justifiable via a backwards and archaic morality that goes directly against both ethical and economic considerations.
Eliminating the specter of the ‘lazy and useless poor’ on social assistance, by constantly reminding people that the lazy are primarily found in the richest neighborhoods, not the poorest, and the most useless in the highest income brackets, not the lowest, would significantly undermine any attempt to revoke any gains that might be made.

  2. Paying for the GAI by decreasing the concentration of wealth, and by eliminating both the non-productive overhead associated with assessing means in granting social assistance and the overhead associated with employing vast numbers of workers, for too many hours per week, who accomplish nothing of actual worth, would:

    1. lower stress due to working too much and the attendant medical costs from stress related illnesses

    2. increase productivity in the majority of jobs, because working beyond 30-35 hours per week, depending on the nature of the job, leads to poor attention to the work and the massive cost of rework that should have been unnecessary

    3. lessen the prejudice against those who cannot work due to social and family realities, since those who do work will need to actually find something useful to do, and this will be more difficult when useless jobs are demonstrated to be useless, and eliminated.

  3. These changes need not cause the economy to implode due to a lower rate of employment. Productivity would be increased by eliminating rework; costs would be lowered via the lowering of medical costs due to stress and via a lower overall payroll from eliminating jobs that don’t actually contribute to that productivity. Simultaneously, demand would be maintained and even increased by the simple fact of putting more money in the pockets of those who need to spend it, thus keeping the money in circulation.

  4. Keeping more of the available money in the economy in circulation is one of the best ways of lowering demand for credit, and therefore lowering interest rates, which are in fact the main cause of inflation. Workers demand raises as a consequence of interest-related inflation, and rarely get the equivalent of the increase in cost of living, since giving them that equivalent would simply raise inflation further beyond the interest rate that generates its minimum value.

Things, Bodies, and the Assumptions of Modern Science


The distinction between ‘things’ of the intellect, and ‘bodies’ that are external reals ‘there right now’, is one of the basic difficulties human beings have. Were this distinction clear to more than a few people, a huge number of pseudo-arguments could be consigned to the dustbin they deserve.  The best treatment of the topic I have found is that in Bernard Lonergan’s major work ‘Insight’.

A ‘thing’ in any area of experience is a collection of relations comprehended by the intellect as a unity. Its validity is contextual: what is valid in one area of experience, for instance biological experience or the dramatic experience of social life, is not valid in the context of physics or chemistry, and vice versa. A ‘body’ is something valid specifically in the biological realm of experience. A dog may take a passing interest in a still photo of another dog, but quickly loses interest, since the photo does not respond in the manner that things of interest to a dog do; for the dog it is not a ‘body’, not ‘real’ in the common-sense meaning of the term. The same dog, though, might sit for hours watching dogs on television (dogs will even do so, in some cases, when the televised dogs are cartoons): the interaction and behaviour, however a dog perceives and conceives them, qualify those as ‘real’, as ‘bodies’ that are external to it, there at the present time, and can be interacted with in the way that dogs interact with other real bodies. The problem of mereology (the study of wholes and parts) is fundamentally an inability to understand and maintain that distinction. At its extreme there is the mereological nihilism of someone like Cian Dorr, who maintains that the only ‘real things’ are subatomic particles; everything else is just an aggregation. The idiocy of this notion becomes vividly apparent when you attempt to comprehend the traffic patterns of a modern city as ‘really’ the result of the random movement of subatomic particles. Oddly, for a supposedly scientific mereology, Dorr’s ideas violate the first principle of a scientific ‘thing’: that it be verifiable via observational evidence. Since subatomic particles can only be partially and indirectly verified via the behaviour of the ‘things’ Dorr treats as arbitrary aggregations, his ideas rely on the very experiences of ‘things’ at a greater scale, the very experiences he is at pains to claim are ‘illusions’.
In the definition above the most difficult term, although we use it non-conceptually fairly regularly without really specifying precisely what is intended, is ‘relation’.  Defining a thing as a given unity of relations appears to ignore what common sense sees a thing as made of, in favor of something common sense finds difficult to accurately define.  For common sense, things are composed of parts, not relations.  Of course even the most pragmatic person knows that parts have to have some kind of arrangement to be a particular thing, but the notion of ‘relation’ is not especially clear when thought of in terms of ‘arrangement’.  To clarify the matter a little for those of the more pragmatic persuasion, I’ll indulge in a somewhat lengthy, not to say arbitrary, example:  

1. I own a car, as it happens a relatively old one.
2. Cars are made of parts, after all when something goes wrong, my first stop is likely to be an auto parts store (assuming I do the necessary labor myself).  
3. Given that my car is 31 years old, and I am its third owner, chances are pretty good that a fair number of parts have been replaced over its lifetime.  
4. Since the original manufacturer, at this point, doesn’t stock parts for such an old car other than regular maintenance items that sell in reasonable quantities, parts that have been replaced in, say, the last 15 years most likely didn’t come from the manufacturer, particularly since buying parts from someone else is generally cheaper than going to the dealer.  
5. Parts from someone else, of course, can’t be a random generic part.  Only parts designed for that car, for the most part, will fit on the car.  In the event a broken part is simply not available from anyone, my choices are to get a used part from a wrecker, or re-engineer everything that part interacts with in order to use a similar part from a different type of vehicle.
6. My car, like everyone else’s, is a specific type of car. If someone wasn’t familiar enough to know what type it is, my response on being asked would be to name the make and model, and possibly the year and trim type, if the variations are sufficiently different from one another.  
7. When choosing parts that were designed for my specific type of car, I have usually a few choices.  While the dealer may not stock all the parts I might at some point need, larger distributors do keep stock, and the parts are generally separated into three classes: OEM, OES, and aftermarket.  This separation is due partly to common ways in which cars are designed an manufactured, and partly due to different target markets for the parts’ makers.  
8. Most parts in any given car are not made by the car manufacturer themselves.  This would require any car company to be proficient in too many specialties, which generally results in an unmanageable operation.  Different companies specialize in, for instance, cooling system parts, fuel system parts, etc.  The car manufacturer designs the car, sometimes with already existing parts, sometimes specifying modifications to existing part designs that will improve cost or performance on a specific model.
9. An OEM or Original Equipment Manufacturer part is generally the part the car manufacturer themselves used when designing and building the car.  Buying an OEM part for my car, for instance a Behr radiator if it needs replacing, is replacing the original with an identical part from the same company.
10. For both logistical reasons and to meet regulated standards, however, for most parts a car manufacturer is obligated to have an alternate source.  This is for the car company’s own protection (if a supplier can’t meet demand for a part, the alternate is used and production continues) and for the consumer (if the OEM company went out of business or stopped making the part five years after you bought a new car, there would most likely still be the alternate around, and they would have increased incentive to continue making the part).  These ‘alternate’ parts, called Original Equipment Specification or OES parts, are generally not identical with the OEM part, but they are interchangeable in terms of functionality and performance.
11. The third class of parts, aftermarket parts, are made by companies who often had nothing to do with the original car manufacturer.  There are various reasons why a company would make an aftermarket part:  In certain cases, both the OEM and OES parts manufacturers for some reason no longer make the part, but there is still a sufficient demand in the market for another company to create a new part that will work in place of those.  In other cases, the OEM or OES part may be priced so much higher than manufacturing cost that there is a sufficient demand for a similar enough part at a lower price, these parts may or may not have completely interchangeable specifications – for instance ‘universal’ type parts fit well enough in most cars to be usable, but can’t be precisely interchangeable on a large number of different car types. Windshield wipers, for instance, generally fit most cars, although exactly how they are attached varies slightly from car to car – the universal part simply has close enough connectors for most cars, and in fact has two or three types of connector, each of which is close enough for a significant percentage of the market.   The last type of aftermarket parts are those designed to improve on the original part.  ‘Improve’ is a pretty vague word, and indeed how they ‘improve’ is often a buyer opinion.  Some aftermarket manufacturers, for instance, specialize in looking at commonly problematic parts on cars that have a relatively long life expectancy.  By fixing a fault, for instance, in a part that didn’t show up until the cars were five or ten years old, the aftermarket parts maker can hopefully convince parts buyers that their part is superior, and if used will not need to be replaced as quickly as another OEM or OES part.  Other ‘improved’ aftermarket parts may be cosmetic, or simply trend-conscious.  
12. The question, then, that all this is leading to: given that a fair number of parts have most likely been replaced (and some I know have, because either I replaced them or they are noticeable modifications to the original design), is the car still ‘the’ year, make and model today that I said it was last year, or that the company advertised it as 31 years ago?  Not only are some of the parts not original, they are different in design and come from a different origin than the original parts, yet the car is still the same car.   This ‘sameness’ is not an illusion, as the more physicalist-minded mereologist might claim.  Even after major parts have been replaced, the car has a distinctive feel, a distinctive manner of operating, of turning, braking, etc., that a competent driver would soon intuitively recognize, and quickly associate definitively with that type of car.  
13. The answer is of course that the car is not a collection of parts. Quite simply, if it were merely an ‘aggregation’ as mereological nihilists claim, a Ferrari in a parts bin should be as immediately recognizable as a Ferrari on the road as you hear the distinctive exhaust sound and see the distinctive shape leaving my 31 year old sedan in the dust.  In fact, only a Ferrari engineer would recognize one in a parts bin, even a Ferrari driver would be unlikely to immediately know it was a Ferrari, unless by chance the part had the insignia on it.
14. So, then, the car is a unity of relations, since the identity of the parts themselves is not decisive in determining the identity of the car. What, then, is a relation?  The key is in the fact that unless I purchase an OEM or OES part, I’m not quite guaranteed that the part will have the exact relations the original had.  Nissens, for instance, makes aftermarket cooling system parts, mainly for European cars.  In many cases these are actually higher quality than the OEM or OES part, but any backyard mechanic who has used them knows the double entendre of their slogan “Making the Difference”: a very common side effect of the differences they introduce to improve the durability and performance of the part is that the parts directly related to it require minor but often time-consuming and annoying modifications.  Even in this case, though, the car is still the same type of car; the slightly modified minor relations do not change the more effective relations that determine the car’s identity.  Those relations, which are often at a higher scale but need not be, must be kept very close to the original for a car to feel ‘right’ in the way that car did originally.  
15. When a car doesn’t feel ‘right’ after work has been done (assuming the ‘not right’ feeling isn’t due to something having been done wrong) the interaction of some aspect of the car with every other aspect is not the same as it was.  A relation, then, is a determinate and identifiable potential to interact.  

Now a car, as you might gather, is also a body to those of us raised in 20th- and 21st-century civilization. It is ‘out there now real’, as Lonergan puts it.  However, something need not be a body to still be legitimately a ‘thing’ in a specific area of experience.  A sentence, for instance, is a ‘thing’ grammatically.  It has a determinate range of structures that allow it to be identified as such.  However, it is not in the usual sense a ‘body’; it is precisely not ‘out there now real’, but has a different mode of being, one accessible only to the intellect, and only to the intellect in a specifically grammatical mode.  Many, if not most, of the ‘things’ that interest scientists are of this type.  To speak of them as if they were bodies, however, is to talk the kind of nonsense Dawkins spouts: a gene is not a ‘body’ to either itself or anyone else, much less a ‘selfish’ body.  It is only a gene for a scientist if it is apprehended as such via the intellect, using a complex set of tests that in many cases are still inconclusive.  The gene, more cogently, is only a gene to a self-aware creature that posits it as a factor in its own origin.  For itself, were it to have the type of cognitional ability Dawkins apparently ascribes to it, it still wouldn’t be a gene, selfish or otherwise; to itself it could only be a structure of chemicals that under certain conditions will copy that structure to a different set of chemicals, replicating its own structure. Dawkins, like most science ‘popularizers’, subsequently claims this to be a mere metaphor for the public that doesn’t understand ‘hard’ science, but never mentions what is supposed to be said if the metaphor is dropped, precisely because nothing at all is said if the metaphor is dropped.  
Researchers in their daily work probably rarely pay attention to the ‘as if’, the metaphor that keeps their work from being nonsensical, because the metaphor, rather than being described in natural language as an imaginative projection, is described in symbols as a mathematical projection.  But only what can be imaginatively projected in some manner can sensibly be projected mathematically, and the only arbiter of a mathematical question that could have multiple answers is found in which implied imaginative projection is impossible and/or paradoxical.  Physicists don’t need to consider the ontological validity of their equations at each step, but determining whether an infinity that crops up is a ‘real’ infinity or a spurious one triggered by the infinite divisibility of the ‘real’ number system, an infinite divisibility not found in reality itself, requires that the physicist revisit the ontological implications of that infinity, to determine whether it in fact makes sense.  The metaphors that are always involved in a scientific explanation put into natural language are not artifacts of some mysterious difficulty; that claim is simply obscurantism, and as such no different from the obscurantism of any other set of people who claim an ‘extraordinary’ knowledge not sharable with ‘just anyone’ but only with the initiated, in other words a magician’s contrivance.  These metaphors are the intrinsic content of an explanation that is for the most part handled in formal symbolic mathematical projections, rather than in the mode of imaginative hypotheses in which they originated and still have their proper sense and meaning.  If a scientist’s metaphors don’t make sense, in most cases his theories are also nonsense, merely couched in a formal language not well known to the majority of non-scientists.  The much worse problem, though, occurs when a scientist does understand his field, and his metaphors do make sense within the context of his science.  
The temptation is always to go a step too far and claim that that is how things ‘really are’.  This is the problem that plagues even intelligent, sincere attempts to convey the context and meaning of scientific insights to a wider audience.  As a creature that is still also an animal, despite his intelligence and reason, what every scientist (or at least every human scientist) has in common with the general populace is a desire for ‘their’ things to be ‘really real’, i.e. to be satisfying in the way that an interaction with an ‘out there now real’ is satisfying to an animal’s desires and necessary to its survival.  The claim of Carl Sagan, repeated most loudly by Neil Tyson, was a sincere claim, but only a half truth.  Living as we do in an age where much is revealed by technology alone, we do need an understanding of the essence of technology as a mode of revealing.  However we do not particularly need an accounting-for of what technology reveals that rarely catches up with what technology has revealed, and certainly not one that performs that accounting only in the terms already set out by technology.  Modern science is precisely that accounting-for of what technology has already revealed in a technological manner, which is to say as measurable and therefore exchangeable, and nothing more.  For something even to become an object for science it has to already have been revealed in some manner, and since the technology of the telescope reinvented astronomy, that manner has been almost exclusively technological, at least in the natural sciences.  The human sciences, over the same historical period, have merely managed to ineffectually ape the same type of accounting-for, with very few of the requisite technological revealings to account for.

Plato began to understand the difference between things and bodies a couple of millennia ago; Aristotle was clearer on the matter. It took Augustine a couple of decades to figure out what Aristotle and Plato were talking about, but he wrote the problem and the solution out in a more explicit fashion than either. It took Aquinas a number of years, even though he had Augustine’s writings as a guide. It took modern physics four hundred or so years. Not until the 20th century, with Heisenberg, Bohr, Einstein and a few others, was science able to deal with its ‘things’ purely as scientific things, without requiring that they be externalizable as ‘at hand bodies’. The majority of scientists today, including a competent physicist such as Neil Tyson, never mind the stamp collectors of science such as Dawkins, still don’t understand the difference.

Related to this are the twin problems of validation and evidence. Scientific evidence and validation are a very peculiar case of what is considered validation and evidence in a more general sense, because virtually nothing in science can be validated without at least indirect observational, empirical evidence.  A relatively high percentage of valid ‘things’ in the sciences are inherently imaginative projections that create models, and those models are judged by how accurately they predict relatively simple interactions. This is fairly useful, but also extremely limited in its basic truth claim. No matter how correct, imaginative models (and mathematical models are simultaneously imaginative models) cannot lay a fundamental claim to truth, precisely because the things they accept as valid are already removed from their meaningful context, and only when so removed do they become objects for science. Without a meaningful context, any statement may be correct or incorrect, but the question of truth and falsity is not thereby answered, because it is never even raised. Scientific ‘objectivity’ has its only valid basis in this removal of a scientific object from the contextual situation that makes it meaningful as the thing that it is, which is to say, its being. As an example, a scientist may be able to tell you all kinds of things about a statue, viewed as a scientific object: its age via carbon dating, where it was found, the distance from the found object to the location of the materials from which it was made, etc. But in becoming an object for science, what science cannot tell you, without stepping out of the scientific mode of experience, is what it is, which is always what it means in a specific context. Nothing that science can tell you would distinguish an object, say, as a statue of a Buddha rather than a bust of Elvis. 
Of course one might be able to infer much of the object’s original being via an interpretation of scientific statements, but in each case one has to bring non-scientific understanding from other areas of experience to accomplish the inference.  This is not a fault of science; rather it is precisely its strength when considered appropriately.  However what can be considered ‘objectively’ is of necessity still extremely simple.  To put the supposed ‘power’ of advanced mathematics and physical models into perspective: we can predict the potential movements of an abstract model containing only identical, perfect, abstract bodies and a single idealized force when there are two bodies in the model, but not more.  Our most powerful and advanced mathematics fails, in the sense of providing no general closed-form solution, as soon as a third such body is added to the model.  
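The point about two bodies versus three can be made concrete with a small numerical sketch (my own illustration, not drawn from the text): for two gravitating bodies there is a closed-form solution, but for three the only recourse is step-by-step numerical integration. The masses, G = 1, and the initial conditions (the well-known ‘figure-eight’ periodic orbit) are choices made purely for this example.

```python
# Three equal masses under Newtonian gravity in 2D, integrated with a
# leapfrog (kick-drift-kick) scheme.  No formula exists to consult for
# the general three-body case; we can only step the model forward.

def accelerations(pos, masses, G=1.0):
    """Pairwise inverse-square gravitational accelerations."""
    n = len(pos)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            acc[i][0] += G * masses[j] * dx / r3
            acc[i][1] += G * masses[j] * dy / r3
    return acc

def simulate(perturb=0.0, steps=2000, dt=0.001):
    """Integrate the 'figure-eight' three-body orbit, optionally
    perturbing one initial coordinate by `perturb`."""
    masses = [1.0, 1.0, 1.0]
    pos = [[0.97000436, -0.24308753],
           [-0.97000436, 0.24308753],
           [0.0 + perturb, 0.0]]
    vel = [[0.46620369, 0.43236573],
           [0.46620369, 0.43236573],
           [-0.93240737, -0.86473146]]
    for _ in range(steps):
        acc = accelerations(pos, masses)
        for i in range(3):          # half kick, then drift
            vel[i][0] += 0.5 * dt * acc[i][0]
            vel[i][1] += 0.5 * dt * acc[i][1]
            pos[i][0] += dt * vel[i][0]
            pos[i][1] += dt * vel[i][1]
        acc = accelerations(pos, masses)
        for i in range(3):          # second half kick
            vel[i][0] += 0.5 * dt * acc[i][0]
            vel[i][1] += 0.5 * dt * acc[i][1]
    return pos

# Two runs differing by one part in a million in a single coordinate:
# the difference can only be found by running the model, not solving it.
base = simulate()
nudged = simulate(perturb=1e-6)
drift = max(abs(a - b) for p, q in zip(base, nudged) for a, b in zip(p, q))
```

The contrast with the two-body case, where Kepler’s solution answers every such question in closed form, is exactly the limitation the paragraph above describes.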

People who ‘believe’ in science as something it isn’t will immediately point to our ‘advanced’ understanding of nuclear power, and ‘advanced’ breakthroughs in medicine.  Just to set them straight: our ‘advanced’ understanding of the atom, as exemplified in nuclear fission, doesn’t suffice to explain why we can only create the effect with a couple of isotopes of specific elements that happen to have unusual properties, nor do we have any understanding that can relate the atomic model of an element to its behavior, other than behavior affected by mass and nothing else.  In fact, those unusual properties of those specific isotopes were precisely the properties that the model of atomic radiation attempted to account for in the first place.  A circular argument may be vicious or virtuous, depending on the degree to which it advances our intuitive understanding of a topic.  But a circular explanation is at root a non-explanation.  

The advances in medicine over the last couple of centuries touted as evidence of science’s prowess have largely been achieved by the highly scientific methodologies of dumb luck and trial and error.  The most common treatment for manic depression was discovered to be effective by a doctor who was merely trying to use its dulling properties to quieten some particularly boisterous manics.  Antidepressants were discovered while testing a couple of particularly ineffectual antihistamines.  In neither case do we have any understanding beyond guesses as to how they work.  How does the medical establishment look for other potential medications?  By trying out chemicals that look similar, also a highly scientific methodology.  Finally, how do medical researchers determine whether a similar-looking chemical in fact has the desired therapeutic effects?  Primarily by asking the people who take the chemical in a research trial.  Notch one up there for the power of scientific ‘objectivity’, when the result of clinical trials is determined primarily by collecting samples of subjective judgments.  I’m not damning medical research, any more than I am denying the undoubted effectiveness of nuclear power based on specific isotopes of uranium; I am merely pointing out that the supposedly well understood ‘scientific’ basis of both is for the most part mythical, as is the supposed use of scientific methodology in their discovery.  The most common means of discovery of anything new over the last few hundred years has been technology, and by that I do not mean an unexamined entity referred to as ‘science and technology’, for the good reason that technologists do not in general use scientific methodology; it would interfere with what they in fact do.  Nor are technologies ever based on science: what is revealed by any given technology, as the intrinsic meaning of that technology, has to have already been revealed by that technology before it can become a possible object for science.

In other areas of human activity and experience, direct observational, empirical evidence is not a necessary or even usual criterion for verifying the validity of the ‘things’ relevant to that area of experience: our social and political ‘things’ are validated by our ability to get by in the social and political milieu we live in; biological ‘things’, which are generally the same as ‘bodies’, are validated by our survival and ability to reproduce, not by empirical observation; our common sense ‘things’ are validated by our ability to accomplish tasks such as driving to a mall, buying what we need (and knowing what that is) and getting home without any detrimental incidents. None of these modes of behaviour requires observational evidence for validation of its truths, nor can those truths even be expressed in an ‘objective’ manner, in any meaningful sense of the term. The ‘things’ of science, interesting as they might be to a scientist or a curious layman, are irrelevant to a carpenter insofar as he is engaged in being a carpenter, and a scientist claiming it is ‘more true’ that a table is made from subatomic particles than from wood is just promoting an unverifiable model over the validated reality of a carpenter’s work. Scientific nonsense such as the ‘hologenome’ as the unit of evolution, notwithstanding that it comes a tiny bit closer to sense than the organism as the unit, is only possible where an organism and its microbial contingent, as well as evolution itself, are posited as ‘bodies’, as ‘out there now reals’, and not simply as unities of explanatory conjugates in a specific scientific context. The nonsense arises from the ambiguity with which we cross from things to bodies and back without distinction. Evolution, in its most general sense, is not itself a ‘thing’ but a description, specifically a description of how nature as a whole appears when viewed historically. 
The Neo-Darwinist myth that Darwin’s ideas replaced teleology and teleological understanding, rather than supplementing our understanding of how teleology might function, aside from contradicting Darwin’s express statements about his ideas as inherently teleological, makes no sense in a concrete context, and is only possible if evolution is viewed as something ‘out there at present and real’, a ‘body’ in the sense that humans, like other animals, understand perceived unities in their environments. A description is only a ‘thing’ to the active intellect; it has no subsistence ‘out in reality’. As a description of nature viewed historically, the only sensible ‘unit’ of evolution is nature itself. This can be easily demonstrated by the simple reality that individuals do not in any general way evolve, nor do individuals together with their microbial attendants, nor do species, simply because any significant evolutionary change generally creates a new species. Where teleology still applies is in attempting to answer the basic question surrounding the evidential history of nature: why does nature, viewed historically, appear as evolutionary?  Natural selection itself contains an implicit teleology, simply because the ‘environment’ involved in selecting is only an environment when posited as such by an organism that has already distinguished itself from the rest of reality.  That ‘rest of reality’, or the portion of it that is relevant because the organism can interact with it, is now its ‘environment’, which of course leaves the origin of the organism as teleological.

From the practical perspective of understanding why nature should appear as evolutionary, nothing in any of the various modifications of the notion of natural selection appears to favor a general tendency toward increasing diversity and complexity; in fact the organisms with the most robust survivability and the greatest likelihood of recurrence are those that are simple and very much alike. Bacteria survive in more environments, and more diverse environments, than any multicellular organism, and despite the vast variance of those environments, bacteria all look pretty much alike; indeed the genotypical diversity of bacteria far outstrips their phenotypical variation. From a physicalist perspective, the second law of thermodynamics, entropy, would by all intuitive projections imply that over time beings should become less complex and less diverse, due to the overall loss of structural energy in whatever scale of system we take as the operative one. The laws of thermodynamics are not themselves explanations but descriptions of what actually happens on a directly observable, i.e. human, scale, while the other two main branches of physics deal with the extremes of non-human scales, the micro and the macro. This is the reason thermodynamics is always the arbiter of theories in particle physics and relativity physics: if a model in either contradicts directly observable evidence, which is fundamentally laid out in the laws of thermodynamics, then it cannot be a correct model, since it is itself unverifiable and contradicts the verified.

The evidence of the historical record of nature, though, shows a general tendency towards evolutionary change. This evidence is not something which existed in the past, as simplistic creationists often argue, but exists precisely in the present, as the historical record of our origins.  Dinosaurs exist (beyond those particular dinosaurs that are not extinct, i.e. birds) as fossilized historical records. That they ‘were’ extant in some sense millions of years ago is something we can validate and have validated, albeit indirectly, via other directly verifiable observations. Precisely what that sense was we cannot know, since our notion of being is dependent on the human context of any being’s extantness, i.e. its meaning.  Obviously they did not at that time exist as part of our historical origin, which is how we experience them.  Their experience of themselves is as fundamentally unknowable as a galaxy’s experience of itself.

Evolutionary change is not random change, but change with a specific tendency towards ‘more’ in terms of diversity and complexity, though occasionally interrupted by catastrophic incidents, some of which are self-caused by nature. We implicitly recognize evolutionary change as this specific type, as is demonstrated by our immediate ability to distinguish between evolutionary, random and devolutionary change in most cases (as in many things there are gray areas, but these are fairly low in incidence percentage-wise; we tend to over-focus on them because they are rhetorically useful).  We also implicitly recognize, in historical change as evolutionary, the history of nature as our origin.  We experience human history implicitly as the origin of our social being, and natural history implicitly as the origin of our bodily being.  The difficulty of demonstrating a reason for this tendency, outside the implicit teleology of being our origin, is precisely the root of the issues many have with the concept of evolution.  Those who jump on the ‘scientific’ bandwagon attempt simply to silence such critics, rather than attempting to understand why they have an issue with an idea that not only appears obvious to most modern, scientifically educated minds, but in fact was historically the first known hypothesis regarding natural history, predating creationism by many centuries.  In all the science of natural history done since Darwin, his basic and most important insight, that nature is historical, and that the history of nature is a history of increasingly complex and diverse interplay between all its aspects, has been completely missed in favor of a naive belief in simplistic causality, and almost an exterminative campaign against anyone who dares to wonder why nature would have such a history when the inverse would seem more probable.  
It is in the ideas involving systemic self-organization, self-optimization and strong emergence of both physical and meta-physical systems, and in the retroactive nature of teleology itself in the meta-logic of Hegel and others, that a satisfactory understanding of what that basic question contains, and the beginnings of an attempt at a satisfactory answer, may be found, not in the equivalently misguided attempts of naive materialists and naive theists to simply explain it away with their opposing beliefs.  The constantly fueled ‘argument’ between the two serves no purpose other than the desire of those enjoying privilege within the status quo to maintain that privilege, while the two sets of people most likely to challenge their right to that privilege argue over two equally worthless positions, on a question of tangential and abstract importance.

Applying scientific methods to other modes of reality, and only considering ‘things’ if they qualify as such in a scientific context, is not only a sure way of being annoyingly irrelevant; it falsifies what science itself is about and what it can and cannot do. As a result it is not only irritating but dangerous. A common statement that exhibits this generalization of scientific validity outside its relevant areas of experience is “only what can be measured is real”. Even a cursory consideration of biological, psychological, social or political realities shows the statement to be nonsense, yet many scientists and so-called ‘scientific philosophers’ continue to make the same or equivalent claims without even realizing how inane the statement is. In fact it is only valid within science, and only to the degree that science already accepts only what is revealed technologically as real. This is not an ontological reality, merely an inherent limitation in the way technology reveals what had previously been hidden. The way technology reveals is not incorrect, but neither is it close to being comprehensive. Modern science, which is simply ‘science’ to most people who have neither studied ancient science nor done advanced work in postmodern science, originated in technology, most notably the technology of the telescope. It was justified via a theology that intrinsically posited a creator being with the nature of an ‘ultimate engineer’. The origin of the claim that “only what can be measured is real”, or in our terms is a ‘body’, is an indemonstrable posit that all reality can be modeled by a mathematics that isn’t even defensible, in the post-Cantor era, as mathematics, never mind as a model of reality.

Virtually every attempt to apply such thinking to areas such as social and political reality has resulted in a series of pseudo-sciences, from phrenology to social Darwinism, socio-biology, evolutionary psychology and neuropsychology. All these pseudo-sciences have two things in common: by applying scientific methods where they are not relevant they create an appearance of validity for propositions that are in reality ‘just-so’ stories; and those ‘just-so’ stories are almost invariably themselves based on unquestioned assumptions that arise from, and simultaneously assist in maintaining, the status quo of the society into which their participants were initiated. The last three have the dubious prestige of being promoted by a certain Richard Dawkins, who is evidently interested only in maintaining Oxbridge privilege, even at the cost of irreparably damaging actual science.

Of course nearly anything can be justified via this misapplication of both scientific method and unquestioned scientific assumptions: selfishness and altruism; rapacious capitalism and stifling totalitarianism; justification of the maintenance of privilege and justification of any means whatsoever to eradicate it. The ‘evidence’ of these pseudo-sciences is irrelevant in the human, political and social realities they operate in. Superstition and false conclusions arising from invalid assumptions are by no means the exclusive province of cults and religions; in fact the most pervasive and most lethal have generally been propagated by so-called scientists who themselves believed their superstitions to be factual. This tendency is itself a psychological and sociological issue – neuropsychology is merely a neuropathology. Within the real sciences, which is more serious as far as any consideration of science itself goes, scientists still constantly justify simplistic theories via a superstitious attachment to Occam’s razor, the principle that the simplest sufficient explanation is always to be preferred, although the historical record demonstrates more than sufficiently that the most common reason for the failure of scientific theories has been that they were too simple. To make the irony that much greater, the justification for Occam’s razor is a very particular (and theologically naive) Protestant Christian idea of the nature of god. By contrast, the more Catholic theological equivalent is the argument of Leibniz: that what satisfies the human imagination the most is to be preferred. 
Leibniz posited the human imagination as the closest thing to a god’s creative power, which requires an underlying notion of that god’s nature as being more like an artist than an engineer: rather than doing things the most efficient way, once and for all time, the unspoken assumption is that god tries this and that, blots out things that don’t work, and tweaks his work until he is finally either satisfied with it, bored of it, or both. That both principles are theological in their basis is part of the historical reality that modern science developed in a world for which a creator being was simply a reality, and not a question. The lack of questioning of the basis of the one more usually preferred, though, points to the more serious issue: although science constantly questions its results, it rarely questions the assumptions that guided it from the initial perception, to a hypothesis, to a developed and testable theory, to the design of the tests to validate that theory, and to the interpretation of the results of those tests. Science can, and probably should, rid itself of its theological dependencies. Very few non-religious scientists realize that doing so would involve a revaluation of every scientific observation, every theory, and every acceptance of the interpretations of the experimental validations of each theory. Even fewer are willing to allow the inevitable loss of prestige and credibility such a revaluation would cause the sciences in general society. Those outside the intellectual worlds of science and the philosophy of science would inevitably take it not as a positive step in ridding itself of the superstitions it has so pointedly criticized elsewhere, but as an admission of the complete failure of science and its methods.

The world we need to navigate as members of human society is far more complex than science can either model or validate, yet we have to make difficult decisions and complex judgments on a constant basis. An item-by-item logical comparison between living in the United States and living in Europe, with scientifically verifiable observational evidence for every factor, could not be completed in a lifetime, and by the time it was partially done both societies would have changed sufficiently to make the completed work irrelevant. But if the relevant manager at the job we work at asks us to relocate from one to the other, we haven’t that kind of time; we have to go with the political, social and common sense realities we already know from having been initiated into a specific political and social milieu. The ethical, social, and political ramifications, both for ourselves and for those closest to us, are drastic and inherently unpredictable, but the choice has to be made, and we have to take full responsibility for it.

Which brings us to the last and most dangerous theological inheritance of modern science: predetermination. While the theological complexities of how some religious sects deal with the apparent contradiction of free will and an omniscient creator are themselves an interesting topic, modern science did not originate within such a complex understanding of theology. Even today, when both quantum and relativity physics have been forced to abandon the reductionist, physicalist determinism of early Protestant theology, that determinism remains fully operative in most of the other sciences, where the exceptions to the likely course of events are rare enough to simply ignore. Although the bottom-up ‘explanations’ of the Neo-Darwinists and converged theorists contradict both mathematical and experiential evidence, and lead to paradoxes that can only be resolved by abandoning the invalid assumption on which they are based, many if not most scientists hang on to the belief that such explanations are at least theoretically possible, if impractical, with a fervor that rivals the most fervent of religious fanatics.

Technologists, perhaps fortunately, perhaps not, do not themselves use scientific methodology, except perhaps as an occasional self-critique. They use what works, and don’t bother with accounting for its working. Modern science, looked at properly, is precisely an accounting for what technology has revealed, and continues to reveal. The last irony I’ll mention is a hypothetical discussion between physicalist, non-teleological biologists taking place over a 3G or 4G cellular network (whether they’re using phones or iPads is up to the reader’s imagination). The topic of the discussion is a current controversy within biological circles: whether self-organization, self-optimization, and strong emergence are possible. The irony is that, while the technologists who invented the techniques used in 3G and 4G networks cannot scientifically account for how these systems increase performance on what is basically the same hardware, the networks do so via self-organization and self-optimization of an emergent system, using algorithms inspired by a speculative philosophy of nature, the basis of this strongly emergent system being the dynamic relations between the physical and logical components of the system itself. Perhaps someday scientific biology will be able to account for what we (and it) already not only use, but take for granted.
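The flavor of self-organization at issue can be shown with a deliberately simple toy (my own sketch; the actual self-organizing network algorithms standardized for cellular systems are far more involved): each base station picks a radio channel using only local information about its interfering neighbors, and a globally conflict-free channel assignment emerges with no central coordinator ever computing it.

```python
# Toy distributed channel assignment: every cell repeatedly applies one
# purely local rule ("if my channel clashes with a neighbor, switch to a
# free one").  The conflict-free global state is nowhere programmed in;
# it emerges from the repeated local interactions.

def self_organize(neighbors, n_channels, max_passes=10):
    """neighbors: dict mapping each cell to its interfering cells."""
    channel = {cell: 0 for cell in neighbors}   # everyone starts clashing
    for _ in range(max_passes):
        changed = False
        for cell in neighbors:
            used = {channel[m] for m in neighbors[cell]}
            if channel[cell] in used:           # local conflict detected
                free = [c for c in range(n_channels) if c not in used]
                if free:
                    channel[cell] = free[0]
                    changed = True
        if not changed:                         # stable: nothing left to fix
            break
    return channel

def conflicts(neighbors, channel):
    """Count interfering pairs sharing a channel (each pair once)."""
    return sum(channel[a] == channel[b]
               for a in neighbors for b in neighbors[a]) // 2

# Seven cells in a hex-like interference pattern, three channels.
# (The topology and channel count are invented for this illustration.)
cells = {
    0: [1, 2, 3, 4, 5, 6],
    1: [0, 2, 6], 2: [0, 1, 3], 3: [0, 2, 4],
    4: [0, 3, 5], 5: [0, 4, 6], 6: [0, 5, 1],
}
assignment = self_organize(cells, 3)
```

No node ever sees the whole network, yet the assignment the loop settles into is globally free of interference; that gap between the local rule and the global result is precisely what ‘emergence’ names in this context.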