The Heap Fallacy

The nature of truth can be confusing, and rarely is it more confusing than in the metaphysical distinction between groups and members. Is a chorus nothing more than its individual tones? Or a symphony nothing more than its individual notes? On some level, we must answer: how could it be anything more? And yet the relationships between tones and notes seem to play a relevant role, and relationships disappear when we consider everything individually. What I take from this, and will examine in this brief post, is that sometimes a general truth means something entirely different from the particular truths from which it is abstracted. For example, it is true to say all living humans require food to remain alive, but it is not necessarily true to say they need apples or oranges or grapes or meat or cheese or bread or pizza or pork belly and sauerkraut sandwiches, etc. Thus, there is no particular food that humans need to eat. But it does not follow from this that human beings do not need to eat any food. Food is still required.

I have often encountered arguments of this type, particularly in political economic debates. They go in both directions. One variety says: the needs of the group cannot be determined, and so individual needs are indeterminable. The other variety says: the contributions of individuals cannot be calculated, and so the value of their combined inputs is indeterminable. Arguments of these types are often invoked both in support and in condemnation of markets. But both are in fact heap fallacies, which turn on a conceptualization problem. Heap fallacies exploit ambiguity and are all too common in philosophy, politics, and economics.

The heap fallacy is related to the “sorites paradox” and is sometimes called the “continuum fallacy”. The fallacy works by rejecting a claim based on the vagueness of the terms used in the claim. For example, imagine I lay a single straw on the ground. This surely does not constitute a “heap” of straw. Now let me lay another on top of it. Still, we would not call it a heap of straw. And if I were to lay a third, still no. The fallacy would then be to conclude that since no single straw could be responsible for the change from a non-heap to a “heap” of straw, there is no heap of straw. The problem in this example is the term “heap” itself, which is arbitrary. However, in this case, its arbitrary nature is not logically relevant. We can draw the line anywhere, and so we can refine our definition of “heap” to anything. Perhaps three straws are a heap, or three hundred, or three thousand… the point is that our freedom to draw the line anywhere does not mean that the line fails to constitute a real distinction.

This type of argument, which holds that if specifics cannot be given then a claim must be false, is rarely recognized as fallacious. Generalities are not necessarily vague, as this fallacy contends. As in the example above, to say that food is required is not the same as to say that this or that particular kind of food is required. Still, “food” as a generality is required for life despite our apparent inability to specify which particular foodstuffs. It is all too easy to play with this distinction between the general and the specific. One can accuse any generality of being abstract, metaphysical nonsense. And to make matters worse, sometimes generalities can be just that! A group of strangers could be assembled into any made-up category, but this doesn’t necessarily mean the category has any greater significance.

At the same time, one can accuse those who focus narrowly on the specifics of missing the “big picture”, as with the food example. Again, this is not always the case; sometimes a generality is simply arbitrarily applied. More interesting still are the cases in which something entirely arbitrary, that is, a made-up category, comes to take on the properties of a real distinction. For example, consider human “races”. On the one hand, it seems like “race” is an arbitrary and made-up distinction in which no clear line could be drawn. In this situation, “race” is merely a fantasy of Immanuel Kant’s devising, denoting no real distinctions. On the other hand, that would render “racism” a fictitious action. No one could be racist if “race” itself were not a real thing. And what would it mean to say things like, “sickle-cell anemia affects black populations almost exclusively”? The statement seems to convey some medical information, but if “black populations” is not a real category, then it is an empty statement. Once we avoid the heap fallacy, it would seem race is a real thing, but an arbitrary one.

Our only recourse to avoid the heap fallacy is to be aware of the poverty of such rhetoric and guard against its use. We are condemned to constant vigilance. It is all too easy to treat all arbitrary categories as though they were not real, but this is a mistake. Arbitrariness is not an indication of a lack of realness. Some very real things are arbitrary.

 

The Abortion Line

As I write, access to abortion is facing its greatest threat in more than forty years. Bills attacking it, some absurd and incoherent, have passed the legislatures of more than eight states, and most have been signed into law. With legal challenges pending, the goal of these new bills is to attack the precedent established in Roe v. Wade. In this post, I want to take a long, hard look at abortion rights and the conversations surrounding this topic. Mostly, I want to resolve a media distortion that magnifies the divide and separates people on the issue when they are, in fact, not so far apart. The conclusion I will draw is that we are all pro-choice; only some of us want to choose for others, while the rest want to choose for ourselves.

From the point of view of someone entrenched in the media, abortion seems to be the most divisive issue facing America today. But the reality is very different. Consider the following timeline:

[Figure: a timeline running from conception on the left to birth on the right, with developmental milestones marked in between]

At the left antipode is conception, the moment when a sperm fertilizes an egg and a single-celled life is formed. At the right is birth, when a human child would naturally exit its mother’s body. In between are a host of different “milestones” that have historically been associated with the abortion line. The abortion line is the point where a baby’s right to life supersedes a mother’s right to bodily autonomy. Make no mistake, every single one of us, no matter how “pro-life” or how “pro-choice”, believes in an abortion line. No “pro-choice” advocate believes in post-birth abortion (despite the current President’s hate-mongering rhetoric), just as no “pro-life” advocate believes in forced insemination so that no potential baby is wasted by never being conceived. These notions sound absurd to our ears precisely because we by and large agree about the abortion line, and even generally where it should fall: somewhere in the 37-week period between conception and birth.

This narrow band in the process of human beings coming into and out of existence is but a speck, and yet even the most extreme among us tend to agree that it is in this range that the abortion line must be drawn. It is our media coverage, with its over-developed sense of drama, that zooms in, distorting the reality until these rather close positions appear extremely divided. This microscoping effect leads to hostility and even violence as near-agreement becomes vast disagreement. Its effect is so strong that as you read this today, you’ll probably feel the need to argue that there is no “near-agreement”.

So let me defend that position a bit. Assuming you accept the medical timeline above and my stipulative definition of an abortion line, we might ask ourselves whether it is ever acceptable to draw it before conception. I have never met a pro-life advocate who was so strongly pro-life that they believed in the forced conception of girls and women in order to prevent the loss of children who would otherwise have existed. Now, what is the reason for this? If you are “pro-life”, consistency would dictate that you must force pregnancy on “selfish” women who would allow an opportunity to reproduce simply to go by through abstinence. But almost no “pro-life” people maintain this position. This is not because they are inconsistent with their beliefs, or because they are really anti-sex or misogynists. It is because they draw the abortion line. Before conception, these “pro-life” advocates assume a woman’s right to autonomy takes precedence over a potential life. So, for them, conception, or slightly later, is the right place to draw the abortion line, subordinating the mother’s right to bodily autonomy to the single-celled organism’s chance to develop into a fully-formed human being.

On the other side, we can ask whether it is ever acceptable to draw the abortion line after birth. I have never met a “pro-choice” advocate who was so strongly “pro-choice” that they believed in the killing of a toddler in order to free the mother from her attached responsibilities to the child. If you are “pro-choice”, consistency would dictate that a woman retain the right to abort her child indefinitely. But as before, no “pro-choice” advocates maintain this position. This is not because they are inconsistent or because they are deep-down “pro-life”. It is because they too draw the abortion line. After birth, these “pro-choice” advocates assume a child’s right to life takes precedence over a woman’s right to autonomy. So, taking these arguments together, we can see that both rights are emphasized by both positions, and the only real question becomes where specifically to draw the abortion line.

This is consistent with the Roe v. Wade decision, which held that the state had an obligation to protect both essential rights. However, the decision also punted the question of where to draw the line back to the individual states. States have varied, but most land somewhere before the third trimester. The decision elaborates a history of abortion, looking to antiquity for guidance on where to draw the line. It notes that most ancients, including the Greeks and Romans, were untroubled by abortion, even very late. Early Christians, following an Aristotelian emphasis on form and a spiritual sense of ensouling matter, settled on the “quickening” (around 18 weeks). For them, the quickening symbolized the moment when a fetus first became “human” and was granted a soul by God, recognizable by its independent movement and human appearance. The decision also notes that even where abortion was illegal, it was not considered the same thing as murder, and was often treated by law as a misdemeanor rather than a felony.

When it comes down to it, then, Roe v. Wade reflects our values quite well. It shows that both sides of the debate do respect both women’s autonomy and children’s right to life. Even meager honest reflection will reveal just how true this is. No “pro-life” advocate wants to have their own or their mothers’, wives’, or daughters’ medical decisions made for them by others. They believe that people should make their own medical decisions based on their own interests, perhaps with the counsel of a medical professional, but without the interference of the state. They just make an exception for unwanted pregnancies. At the same time, “pro-choice” women are hardly murderers. They do not advocate for abortions, only for women’s right to make the choice for themselves. They would not demand that parents who want to have a baby abort it. The argument, then, is really about whether or not society should be allowed to draw the line on medical intervention for individuals. On this point, I’m ardently libertarian. I don’t think society could make medical decisions for me, so it definitely shouldn’t. Arguably, then, either you are for the state making your medical decisions for you (somewhere “death panels” still echoes in the distance), or you are not. On that point, I think we are nearly universally agreed. “Pro-life” advocates then need to demonstrate why abortion is an exception to the rule, and the best grounds they have for it is a child’s right to life.

Of course, there are many other issues surrounding abortion: for example, whether outlawing it prevents abortions or just makes them less safe. But where to draw the line returns again and again as the central problem. Most conservatives want to draw the line at conception, thus tying responsibility directly to sexual intercourse. This argument is often presented as self-evident or scientific. “Life begins at conception,” so the argument begins. However, conception is just as arbitrary a place to draw the line as any other; life may well begin when the sperm and egg are produced in the respective parents, or when the respective parents are themselves born, or, on the other hand, when the fetus first begins to have the “form” of a human being, or when the independent multi-celled organism first attaches itself to the mother, forming not only what will become the baby but also the extra bits of organic matter, like the placenta. In this final case, the blastocyst was no more the development of a future baby than it was the development of a future placenta.

Many feminists want to draw the line around the third trimester, with a few going as late as pre-birth. Again, these limits do not say where we should actually draw the line; they demarcate the arena in which an individual should be free to choose. Taken as a whole, these “pro-life” and “pro-choice” arguments are both markedly pro-choice. But what is more revealing is that they also jointly create the window for abortion. Before conception, all of us agree that preventing the life process (as prevention or abstinence) is acceptable. After birth, all of us agree that aborting the life process is unacceptable. Thus, the political debate is one about setting boundaries, not drawing the actual line itself.

As I said, where we draw the actual line is ultimately arbitrary, which is why it is impossible to agree on it politically. What we usually argue over, but shouldn’t, is everything else that gets stirred up in the mix. Responsibility, sexual punishment, oppression, and much, much more are important questions, but they are not really as connected to the abortion debate as most of us would like to believe. We would all be infinitely better off if we could admit that we are very close on this issue politically, a mere 24 weeks apart, and that inside that window is where the abortion line should be drawn, with particular exceptions granted, such as in cases of medical emergency, rape, and incest.

That would leave us free to deal with the real issue: whether society should draw the line for every woman or leave the window open for each individual woman to decide for herself. The former position is not “pro-life”. It is anti-choice. It is not about protecting life but rather about controlling it. The latter is “pro-choice”, but not pro-abortion, in the sense that it leaves women free to not abort, that is to choose life. This is why I am pro-choice. I believe that within the structures that we (nearly) universally find acceptable and where the particular choice is rather arbitrary, free choice ought to exist. So, while it is acceptable that society (both men and women) may determine the window in which the abortion line can be drawn, the actual choice of where to draw the line itself, within that window, must belong to the individual woman for herself.

We See in Concepts, Not Phenomena

Charles Sanders Peirce once noted that it is an achievement of human excellence to see the world as an artist. What he meant is to see the world as it really appears, and specifically not as we conceptualize it. Similarly, Claude Monet once said of his friend and fellow painter Édouard Manet, “He comes to paint the people; I have come to paint the light.” This comment speaks volumes about what we see when we see what we see. If that sounds confusing, it is because what we see remains constant but what we see it as can change. Monet and Manet were in the same place and painting the same scene, but they painted it vastly differently because Manet was painting the concepts as he knew them while Monet was painting the phenomena as he experienced them.

Manet’s realism (left) captures the vision of our mind’s eye; Monet’s impressionism (right) captures light as our eyes see it.

I want to explore what that means. What did Peirce have in mind when he drew his distinction between phenomena and concept? I suspect that to see the world “like an artist” is to see the world precisely devoid of concepts, that is, to peel back every single layer of cognition. We often think of this as what “the eye” sees, or what we see without the “mind’s eye”. Phenomena we take to be primary to human cognition, following Immanuel Kant, from whom I take the word. For Kant, phenomena came from the unknowable noumena, or things-in-themselves. The noumena, if there are such things, are the things outside of our experience of them, objects before we experience them. Kant held noumena to be beyond our ability to know. Human knowledge, he claimed, is limited to what we experience, that is, phenomena. We do not see a chair, for example; what we see are patches of color in a familiar shape we “recognize” as a chaise lounge. We do not hear a song; we hear frequencies of airwaves that we recognize as Bon Jovi.

This stands against many long-held theories of epistemology and human cognition. The traditional view, since John Locke anyway, is simply that we experience the world through our senses, and those senses give us reliable information, which we then conceptualize into the things we know. This picture, I believe, is completely backward.

No doubt our senses present us with reliable phenomena, qua phenomena, but that is not really what we experience. What we experience are concepts: concepts mapped onto the phenomena before, or at the same time as, we experience them. Really, the human phenomenal experience is all about mapping concepts. Concepts are all we’re concerned with. When I look at a table and chairs, I don’t see colors and shapes and tints and shades and other static phenomena, even though all these are what we might say my eyes can “see”. When I look at a table and chairs, I see a “table and chairs”, that is, the concepts “table” and “chairs” applied precognitively to the phenomena. I didn’t have to think about it. I didn’t have to ask myself, “what is that?” and answer myself, “that is a table and chairs”. I simply saw a table and chairs. Whatever part of my mind applies the concepts I know to the phenomena I experience does so without the acknowledgment of my conscious mind. What is more, I’m satisfied with my knowledge of the table and chairs because I can apply “table” and “chairs” to the phenomena of my eyes.

To really see what I mean, let’s examine this from another angle. Look at children’s drawings the world over and you will see art, not as the artist sees the world, but as the rationalist sees it. The child draws the world of concepts. The humans they depict have the right parts to make them visually identifiable as human: one head, round; two eyes, in the center of the head; one nose underneath the eyes and one mouth underneath the nose; a body; two arms; two legs; perhaps hands with five fingers each; feet; perhaps even a heart. There is nothing of “realism” in the child’s work. Every child is a minimalist. What is relevant here is that to “see the world as an artist” is to unlearn what comes so naturally to us that even very young children can do it: seeing the world in concepts.


It is important to note that when we see the world in concepts, we are the ones applying the concepts, but we do not create the concepts. We take them from our experience of the unconceptualized world and from our culture. When we don’t know what something is, what we mean is that we have no conceptualization for the pattern of phenomena we are experiencing. Lacking a concept, we don’t even have a name for what we experience, and so we are reduced to gesture, verbal or physical, and to wonder. The child’s primordial and perennial question, “What’s that?”, is the basis of all human understanding. It is from this question that we build up batteries of concepts into the storehouse of knowledge.

The real point here is that human beings apply the concepts we see, and we apply them in such a way that we do not recognize our own hand in their application. We experience them as out there in the world, coming to us through our eyes. But this is both false and dangerous. It is because of this inconspicuous application that we experience our own biases as “natural”. We cannot see ourselves standing before the light, and so we see our shadow as something manifest in the world. This gap between what we see and how we see it is perhaps the greatest source of epistemological error. The gap is perilous to traverse when dealing with observable phenomena, but it is doubly perilous when the phenomena in question must be inferred from the phenomena that can be observed, for here we must jump the gap twice!

Alternative Panpsychism

The topos of this article is ontology. The attempt herein is a journey of discovery into the nature of reality that avoids the limitations of substance dualism, monist reductionism, phenomenological ontology (Heidegger), and the self-satisfying illusion of objectivity. The goal is to take an element or two from each of these and to forge a new theory of being: to understand the nature of being beyond subjective knowledge of it. Rest assured that I don’t intend to suggest answers, but merely to refine the question. I am dissatisfied with Kant’s abandonment of the question entirely. I believe the noumena can be known in and of itself and, in fact, is known by each and every one of us. And that is as good a place as any to start.

Kant’s view that things in and of themselves can’t be known suffers the fatal flaw that it presumes “things” to be something other than the self. What Kant really means to say is that Other things can’t be known in and of themselves. One might go so far as to suggest that this is the metaphysical root of the Self/Other divide. But for our purposes in this essay, Kant really can’t say that the experiencer is unknowing of its own experience. For proof, I might offer Descartes, whose Cogito argument unquestionably suggests that the experiencer knows they are experiencing. The thinker knows they are thinking, even if they know not what they are thinking. To be able to deduce one’s existence from the fact that one experiences, regardless of whether or not that experience is a delusion, requires a silent premise that one has some experience of experiencing. For if one is not aware that one is aware, the Cogito becomes unconvincing.

So, if we agree with Descartes that we do indeed have a sense of our experiencing, then we must also have an experience of experiencing. There is little radical in this so far, but one implication is that we must have an experience of ourselves, that is, of our experiences as issuing from a thing, a place of being, an existence. Thus, we know what it is like to exist as ourselves. Assuming then, perhaps contra Kant, that we too are things, we have the experience of one thing in and of itself, namely ourselves.

That doesn’t seem to get us very far, and if it does anything at all it seems to lock us into a phenomenology of everything that is Other to our subjectivity. But I don’t think that sort of absolutist abandonment is quite right. I grant that our direct knowledge of Other things is filtered through our phenomenological experiences in a wholly subjective manner. However, it is not by direct observation alone that we come to know the world around us, and reasonable deductions about the invisible can nevertheless become knowledge. So, armed with our knowledge of the noumena as ourselves, and our knowledge of others as we phenomenologically perceive them, what, if anything, can we know?

Let’s make one simple assumption: that there is nothing special or different about an atom that is a constituent of ourselves versus one that is not. To make this more concrete, I’m simply claiming that a sodium atom in table salt is not essentially different from a sodium atom in a neuron of your brain; in fact, the former may be ingested by you for the sole purpose of becoming one of the latter. If you’ll grant me this consistency of the elemental universe, then it is reasonable to assume that my experience of being a solution of atoms is a trait of atoms: the experience of the sodium atoms in your brain is not wholly different from that of the sodium atom in the table salt, and your experience of the world is then at least similar to the experience of the whole world, all its things, organic or inorganic.

Now, that certainly sounds absurd. “Of course my experiences are different from those of table salt,” is what you’re probably thinking. But you’re wrong on a fundamental level. And yet, you’re right on a level of higher complexity. The danger here is one of equivocation regarding the word “experience”. So, let’s clear that up. When I say your experience of yourself is the same as the experience of the table salt, I do not mean to suggest that the table salt has conscious and phenomenal experiences like you do. What I do mean is that it experiences things that happen in the universe. Salt dropped in water has an experience of dissolution, much as you, dropped in water, have an experience of floating. Perhaps a better example: a rock dropped from a height toward the Earth experiences gravity in nearly the identical way you would experience gravity in the same situation. The experience I mean here is that of interaction with the other things and forces of the universe.

Before you get disappointed with the essay, you might say: so what? Everyone knows that things can have forces applied to them; what we really want to know is whether they have conscious and phenomenological experiences like ours, and if not, why we do. Good question. Let me attempt to answer it by saying that while all matter experiences the things that happen to it, only complex organic matter remembers those experiences for any amount of time longer than the experience takes to occur. Memory is what makes our experiences stick. Experiences can be recalled, set against one another, compared, and synthesized. This is consciousness. This is phenomenal experience. A phenomenon is more than photons hitting the atoms in the rods and cones of your retina; that alone is simply an experience. Phenomenal experience requires a secondary process, one that is complex, involves memory and pattern recognition, and ultimately gives rise to what we call consciousness.

Let me be clear: I’m not suggesting any kind of reductionist physical explanation for consciousness. It is not that we have more complex structures that give rise to things like biology and psychology; it is that these structures can repeat experiences. The sodium atoms in our brains and those in the table salt both experience the world, but the physical and chemical structure of our brains allows us to repeat our experiences (remembering) and to mimic them without the stimuli recurring (recalling). Nor am I suggesting any sort of determinism. The atoms themselves function with quantum mechanical indeterminacies the likes of which make any reductionist determinism a dubious prospect at best. I am instead saying that consciousness and phenomenal experience can be understood through a monist material worldview.

In sum, conscious experience is a result of phenomenological experience, which is itself a result of physical experience. The last is shared by all matter, living or not. Thus, consciousness is understandable in a monist, materialistic picture of the universe, without the need for substance dualism, limited neither by phenomenology, subjectivism, nor naive objectivity, and not hampered by a reductionist regress into determinism. All matter experiences, but only living things re-experience. Thus only living things remember, recall, and know that they have experienced anything other than what they are experiencing now.

The Emerging Universe

There is a debate in philosophy and science about whether or not the universe can fully be known. The debate centers on the conflicting ideas of reductionism and emergentism. Reductionism is the belief that all phenomena reduce to physics (or possibly mathematics), and so at some point we might know the universe entirely because we would have all the variables and all the formulas necessary to describe all phenomena. For example, we could predict stock market fluctuations because we could reduce them to human psychology, which we could reduce to physical biology, which itself would reduce to neurochemistry, which again reduces to physics, which we can describe quantifiably. Emergentism, on the other hand, holds that there are gaps in the chain of reductions that cannot be filled. In other words, some new properties simply emerge without any seeming correspondence to the systems upon which they are built.
