Monday, August 9, 2010
The Limits of the Coded World
In an influential article in the Annual Review of Neuroscience, Joshua Gold of the University of Pennsylvania and Michael Shadlen of the University of Washington sum up experiments aimed at discovering the neural basis of decision-making. In one set of experiments, researchers attached sensors to the parts of monkeys’ brains responsible for visual pattern recognition. The monkeys were then taught to respond to a cue by choosing to look at one of two patterns. Computers reading the sensors were able to register the decision a fraction of a second before the monkeys’ eyes turned to the pattern. As the monkeys were not deliberating, but rather reacting to visual stimuli, researchers were able to plausibly claim that the computer could successfully predict the monkeys’ reaction. In other words, the computer was reading the monkeys’ minds and knew before they did what their decision would be.
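For readers who want a concrete picture of how a choice could be read out before the overt behavior, here is a small, purely illustrative sketch in Python. It is not the researchers’ method; it only mimics the general idea, common in this literature, of noisy evidence accumulating toward a threshold, with the decision becoming readable a short motor delay before the eyes actually move. Every name and number below is invented.

import random

# Toy "race" model: two accumulators gather noisy evidence for the two patterns.
# The internal decision is readable the instant one accumulator hits its bound,
# while the overt eye movement follows only after a motor delay.
# All parameters are illustrative, not drawn from the actual experiments.

def simulate_trial(bias=0.02, noise=1.0, bound=30.0, motor_delay_ms=150):
    evidence = [0.0, 0.0]   # running evidence for pattern A and pattern B
    t_ms = 0
    while True:
        t_ms += 1
        evidence[0] += bias + random.gauss(0.0, noise)   # pattern A slightly favored
        evidence[1] += random.gauss(0.0, noise)
        for i, e in enumerate(evidence):
            if e >= bound:
                return {"choice": "A" if i == 0 else "B",
                        "readable_at_ms": t_ms,                    # when an observer could predict the choice
                        "eyes_move_at_ms": t_ms + motor_delay_ms}  # when the behavior appears

trial = simulate_trial()
print(f"Choice {trial['choice']} readable at {trial['readable_at_ms']} ms; "
      f"eyes move at {trial['eyes_move_at_ms']} ms")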
The implications are immediate. If researchers can in theory predict what human beings will decide before they themselves know it, what is left of the notion of human freedom? How can we say that humans are free in any meaningful way if others can know what their decisions will be before they themselves make them?
Research of this sort can seem frightening. An experiment that demonstrated the illusory nature of human freedom would, in many people’s minds, rob the test subjects of something essential to their humanity.
If a machine can tell me what I am about to decide before I decide it, this means that, in some sense, the decision was already made before I became consciously involved. But if that is the case, how am I, as a moral agent, to be held accountable for my actions? If, on the cusp of an important moral decision, I now know that my decision was already taken at the moment I thought myself to be deciding, does this not undermine my responsibility for that choice?
Some might conclude that resistance to such findings reveals a religious bias. After all, the ability to consciously decide is essential in many religions to the idea of humans as spiritual beings. Without freedom of choice, a person becomes a cog in the machine of nature; with action and choice predetermined, morality and ultimately the very meaning of that person’s existence are left in tatters.
Theologians have spent a great deal of time ruminating on the problem of determination. The Catholic response to the theological problem of theodicy — that is, of how to explain the existence of evil in a world ruled by a benevolent and omnipotent God — was to teach that God created humans with free will. It is only because evil does exist that humans are free to choose between good and evil; hence, the choice for good has meaning. As the theologians at the Council of Trent in the 16th century put it, freedom of will is essential for Christian faith, and it is anathema to believe otherwise. Protestant theologians such as Luther and Calvin, to whom the Trent statement was responding, had disputed this notion on the basis of God’s omniscience. If God’s ability to know were truly limitless, they argued, then his knowledge of the future would be as clear and perfect as his knowledge of the present and of the past. If that were the case, though, then God would already know what each and every one of us has done, is doing, and will do at every moment in our lives.
And how, then, could we be truly free?
Even though this particular resistance to a deterministic model of human behavior is religious, one can easily come to the same sorts of conclusions from a scientific perspective. In fact, when religion and science square off around human freedom, they often end up on remarkably similar ground because both science and religion base their assumptions on an identical understanding of the world as something intrinsically knowable, either by God or ourselves.
Let me explain what I mean by way of an example. Imagine we suspend a steel ball from a magnet directly above a vertical steel plate, such that when we turn off the magnet, the ball hits the edge of the plate and falls to one side or the other.
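A minimal sketch, assuming nothing beyond the thought experiment itself, may make the point concrete: given the initial conditions and the forces at work, the side on which the ball falls is fixed. The numbers below (a tiny offset, a faint lateral push) are invented purely for illustration.

# Toy model of the falling ball: the side it lands on is fully determined
# by a tiny initial horizontal offset plus whatever minute lateral forces act on it.
# All values are invented for illustration.

def side_of_fall(initial_offset_m, lateral_accel_m_s2=0.0, steps=1000, dt_s=0.001):
    x, vx = initial_offset_m, 0.0
    for _ in range(steps):                 # integrate one second of horizontal motion
        vx += lateral_accel_m_s2 * dt_s    # a faint, hard-to-measure lateral push
        x += vx * dt_s
    return "left" if x < 0 else "right"

print(side_of_fall(initial_offset_m=-1e-7))                          # left
print(side_of_fall(initial_offset_m=+1e-7))                          # right
print(side_of_fall(initial_offset_m=-1e-7, lateral_accel_m_s2=1e-6)) # right: a minute force flips the outcome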
Very few people, having accepted the premises of this experiment, would conclude from its outcome that the ball in question was exhibiting free will. Whether the ball falls on one side or the other of the steel plate, we can all comfortably agree, is completely determined by the physical forces acting on the ball, which are simply too complex and minute for us to monitor. And yet when we apply the monkey experiment to hypothetical humans, we have no problem assuming that the same reasoning holds: because their actions are predictable, they can be assumed to lack free will. In other words, we have no reason to assume that either predictability or lack of predictability has anything to say about free will. The fact that we do make this association has more to do with the model of the world that we subtly import into such thought experiments than with the experiments themselves.
The model in question holds that the universe exists in space and time as a kind of ultimate code that can be deciphered. This image of the universe has a philosophical and religious provenance, and has made its way into secular beliefs and practices as well. In the case of human freedom, this presumption of a “code of codes” works by convincing us that a prediction somehow decodes or deciphers a future that already exists in a coded form. So, for example, when the computers read the signals coming from the monkeys’ brains and make a prediction, belief in the code of codes influences how we interpret that event. Instead of interpreting the prediction as what it is — a statement about the neural process leading to the monkeys’ actions — we extrapolate about a supposed future as if it were already written down, and all we were doing was reading it.
To my mind the philosopher who gave the most complete answer to this question was Immanuel Kant. In Kant’s view, the main mistake philosophers before him had made when considering how humans could have accurate knowledge of the world was to forget the necessary difference between our knowledge and the actual object of that knowledge. At first glance, this may not seem like a very easy thing to forget; for example, what our eyes tell us about a rainbow and what that rainbow actually is are quite different things. Kant argued that our failure to grasp this difference was more far-reaching and had greater consequences than anyone could have thought.
Taking again the example of the rainbow, Kant would argue that while most people would grant the difference between the range of colors our eyes perceive and the refraction of light that causes this optical phenomenon, they would still maintain that more careful observation could indeed bring one to know the rainbow as it is in itself, apart from its sensible manifestation. This commonplace understanding, he argued, was at the root of our tendency to fall profoundly into error, not only about the nature of the world, but about what we were justified in believing about ourselves, God, and our duty to others.
The problem was that while our senses can only ever bring us verifiable knowledge about how the world appears in time and space, our reason always strives to know more than appearances can show it. This tendency of reason to strive always to know more is, in itself, a good thing. It is why humankind is always curious, always progressing to greater and greater knowledge and accomplishments. But if not tempered by a respect for its limits and an understanding of its innate tendencies to overreach, reason can lead us into error and fanaticism.
Let’s return to the example of the experiment predicting the monkeys’ decisions. What the experiment tells us is nothing other than that the monkeys’ decision-making process moves through the brain, and that our technology allows us to get a reading of that activity faster than the monkeys’ brains can put it into action. From that relatively simple outcome, we can now see that the series of rather major conundrums we drew from it was unjustified. And the reason we drew them was that we unquestioningly translated something unknowable — the stretch of time including the future of the monkeys’ as-yet-undecided and unperformed actions — into a neat scene that just needed to be decoded in order to be experienced. We treated the future as if it had already happened and hence as a series of events that could be read and narrated.
From a Kantian perspective, with this simple act we allowed reason to override its boundaries, and as a result we fell into error. The error we fell into was, specifically, to believe that our empirical exploration of the world and of the human brain could ever eradicate human freedom.
This, then, is why, as “irresistible” as their logic might appear, none of the versions of Galen Strawson’s “Basic Argument” against ultimate moral responsibility, which he outlined in The Stone last week, have any relevance for human freedom or responsibility. According to this logic, responsibility must be illusory, because in order to be responsible at any given time an agent must also be responsible for how he or she became how he or she is at that time, which initiates an infinite regress, because at no point can an individual be responsible for all the genetic and cultural forces that have produced him or her as he or she is. But this logic is nothing other than a philosophical version of the code of codes; it assumes that the sum history of forces determining an individual exists as a kind of potentially legible catalog.
The point to stress, however, is that this catalog is not even legible in theory, for to be known it assumes a kind of knower unconstrained by time and space, a knower who could be present from every possible perspective at every possible deciding moment in an agent’s history and prehistory. Such a knower, of course, could only be something along the lines of what the monotheistic traditions call God. But as Kant made clear, it makes no sense to think in terms of ethics, or responsibility, or freedom when talking about God; to make ethical choices, to be responsible for them, to be free to choose poorly, all of these require precisely the kind of being who is constrained by the minimal opacity that defines our kind of knowing.
As much as we owe the nature of our current existence to the evolutionary forces Darwin first discovered, or to the cultures we grow up in, or to the chemical states affecting our brain processes at any given moment, none of this impacts on our freedom. I am free because neither science nor religion can ever tell me, with certainty, what my future will be and what I should do about it. The dictum from Sartre that Strawson quoted thus gets it exactly right: I am condemned to freedom. I am not free because I can make choices, but because I must make them, all the time, even when I think I have no choice to make.
________________________________________
William Egginton is the Andrew W. Mellon Professor in the Humanities at the Johns Hopkins University. His next book, “An Uncertain Faith: Atheism, Fundamentalism, and Religious Moderation,” will be published by Columbia University Press in 2011.