Monday Morning Surrealism

Thesis: Hegel.

Antithesis: Cholera.

Synthesis: Progress!

Wednesday Primary Source Happy Fun Hour!

Two questions:

i.) Have you read anything of the Spanish-American heterodox pragmatist George Santayana?

ii.) Did you read the comics today or yesterday?

If you answered “no” to i.) and “yes” to ii.), you were mistaken in your answer to i.). Darby Conley’s strip “Get Fuzzy” has been playing with a common misquotation of a passing remark of Santayana’s,

Those who do not remember the past are condemned to repeat it.

Everyone recognizes the quote, but few are aware of its origins. It comes from a passage of Santayana’s first major work, Reason in Common Sense, vol. I of The Life of Reason, or the Phases of Human Progress. I reproduce the passage without commentary:

Progress, far from consisting in change, depends on retentiveness. When change is absolute there remains no being to improve and no direction is set for possible improvement: and when experience is not retained, as among savages, infancy is perpetual. Those who cannot remember the past are condemned to repeat it. In the first stage of life the mind is frivolous and easily distracted; it misses progress by failing in consecutiveness and persistence. This is the condition of children and barbarians, in whom instinct has learned nothing from experience. In a second stage men are docile to events, plastic to new habits and suggestions, yet able to graft them on original instincts, which they thus bring to fuller satisfaction. This is the plane of manhood and true progress. Last comes a stage when retentiveness is exhausted and all that happens is at once forgotten; a vain, because unpractical, repetition of the past takes the place of plasticity and fertile readaptation. In a moving world readaptation is the price of longevity. The hard shell, far from protecting the vital principle, condemns it to die down slowly and be gradually chilled; immortality in such a case must have been secured earlier, by giving birth to a generation plastic to the contemporary world and able to retain its lessons. Thus old age is as forgetful as youth, and more incorrigible; it displays the same inattentiveness to conditions; its memory becomes self-repeating and degenerates into an instinctive reaction, like a bird’s chirp.

Not all readaptation, however, is progress, for ideal identity must not be lost. The Latin language did not progress when it passed into Italian. It died. Its amiable heirs may console us for its departure, but do not remove the fact that their parent is extinct. So every individual, nation, and religion has its limit of adaptation; so long as the increment it receives is digestible, so long as the organisation already attained is extended and elaborated without being surrendered, growth goes on; but when the foundation itself shifts, when what is gained at the periphery is lost at the centre, the flux appears again and progress is not real. Thus a succession of generations or languages or religions constitutes no progress unless some ideal present at the beginning is transmitted to the end and reaches a better expression there; without this stability at the core no common standard exists and all comparison of value with value must be external and arbitrary. Retentiveness, we must repeat, is the condition of progress.

New empirical studies confirm relativity’s time dilation

(above) The block universe.

Via Discovery:

Anyone looking to defer the effects of aging, at least for a split-second, may want to think about driving fast cars at low elevations, according to scientists from the National Institute of Standards and Technology (NIST).

New experiments have proven that time dilation, a phenomenon predicted by Einstein’s theories of relativity where time flows faster or slower depending on speed and gravity, occurs during ordinary events like riding a bike or climbing the stairs.

“We demonstrated that with our incredibly accurate clocks, just going up a step or two, we can see the effects of time dilation,” said James Chou, a co-author of the new Science article.

While the effects are there at even small differences in elevation or speed, “people won’t notice a difference” in their day-to-day lives.

Movies and television often interpret this phenomenon as one person being shot into space at nearly the speed of light and returning with only minimal aging, while the Earth-bound counterpart grows old.

A consequence of Einstein’s theories of relativity, the first of which appeared in 1905, time dilation wasn’t definitively proven for many years.

In one particularly famous demonstration in 1971, scientists equipped commercial jets with atomic clocks and flew them around the world. When the aircraft landed, the clocks on the aircraft and the clocks on the ground did not match up. This demonstrated that the time dilation predicted by Einstein indeed happened.

“People have measured time dilation before,” said Vladan Vuletic, a professor of physics at the Massachusetts Institute of Technology who was not involved in the research. “But it’s impressive that it can be measured over such small distances.”

The new experiments used more mundane methods — and much more precise clocks — to test the theory.

The NIST scientists used aluminum ions that act like a second hand, ticktocking between two energy levels over a million billion times per second. The two clocks are some of the most accurate in existence.

In the first experiment, the scientists offset the two clocks by roughly one foot in elevation to test gravity’s effect on the flow of time. In the second, since they couldn’t send the clocks around the world at high speed, the scientists made one clock’s ion move several meters per second faster than the other’s, to test the effect of speed on time dilation.

In both experiments, time flowed differently. Time flowed faster in the clock at the higher elevation, as predicted. Likewise, time flowed more slowly in the clock whose ion moved faster.

The difference between the clocks was very small but, given the tiny distances involved, significant. If two people somehow managed to live 79 years exactly one foot apart in elevation, the difference in time would amount to only 83 billionths of a second.

One foot is not very far apart, however. What if the distance between two people were much larger, say one person standing on top of the Empire State Building and another on Fifth Avenue?

“Even over a lifetime it wouldn’t come to a second’s worth of difference,” said Chou.

Not a second’s worth of difference; but it does have astounding implications for eternity.
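The 83-nanosecond figure, and Chou’s Empire State Building remark, are easy to sanity-check with the standard weak-field approximations: a fractional rate difference of gΔh/c² for gravity, and v²/2c² for velocity. Here is a minimal sketch in Python, with constants and heights of my own choosing rather than anything taken from the article or the NIST paper:

```python
# Rough sanity check of the article's figures, assuming the standard
# weak-field approximations (all constants below are my assumptions):
#   gravity:  fractional rate difference ~ g * dh / c**2
#   velocity: fractional slowdown        ~ v**2 / (2 * c**2)
g = 9.81                                # m/s^2, surface gravity
c = 2.998e8                             # m/s, speed of light
lifetime = 79 * 365.25 * 86400          # 79 years in seconds, ~2.49e9

# One foot (0.3048 m) of elevation, held for a 79-year lifetime:
print(g * 0.3048 / c**2 * lifetime)     # ~8.3e-08 s, the "83 billionths of a second"

# Top of the Empire State Building (~381 m) vs. street level, same lifetime:
print(g * 381 / c**2 * lifetime)        # ~1.0e-04 s, far short of a second

# An ion moving 10 m/s faster than its twin:
print(10.0**2 / (2 * c**2))             # ~5.6e-16 fractional slowdown
```

Both effects are absurdly small, but clocks accurate to roughly a part in 10^17, like NIST’s aluminum-ion clocks, can resolve them; that is the whole point of the result.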

What Christine O’Donnell and Kant have in common

One has an electable face; the other does not.

Not courage and verve; O’Donnell is unwilling even to answer unscripted questions, let alone implore voters to Sapere Aude!

Nor idealism; since she has made no statements to suggest the contrary, we can assume the Delaware Republican US Senate candidate believes in the existence of time, space and causation independent of observation.

Certainly not scientific acumen. Whereas Kant advanced daring hypotheses in seismology and astronomy and anticipated anthropology, O’Donnell cannot even make sense of recent scientific reporting, claiming researchers had “cross-bred humans and animals,” producing mice with “fully functioning human brains.” In fact, biologists had simply transplanted nonfunctioning cells into lab mice, which left the animals’ intelligence totally unaffected.

Nor, as far as we know, do both suffer from constipation.

But neither the Prussian nor the tea-partier would lie to save a life.

Also, both he and she have categorically denounced masturbation.

Science as the test of imagination against reality

I’ve had thoughts similar to Timothy Williamson’s for about two years:

On further reflection, imagining turns out to be much more reality-directed than the stereotype implies. If a child imagines the life of a slave in ancient Rome as mainly spent watching sports on TV, with occasional household chores, they are imagining it wrong. That is not what it was like to be a slave. The imagination is not just a random idea generator. The test is how close you can come to imagining the life of a slave as it really was, not how far you can deviate from reality.

A reality-directed faculty of imagination has clear survival value. By enabling you to imagine all sorts of scenarios, it alerts you to dangers and opportunities. You come across a cave. You imagine wintering there with a warm fire — opportunity. You imagine a bear waking up inside — danger. Having imagined possibilities, you can take account of them in contingency planning. If a bear is in the cave, how do you deal with it? If you winter there, what do you do for food and drink? Answering those questions involves more imagining, which must be reality-directed. Of course, you can imagine kissing the angry bear as it emerges from the cave so that it becomes your lifelong friend and brings you all the food and drink you need. Better not to rely on such fantasies. Instead, let your imaginings develop in ways more informed by your knowledge of how things really happen.

Constraining imagination by knowledge does not make it redundant. We rarely know an explicit formula that tells us what to do in a complex situation. We have to work out what to do by thinking through the possibilities in ways that are simultaneously imaginative and realistic, and not less imaginative when more realistic. Knowledge, far from limiting imagination, enables it to serve its central function.

To go further, we can borrow a distinction from the philosophy of science, between contexts of discovery and contexts of justification. In the context of discovery, we get ideas, no matter how — dreams or drugs will do. Then, in the context of justification, we assemble objective evidence to determine whether the ideas are correct. On this picture, standards of rationality apply only to the context of justification, not to the context of discovery. Those who downplay the cognitive role of the imagination restrict it to the context of discovery, excluding it from the context of justification. But they are wrong. Imagination plays a vital role in justifying ideas as well as generating them in the first place.

Your belief that you will not be visible from inside the cave if you crouch behind that rock may be justified because you can imagine how things would look from inside. To change the example, what would happen if all NATO forces left Afghanistan by 2011? What will happen if they don’t? Justifying answers to those questions requires imaginatively working through various scenarios in ways deeply informed by knowledge of Afghanistan and its neighbors. Without imagination, one couldn’t get from knowledge of the past and present to justified expectations about the complex future. We also need it to answer questions about the past. Were the Rosenbergs innocent? Why did Neanderthals become extinct? We must develop the consequences of competing hypotheses with disciplined imagination in order to compare them with the available evidence. In drawing out a scenario’s implications, we apply much of the same cognitive apparatus whether we are working online, with input from sense perception, or offline, with input from imagination.

Even imagining things contrary to our knowledge contributes to the growth of knowledge, for example in learning from our mistakes. Surprised at the bad outcomes of our actions, we may learn how to do better by imagining what would have happened if we had acted differently from how we know only too well we did act.

In science, the obvious role of imagination is in the context of discovery. Unimaginative scientists don’t produce radically new ideas. But even in science imagination plays a role in justification too. Experiment and calculation cannot do all its work. When mathematical models are used to test a conjecture, choosing an appropriate model may itself involve imagining how things would go if the conjecture were true. Mathematicians typically justify their fundamental axioms, in particular those of set theory, by informal appeals to the imagination.

Sometimes the only honest response to a question is “I don’t know.” In recognizing that, one may rely just as much on imagination, because one needs it to determine that several competing hypotheses are equally compatible with one’s evidence.

The lesson is not that all intellectual inquiry deals in fictions. That is just to fall back on the crude stereotype of the imagination, from which it needs reclaiming. A better lesson is that imagination is not only about fiction: it is integral to our painful progress in separating fiction from fact. Although fiction is a playful use of imagination, not all uses of imagination are playful. Like a cat’s play with a mouse, fiction may both emerge as a by-product of un-playful uses and hone one’s skills for them.

Critics of contemporary philosophy sometimes complain that in using thought experiments it loses touch with reality. They complain less about Galileo and Einstein’s thought experiments, and those of earlier philosophers. Plato explored the nature of morality by asking how you would behave if you possessed the ring of Gyges, which makes the wearer invisible. Today, if someone claims that science is by nature a human activity, we can refute them by imaginatively appreciating the possibility of extra-terrestrial scientists. Once imagining is recognized as a normal means of learning, contemporary philosophers’ use of such techniques can be seen as just extraordinarily systematic and persistent applications of our ordinary cognitive apparatus. Much remains to be understood about how imagination works as a means to knowledge — but if it didn’t work, we wouldn’t be around now to ask the question.

For some time I’ve suspected that the extent to which our imagination informs our navigation of reality has been greatly underappreciated, except by Hume and Santayana. Inference, induction and deduction all work by the mental modeling of a scenario we have not directly observed, but which we piece together from disparate bits of information. Moreover, all social intercourse is impossible without what psychologists call a theory of mind, or the intuitive postulation of other people’s internal states of mind. This can be arrived at only through their external speech and body language, our patchwork understanding of psychology, and the analogy of our own minds.

Most people’s theories of mind work not to touch on the deepest truths of others’ psychology, but to let them engage fluidly in spontaneous conversation acceptable within their given social sphere. Their theorizing works on a largely intuitive level; they imagine without realizing they are imagining. They do not realize it because it happens so quickly, probably even unconsciously, and because it is frequently, if not on the mark, at least near it (by no means always even near it, but more often near than far off).

The obviousness of this explanation asserts itself most strongly in those persons lacking intuitive social imaginations, among them (ahem) those with autism spectrum disorders (ASD). It has been hypothesized that people with ASD experience, in varying degrees, “mind blindness,” a dearth of intuitions about how other people might think. They must reconstruct their theory of mind intellectually, a painstaking process that might take years to reach conclusions neurotypical people have accepted since childhood, or since some even earlier point they can’t remember.

But no matter how well-constructed a theory of mind is, no matter how closely it reflects another’s internal reality, whenever we talk to anyone we are also engaging with ourselves, with the fictional entity we construct and identify with the other person. Much of that fiction is drawn from our memories of experience with that person. But from these experiences we formulate generalities about their character, and no generality can contain all truths, and most hold many falsehoods. And any gaps in our knowledge of a person we spackle over with speculation, often unwittingly or unconsciously.

The same, of course, is true about our engagement with ourselves, and our formulation of our self-concept.

Monday Morning Surrealism

Crossover fanfiction and the founding of Western civilization

Monday Morning Surrealism

Like many Surrealist paintings, René Magritte’s Mysteries of the Horizon (1955) might be used as an illustration of four-dimensionalism. It was not Magritte’s intention to depict what objects moving within a static spacetime would look like from outside the universe, but the analogy still holds.

René Magritte, "Mysteries of the Horizon," 1955

Environmental influence on unconscious volition

Eben Harrell summarizes a paper in an upcoming issue of Science on the role of unconscious processing in everyday activities:

Studies have found that upon entering an office, people behave more competitively when they see a sharp leather briefcase on the desk, they talk more softly when there is a picture of a library on the wall, and they keep their desk tidier when there is a vague scent of cleaning agent in the air. But none of them are consciously aware of the influence of their environment.

There may be few things more fundamental to human identity than the belief that people are rational individuals whose behavior is determined by conscious choices. But recently psychologists have compiled an impressive body of research that shows how deeply our decisions and behavior are influenced by unconscious thought, and how greatly those thoughts are swayed by stimuli beyond our immediate comprehension.

In an intriguing review in the July 2 edition of the journal Science, published online Thursday, Ruud Custers and Henk Aarts of Utrecht University in the Netherlands lay out the mounting evidence of the power of what they term the “unconscious will.” “People often act in order to realize desired outcomes, and they assume that consciousness drives that behavior. But the field now challenges the idea that there is only a conscious will. Our actions are very often initiated even though we are unaware of what we are seeking or why,” Custers says.

It is not only that people’s actions can be influenced by unconscious stimuli; our desires can be too. In one study cited by Custers and Aarts, students were presented with words on a screen related to puzzles — crosswords, jigsaw piece, etc. For some students, the screen also flashed an additional set of words so briefly that they could only be detected subliminally. The words were ones with positive associations, such as beach, friend or home. When the students were given a puzzle to complete, the students exposed unconsciously to positive words worked harder, for longer, and reported greater motivation to do puzzles than the control group.

The same priming technique has also been used to prompt people to drink more fluids after being subliminally exposed to drinking-related words, and to offer constructive feedback to other people after sitting in front of a screen that subliminally flashes the names of their loved ones or occupations associated with caring like nurse. In other words, we are often not even consciously aware of why we want what we want.

John Bargh of Yale University, who 10 years ago predicted many of the findings discussed by Custers and Aarts in a paper entitled “The Unbearable Automaticity of Being,” called the Science paper a “landmark — nothing like this has been in Science before. It’s a large step toward overcoming the skepticism surrounding this research.”

Bargh isn’t the only theorist in this vein. The Illusion of Conscious Will by Harvard psychologist Daniel Wegner is one of the most discussed texts in the free-will literature of the 21st century. I know this because I’ve never had the chance to read it; Raynor Memorial Library’s only copy was either checked out or reserved for two solid years, corresponding with the latter half of my undergraduate career. (Now that I’ve graduated and moved out of Milwaukee, it is, of course, available.)

But back to Harrell’s piece. Though the author doesn’t himself use the term, the research under discussion also lends more evidence to the thesis of embodied cognition:

But Bargh says the field has actually moved beyond the use of subliminal techniques, and studies show that unconscious processes can even be influenced by stimuli within the realms of consciousness, often in unexpected ways. For instance, his own work has shown that people sitting in hard chairs are more likely to be more rigid in negotiating the sales price of a new car, they tend to judge others as more generous and caring after they hold a warm cup of coffee rather than a cold drink, and they evaluate job candidates as more serious when they review their résumés on a heavy clipboard rather than a light one.

“These are stimuli that people are conscious of — you can feel the hard chair, the hot coffee — but were unaware that it influenced them. Our unconscious is active in many more ways than this review suggests,” he says.

All this brings to mind John Dewey’s critique of traditional accounts of mind. He claimed the fundamental error underlying previous theories of consciousness and knowledge was their passivity; humans weren’t the detached observers of the world that previous epistemologists (supposedly) characterized them as, but active organisms assertively seeking to fulfill needs in their environment, and in so doing interacting with and changing that environment. What he doesn’t seem to have recognized is just how far the environment impresses on us, how much it changes (dictates?) our behavior.

Going forward, I can imagine embodied cognition theorists frequently fending off accusations of behaviorism.

Antinatalism considered

One of the most pressing and paradoxical moral issues, and one of the most ignored, gets a fair airing by Peter Singer in the NY Times philosophy blog, The Stone:

How good does life have to be, to make it reasonable to bring a child into the world? Is the standard of life experienced by most people in developed nations today good enough to make this decision unproblematic, in the absence of specific knowledge that the child will have a severe genetic disease or other problem?

The 19th-century German philosopher Arthur Schopenhauer held that even the best life possible for humans is one in which we strive for ends that, once achieved, bring only fleeting satisfaction. New desires then lead us on to further futile struggle and the cycle repeats itself.

Schopenhauer’s pessimism has had few defenders over the past two centuries, but one has recently emerged, in the South African philosopher David Benatar, author of a fine book with an arresting title: “Better Never to Have Been: The Harm of Coming into Existence.” One of Benatar’s arguments trades on something like the asymmetry noted earlier. To bring into existence someone who will suffer is, Benatar argues, to harm that person, but to bring into existence someone who will have a good life is not to benefit him or her. Few of us would think it right to inflict severe suffering on an innocent child, even if that were the only way in which we could bring many other children into the world. Yet everyone will suffer to some extent, and if our species continues to reproduce, we can be sure that some future children will suffer severely. Hence continued reproduction will harm some children severely, and benefit none.

Benatar also argues that human lives are, in general, much less good than we think they are. We spend most of our lives with unfulfilled desires, and the occasional satisfactions that are all most of us can achieve are insufficient to outweigh these prolonged negative states. If we think that this is a tolerable state of affairs it is because we are, in Benatar’s view, victims of the illusion of pollyannaism. This illusion may have evolved because it helped our ancestors survive, but it is an illusion nonetheless. If we could see our lives objectively, we would see that they are not something we should inflict on anyone.

Here is a thought experiment to test our attitudes to this view. Most thoughtful people are extremely concerned about climate change. Some stop eating meat, or flying abroad on vacation, in order to reduce their carbon footprint. But the people who will be most severely harmed by climate change have not yet been conceived. If there were to be no future generations, there would be much less for us to feel guilty about.

So why don’t we make ourselves the last generation on earth? If we would all agree to have ourselves sterilized then no sacrifices would be required — we could party our way into extinction!

Of course, it would be impossible to get agreement on universal sterilization, but just imagine that we could. Then is there anything wrong with this scenario? Even if we take a less pessimistic view of human existence than Benatar, we could still defend it, because it makes us better off — for one thing, we can get rid of all that guilt about what we are doing to future generations — and it doesn’t make anyone worse off, because there won’t be anyone else to be worse off.

Is a world with people in it better than one without? Put aside what we do to other species — that’s a different issue. Let’s assume that the choice is between a world like ours and one with no sentient beings in it at all. And assume, too — here we have to get fictitious, as philosophers often do — that if we choose to bring about the world with no sentient beings at all, everyone will agree to do that. No one’s rights will be violated — at least, not the rights of any existing people. Can non-existent people have a right to come into existence?

I do think it would be wrong to choose the non-sentient universe. In my judgment, for most people, life is worth living. Even if that is not yet the case, I am enough of an optimist to believe that, should humans survive for another century or two, we will learn from our past mistakes and bring about a world in which there is far less suffering than there is now. But justifying that choice forces us to reconsider the deep issues with which I began. Is life worth living? Are the interests of a future child a reason for bringing that child into existence? And is the continuance of our species justifiable in the face of our knowledge that it will certainly bring suffering to innocent future human beings?

Any comment I’d have to make on antinatalism would entail more time than I’m now willing to invest in this blog. Suffice it to say that my rejection of categorical antinatalism is more complicated and fraught than Singer’s.

However, expect a long dissection of J.M. Bernstein’s piece on the Tea Party’s alleged metaphysics. Bernstein claims that the confusion and distress of the TP movement stem not from a political reorientation but from a philosophical one; specifically, its members are Cartesians vainly trying to deny Hegelianism.

It’s an outlandish claim bolstered by erudite stupidity that would be funny if it weren’t so common. Think about it: this is the exact sort of shit Žižek has built a career out of, like when he claimed Buddhism is the hegemonic ideology of Western capitalism, or that toilet designs give insight into national characters.