New language discovered in India

Via CNN:

Linguists announced Monday they have identified an endangered language known as Koro that is spoken by about 800 people in northeast India. The language was unknown to science and recently came to light during an expedition by linguists traveling in India on fellowships for National Geographic, the linguists said in telephone interviews. Koro belongs to the Tibeto-Burman language family, which is composed of a group of about 400 languages spoken primarily in east, central, south and southeast Asia and includes Tibetan and Burmese, according to linguist K. David Harrison.

Some 150 Tibeto-Burman languages are spoken in India alone, but no other language has been identified as closely related, said Harrison, an associate professor of linguistics at Swarthmore College in Swarthmore, Pennsylvania.

Like most languages, Koro is unwritten and transmitted orally. It is neither a dialect of, nor a sister language closely related to, Hruso-Aka, despite being considered such by both the Hruso and the Koro people. Koro shares some vocabulary with other languages spoken in the region but shares more features with languages spoken farther east, such as Milang and Tani, the linguists said in a news release issued by National Geographic. Harrison and another National Geographic Fellow, Greg Anderson, led the expedition, called Enduring Voices, which brought Koro to light. Enduring Voices documents vanishing languages and cultures and assists with language revitalization.

Harrison and Anderson, director of the Living Tongues Institute for Endangered Languages, in Salem, Oregon, focused on Arunachal Pradesh, a remote area of northeast India that is considered the black hole of the linguistic world. It is a language hotspot where there is room to study rich, diverse languages, many of them unwritten or undocumented. A permit is required to visit, few linguists have worked there and a reliable list of languages has never been drawn up.

“On a scientist’s tally sheet, Koro adds just one entry to the list of 6,909 languages worldwide. But Koro’s contribution is much greater than that tiny fraction would suggest,” Harrison writes in his book, “The Last Speakers.”

“Koro brings an entirely different perspective, history, mythology, technology and grammar to what was known before.”

In the news release, the linguists described their discovery as bittersweet: Of the approximately 800 people who speak Koro, few are under the age of 20, meaning the language is endangered.

“We were finding something that was making its exit, was on its way out,” Anderson said. “And if we had waited 10 years to make the trip, we might not have come across close to the number of speakers we found.”

The team set out in 2008 in Arunachal Pradesh to document Aka and Miji, languages spoken in a small district there. The expedition went door to door among homes propped up on stilts to reach potential speakers of those little-known languages. While recording the vocabularies, they detected a third language — Koro. It was not listed in Indian language surveys, Indian censuses or standard international registries.

“We didn’t have to get far on our word list to realize it was extremely different in every possible way,” Harrison said.

The inventory of sounds and the way these sounds were combined to form words were distinct from other languages spoken in the region. An Aka speaker would call a pig “vo” and a Koro speaker would call a pig “lele.”

“Koro could hardly sound more different from Aka,” Harrison writes. “They sound as different as, say, English and Japanese.”

Anderson and Harrison said Aka is the traditional language of the region’s historic slave traders, and they hypothesized that Koro may have sprung from the slaves themselves, though they said more study is needed to determine its origin.

The project reports that a language becomes extinct every two weeks. By 2100, it is estimated that more than half of the 6,910 languages spoken on Earth will vanish. The team will return to India in November to continue studying Koro.


New empirical studies confirm relativity’s time dilation

(above) The block universe.

Via Discovery:

Anyone looking to defer the effects of aging, at least for a split-second, may want to think about driving fast cars at low elevations, according to scientists from the National Institute of Standards and Technology (NIST).

New experiments have proven that time dilation, a phenomenon predicted by Einstein’s theories of relativity in which time flows faster or slower depending on speed and gravity, occurs during ordinary events like riding a bike or climbing the stairs.

“We demonstrated that with our incredibly accurate clocks, just going up a step or two, we can see the effects of time dilation,” said James Chou, a co-author of the new Science article.

While the effects are there at even small differences in elevation or speed, “people won’t notice a difference” in their day-to-day lives.

Movies and television often interpret this phenomenon as one person being shot into space at nearly the speed of light and returning with only minimal aging, while the Earth-bound counterpart grows old.

A consequence of Einstein’s 1905 theories of relativity, time dilation wasn’t definitively proven for many years.

In one particularly famous demonstration in 1971, scientists equipped commercial jets with atomic clocks and flew them around the world. When the aircraft landed, the clocks on the aircraft and the clocks on the ground did not match up. This demonstrated that the time dilation predicted by Einstein indeed happened.

“People have measured time dilation before,” said Vladan Vuletic, a professor of physics at the Massachusetts Institute of Technology who was not involved in the research. “But it’s impressive that it can be measured over such small distances.”

The new experiments used more mundane methods — and much more precise clocks — to test the theory.

The NIST scientists used aluminum ions that act like a clock’s second hand, ticking between two energy levels more than a million billion times per second. The two clocks are among the most accurate in existence.

In the first experiment, the scientists offset the two clocks by roughly one foot in elevation to test gravity’s effect on the flow of time. In the second experiment, since they couldn’t send the clocks around the world at high speed, the scientists instead set the ion in one clock moving several meters per second faster than the ion in the other, to test speed’s effect on time dilation.

In both experiments, time flowed differently. Time flowed faster in the clock at the higher elevation, as predicted. Likewise, time flowed more slowly in the clock whose ion moved faster.

The difference between the clocks was very small, but significant given the tiny distances involved. If two people somehow managed to live 79 years exactly one foot apart in elevation, the difference in elapsed time would amount to only 83 billionths of a second.
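The 83-nanosecond figure can be checked with the standard weak-field approximations: a fractional frequency shift of gΔh/c² for the elevation difference, and v²/2c² for the moving ion. The sketch below does exactly that; the 10 m/s speed in the kinetic case is an assumed illustrative value, since the article says only “several meters per second”.

```python
# Rough check of the article's figures using the weak-field approximations
# for gravitational and kinetic time dilation.

g = 9.80665          # standard gravity, m/s^2
h = 0.3048           # one foot, in meters
c = 299_792_458.0    # speed of light, m/s

# Gravitational shift between clocks one foot apart in elevation.
gravitational_shift = g * h / c**2           # ~3.3e-17 (dimensionless)

# Accumulated over a 79-year lifetime.
lifetime_s = 79 * 365.25 * 24 * 3600         # 79 years in seconds
dt = gravitational_shift * lifetime_s        # ~8.3e-8 s, i.e. ~83 ns

# Kinetic shift for an assumed relative speed of 10 m/s.
v = 10.0
kinetic_shift = v**2 / (2 * c**2)            # ~5.6e-16 (dimensionless)

print(f"gravitational shift: {gravitational_shift:.2e}")
print(f"difference over 79 years: {dt * 1e9:.0f} ns")
print(f"kinetic shift at 10 m/s: {kinetic_shift:.2e}")
```

The first two printed values recover the article’s numbers: a fractional shift of a few parts in 10¹⁷, which over 79 years accumulates to roughly 83 billionths of a second.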

One foot is not very far apart, however. What if the distance between two people was much larger, say a person standing on top of the Empire State Building and another one on Fifth Avenue?

“Even over a lifetime it wouldn’t come to a second’s worth of difference,” said Chou.

Not a second’s worth of difference, but it does have astounding implications for eternity.

Why you are lucky to live at the dawn of the 21st century

Because at the dawn of the 20th, radium suppositories were considered sound medicine.

A pluralistic universe?

Via Science Daily:

A team of astrophysicists based in Australia and England has uncovered evidence that the laws of physics are different in different parts of the universe.

The team — from the University of New South Wales, Swinburne University of Technology and the University of Cambridge — has submitted a report of the discovery for publication in the journal Physical Review Letters. A preliminary version of the paper is currently under peer review. The report describes how one of the supposed fundamental constants of Nature appears not to be constant after all. Instead, this ‘magic number’ known as the fine-structure constant — ‘alpha’ for short — appears to vary throughout the universe.

“After measuring alpha in around 300 distant galaxies, a consistency emerged: this magic number, which tells us the strength of electromagnetism, is not the same everywhere as it is here on Earth, and seems to vary continuously along a preferred axis through the universe,” Professor John Webb from the University of New South Wales said.

“The implications for our current understanding of science are profound. If the laws of physics turn out to be merely ‘local by-laws’, it might be that whilst our observable part of the universe favours the existence of life and human beings, other far more distant regions may exist where different laws preclude the formation of life, at least as we know it.”

“If our results are correct, clearly we shall need new physical theories to satisfactorily describe them.”

The researchers’ conclusions are based on new measurements taken with the Very Large Telescope (VLT) in Chile, along with their previous measurements from the world’s largest optical telescopes at the Keck Observatory in Hawaii.

Mr Julian King from the University of New South Wales explained how, after combining the two sets of measurements, the new result ‘struck’ them. “The Keck telescopes and the VLT are in different hemispheres — they look in different directions through the universe. Looking to the north with Keck we see, on average, a smaller alpha in distant galaxies, but when looking south with the VLT we see a larger alpha.”

“It varies by only a tiny amount — about one part in 100,000 — over most of the observable universe, but it’s possible that much larger variations could occur beyond our observable horizon,” Mr King said.

The discovery will force scientists to rethink their understanding of Nature’s laws. “The fine structure constant, and other fundamental constants, are absolutely central to our current theory of physics. If they really do vary, we’ll need a better, deeper theory,” Dr Michael Murphy from Swinburne University said.

“While a ‘varying constant’ would shake our understanding of the world around us, extraordinary claims require extraordinary evidence. What we’re finding is extraordinary, no doubt about that.”

“It’s one of the biggest questions of modern science — are the laws of physics the same everywhere in the universe and throughout its entire history? We’re determined to answer this burning question one way or the other.”

I expect this theory, like chaos theory, relativity, Gödel’s incompleteness theorems, and quantum mechanics, will be subject to unspeakable abuses in postmodernist humanities departments.

Science as the test of imagination against reality

I’ve had thoughts similar to Timothy Williamson’s for about two years:

On further reflection, imagining turns out to be much more reality-directed than the stereotype implies. If a child imagines the life of a slave in ancient Rome as mainly spent watching sports on TV, with occasional household chores, they are imagining it wrong. That is not what it was like to be a slave. The imagination is not just a random idea generator. The test is how close you can come to imagining the life of a slave as it really was, not how far you can deviate from reality.

A reality-directed faculty of imagination has clear survival value. By enabling you to imagine all sorts of scenarios, it alerts you to dangers and opportunities. You come across a cave. You imagine wintering there with a warm fire — opportunity. You imagine a bear waking up inside — danger. Having imagined possibilities, you can take account of them in contingency planning. If a bear is in the cave, how do you deal with it? If you winter there, what do you do for food and drink? Answering those questions involves more imagining, which must be reality-directed. Of course, you can imagine kissing the angry bear as it emerges from the cave so that it becomes your lifelong friend and brings you all the food and drink you need. Better not to rely on such fantasies. Instead, let your imaginings develop in ways more informed by your knowledge of how things really happen.

Constraining imagination by knowledge does not make it redundant. We rarely know an explicit formula that tells us what to do in a complex situation. We have to work out what to do by thinking through the possibilities in ways that are simultaneously imaginative and realistic, and not less imaginative when more realistic. Knowledge, far from limiting imagination, enables it to serve its central function.

To go further, we can borrow a distinction from the philosophy of science, between contexts of discovery and contexts of justification. In the context of discovery, we get ideas, no matter how — dreams or drugs will do. Then, in the context of justification, we assemble objective evidence to determine whether the ideas are correct. On this picture, standards of rationality apply only to the context of justification, not to the context of discovery. Those who downplay the cognitive role of the imagination restrict it to the context of discovery, excluding it from the context of justification. But they are wrong. Imagination plays a vital role in justifying ideas as well as generating them in the first place.

Your belief that you will not be visible from inside the cave if you crouch behind that rock may be justified because you can imagine how things would look from inside. To change the example, what would happen if all NATO forces left Afghanistan by 2011? What will happen if they don’t? Justifying answers to those questions requires imaginatively working through various scenarios in ways deeply informed by knowledge of Afghanistan and its neighbors. Without imagination, one couldn’t get from knowledge of the past and present to justified expectations about the complex future. We also need it to answer questions about the past. Were the Rosenbergs innocent? Why did Neanderthals become extinct? We must develop the consequences of competing hypotheses with disciplined imagination in order to compare them with the available evidence. In drawing out a scenario’s implications, we apply much of the same cognitive apparatus whether we are working online, with input from sense perception, or offline, with input from imagination.

Even imagining things contrary to our knowledge contributes to the growth of knowledge, for example in learning from our mistakes. Surprised at the bad outcomes of our actions, we may learn how to do better by imagining what would have happened if we had acted differently from how we know only too well we did act.

In science, the obvious role of imagination is in the context of discovery. Unimaginative scientists don’t produce radically new ideas. But even in science imagination plays a role in justification too. Experiment and calculation cannot do all its work. When mathematical models are used to test a conjecture, choosing an appropriate model may itself involve imagining how things would go if the conjecture were true. Mathematicians typically justify their fundamental axioms, in particular those of set theory, by informal appeals to the imagination.

Sometimes the only honest response to a question is “I don’t know.” In recognizing that, one may rely just as much on imagination, because one needs it to determine that several competing hypotheses are equally compatible with one’s evidence.

The lesson is not that all intellectual inquiry deals in fictions. That is just to fall back on the crude stereotype of the imagination, from which it needs reclaiming. A better lesson is that imagination is not only about fiction: it is integral to our painful progress in separating fiction from fact. Although fiction is a playful use of imagination, not all uses of imagination are playful. Like a cat’s play with a mouse, fiction may both emerge as a by-product of un-playful uses and hone one’s skills for them.

Critics of contemporary philosophy sometimes complain that in using thought experiments it loses touch with reality. They complain less about Galileo and Einstein’s thought experiments, and those of earlier philosophers. Plato explored the nature of morality by asking how you would behave if you possessed the ring of Gyges, which makes the wearer invisible. Today, if someone claims that science is by nature a human activity, we can refute them by imaginatively appreciating the possibility of extra-terrestrial scientists. Once imagining is recognized as a normal means of learning, contemporary philosophers’ use of such techniques can be seen as just extraordinarily systematic and persistent applications of our ordinary cognitive apparatus. Much remains to be understood about how imagination works as a means to knowledge — but if it didn’t work, we wouldn’t be around now to ask the question.

For some time I’ve suspected that the extent to which our imagination informs our navigation of reality has been greatly underappreciated, except by Hume and Santayana. Inference, induction and deduction all work by the mental modeling of a scenario we have not directly observed, but which we piece together from disparate bits of information. Moreover, all social intercourse is impossible without what psychologists call a theory of mind: an intuitive postulation of other people’s internal states. We can arrive at this only through their external speech and body language, our patchwork understanding of psychology, and the analogy of our own minds.

Most people’s theories of mind work not to touch the deepest truths of others’ psychology, but to let them engage fluidly in spontaneous conversation acceptable within their social sphere. Their theorizing works on a largely intuitive level; they imagine without realizing they are imagining. They do not realize it because it happens so quickly, probably even unconsciously, and because its results are frequently on the mark, or at least near it (by no means always near it, but more often near than far off).

The obviousness of this explanation asserts itself most strongly in those persons lacking intuitive social imaginations, among them (ahem) those with autism spectrum disorders (ASD). It has been hypothesized that people with ASD experience, in varying degrees, “mind blindness”: a dearth of intuitions about how other people might think. They must reconstruct their theory of mind intellectually, a painstaking process that may take years to reach conclusions neurotypical people have accepted since childhood, or since some even earlier point they can’t remember.

But no matter how well-constructed a theory of mind is, no matter how closely it reflects another’s internal reality, whenever we talk to anyone, we are also engaging with ourselves, with the fictional entity we construct and identify with the other person. Much of that fiction is drawn from our memories of experience with that person. But from these experiences we formulate generalities about their character, and no generality can contain all truths, while most hold many falsehoods. And any gaps in our knowledge of a person we spackle over with speculation, often unwittingly or unconsciously.

The same, of course, is true about our engagement with ourselves, and our formulation of our self-concept.

Nanotech “teabag” purifies water


Scientists have reversed the action of the humble herbal tea bag to purify water on a small scale. Instead of infusing water with flavour, a sachet sucks up toxic contamination when fitted into the neck of a water bottle. The researchers, at Stellenbosch University, South Africa, hope communities that have no water-cleaning facilities will use it to purify dirty water. The sachets are made from the same material used to produce the rooibos tea bags that are popular in South Africa. But inside are ultra-thin nanoscale fibres, which filter out contaminants, plus active carbon granules, which kill bacteria.

“What is new about this idea is the combination of inexpensive raw materials, namely activated carbon and antimicrobial nanofibres, in point-of-use water filter systems,” Marelize Botes, researcher in the university’s department of microbiology, told SciDev.Net.

A sachet can clean one litre of the most polluted water. Once used, it is thrown away and a new one is inserted into the bottle neck. Although the filter is still in development, tests on river samples around Stellenbosch have been successful, said Botes.

“The nanofibres will disintegrate in liquids after a few days and will have no environmental impact. The raw materials of the tea-bag filter are not toxic to humans,” she added. Each bag should cost around three South African cents (just under half a US cent). “Anybody can use it anywhere; it’s affordable, clean and environmentally friendly,” said Jo Burgess, manager of South Africa’s Water Research Commission.

The inventor, Eugene Cloete, dean of the faculty of science at Stellenbosch University and chair of Stellenbosch University’s Water Institute, which opened in June, said: “This is a decentralised, point-of-use technology”.

Shem Wandiga, managing trustee of the Centre for Science and Technology Innovations in Kenya, said: “A technology that supplies clean water at point of use is preferred to technologies that distribute water, due to less recontamination. However, given that the majority of people lacking clean water also have meagre incomes, most living on one dollar a day or less, the technology must be affordable or its cost covered by government. Acceptance of the technology by uneducated and untrained people will be the proof of the pudding. Sophisticated users will not find difficulty adapting to the technology.”

Clean water is still a huge challenge in Sub-Saharan Africa, where 300 million people have no access to clean water according to the World Bank.

The filter is expected to be on the market before the end of the year if approved by the South African Bureau of Standards, which is currently testing it, said Botes.

Sensory, emotional memories inextricable from one another, research suggests

Science News

For some people, the smell of an apple pie might spark a warm childhood memory. For others, a loud sound may bring back strong battlefield images. New research in rats may help explain these connections by suggesting that emotional memories like these are stored in parts of the brain linked to sight, sound and smell.

Scientists have long known that emotionally charged memories tend to be stronger than neutral ones. But researchers haven’t been able to pinpoint where in the brain those memories are stored in the long term.

In the new study, published in the Aug. 6 Science, researchers trained rats to associate sounds, smells and sights with electric shocks. After a month, the researchers damaged an auditory, visual or olfactory part of the brain, called a secondary sensory cortex, in some of the rats. The damage appeared to make the animals lose memories linked to the damaged sense; they no longer froze in their tracks at the sounds, smells, or sights they had previously learned to fear.

Each sense, including sound, smell and vision, has a primary and a secondary sensory cortex area in the brain. The primary cortex sends sensory information to the secondary cortex, which then connects to emotional and memory areas of the brain.

“I think it’s a groundbreaking paper,” says neuroscientist Alcino Silva of the University of California, Los Angeles. “It’s the first time that I know of that someone was able to connect the sensory cortices to a remote memory, and that’s quite significant.”

The rats’ amnesia happened only for emotional memories associated with fear, not for neutral memories. In addition, the brain damage affected only memories that had been formed a month before, not those from the previous day, and rats could still learn new fears associated with that sense.

“It indicates that sensory cortices that commonly are not considered key structures for emotional memories are indeed necessary for them,” says neuroscientist and study coauthor Benedetto Sacchetti of the University of Turin in Italy.

Sacchetti says he is now doing more experiments to determine whether brain-damaged rats can maintain new fear responses after one month. He also wants to investigate if the same brain areas store positive emotional memories such as joy and if similar brain areas are activated in humans one month after fear-conditioning trials.

Storage of emotional memory over large, distributed areas of the cortex may explain why it’s harder to forget painful memories, Sacchetti says, and could have implications for the therapy of fear-related disorders. A sense-based storage method may also have given a survival benefit to our ancestors, he says. It was probably important for them to remember fear-related stimuli in order to prevent or avoid them in the future.

The new results should be interpreted with caution, says neuroscientist Norman Weinberger of the University of California, Irvine. Since the study doesn’t show whether brain-damaged rats could store new memories in the long term, researchers won’t know for sure whether secondary sensory cortices are the only areas responsible for storing emotional memories until more studies are done.

“The big story of the 21st century is that primary and even secondary cortices appear to be sites that are likely to store memories,” Weinberger says. “And there’s no part of the brain which is immune from memory storage of some kind.”