Study Examines Link Between Breast Cancer and Diabetes


Chemo seems to play a role in early diagnosis of diabetes in postmenopausal patients

THURSDAY, Dec. 13 (HealthDay News) -- Postmenopausal breast cancer survivors are at increased risk for developing diabetes and should be screened for the disease more closely, a new study suggests.

Researchers analyzed data from 1996 to 2008 from the province of Ontario, Canada, to determine the incidence of diabetes among nearly 25,000 breast cancer survivors aged 55 or older and nearly 125,000 age-matched women without breast cancer.

During a median follow-up of more than five years, nearly 10 percent of all the women in the study developed diabetes. Compared with women who had not had breast cancer, breast cancer survivors had a 7 percent higher risk of diabetes two years after cancer diagnosis and a 21 percent higher risk 10 years after diagnosis, the investigators found.

The risk of diabetes, however, decreased over time among breast cancer survivors who had undergone chemotherapy. Their risk compared to women without breast cancer was 24 percent higher in the first two years after cancer diagnosis and 8 percent higher 10 years after cancer diagnosis, according to the study, which was published Dec. 12 in the journal Diabetologia.

"It is possible that chemotherapy treatment may bring out diabetes earlier in susceptible women," study author Dr. Lorraine Lipscombe, of Women's College Hospital and Women's College Research Institute in Toronto, said in a journal news release. "Increased weight gain has been noted [after receiving] chemotherapy for breast cancer, which may be a factor in the increased risk of diabetes in women receiving treatment."

"Estrogen suppression as a result of chemotherapy may also promote diabetes," Lipscombe added. "However, this may have been less of a factor in this study where most women were already postmenopausal."

The study authors suggested that there may be other factors involved for women who received chemotherapy, including glucocorticoid drugs, which are used to treat nausea in patients receiving chemo and are known to cause spikes in blood sugar. In addition, breast cancer patients undergoing chemotherapy are monitored more closely and thus are more likely to have diabetes detected, they noted.

The researchers said it is unclear why diabetes risk increased over time among breast cancer survivors who did not receive chemotherapy.

"There is, however, evidence of an association between diabetes and cancer, which may be due to risk factors common to both conditions," Lipscombe said. "One such risk factor is insulin resistance, which predisposes to both diabetes and many types of cancer -- initially insulin resistance is associated with high insulin levels and there is evidence that high circulating insulin may increase the risk of cancer."

"However, diabetes only occurs many years later when insulin levels start to decline," she said. "Therefore, it is possible that cancer risk occurs much earlier than diabetes in insulin-resistant individuals, when insulin levels are high."

Overall, the "findings support a need for closer monitoring of diabetes among breast cancer survivors," Lipscombe concluded.

Although the study found an association between diabetes and breast cancer, it did not prove a cause-and-effect relationship.

More information

The American Cancer Society outlines what happens after breast cancer treatment.

SOURCE: Diabetologia, news release, Dec. 12, 2012

Copyright © 2012 HealthDay. All rights reserved.

Many Suffer Chronic Pain After Breast Cancer Surgery, Study Finds


Researchers call for better pain management

TUESDAY, Jan. 22 (HealthDay News) -- About one-quarter of women who've had breast cancer surgery have significant and persistent breast pain six months after the procedure, a new study finds.

Women with breast pain before surgery were most likely to have long-term breast pain after the operation, according to the study recently published in the Journal of Pain.

Researchers followed 400 women after breast cancer surgery and found that about 12 percent reported severe breast pain, 13 percent had moderate pain and 43 percent had mild pain that lasted for six months. Just under 32 percent had no breast pain.

Four patient characteristics were associated with severe pain: younger age, less education, lower income and being non-white. Younger age was also associated with having moderate or mild pain, the University of California, San Francisco researchers said.

The major clinical factors associated with severe pain were: breast pain before surgery, changes in breast sensation, severity of pain after surgery, number of lymph nodes removed and undergoing additional lymph node removal.

The findings suggest that improvements in pain management after breast cancer surgery are needed to reduce the risk of persistent breast pain, the researchers said, according to a journal news release.

More information

The American Cancer Society has more about breast cancer surgery.

SOURCE: Journal of Pain, news release, Jan. 16, 2013

Copyright © 2013 HealthDay. All rights reserved.

The Real Power of Crystals: Attesting to Atoms [Video]

The exact angles of crystals reveal their underlying structure, given by repeating lattices of atoms and molecules, as explained in this video by geometer George Hart

By George Hart and Simons Science News


Purple crystals. Image: simonsfoundation.org

From Simons Science News (find original story here).

For most of recorded history, no one accepted the existence of atoms, even though Democritus, Lucretius and other ancient philosophers described them. Aristotle claimed that matter was infinitely divisible, and his view dominated for 2,000 years.

Imagine you lived 1,000 years ago. What evidence could you provide to attest to the existence of atoms? How could you combine simple observations and mathematical thinking to resolve the question, without any modern equipment?

Notes:

I want to thank the Hicksville Gregory Museum and the RISD Nature Lab for access to some of the specimens shown.

Richard Feynman’s exact statement:

If, in some cataclysm, all of scientific knowledge were to be destroyed, and only one sentence passed on to the next generation of creatures, what statement would contain the most information in the fewest words? I believe it is the atomic hypothesis that all things are made of atoms — little particles that move around in perpetual motion, attracting each other when they are a little distance apart, but repelling upon being squeezed into one another. In that one sentence, you will see, there is an enormous amount of information about the world, if just a little imagination and thinking are applied.

From “The Feynman Lectures on Physics,” 1964.

Related:

More videos from the Mathematical Impressions series.

Reprinted with permission from Simons Science News, an editorially-independent division of SimonsFoundation.org whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the computational, physical and life sciences.

How to Mend a Broken Heart

Listen up, Lonely Hearts Club. Before you get all frothy about the holiday that rubs salt in the wounds of your failed attempts at love, take a page out of Beth Croce’s book on How to Mend a Broken Heart, will ya? Because I’m sure that all you need to cauterize the wound inflicted by your last lover is a spanky cute pin-up guy/gal to dive into your thoracic cavity, grab hold of that bleeding aorta and squeeze. And oh, did I mention? Beth Croce is stationed down under, in Aussie, mate. So dig their killer accents. **swoooooon**

There’s a little somethin’ for everyone. The blonde, the brunette…

Lovely Rosie lends a helping hand while the aortic balloon catheter is deployed. Acrylic and ink wash on vintage text - Limited edition prints available - just click on the image

A large needle driver is a girl's best friend in coronary bypass graft surgery. Acrylic and ink wash on vintage text - Limited edition prints available - just click on the image

But wait, there’s more!
Joaquin Phoenix in his skivvies. To the rescue. Ahoy!

Handsome Hugh holding the line during the application of cardiac support device. Acrylic and ink wash on vintage text - Limited edition prints available - just click on the image

Brazilian babe in Speedo? Anyone?

Tom takes a moment to strike a pose before the unfurling of the stent. Acrylic and ink wash on vintage text - Limited edition prints available - just click on the image

Hipstah-honeys like ‘em lean ‘n green, in skinny jeans…

Ben keeps a fine, firm grip on the brachiocephalic graft. Acrylic and ink wash on vintage text - Limited edition prints available - just click on the image

Annnnd, the hot geek. You know you want her:

Jenny goes that one step further to make sure every inch of the aortic graft gets a polish. Acrylic and ink wash on vintage text - Limited edition prints available - just click on the image

This series is part of BrokenHeartBooks – a ten-volume series of vintage (date unknown) encyclopedias with hand-painted covers by medical illustrator/artist Beth Croce

Beth Croce website: BioPerspective
Follow her on Twitter @BioArts
BioPerspective on Facebook
BioPerspectives Blog

Meteor Shocks Russian City

60-Second Space The Russian city of Chelyabinsk was awakened on February 15th when a meteor exploded overhead with an energy of about 300 kilotons of TNT equivalent.


The Russian city of Chelyabinsk had a rude awakening early on February 15th when a meteor exploded overhead. The blast wave shattered windows and injured an estimated 1,000 people. Based on preliminary evidence from infrasound stations built to monitor nuclear tests, this looks to be an historic event.

“We know that the energy of the explosion was about 300 kilotons of TNT equivalent.”

Margaret Campbell-Brown, a professor who studies meteoroids at the University of Western Ontario.

“So it was a very, very powerful explosion. It was the biggest explosion from a meteor that we’ve seen in the atmosphere since 1908, since the 1908 Tunguska impact.”

The cause appears to be an asteroid, which Campbell-Brown estimates was 15 meters across. Objects of that size are expected to hit Earth only once every half-century or so. And impacts over cities of more than one million people such as Chelyabinsk are rarer still.

“When you consider all the areas of the Earth that are uninhabited—the oceans, the ice caps, the deserts and so on—it’s very surprising that this happened over such a populated area. Very unlucky.”

—John Matson

[The above text is a transcript of this podcast.]


Fossil Foot Shows Evolution of Upright Walking Took Many Different Steps [Video]

Ardi, Ardipithecus ramidus. Image: Wikimedia Commons


The famous "Lucy" specimen (Australopithecus afarensis) is one of the earliest known human ancestors to have had a comfortably humanlike upright stride. Her kind lived some 3.6 million to 2.9 million years ago. About a million years before her was "Ardi" (Ardipithecus ramidus), which had much more primitive feet, suggesting that although she might have been able to walk upright, she still was well adapted to life in the trees. So it came as quite a surprise last year when researchers described part of a fossil foot from 3.4 million years ago—close to Lucy’s age—that resembled the apelike foot of the much older Ardi.

I report on the latest thinking about what Ardi, the mysterious new fossil foot and other finds mean for understanding human origins in the February issue of Scientific American. In the video below Yohannes Haile-Selassie, of the Cleveland Museum of Natural History, who led the research effort to describe the new find, explains why the primitive foot was such a shock.

How politics and an earthquake led to prairie dog plague

John Tull didn’t know he was the heir to a century-old legacy. In 2002, he and his wife boarded a plane from their Santa Fe, New Mexico home and flew to New York City. Shortly after arrival, Tull and his wife both felt ill, with high fevers and odd swellings. In the post-9/11 aftermath, terrorism was feared—the couple was diagnosed with bubonic plague.

But the plague bacterium, Yersinia pestis, hadn’t come from any human with sinister motives—it had come from a rodent (likely a pack rat or a prairie dog) infected in the couple's home state, via a long line of rodents dating back to turn-of-the-century San Francisco.

It’s not conclusively known how Y. pestis first entered the City by the Bay. Plague had broken out in several ports in 1899: Hong Kong, Honolulu, Tokyo, and Sydney; and Marine Hospital Service personnel (the precursor to the National Institutes of Health, led by Dr. Joseph Kinyoun) were kept busy inspecting passengers on potentially infected ships.

Speculation has focused on a steamer called the Australia, which arrived from plague-ridden Honolulu on January 2, 1900. The passengers appeared to be healthy, leaving Kinyoun no choice but to allow the Australia to dock, but no one can be sure whether the ship’s four-legged inhabitants were harboring the deadly bacterium and subsequently spread it among their San Francisco kin. Reports surfaced of an abundance of dead and dying rats in the early months of 1900, but as the epidemiology of plague was still not well understood at that time, none were tested for the bacillus. The role of rats and their fleas in transmission of plague would not be confirmed until 1905.

Regardless of its origin, plague officially hit the city on March 6, 1900, with the death of a Chinese immigrant named Wong Chut King. King had been suffering from high fevers, and his lymph nodes were horribly swollen and tender to the touch. Delirious, he entered a coma as his organs shut down from the bacterium’s toxins, and died shortly thereafter. Before bacterial cultures were even confirmed, police descended on Chinatown, removing any white inhabitants and sealing off the borders—Chinatown was quarantined, beginning an eight-year struggle that pitted whites against Chinese, scientists against politicians, and state versus federal government.

Because rats and their fleas were the key to disease transmission, the quarantine of Chinatown was not only ineffective, but it also served to sow seeds of fear in the population that was most at risk of plague infection—the Chinese immigrants, packed into poorly constructed homes permeable to vermin, and unlikely to visit Western doctors when they were ill. Quarantine was lifted after three days had passed with no new reported plague cases, but the damage had been done. Chinatown’s inhabitants mistrusted the city’s public health board, and newspapers published stories claiming that the reported case had been a false alarm. New cases were purposely hidden from the white doctors and health boards.

The deaths, and fighting, continued for years. Travel bans were briefly imposed on Chinese and Japanese immigrants trying to leave the city, and quarantine of Chinatown was attempted once again in late May of 1900, zig-zagging through the city to avoid white neighborhoods and businesses. The second quarantine was a failure as well, and lifted via judicial order two weeks later. Many San Franciscans called for Chinatown to be burned, even while newspapers were actively denying any plague existed within city limits.

When quarantine failed, a desperate Kinyoun ordered ships and trains to deny passage to anyone without a signed health certificate, and requested that detention camps be built to hold the city’s Asian population. Business owners howled with rage at the damage they felt Kinyoun was doing to their livelihoods, and accused him of spiking corpses with his own stores of plague bacteria. Soon, even California’s governor, Henry T. Gage, would accuse Kinyoun of this malfeasance, while the state’s legislature called for Kinyoun to be hanged.

By 1901, all but two of Chinatown’s blocks had documented cases of human plague. Kinyoun, defeated, was replaced first by Joseph White and then Rupert Blue, who both encountered the same atmosphere of political denial and patient suspicion that led to Kinyoun’s ouster.

The tide finally started to turn in 1902. California’s plague denial had become a nationwide embarrassment, and other states had threatened to quarantine the Golden State. By mid-1903, 93 cases of the disease had been officially reported, and the clean-up effort finally began in earnest. Blue worked with the city’s Chinese population, and they slowly began to trust him. Carbolic acid was sprayed into buildings and chlorinated lime sprinkled in houses; rats were trapped and poisoned; ramshackle balcony additions were removed from tiny Chinatown apartments; houses were searched for potential plague cases. By mid-1905, after 121 cases of plague and 113 confirmed deaths, Rupert Blue was hailed as a hero, and the epidemic was declared over. The city could rest easy.

Of course, there is a coda. In the wee hours of April 18, 1906, an earthquake struck. Gas mains broke from the impact, and fires broke out throughout the city, damaging at least 25,000 buildings. Water mains also were destroyed, leaving the fires to burn for 4 days. In the aftermath, many residents were left homeless for extended periods of time.

By 1907, even while rebuilding was ongoing, 40,000 were still residing in shacks. The majority of the city lacked working sewers, and garbage littered the streets. The conditions were perfect for a rat explosion, and a fresh outbreak of plague. Human cases quickly reappeared: one in May of 1907, and then an outbreak of cases in August, which led to the closing of the City and County Hospital: it was overrun with rats.

The city had learned its lesson. Immediately, San Francisco’s mayor contacted President Theodore Roosevelt, requesting assistance. Rupert Blue was sent back to the city he’d cleaned up only two years before. This time, determined to truly eradicate the bacterium (and armed with the newly published knowledge demonstrating that fleas transmitted plague to humans from their rat hosts), Blue instigated a multi-pronged attack: cleaning up the city to eliminate the rats’ food supply; destroying rat burrows and nesting places, and disinfecting them with lime; adding concrete basements or screenings to places including homes, stables, warehouses, and markets; and disinfecting buildings where infected humans or rats had been found. Blue noted, “this latter measure is not considered as important as rat extermination,” driving home that rat control, rather than quarantine, should be the central measure to end plague outbreaks.

Under Blue’s watch, ten million pieces of rat bait had been set out; 350,000 rats were killed outright, and 154,000 of those were tested for plague. Not only did this clean-up end the plague epidemic at 159 cases (per Blue’s accounting), but it also led to a decrease in many other infectious diseases. The city was given a clean bill of health on Thanksgiving, 1908.

However, plague was not gone. Blue was troubled by some outlier cases that he had seen, such as a case in Oakland and another in Contra Costa County, both far removed from downtown. In August of 1908, Blue’s assistant, William Rucker, investigated the latter, and found not only dead rats but also infected squirrels in the vicinity. Blue wrote Washington that the discovery had caused “considerable apprehension,” due to the fact that squirrels were abundant all over the state and beyond. It was already known in Asia that other rodent species—such as marmots—could host Y. pestis. Perhaps here was America’s version of wildlife plague. Not content with saving San Francisco, Blue wanted to attack the next plague frontier: squirrels. He requested $1.50 per day from the federal government to rent rifles and buy ammunition to kill wild squirrels, but was denied because of sloppy paperwork.

Even if Blue had been granted his request, it’s unlikely that he could have controlled the spread of plague in wildlife. By 1919, an outbreak in Oakland caused 17 deaths. In 1924, Los Angeles was struck, resulting in 37 deaths. Most of the cases in both of these epidemics were pneumonic: the bacteria traveled to the lungs and could then be spread between people, like influenza; and both outbreaks were traced back to squirrels. W.H. Kellogg, Chief of the State Hygienic Laboratory at the time, noted “These endemic foci, constituted as they are of wild rodent infection, are, so far as anyone knows at present, permanent and everlasting.”

Today, wildlife plague has spread from San Francisco all the way to the Kansas border; draw a line from the eastern borders of Montana and Wyoming down to the southern border of Texas, and go all the way west to the Pacific: that’s plague country. Human cases number about 10 to 20 per year, most of them coming from the “Four Corners” states, where the prairie dog is the main host.

While now treatable with antibiotics, plague still can cause serious illness and even death, particularly if it’s not diagnosed quickly. The reverberations of San Francisco’s plague inaction even threaten the endangered black-footed ferret, which relies on prairie dogs as a main food source. While we can’t know for certain if plague would have become established in the United States if it weren’t for years of denial and scientist-bashing over a century ago, we certainly can see that the same mentality is alive and well today in some corners of our country. Alas, the more things change, the more they stay the same.

Sources and further reading:

Marilyn Chase, 2004. The Barbary Plague: The Black Death in Victorian San Francisco.

Myron Echenberg, 2007. Plague Ports: The Global Urban Impact of Bubonic Plague, 1894-1901.

The Plague in San Francisco. Science. 13:761-5, 1901.

Rupert Blue, 1909. Anti-Plague Measures in San Francisco, California, USA. J. Hyg 9:1-8.

Images: prairie dogs and Rupert Blue, from Wikimedia Commons.

Sharpen Your Powers of Attention

Cover Image: March 2013 Scientific American Magazine. Managing editor Sandra Upson introduces the March/April 2013 issue of Scientific American MIND

By Sandra Upson

Image: AARON GOODMAN


Jon Kabat-Zinn, a tireless advocate of mindfulness meditation, sees parallels between the mind and the Pacific Ocean. Waves of emotion may roil the surface, but 30 feet down, all is peaceful. By tuning in to every breath as it travels through your body, you can dive into that basal oasis.

Mindfulness, or being keenly aware of the present moment without judging what is happening, can lift moods, hone focus and improve health. As psychologist Amishi P. Jha writes in her cover story, “Being in the Now,” on page 26, this cognitive cure-all may work by strengthening the brain's attention mechanisms.

Living in the present, of course, is not the same as ignoring the future. Yet that is what we do when we cave in to a fast-food hamburger or bust our budgets with a shopping spree. In “Time-Warping Temptations,” journalist David H. Freedman explores why we overrate the treats of today and cheat our future selves. Turn to page 45.

It's easy to lose our cool, especially in a metropolis, with the stress of congested streets, crowded sidewalks and the loneliness that can emerge amid thousands of strangers. Urban living can harm the brain—notably by increasing the risk of developing schizophrenia. Psychiatrist Andreas Meyer-Lindenberg explains why in “Big City Blues,” on page 58. Fortunately, emerging therapies for schizophrenia are helping patients overcome the disorder's often ignored social and cognitive deficits, which make building friendships and living independently so tough. See “A Social Salve for Schizophrenia,” by psychologist Matthew M. Kurtz, on page 62.

But first, take a look at the lively design we're unveiling for Head Lines. We've packed it with fresh features. In a new column, How to Be a Better..., we share tips for upping your performance; this issue focuses on driving skills. And the first installment of Pharma Watch, which highlights trends in drug research, looks at old medications that are finding new life as brain treatments. Check out the ticker along the bottom, too, to pick up some fascinating facts. We hope you love what you find.

This article was originally published with the title Powers of Attention.

More on rudeness, civility, and the care and feeding of online conversations.

Late last month, I pondered the implications of a piece of research that was mentioned but not described in detail in a perspective piece in the January 4, 2013 issue of Science. [1] In its broad details, the research suggests that the comments that follow an online article about science — and particularly the perceived tone of the comments, whether civil or uncivil — can influence readers’ assessment of the science described in the article itself.

Today, an article by Paul Basken at The Chronicle of Higher Education shares some more details of the study:

The study, outlined on Thursday at the annual meeting of the American Association for the Advancement of Science, involved a survey of 2,338 Americans asked to read an article that discussed the risks of nanotechnology, which involves engineering materials at the atomic scale.

Of participants who had already expressed wariness toward the technology, those who read the sample article—with politely written comments at the bottom—came out almost evenly split. Nearly 43 percent said they saw low risks in the technology, and 46 percent said they considered the risks high.

But with the same article and comments that expressed the same reactions in a rude manner, the split among readers widened, with 32 percent seeing a low risk and 52 percent a high risk.

“The only thing that made a difference was the tone of the comments that followed the story,” said a co-author of the study, Dominique Brossard, a professor of life-science communication at the University of Wisconsin at Madison. The study found “a polarization effect of those rude comments,” Ms. Brossard said.

The study, conducted by researchers at Wisconsin and George Mason University, will be published in a coming issue of the Journal of Computer-Mediated Communication. It was presented at the AAAS conference during a daylong examination of how scientists communicate their work, especially online.

If you click through to read the article, you’ll notice that I was asked for comment on the findings. As you may guess, I had more to say on the paper (which is still under embargo) and its implications than ended up in the article, so I’m sharing my extended thoughts here.

First, I think these results are useful in reassuring bloggers who have been moderating comments that what they are doing is not just permissible (moderating comments is not “censorship,” since bloggers don’t have the power of the state, and folks can find all sorts of places on the Internet to state their views if any given blog denies them a soapbox) but also reasonable. Blogging with comments enabled assumes more than transmission of information; it assumes a conversation, and what kind of conversation it ends up being depends on what kind of behavior is encouraged or forbidden, and who feels welcome or alienated.

But, there are some interesting issues that the study doesn’t seem to address, issues that I think can matter quite a lot to bloggers.

In the study, readers (lurkers) were reacting to factual information in an online posting plus the discourse about that article in the comments. As the study is constructed, it looks like that discourse is being shaped by commenters, but not by the author of the article. It seems likely to me (and worth further empirical study!) that comment sections in which the author is engaging with commenters — not just responding to the questions they ask and the views they express, but also responding to the ways that they are interacting with other commenters and to their “tone” — have a different impact on readers than comment sections where the author of the piece that is being discussed is totally absent from the scene. To put it more succinctly, comment sections where the author is present and engaged, or absent and disengaged, communicate information to lurkers, too.

Here’s another issue I don’t think the study really addresses: While blogs usually aim to communicate with lurkers as well as readers who post comments (and every piece of evidence I’ve been shown suggests that commenters tend to be a small proportion of readers), most are aiming to reach a core audience that is narrower than “everyone in the world with an internet connection”.

Sometimes what this means is that bloggers are speaking to an audience that finds comment sections that look unruly and contentious to be welcoming, rather than alienating. This isn’t just the case for bloggers seeking an audience that likes to debate or to play rough.

Some blogs have communities that are intentionally uncivil towards casual expressions of sexism, racism, homophobia, etc. Pharyngula is a blog that has taken this approach, and just yesterday Chris Clarke posted a statement on “civility” there that leads with a commitment “not to fetishize civility over justice.” Setting the rules of engagement between bloggers and posters this way means that people in groups especially affected by sexism, racism, homophobia, etc., have a haven in the blogosphere where they don’t have to waste time politely defending the notion that they are fully human, too (or swallowing their anger and frustration at having their humanity treated as a topic of debate). Yes, some people find the environment there alienating — but the people who are alienated by unquestioned biases in most other quarters of the internet (and the physical world, for that matter) are the ones being consciously welcomed into the conversation at Pharyngula, and those who don’t like the environment can find another conversation. It’s a big blogosphere. That not every potential reader feels perfectly comfortable at a blog, in other words, is not proof that the blogger is doing it wrong.

So, where do we find ourselves?

We’re in a situation where lots of people are using online venues like blogs to communicate information and viewpoints in the context of a conversation (where readers can actively engage as commenters). We have a piece of research indicating that the tenor of the commenting (as perceived by lurkers, readers who are not commenting) can communicate as much to readers as the content of the post that is the subject of the comments. And we have lots of questions still unanswered about what kinds of engagement will have what kinds of effect on what kinds of readers (and how reliably). What does this mean for those of us who blog?

I think what it means is that we have to be really reflective about what we’re trying to communicate, who we’re trying to communicate it to, and how our level of visible engagement (or disengagement) in the conversation might make a difference. We have to acknowledge that the information we have about what’s coming across to the lurkers is gappy at best, and be attentive to ways to get more feedback about how successfully we’re communicating what we’re trying to communicate. We have to recognize that, given all we don’t know, we may want to shift our strategies for blogging and engaging commenters, especially if we come upon evidence that they’re not working the way we thought they were.

* * * * *
In the interests of spelling out the parameters of the conversation I’d like to have here, let me note that whether or not you like the way Pharyngula sets a tone for conversations is off topic here. You are, however, welcome to share in the comments here what you find makes you feel more or less welcome to engage with online postings, whether as a commenter or a lurker.
_____

[1] Dominique Brossard and Dietram A. Scheufele, “Science, New Media, and the Public.” Science 4 January 2013:Vol. 339, pp. 40-41.
DOI: 10.1126/science.1160364

Aquarium Fights to Get Disabled Loggerhead Turtle Swimming Again




By Ruairidh Villar

KOBE, Japan (Reuters) - Life looked grim for Yu, a loggerhead turtle, when she washed up in a Japanese fishing net five years ago, her front flippers shredded after a brutal encounter with a shark.

Now keepers at an aquarium in the western Japanese city of Kobe are looking for a high-tech solution that will let the 25-year-old turtle swim normally again, after years of labor and 27 models of prosthetic fins have so far failed to achieve that goal.

Yu, weighing 103 kg (227 pounds) and 82 cm (32 inches) long, first came to the attention of keepers at the Suma Aqualife Park in Kobe after she was rushed there from a port on the southern island of Shikoku in 2008.

"She was in a really bad way. More than half her fins were gone and she was bleeding, her body covered with shark bites," said Naoki Kamezaki, the park's director general.

After nursing the loggerhead - an endangered species - back to health, keepers enlisted the help of researchers and a local prosthetics-maker to get her swimming again.

Early versions of prosthetic flippers caused her pain or fell off quickly, and with money short, Kamezaki said he sometimes felt like packing it in.

"There have been times I wanted to give up and just fix her up the best we can and throw her back in," he told Reuters. "Then if luck's on her side she'll be fine, if not, she'll get eaten and that's just life. The way of nature, I suppose."

The latest version - made of rubber and fixed together with a material used in diving wetsuits - was unveiled on February 11 and proclaimed a success, with Yu swimming smoothly around her tank.

But on Friday, one flipper slipped out as soon as she hit the water, forcing keepers back to the laboratory again.

Though Kamezaki admits that it's unlikely Yu will ever live a normal turtle life, he still has hopes.

"My dream for her is that one day she can use her prosthetic fins to swim to the surface, walk about, and dig a proper hole to lay her eggs in," Kamezaki said.

"When her children hatch, well, I just feel that would make all the trauma in her life worthwhile."

(Reporting by Ruairidh Villar, writing by Elaine Lies, editing by Paul Casciato)


NASA's 'Mohawk Guy' Explains the Thrill of Exploring Mars

Spotted at the State of the Union address, Bobak Ferdowsi, the Mars Curiosity flight engineer famous for his hairstyle, describes his role as an ambassador for Mars

By Philip Yam


Image: NASA JPL

WASHINGTON, D.C. -- Not too many NASA engineers get to sit with the First Lady at the State of the Union address. But having an unusual haircut certainly doesn't hurt in getting you noticed, especially if you are the flight director for the Mars Curiosity mission. Bobak Ferdowsi, better known as Mohawk Guy, caught many people's attention, including that of Michelle Obama, when television cameras spotted the 33-year-old in the control room as Curiosity made its spectacular landing on August 6, 2012.

His distinctive look and infectious enthusiasm have led him to reach out to the public to spread the word on the excitement of Martian exploration. At a briefing organized by the White House Office of Digital Strategy on February 13, he revealed how he got into Mars research and the reason for his hair.

[An edited transcript follows.]

What inspired you to become involved in the exploration of other planets?

As a child, it was the kind of thing I dreamed of doing. I saw the 1997 Pathfinder mission. It was the first time I had really seen live pictures of Mars. There was something amazing about seeing the human effort involved, to have something sitting there on another planet, that made me want to do it.

How did you get on the Mars Curiosity team?

In school, I wasn't sure what I wanted to do.  I went down the path of physics and aerospace engineering. At the Jet Propulsion Lab (JPL), I was lucky that they put me on the Mars Curiosity project at the very start. But I still didn’t know what I would do on it.

I told my boss, hey, I really want to work on this stuff, but I don’t even know what I’m good at yet. So I took an apprenticeship approach. Over the course of a few years, I did mission planning, some requirements development, testing, and operations. Along the way, without realizing it, I learned so much and learned a lot about myself. I learned I loved testing the rover. Trying to get one of these things to break is one of the best jobs I’ve ever had.

To name the rover, NASA conducted a contest. How do you think it turned out?

I thought the name, Curiosity, was a little cheesy at first. And now I absolutely love it. Curiosity is actually the perfect name. Here we are, and we’re using our own curiosity to explore the planet.

You have a full-time job operating the rover as flight director. How did you handle all the educational outreach?

It’s just a matter of a little time management. I love the outreach. I feel really fortunate for the opportunities to do more of it, like working with the Office of Science Technology Policy. When I give a tour of JPL, it's super exciting. It gives you energy to bring someone else into the picture and show them what you’re working on. And you realize, yeah, this is amazing; it’s not just a job. It helps motivate me and gets me pumped.

Concerning the Curiosity mission, what are you most looking forward to?

Until last week, it was the drilling into the Martian surface. The thing I'm really excited for now is that we’ve laid out the path we’re going to drive on and the places where we are going to drill. We're seeing at least three or four different types of terrain there. I’m excited to analyze each of those terrains and get the story of Mars pieced together, because each of those terrains represents a different era and a different Martian environment. And we can get down to answering the question of whether Mars was habitable.

What are the odds of life on Mars?

I don't believe there's life on Mars today. I'm optimistic that maybe in the past there were some sort of single-celled organisms.

What's the deal with your hair?

The hair became an ongoing tradition for me about five, six years ago, when we started doing these things called system tests. I was doing the software testing of the hardware.

Testing is kind of stressful. So with the system test coming up, I thought I'd do something fun. I decided I was young enough to have a Mohawk once in my life. And I also put an ST on my head for system test.

For launch, I went a little crazy. I dyed my hair so that the hawk went from gold to red, like a rocket flame. For landing, my boss sent an email poll to the team asking what my hair should look like. Some of the options were pretty bad. One suggestion was a reverse Mohawk. Ultimately, the team came up with red, white and blue.

Any plans to change your Mohawk hairstyle?

I think I was 26 when I first started it. I like to change things up, as you can tell from the colors in my hair that are changing. I’m sure there’ll be a point when it’s gone. No one wants to see an old grey-haired Mohawk guy.

Science and the Public Parlay: Come a Little Bit Closer

BOSTON—Rarer than hen’s teeth is a bill in Congress that has bipartisan support. But such legislation exists, and if passed would open up a semi-secret world. The law—the Fair Access to Science and Technology Research (FASTR) Act—would ensure that research articles based on taxpayer-supported projects are freely available online for the public to read. FASTR was among the hot topics at a session here devoted to digital tools for communicating science, on Friday at the annual meeting of the American Association for the Advancement of Science.

Agencies such as the National Institutes of Health (NIH), National Science Foundation and Department of Energy fund more than $60 billion in research annually, resulting in about 90,000 papers—most of which are not accessible to the public other than through big libraries, and those libraries lately are burdened by hefty subscription fees and public funding cuts. (NIH already has a public access policy, which requires that final journal manuscripts arising from research it funds be posted to the digital archive PubMed Central within 12 months.) Under FASTR, agencies with large research budgets would have to make results of funded projects publicly available within six months of being published in a peer-reviewed journal.

Also discussed at the session was a new, more public-oriented way to measure the impact of a published journal article. Currently, impact is gauged by the number of citations the paper receives in other publications. But information-science graduate student Jason Priem of the University of North Carolina, Carrboro (who emphatically supported FASTR), demonstrated ImpactStory, a free, open-source web application he co-created in 2011 to more fully capture the scholarly impact and reach of a scientist’s work. ImpactStory and other “altmetric” efforts aggregate conventional citations with mentions of that work on Web sites such as Delicious, Facebook, Twitter, Slideshare, ScienceSeeker, Faculty of 1000, Mendeley, CiteULike and ORCID, as well as view counts for downloaded pdfs of papers. Rankings are computed and collated into an individual report that can be used in tenure and promotion considerations on campuses. Some audience members expressed wariness about putting energy and time into social media when academia has yet to fully value it. For the public’s part though, thanks to online media, a growing number of regular folks want in.

Another presenter talked about a more entertaining way to bring the public closer to the process of science—by enlisting largely non-academics to play a game that can help scientists solve the convoluted structures of proteins. Seth Cooper, of the Center for Game Science at the University of Washington, noted that more than 300,000 people have participated in the game his group developed, called Foldit. The free online tool allows individuals and teams of users to compete for points to find the most compact, low-energy way to fold the irregular 3-D twists and turns of bonded molecular strands that form proteins. Foldit-affiliated scientists scored a big success in 2011 when a team of its gamers required just 10 days to solve the long-elusive structure of an enzyme from an AIDS-like virus found in rhesus monkeys.

“The game is constantly updated to become a better and better tool to solve biochemical problems that are of interest to scientists,” Cooper said. A large portion of users report that they primarily like to participate because it gives them a sense of meaning and purpose. Foldit scientists, for their part, post to a blog or hold live chats with users to help explain their research problems in greater detail, offering players a deeper sense of involvement in the science.

Such games are part of a trend called public participation in science, or citizen science, which offers lay people easy and engaging ways to help scientists with data collection and analysis, often via digital tools such as smart phones and Web sites. These efforts dovetail with the larger open-access science movement, which involves such efforts as crowd-funded projects (which solicit small donations rather than relying only on government or wealthy donor funding—see SciFund and PetriDish), open lab notebooks posted online and calls for journal publishers to make research results freely available to all readers online, not just subscribers.

Karyn Traphagen, executive director of ScienceOnline, an organization that is becoming the largest virtual community of science writers worldwide, encouraged the standing-room-only gathering of about 150 session attendees—a mix of scientists, journalists, educators and others involved in outreach—to be flexible, adventurous and nimble in their experiments to use social media and other digital tools to bring science and the public closer together.

“You need to be willing to change,” she said. “You need to be willing to go with the responses that you are getting. You’re not always going to have something that works the first time.” Traphagen’s family is deeply involved in opening up science to the public: she said that her 8-year-old granddaughter plays Foldit.

Image credit: National Science Foundation/Jupiter Images

Fracking in New York? Not for Another Year, If Ever




By Edward McAllister

NEW YORK (Reuters) - The fracking debate in New York state is hitting new heights as regulators delay a final decision on the controversial natural gas production method, but it looks increasingly clear that it will be a year - if ever - before drilling begins again.

Governor Andrew Cuomo missed a Wednesday deadline for completing a report on the environmental impact of hydraulic fracturing, better known as fracking, that was to form the basis for new drilling rules.

As a result, a now four-year moratorium on shale gas drilling in the Empire State could extend into 2014, forcing companies such as Chesapeake Energy and a host of smaller independents to sit on their idle land leases and wait.

Over the last decade, U.S. energy companies have advanced hydraulic fracturing techniques, unlocking vast quantities of natural gas and oil trapped in shale rock. But drilling in New York's portion of the Marcellus shale deposit, one of the biggest in the country, has been halted since 2008 amid concerns that fracking, which involves pumping chemical-laced water and sand deep below the surface, can contaminate water supplies.

Fracking has become a hugely divisive issue in New York where communities are weighing the economic benefits of allowing energy development against the environmental concerns.

However, even if the drilling is allowed to proceed in the coming months, legal battles could hold up well permits, potentially delaying energy production for another year, according to lawyers representing both sides.

"I don't think we'll see a drill bit in the ground until early 2014," said Tom West, an attorney at the West Firm, which represents oil and gas companies in the state. "The outcome remains uncertain, as it has done for the last four and a half years, and we are very disappointed," he said of Wednesday's missed deadline.

The delay has pitted an increasingly vocal environmental lobby, many of whom want no more wells drilled in New York, against energy companies invested in the state and frustrated by the pause. Both sides point to neighboring Pennsylvania, also home to the Marcellus, which has experienced a drilling boom and attracted huge investment over the past five years, but which has also seen a number of drilling-related accidents.

New York's environmental impact statement was held back this week after the Department of Health requested more time to complete a parallel health impact study that the state wants completed before any decision on drilling is taken.

The decision to delay, announced by the Department of Environmental Conservation on Tuesday, prompted cheers from the celebrity-studded environmental lobby, including New York state resident and anti-fracking activist Yoko Ono, who said "We love you, Governor," in a public email.

But behind the scenes, both sides are bracing for legal entanglements.

The Joint Landowners Coalition, a pro-fracking group, is planning to sue the New York Department of Environmental Conservation after Wednesday's deadline was missed, on the grounds that delaying drilling was a "de facto taking of property rights," according to the group's attorney Scott Kurkoski.

And if regulations are written up and drilling is allowed to go ahead, experts expect anti-fracking groups to also jump in.

"The courts will get a lot of lawsuits and people are going to want to intervene on each side," said Kate Sinding, an attorney with the Natural Resources Defense Council in New York.

In the meantime, stays on drilling permits are likely, some attorneys said. And even if the state passes the drilling permits, more than 150 New York cities and towns have their own fracking bans, a fact which is already prompting lawsuits.

"This could be tied up in the courts for well over a year. I think it is likely that it will be 2014 before we see any permits," Sinding said.

(Reporting By Edward McAllister; Editing by Martin Golan)


Valentine's Day on the planet of the Little Prince

The wedding book; image courtesy of Seth Fishman.

When my wonderful agent, Seth Fishman, got married this summer, he decided on one of the most original and thoughtful presents for his bride-to-be, Marget, that I had ever seen: a bound book of reflections on love from his friends and clients. He asked everyone to contribute whatever they would. A drawing, a word, a story. I took my inspiration from what I consider one of the greatest love stories of all time, that of the Little Prince and his rose.

To celebrate Valentine’s Day–and Seth and Marget’s first Valentine’s Day as a married couple!–I’ve decided to post the piece I wrote for the two of them (with permission, of course). Without further ado…

The Prince and The Rose

When I think of the greatest love story of all time, what comes to mind at once isn’t Romeo and Juliet or Antony and Cleopatra or Tristan and Isolde or any one of those famed couples whose fates have been told and retold many times over. I think instead of a much less conventional pair: the Little Prince and his rose. To me, Le Petit Prince will always be, above anything else, a love story. What love means. How it grows. What it takes to water and protect it so that it can keep blooming even with the fiercest of tigers, the strongest of wind currents, and the hungriest of lambs menacing it at every turn.

Before the rose arrives, the flowers on the Little Prince’s planet are all simple. They don’t stay long. They don’t bother anyone. One morning they are there, and one evening, gone. They don’t merit much more than a passing glance.

But from the second an unknown seed makes its way to his little world, everything changes. He somehow senses that what will emerge will be nothing short of miraculous, “une apparition miraculeuse.” And so it is. It’s difficult to do justice to that first meeting, those coy flirtations, that deepening of feeling that passes at some ineffable point from coquetry to love. Was it the wind current? The glass globe to protect the rose’s delicate petals from the sun? A gentle cough in the evening air?

The Little Prince leaves the rose to explore the world outside his planet. His feelings are too overwhelming, his melancholy, too great. He doesn’t know how to handle the new range of emotions that the rose has opened up. “I was too young to know how to love her…” he reflects.

But wherever he goes, the rose travels with him. All you need to be happy, he says, is to know that your rose is out there, somewhere. Then, when you look at the sky, you can’t help but smile.

The Little Prince, watching the sunset. Image credit: Creative Commons, Don Merwin Flickr photostream.

When the Little Prince realizes that there are other roses in the world, he is devastated. How could his flower not be unique? But the wise fox teaches him that no other rose is like his. And so, the Little Prince goes back to the rose garden and tells the other roses: “My rose alone is more important than all of you, because it is she that I watered. Because it is she that I put under the glass globe. Because it is she that I have sheltered behind the screen. Because it is for her that I killed all of the caterpillars (except the two or three for the butterflies). Because it is she that I have listened to, when she grumbled, or boasted, or even sometimes when she said nothing. Because she is my rose.”

And of course, there are those final words of the fox to the prince, as the young man makes up his mind to return to his planet, to reunite once more with the rose that has become his life. “One only sees well with the heart. The essential is invisible to the eye. It’s the time that you’ve spent on your rose that makes your rose so important. Men have forgotten this truth. But you must not forget it. You become responsible, forever, for what you have tamed. You are responsible for your rose…”

It’s a love story for the ages.

Maxwell's Demon Meets Quantum Dots

Entropy. It’s at the root of one of the most famous thought experiments in the history of physics (second only to the infamous Schroedinger’s cat): Maxwell’s Demon, devised by the Scottish physicist James Clerk Maxwell. And now scientists at the Technical University of Berlin have devised a nanoscale version of that thought experiment using quantum dots.

Entropy is at the heart of the second law of thermodynamics. Not only can you not get more energy out of a closed system than you put into it, but you’re always going to lose a little bit of energy in the conversion process. One of the neat things about thermodynamics is that if you can create a large enough differential between potential and kinetic energy — for example, a big difference in temperature between two compartments — you’ve got yourself a handy energy source.

Refrigerators work on this simple concept, known as the Carnot cycle. Gas (usually ammonia) is pressurized in a chamber, said pressure causes that gas to heat up, this heat is then dissipated by coils on the back of the appliance, and the gas condenses into a liquid. It’s still highly pressurized, sufficiently so that the liquid flows through a hole to a second low-pressure chamber. That abrupt change in pressure makes the liquid ammonia boil and vaporize into a gas again, also dropping its temperature — thereby keeping your perishable foodstuffs nicely chilled. The cold gas gets sucked back into the first chamber, and the entire cycle repeats ad infinitum — or at least as long as the appliance is plugged in.

That’s the catch. The refrigerator is not a truly “closed system”: it gets a constant influx of energy from the wall outlet that enables it to operate continuously. Left on its own, without that crucial influx, the interior would cease to be nicely chilled, and all the food therein would perish. Because entropy always increases in the end.

The second law of thermodynamics is frankly pretty unyielding. But while it can’t be broken, perhaps it can be bent by a cunning infusion of energy that escapes detection by all but the most perceptive eye. James Clerk Maxwell proposed the most famous evasion of thermodynamics back in 1871.

Maxwell is best known for formulating his famed equations for electromagnetism that are still in use today. But he was equally fascinated by thermodynamics, notably the fact that heat cannot flow from a colder to a hotter body. And one day Maxwell had an idea: what if hot gas molecules merely had a high probability of moving toward regions of lower temperature?

Illustration of Maxwell's Demon. Credit: Jason Torchinsky.

He envisioned an imaginary, tiny creature (Maxwell’s Demon) who could wring order out of disorder to produce energy by making heat flow from a cold compartment to a hot one, creating that all-important temperature difference. The imp guards a hypothetical pinhole in a wall separating two compartments of a container filled with gas — similar to the two chambers in a refrigerator — and can open and close a shutter that covers the hole whenever it wishes.

The gas molecules in both compartments will be pretty disordered, with roughly the same average speed and temperature (at least at the outset), so there’s very little energy available for what physicists call “work,” defined as force applied over a given distance (W = Fd). It means that carrying a heavy load over a short distance can cost the same energy as carrying a feather over a very long distance, so long as the two force-times-distance products are equal.
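Since W = Fd is just a product, the equal-work claim is a one-line check. Here is a toy calculation (the specific forces and distances are made-up illustrative numbers, not from the original post):

```python
# Work is force applied over a distance: W = F * d, measured in joules.
def work(force_newtons: float, distance_meters: float) -> float:
    return force_newtons * distance_meters

heavy_short = work(1000.0, 1.0)  # a 1000 N load carried 1 meter
light_long = work(10.0, 100.0)   # a 10 N "feather" carried 100 meters

# The force-times-distance products match, so the work is identical: 1000 J each.
assert heavy_short == light_long == 1000.0
```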

In Maxwell’s thought experiment, the atoms start out in a state of thermodynamic equilibrium. But they’re still jiggling around, as atoms are wont to do, so over time, there are small fluctuations as some atoms start moving more slowly or more quickly than others. Of course, balance will soon be restored, since the excess heat will be transferred from hotter to colder molecules until they are all once again in equilibrium.

Ah, but then Maxwell’s little demon interferes. Whenever it spots a molecule in the right compartment moving a bit faster and heading toward the pinhole, it opens the shutter just for a moment so the molecule can pass through to the left side. It does the same for slower molecules on the left side, letting them pass to the right compartment.

So the molecules in the left compartment get progressively hotter, while those on the right side get colder. The creature creates a temperature difference, and once you have that, well, it’s a trivial matter to harness that difference for work. Entropy has been outwitted — or so it would seem.
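The demon's sorting procedure is easy to mimic in a toy simulation. This is a sketch of the sorting step only, with made-up molecule counts and a simple energy distribution; it says nothing about the energy the demon itself must spend:

```python
import random

random.seed(0)

# Two compartments in equilibrium: each "molecule" is a kinetic energy
# drawn from the same distribution, so both sides start out alike.
left = [random.gauss(0.0, 1.0) ** 2 for _ in range(5000)]
right = [random.gauss(0.0, 1.0) ** 2 for _ in range(5000)]

threshold = 1.0  # the demon's dividing line between "fast" and "slow"

# The demon opens the shutter to let fast molecules pass right-to-left
# and slow molecules pass left-to-right.
new_left = [m for m in left if m > threshold] + [m for m in right if m > threshold]
new_right = [m for m in right if m <= threshold] + [m for m in left if m <= threshold]

def mean(xs):
    return sum(xs) / len(xs)

# The left compartment is now hotter than the right: a temperature
# difference conjured out of equilibrium, at least on paper.
assert mean(new_left) > mean(new_right)
```

Of course, the "demon" here is a filter with free access to every molecule's speed, which is exactly the information that, as the post goes on to explain, costs more energy to gather than the sorting yields.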

In reality, Maxwell’s thought experiment was a trick question. It’s statistically impossible to sort and separate billions of individual molecules by speed or temperature; Nature just doesn’t do this. You can’t throw a glass of water into the sea and expect to get back the exact same glass of water, right down to the last single molecule.

Okay, hypothetically you might be able to do this, provided you knew the exact speeds and positions of each and every molecule (at the quantum level, this is an impossibility thanks to the Uncertainty Principle). But you’d have to expend a huge amount of energy to collect that detailed information, far more than the energy you’d get out of the system once you’d succeeded in creating the crucial temperature difference.

Just like the refrigerator, Maxwell’s mischievous little imp also requires energy to operate. There is no such thing as a perfect heat engine; you’ll always lose some heat in the process. The second law of thermodynamics is the bane of every researcher striving to develop alternative energy sources, which have to be cost-competitive as well as energy-efficient.

That hasn’t kept physicists from playing around with the concept of Maxwell’s Demon experimentally in the ensuing 130+ years. Back in 2007, there was a nifty manmade molecular machine created by another Scotsman, David Leigh, and his colleagues at the University of Edinburgh. Most biological processes involve driving chemical systems away from thermal equilibrium, so Leigh devised a chemical “information ratchet” that performs much the same role as Maxwell’s hypothetical demon: creating a temperature difference out of thermal equilibrium, thereby seemingly “reversing” entropy.

In 2008, Daniel Steck of the University of Oregon in Eugene built a laser barrier set-up in which the beam lets atoms pass through only in one direction, such that they all eventually end up on a single side, chilled to extremely low temperatures. Steck created a “box” out of laser light/electromagnetic fields, and then added two parallel lasers that together serve as the “trapdoor.” The beam on the right is the barrier, and the one on the left is the “demon,” responsible for the “sorting.”

And in 2010, physicists at the University of Tokyo built a nanoscale experiment in which a bead was coaxed up a spiral staircase without any energy being directly transferred to the bead to accomplish the feat. Per Nature: “Instead, it is persuaded along its route by a series of judiciously timed decisions to change the height of the ‘steps’ around it, based on information about the bead’s position.” So in some sense, information is being converted to energy. I’ll let Sean Carroll (a.k.a. the Time Lord) explain:

The idea is called Szilárd’s Engine. … [I]t’s a box of gas with just one particle, moving in one dimension. (In the real experiment, they used knowledge of a particle’s position to make it hop up a staircase.) The equivalent of “maximum entropy” here is “we have no idea where the particle is.” There is energy in the box, equal to kT/2, but we can’t get it out.

But now imagine that someone gives us one bit of information: they tell us which side of the box the particle is on. Now we can get some energy out! All we have to do is wait until the particle is on the left-hand side of the box, and quickly slip in our piston coming out the right. The particle will now bump into the piston, pushing it to the right, allowing us to do useful work, like lifting a very tiny bucket or something. In the process some of the particle’s energy is transferred to the piston, so we’ve extracted some energy from the box. Note that we could not have done this if we hadn’t been given the information — without knowing where the particle is, our piston would have lost energy on average just as often as it gained energy, so we couldn’t have done any useful work.
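The payoff of that one bit can be worked out directly: letting the single-particle gas push the piston isothermally from half the box to the full box extracts at most W = kT ln 2. A quick back-of-the-envelope sketch, using standard physical constants (the numerical example is mine, not from the post):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, in joules per kelvin
T = 300.0           # roughly room temperature, in kelvin

# Isothermal expansion of a one-particle gas from V/2 to V:
# W = integral of (k_B * T / V) dV from V/2 to V = k_B * T * ln(2).
work_per_bit = k_B * T * math.log(2)

print(f"Maximum work from one bit at {T:.0f} K: {work_per_bit:.2e} J")
# About 3e-21 J -- a few zeptojoules, which is why experiments like the
# Tokyo staircase need such exquisitely delicate feedback control.
```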

Recreating Maxwell's Demon with quantum dots. Credit: Philip Strasberg et al. 2013 American Physical Society.

Which brings us to the latest work by the German physicists. Quantum dots are tiny bits of semiconductors just a few nanometers in diameter. It’s like taking a wafer of silicon and cutting it in half over and over again until you have just one tiny piece with about a hundred to a thousand atoms. That’s a quantum dot. Billions of them could fit on the head of a pin.
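The repeated-halving picture is easy to quantify. A quick sketch, assuming a 300-millimeter wafer and a roughly 5-nanometer dot (both sizes are illustrative, not from the text):

```python
import math

# Assumed length scales: a 300 mm silicon wafer and a ~5 nm quantum dot
wafer = 0.3   # meters
dot = 5e-9    # meters

# Number of successive halvings needed to get from wafer scale to dot scale
halvings = math.log2(wafer / dot)
print(round(halvings))  # roughly 26
```

So “cutting it in half over and over” means only a couple dozen halvings separate a full wafer from a quantum dot.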

The physicists proposed that one could physically build the experimental equivalent of Maxwell’s Demon on the nanoscale with a pair of interacting quantum dots. One dot is the demon, and is connected to a pair of thermal reservoirs, making it, in effect, a single-electron transistor.

The other dot represents the controlled system and is coupled to another reservoir. One of the most useful properties of quantum dots is that they can be tuned to specific wavelengths. So it should be possible to tune the second dot in such a way that it can tell if the first dot is in either a 0 or 1 state — you just need to ensure both dots are interconnected.

If you think of the two dots as glasses, if they were perfectly correlated, when the first glass was empty, the second would be full, and vice versa. The end result of the process is roughly the equivalent of gaining extra information from the production of entropy.

While there is an increase in the total entropy — per the second law of thermodynamics — that increase doesn’t occur in the demon-y quantum dot by itself. As one of the collaborators, Massimiliano Esposito (University of Luxembourg) told Phys.org: “It does, of course, respect thermodynamics…. However, if the part of the system implementing the demon is disregarded, everything looks as if the remaining part of the system was subjected to a Maxwell demon breaking the second law while keeping the first one intact.”

Granted, they haven’t actually built such an experiment, but the researchers are optimistic that it should be possible. So Maxwell’s Demon need not be all that smart, or even sentient — just very well designed.

References:

Jarzynski, C. (1997) “Nonequilibrium Equality for Free Energy Differences,” Physical Review Letters 78: 2690-2693.

Maxwell, J. C. (1871). Theory of Heat, reprinted. New York: Dover, 2001.

Serreli, V., et al. (2007) “A Molecular Information Ratchet,” Nature 445: 523-527.

Strasberg, Philip et al. (2013) “Thermodynamics of a Physical Model Implementing a Maxwell Demon,” Physical Review Letters 110: 040601.

Thorn, Jeremy J. et al. (2008) “Experimental Realization of an Optical One-Way Barrier for Neutral Atoms,” Physical Review Letters 100, 240407.

Toyabe, Shoichi et al. (2010) “Experimental Demonstration of Information-to-Energy Conversion and Validation of the Generalized Jarzynski Equality,” Nature Physics 6: 988-992.

[Partially adapted from a June 2008 post at the old archived Cocktail Party Physics blog.]

What Do We Know about the Russian Meteor?

Meteor researcher Margaret Campbell-Brown recaps the latest research into the cause of this morning’s fireball over Chelyabinsk

By John Matson


FIRE IN THE SKY: A fireball brighter than the sun lit up the morning skies over Russia and left a long trail in the sky. Image: Courtesy Alex Alishevskikh/Cyberborean Chronicles via Creative Commons license

A surprise meteor strike over central Russia this morning lit up the skies, blew out windows on the ground and injured roughly 1,000 people in and around Chelyabinsk, a city of 1.1 million. The inbound object, thought to be a small asteroid, had not been discovered prior to impact. But already teams on the ground are reportedly collecting possible fragments of the meteorite, and researchers around the globe are scrambling to figure out what happened. Scientific American contacted Margaret Campbell-Brown, a professor in the Meteor Physics Group at the University of Western Ontario, to get the latest.

[An edited transcript of the interview follows.]

What do we know, as of now, about what caused the fireball over Russia this morning?
We’ve actually seen it from at least two infrasound stations. Infrasound is very low frequency sound waves, which are produced in, for example, loud explosions. There is a global network of infrasound sensors whose purpose is to detect nuclear explosions in the atmosphere. It’s part of the Comprehensive [Nuclear] Test Ban Treaty. Two of the nearest stations in this network, which were both in Russia, did detect this very large event.

So, from that, we know that the energy of the explosion was about 300 kilotons of TNT equivalent. So it was a very, very powerful explosion. It was the biggest explosion from a meteor that we’ve seen in the atmosphere since the Tunguska impact of 1908.

We know that the meteor lasted about 30 seconds. It came into the atmosphere at a very shallow angle, which is why it lasted so long. The object was moving at about 18 kilometers per second, which is about 65,000 kilometers per hour, which is typical of an asteroidal speed.

From the energy of the impact, we think that it was about 15 meters in size, so it would be the largest object to hit the Earth since the Tunguska impact, as far as we know—we haven’t recorded an object larger than that. It had a mass of probably about 7,000 metric tons, so it was a very large object.
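Those figures hang together: the kinetic energy of a roughly 7,000-metric-ton object moving at 18 kilometers per second works out to about the quoted yield. A back-of-the-envelope sketch, using only the numbers given above:

```python
# Sanity check of the interview's figures: ~7,000 metric tons at ~18 km/s
mass = 7.0e6        # kg
speed = 1.8e4       # m/s
kt_tnt = 4.184e12   # joules per kiloton of TNT equivalent

kinetic_energy = 0.5 * mass * speed**2   # ~1.1e15 J
yield_kt = kinetic_energy / kt_tnt

print(f"{yield_kt:.0f} kt")  # ~270 kt, consistent with the ~300 kt estimate
```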

You may have seen that the Russian Academy of Sciences issued a statement with a lower estimate for the size of the object—something in the few-meter range producing an explosion of a few kilotons.
Right. It’s the most uncertain part of the calculation, but I would be very surprised if it’s less than 100 kilotons. It was a very, very large event. And the fact that there was so much damage on the ground supports the conclusion that the energy was high. You need a lot of energy to shatter windows in the way that was seen.

Is there any reason to suspect that it was anything other than an asteroid?
An asteroid is certainly the most likely suspect. The size of it, the speed that it was going and so on, all point to an asteroid. The fact that it exploded in the atmosphere implies that it was probably a stony asteroid, maybe a chondritic type, for example, as opposed to something iron, because iron things are stronger and tend to make it to the ground, where they release their energy.

Where was most of the energy released as this object made its way through the atmosphere?
In this case the final destination, which seems to have been the largest deposit of energy, was somewhere around 15 to 20 kilometers altitude. The actual fireball probably started significantly higher than that, maybe 50 kilometers, but most of the energy was apparently deposited during that last explosion lower in the atmosphere.

Is it possible that if this meteor had hit over the ocean rather than over a populated area, we might not have known about it?
We certainly would have known about it. The CTBT, the Test Ban Treaty, constantly is monitoring for large explosions in the atmosphere, and this one was large enough that no matter where it occurred over the Earth it would have been detected by the CTBT array.

You mentioned that this event showed up in two nearby CTBT sensors. Is it possible that the explosion was picked up by other stations as well?
Not all the sensors are as straightforward to get the data from. We’re trying to get data from other sensors. It would surprise me if there wasn’t data on other sensors because this was a very powerful wave, and I would expect it to propagate a very long distance because infrasound can travel a very long way in the atmosphere. But we don’t have data from other stations yet.

How often should we expect to see an event like this?
In the 15-meter size range, we think it happens about every 50 years. It’s been more than 100 years since we’ve seen something of this size, but statistically it happens approximately every 50 years.

When you consider all the areas of the Earth that are uninhabited—the oceans, the ice caps, the deserts and so on—it’s very surprising that this happened over such a populated area. Very unlucky.

Babble-onia: Solving the Cocktail Party Problem

Walk into a crowded bar, with music blaring, and your first impression is likely to be a shudder at the sudden wall of sound — which you will interpret at first as a single loud noise. But very quickly, you adjust, and different sounds begin to emerge. We navigate by tuning our neurons to specific voices, thereby tuning out others — like that irritating, leering would-be Lothario at the other end of the bar, or all that ambient noise.

Over at Scientopia, Scicurious wrote about a new MEG study by neuroscientists on how the brain deals with the so-called “cocktail party problem” — distinguishing one conversational thread amid a cacophony of babble in a crowded room. It’s not just a question of attention, although it can be difficult to concentrate on even the most fascinating discussion if there’s too much background noise.

The brain doesn’t just detect sounds, it also processes temporal patterns of speech and visual cues. The latter was the basis for the latest study, in which the authors set out to measure whether (as Scicurious put it) “the visual input from the face that is speaking might help someone to ‘predict’ what they are about to hear, easing processing of the words.” As expected, they found that people followed a conversation just fine one on one, and had difficulty in a small cocktail party setting. But their performance improved dramatically in the latter setting if they had a face to go along with the speech patterns. Per Scicurious:

Why does this help? It could be that the visual input helps you maintain attention. The visual input could also help you predict what is to be said next and help with auditory processing that way.

This is a perennial favorite topic for science writers; I blogged about it back in 2011, when Scientific American featured an article by Graham Collins on how our brains separate various auditory streams in a crowded room, like a restaurant or a cocktail party, so why not revisit that classic post now? (Personally, my brain has never been especially good at this. I find myself having to really concentrate when the noise levels reach a certain critical threshold.) Scientists have been pretty successful at studying how the brain accomplishes this feat. They’ve been less successful at devising computer algorithms to do the same thing.

Nick and Nora Charles mastered the cocktail party problem their own way in "The Thin Man": copious martinis!

A few years ago, at an acoustics conference, I chatted with Shihab Shamma, a researcher at the University of Maryland, College Park. He believes this ability arises from auditory nerve cells in the brain that re-tune themselves to specific sounds as part of the adaptive process. It’s kind of an auditory feedback loop that enables us to sort out confusing incoming acoustical stimuli.

He’s surprised, however, by how quickly this process happens: auditory neurons in adult mammal brains make the adjustment in a few seconds. To Shamma, this suggests that the developed brain is even more “plastic” or adaptable than previously realized. We’re literally changing our minds.

Scientists are still a bit in the dark in terms of understanding the mechanisms that cause this rapid tuning, but Shamma says that if we can mimic those abilities, it could lead to the development of more effective hearing aids and cochlear implants. In the shorter term, it might help improve automatic speech recognition systems by teaching them to filter out moderate levels of background noise and other acoustical “clutter.”

And that brings us to the 2011 Scientific American article. Apparently a team of researchers at IBM’s T.J. Watson Research Center has managed to create an algorithm for the “cocktail party problem” that outperforms human beings. Why is it so hard, and therefore such a big deal? It comes down to the number of possible sound combinations, which quickly becomes unwieldy. Here’s how Collins phrases it:

“Whether one person is talking or many, the sound contains a spectrum of frequencies, and the intensity of each frequency changes on a millisecond timescale; spectrograms display data of this kind. Standard single-talker speech recognition analyzes the data at the level of phonemes, the individual units of sound that make up words… Each spoken phoneme produces a variable but recognizable pattern in the spectrogram. Statistical models … [specify] the expected probability that, for instance, an “oh” sound will be followed by an “n”. The recognition engine looks for the most likely sequences of phonemes and tries to build up whole words and plausible sentences.”

In other words, speech recognition works a bit like Auto-Correct — and we all know what can happen when Auto-Correct goes horribly, horribly wrong.

Collins continues:

“When two people talk at once, the number of possibilities explodes. The frequency spectrum at each moment could come from any two phonemes, enunciated in any of the ways each person might use them in a word. Each additional talker makes the problem exponentially worse.”
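Collins’s point about exponential growth is easy to see numerically. In the sketch below, P is a rough phoneme-inventory size (40 is an assumed round number, not a figure from the article), and the joint hypothesis space per spectrogram frame scales as P raised to the number of simultaneous talkers:

```python
# Toy illustration of the multi-talker explosion: if each frame could
# come from any of P phonemes per talker, the joint hypothesis space per
# frame is P**N for N simultaneous talkers. P = 40 is an assumption,
# roughly the size of an English phoneme inventory.
P = 40

for n_talkers in (1, 2, 3):
    print(n_talkers, P**n_talkers)  # 40, then 1,600, then 64,000
```

That is the sense in which “each additional talker makes the problem exponentially worse” — the search space is multiplied by P all over again, before even considering word- and sentence-level context.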

The good news is that such algorithms can simplify the search by focusing on the dominant speaker — c’mon, we all know there’s at least one Loud Talker in any given crowd. A number of shortcuts have been devised in recent years by exploiting this kind of thing. A “bottom-up” approach looks for segments in a spectrogram without a dominant speaker, and sets those segments aside, literally removing them from the equation so the algorithm can focus on finding phoneme sequences in the “clean regions” — i.e., where there is a dominant speaker. That approach has been adopted by scientists at the University of Sheffield in England, apparently.

Alternatively, you can use a “top-down” approach, devising an algorithm that analyzes trial sequences of the most likely phonemes for all speakers in a given spectrogram. Finnish researchers at Tampere University of Technology exploit this approach by alternating between the two speakers. As Collins explains, “Given the current best estimate of talker A’s speech, search for talker B’s speech that best explains the total sound.” Context is everything, baby. The IBM team achieved its “superhuman” automated speech separation by tweaking a “top-down” approach and devising an algorithm to seek out areas on the spectrogram where one talker was bellowing so loudly s/he masked the voices of the other(s).

But you really shouldn’t worry too much just yet about secret agents eavesdropping on your party guests: the new algorithms aren’t that good. Maybe someday. In the meantime, please to enjoy this classic party scene from Breakfast at Tiffany’s to illustrate just how tough the cocktail party problem is likely to be. As one of the YouTube commenters remarked, “It’s not a party until someone is laughing and crying at themselves in the mirror.”

[Adapted from an April 2011 post from the archived Cocktail Party Physics blog.]

Hundreds Reported Injured in Blast from Meteor Strike over Russia [Video]

A meteor fireball lit up the morning sky over Chelyabinsk in central Russia, producing a shock wave that shattered windows and injured an estimated 1,000 people.** Although much of the parent object likely burned up in the atmosphere, Russian authorities say that several meteorite fragments have already been recovered, according to the Interfax news agency.

A preliminary analysis posted to the Web site of the Russian Academy of Sciences estimates that the object that struck Earth’s atmosphere was a few meters in diameter, “the weight of the order of ten tons [and] the energy of a few kilotons,” according to a Google translation.* That would make the Chelyabinsk event a fairly common occurrence, although such strikes usually occur over less-populated regions, not cities of more than a million people. On average, a four-meter asteroid hits Earth every year, delivering five kilotons of energy, Southwest Research Institute senior scientist Clark Chapman found in a 2004 analysis.

The Chelyabinsk impact appears unrelated to the close passage of the 50-meter asteroid 2012 DA14, which is expected to zip past Earth at a distance of less than 30,000 kilometers around 2:30 P.M. Eastern time today—inside the orbit of some satellites. On Twitter, the European Space Agency stated that agency experts have confirmed that there is no link between the two events.

A dashboard camera captured some dramatic footage (below) of this morning’s event.

We will update this post as more information becomes available.

*UPDATE (11:33 A.M. EST): Other analyses point to a larger size for the impactor. Margaret Campbell-Brown of the University of Western Ontario told Nature that her calculations show an initial size of 15 meters for the object when it hit the atmosphere. “That would make it the biggest object recorded to hit the Earth since Tunguska,” a giant blast over Siberia in 1908, she said.

**UPDATE (4:10 P.M.): The New York Times, citing information from Russia’s Interior Ministry, reports that the number of injured is more than 1,000.

A Surefire Way to Sharpen Your Focus

How many times have you arrived someplace but had no memory of the trip there? Have you ever been sitting in an auditorium daydreaming, not registering what the people on stage are saying or playing? We often spin through our days lost in mental time travel, thinking about something from the past, or future, leaving us oblivious to what is happening right around us right now. In doing so, we miss much of life. We also make ourselves relatively miserable, and prone to poor performance and mishaps.

peaceful scene, village by the water Be in the moment. Courtesy of margory.june via Flickr.

The opposite mental state, mindfulness, is a calm, focused awareness of the present. Cultivating that state is associated with improvements in both mental and physical health, as you will learn from the current cover story of Scientific American Mind (see “Mindfulness Can Improve Your Attention and Health” by Amishi P. Jha). It can even ameliorate mental illness.

It turns out that mindfulness training works in large part by training our ability to pay attention. As we learn to focus on the here and now, we also learn to manipulate our mental focus more generally. The ability to direct our own minds at will means we control what we think about. It is no wonder that honing such a skill can make us happier. It can also boost the performance of soldiers, surgeons, athletes and many others who need to maintain a tight focus on what they are doing.

Some people are naturally more mindful than others, but it is possible to train yourself to enter this state more often. Simple exercises performed as little as 12 minutes daily can help you become more mindful. For a sample exercise, watch this video “Learn to Live in the Now.”

Putting Stock in the Future

A focus on the now, however, is not always appropriate when you have to make a choice between an immediate desire and a future outcome. In fact, a central human fallibility is our tendency to value what we can have right away much more than we do even bigger rewards down the road. In the human brain, a later benefit feels farther off than it really is, making it less appealing. This problem, which scientists term temporal discounting, leads to overeating, overspending, abusing drugs and other problems that seem to hail from lack of self-control. But we wouldn’t need self-control if our brains did not make this unfortunate miscalculation time and time again.

squares of chocolate As humans, we tend to overvalue immediate desires relative to future benefits. Courtesy of John Loo via Flickr.

Fortunately, there are tricks for fixing this glitch. One is to delay the more immediate reward. If you just wait five minutes before indulging in a chocolate bar or purchasing a pricey necklace or making any other stupid move, you’ll want that indulgence significantly less, about half as much, as you did just five minutes before. That minor postponement helps level the playing field, giving the longer-term health or financial benefit a fighting chance. Other tips for good decision-making include detailing the consequences of a downfall—in your diet, say, or your sobriety. Writing down the specifics of what happened in the past, or could happen down the road can boost the significance of those future scenarios. And if you’re weighing the future heavily, you’ll be more likely to make a wise choice now (see “Warped Sense of Time Heightens Temptations,” by David H. Freedman).
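The five-minute trick falls out of the standard hyperbolic-discounting model, in which a reward of amount A delayed by D feels worth V = A / (1 + kD). A minimal sketch; the discount rate k here is illustrative, not a figure from the article:

```python
# Minimal sketch of hyperbolic temporal discounting. The rate k is an
# assumed illustrative value, chosen so a 5-unit delay halves the reward's
# subjective value; real, empirically fitted rates vary widely by person.
def discounted_value(amount: float, delay: float, k: float = 0.2) -> float:
    """Subjective value of a reward received after `delay`: V = A / (1 + k*D)."""
    return amount / (1.0 + k * delay)

now = discounted_value(100, delay=0)    # 100.0: immediate reward at face value
later = discounted_value(100, delay=5)  # 50.0: a short wait halves its pull
print(now, later)
```

Because the curve falls steepest near zero delay, pushing the temptation back even slightly costs it far more subjective value than the same wait costs a reward that was already far off, which is exactly why the brief postponement levels the playing field.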

Live Action Mind Control

The March issue of Mind introduces another, more high-tech way of wresting control over your own mind and brain. With a technology called real-time fMRI, you can now visualize your own brain activity. You can then practice techniques that raise or lower it—thereby changing your conscious experience. To manage this feat, you have to lie inside a brain scanner while computers gather and analyze your brain activity, which is then put up on a display for you. A computer represents it as, say, a flame. You can then experiment to figure out what thoughts alter that activity. When you land on a thought that works, you change how you feel or even what you can do. People have used this technique to minimize chronic pain, which is notoriously hard to treat, by calming commotion in a particular brain region. Patients have also used it to combat the symptoms of Parkinson’s disease by focusing on raising activity in a brain area involved in motor control (see “How Real Time Brain Scanning Could Alleviate Pain,” by Heather Chapin and Sean Mackey).

flame from fire New technology enables people to raise or lower their own brain activity, which is sometimes represented as a flame. Courtesy of Velo Steve via Flickr.

In the future, people might boost or suppress the activity in specific brain areas to quell anxiety or speed learning. Taking the idea of controlling movements to an extreme, I wonder, too, if it might be used to enhance a person’s prowess at sports or surgery. At the moment, using this method of mind control requires a big, expensive brain scanner, so it won’t be practical for less dire applications until smaller and cheaper devices are available. But the technology seems to have enormous promise. I, for one, would like to peek at the mechanisms of my mind and try to grease the cogs.

Other feature articles in this issue highlight the mental downsides of city living (see “Urban Living Raises the Risk of Emotional Disorders,” by Andreas Meyer-Lindenberg) and newly discovered powers of the placebo. The magazine also covers psychological remediation for schizophrenia and an understanding of addiction as a learning problem. I invite you to read and enjoy these offerings. Meanwhile, don’t forget to live in the now—yet put some stock in the future!

Lead Exposure on the Rise Despite Decline in Poisoning Cases

Leaded gasoline and lead paint are gone, but other sources are keeping the danger high

By Mark Fischetti


Image: Courtesy of Ben Mills on Wikimedia Commons

BOSTON—Exposure to lead—so toxic to the brain and body—is a problem of the past, right? Wrong. Since the U.S. took lead out of gasoline in 1976 and banned lead paint in 1978, most health scientists, regulators and the public have considered the problem largely solved. But ongoing testing shows that even though the average concentration of lead in the American bloodstream has dropped by a factor of 10 since the late 1970s, the levels are still two orders of magnitude higher than natural human levels, which have been determined by studying skeletal remains of Native Americans dating to before the industrial revolution.

Equally problematic, recent health studies have shown that exposure levels previously thought to be “safe” were too high. Scientists from various disciplines are advising the Environmental Protection Agency and health departments to lower the concentration deemed acceptable in the bloodstream, which today averages 1.3 micrograms per deciliter but can be much higher for many individuals. The change is warranted because the latest set of long-term tests done over decades has revealed that many of the health complications from lead arise even at low exposures. Higher levels are not necessary to instigate damage to the body or brain, Joel Schwartz of the Harvard School of Public Health told a somewhat surprised crowd on Feb. 16 here at the annual American Association for the Advancement of Science (AAAS) meeting. Excessive lead exposure correlates with a host of ills, including impaired cognition, attention deficit disorder and lower academic test scores for children, psychiatric disorders, and increased blood pressure, hypertension and arrhythmia.

Lead is also increasingly implicated in dementia in the elderly. As we age, our bones demineralize and release calcium (which is why calcium supplements are often recommended, especially for women). “But the bones also release lead,” which accumulates in our skeletons over a lifetime, Schwartz said. “We don’t know if the brain can adapt to the higher levels” of lead in the bloodstream, he said, calling for new research to find out.

The ramifications of lead exposure are financial as well, costing the U.S. about $209 billion a year, said Jessica Reyes, an economist at Amherst College. The bill includes everything from direct medical costs to a heightened need for special education classes and incarcerations for violent crime, which also correlates with higher lead exposure.

The ongoing trouble with lead exposure is not to be confused with lead poisoning, which has dropped significantly in developed countries, including the U.S. The latter condition is caused by acute exposure at high concentrations, which can occur from eating lead paint chips. But all the other problems “are more like chronic diseases that build over time,” said A. Russell Flegal of the University of California, Santa Cruz. “We need to start thinking about the risks in that way.”

Lead is still prevalent in our environment for many reasons. Because lead does not degrade, heavy emissions from the past accumulate in soil. Winds, especially during drought—like that afflicting the Midwest for the past year or so—kick it up as dust, and runoff from heavy rains and flooding can re-suspend the particles in the atmosphere. Trees take up soil particles, too, but when forests burn in wildfires, as has been occurring more frequently worldwide with global warming in recent years, that lead is released back into the air. Fires also release lead from old houses and buildings coated with lead paint that was applied prior to the U.S. ban. Lead smelting and refining is still an enormous industry worldwide, sending more of the metal into the environment. Aviation gas used in planes still contains lead.

Researchers Home in on Biological Ways to Restore Hearing [Excerpt]

In the section of “Shouting Won’t Help” excerpted here, journalist Katherine Bouton surveys cutting-edge research into biological ways to reverse the “sensioneural” hearing loss that she and millions of others suffer, caused by impaired function of the inner ear

By Katherine Bouton


Shouting Won’t Help. Image: Katherine Bouton, published by Sarah Crichton Books, an imprint of Farrar, Straus and Giroux, LLC

Editor’s Note: Excerpted from Shouting Won’t Help: Why I—and 50 Million Other Americans—Can’t Hear You, by Katherine Bouton, published by Sarah Crichton Books, an imprint of Farrar, Straus and Giroux, LLC. Copyright © 2013 Katherine Bouton. All rights reserved.

“You’ll never be deaf,” Dr. Hoffman said to me years ago. At the time, I thought he meant I’d never lose all my hearing. But what I know now is that technology would take over when my ears no longer worked. Through a cochlear implant, I would continue to hear long after my ears ceased to function.

Research holds the promise that the kind of hearing loss I have may someday be reversible, returning the ear to close to its original pristine condition. Probably not soon and not for me, but most researchers think that within a decade they may have the tools that will eventually allow doctors to stop the progression of sensorineural hearing loss, including age-related hearing loss. Putting those tools into practice will take much longer. (Gene therapy, for people whose hearing loss has a genetic basis, will probably come sooner, possibly in the next decade.) The best guesses for hair cell regeneration—for the much larger group of people whose sensorineural loss is caused by noise or ototoxins or age—range anywhere from twenty to fifty years.

Until recently, scientists focused on the development of devices that would take the place of normal hearing: hearing aids and cochlear implants. The pharmaceutical industry, usually so quick to jump on the opportunity to medicalize a chronic age-related condition—dry eyes and wrinkles, trouble sleeping, lagging sexual function, bladder control, memory loss—has not paid much attention to age-related hearing loss, in terms either of prevention or cure. There are no FDA-approved drugs for the treatment of hearing loss. Demographics alone would suggest they are missing a big opportunity.

In October 2011, the Hearing Health Foundation (formerly the Deafness Research Foundation) held a symposium in New York to kick off its new campaign, called the Hearing Restoration Project, an ambitious program that had enlisted, at that point, fourteen researchers from ten major hearing-loss research centers in the United States. This consortium will share findings, with the goal of developing a biological cure for hearing loss in the next ten years. With a fund-raising target of $50 million, or $5 million a year, the Hearing Restoration Project will tackle the problem of hearing loss with the aim of curing it, not treating it.

The funding is relatively small right now, but there is hope that the foundation will be able to raise more in future years. Individual consortium members may currently receive somewhere between 5 and 20 percent of a laboratory’s annual budget from the Hearing Health Foundation. But the collaborative nature of the venture is unusual. (A similar consortium exists for the study of myelin diseases—a factor in multiple sclerosis as well as hereditary neurodegenerative diseases.) Under its previous name, the Deafness Research Foundation, the organization limited its funding to early-career support for researchers. It has now added the Hearing Regeneration Program.

The symposium, titled “The Promise of Cell Regeneration,” brought together leading researchers in the field of hearing loss. Dr. George A. Gates, an M.D. and the scientific director of the Hearing Restoration Project, chaired the program. The speakers included Sujana Chandrasekhar, an M.D. and director of New York Otology, who talked from a clinical perspective about the current state of hearing loss research. Ed Rubel, from the University of Washington, discussed the history of hair cell regeneration research and his current work on regenerating hair cells through pharmaceutical applications. Stefan Heller discussed his lab’s announcement in May of 2010 of the first successful attempt at generating mammalian hair cells (of mice) in a laboratory setting from stem cell transplants. Andy Groves, from Baylor, discussed the many still-existing hurdles to hair cell regeneration in humans. Unable to attend was Douglas Cotanche, currently working at Harvard on noise-induced hearing loss in military personnel.