Another Big Psychology Theory Fails to Replicate

The bakery at most grocery stores is a minefield. Cakes to the left, muffins to the right, pastries dead ahead, and cookies... cookies everywhere. If you escape without making a purchase, congratulations, you have tenacious self-control. Or you were just lucky. But though your wallet and waistline won't take a hit, according to a leading psychological theory, your willpower will.

Originally put forth in 1998 by Florida State psychologist Roy Baumeister, the notion of ego depletion states that self-control is a limited resource. Like a muscle, it can fatigue with use, and needs time to recharge. According to the theory, saying "no" to sweets in the grocery store will leave you temporarily vulnerable to subsequent temptations.

In the sixteen years since its inception, ego depletion has been tested and validated in a variety of situations. Psychologists emphasize its role in many arenas, such as dieting, athletics, and consumer behavior. Some even propose that willpower can be trained and strengthened via repeated use, again, just like a muscle.

But critics aren't so sure. They note that much of the research has been done on young, WEIRD (Western, Educated, and from Industrialized, Rich, and Democratic countries) college students, and thus may not carry over to the general population. They also suggest that the effect could benefit from publication bias, the tendency to only publish flashy or positive results. It is in this light that a team of psychologists recently attempted to replicate the ego depletion effect using the two most frequently used measures of self-control in scientific research. Their results were just published in PLoS ONE.

Four groups of participants took part in the study: two diverse groups from the general population with an average age in the mid-forties, and two more groups of young adults with an average age around 20. Each group was assigned to one task: either a grip challenge, in which subjects had to hold a grip machine at 70% of their maximum grip strength for as long as possible, or a Stroop Test, in which subjects viewed color words on a computer (like 'green,' 'yellow,' or 'blue') that appeared one at a time in a mismatched font color (for example, 'red' might be shown in blue) and had to press the key corresponding to the font color.
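As a rough illustration (a minimal sketch, not the study's actual software), the incongruent Stroop trials described above can be generated in a few lines of Python; the color list and trial count here are arbitrary choices for the example:

```python
import random

COLORS = ["red", "green", "yellow", "blue"]

def make_stroop_trial(rng):
    # Pick a color word, then a mismatched font color: the "incongruent"
    # condition described above (e.g., the word 'red' shown in blue).
    word = rng.choice(COLORS)
    font = rng.choice([c for c in COLORS if c != word])
    return word, font

rng = random.Random(0)  # seeded for reproducibility
trials = [make_stroop_trial(rng) for _ in range(10)]

# In every trial the word and its font color differ; the correct
# response is the font color, not the word itself.
assert all(word != font for word, font in trials)
```

The interference between reading the word and naming its color is what makes the task demanding, which is why it doubles as a measure of self-control.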

The subjects completed each of their respective tasks twice. In between the repetitions, some subjects engaged in an activity meant to diminish their self-control while others performed an easy control activity. The researchers then compared the scores of the subjects who performed the control activity with those who performed the experimental one.

The subjects who had performed the task meant to diminish self-control should have performed significantly worse on their assigned tasks compared to the control groups the second time around. But they did not.

"There was no evidence for significant depletion effects in any of these four studies," the researchers found.

The researchers don't believe their results are in error. The tasks they used were identical to those most frequently employed in past research, and they also used similar sample sizes. In fact, the researchers suggest their study design may be stronger, because, unlike many previous studies which found evidence for ego depletion, they used a control group and recruited subjects from a broader population.

Despite the failure to replicate, the researchers don't believe their study is enough to invalidate ego depletion altogether, just that the effect may be more limited than has previously been theorized.

Ego depletion is not the first big idea in psychology to face replication problems. A key theory, priming, took a notable hit last year.

Some social psychologists, like Harvard University's Jason Mitchell, have fired back at detractors, suggesting that they are impugning the integrity of their colleagues, overstating the problems of publication bias, and likely producing "false negative" results.

(Images: AP, Nevit Dilmen)

Source: Xu X, Demos KE, Leahey TM, Hart CN, Trautvetter J, et al. (2014) Failure to Replicate Depletion of Self-Control. PLoS ONE 9(10): e109950. doi:10.1371/journal.pone.0109950

The Healthiest Diet 'Proven' by Science

It's actually happened.   

After decades of research filled with millions of meals eaten by hundreds of thousands of subjects, the verdict is in. Science is now ready to proclaim the healthiest way to eat: one diet to rule them all.

So which is it? Atkins, perhaps? Or Paleo? Low-Carb? Low-Fat? South Beach? Raw? Fruitarian? Veganism?

The answer, my friends, is none of the above. But it could also be all of the above. That's because the healthiest diet isn't a specific diet at all. It's the absence of a diet.

This is not a sudden, world-changing, mind-altering finding. It is not well suited to a blaring news headline. It is not share fodder on social media. What it is, however, is a realization that surfaced gradually and methodically: Science will never conclusively prove that a single diet is the best diet.

Author Matt Fitzgerald summarized the finding, or rather, the lack thereof, in his new book Diet Cults:

"Science has not identified the healthiest way to eat. In fact, it has come as close as possible (because you can't prove a negative) to confirming that there is no such thing as the healthiest diet. To the contrary, science has established quite definitively that humans are able to thrive equally well on a variety of diets. Adaptability is the hallmark of man as eater. For us, many diets are good while none is perfect."

Further support for this notion comes from a simple glance back at the history of our species. Mankind has populated almost every corner of the earth, and in every diverse situation, humans were able to survive, even thrive, on whatever food their homes had to offer.

Even more convincing evidence has been found by observing those who have lived the longest. The University of California-Irvine's 90+ Study has tracked thousands of Americans who've made it to age 90 and beyond, yielding an unprecedented wealth of information about their lifestyle habits. For lead investigators Claudia Kawas and Maria Corrada, the most surprising finding was that most participants didn't seem to be too concerned with their health. Generally, the 90-year-olds said they didn't really keep to a restrictive diet. Nor did they abstain from alcohol, quite the opposite actually! The researchers found that up to two drinks a day -- no matter the type -- was associated with a 10-15% reduced risk of death. They also discovered other things that might disturb ardent dieters. Vitamin supplements did not affect lifespan in any way, and being a little overweight starting in middle age positively affected longevity.

But what if you're already overweight and want to shed some pounds? In that case, pick whatever diet works for you, because they all can work. What matters the most for weight loss is finding a solution that you can adhere to. That much was elucidated in a review recently published in the Journal of the American Medical Association. Scientists reviewed a multitude of randomized trials on popular diets and, lo and behold, found that all the diets helped subjects shed pounds, with minimal differences in weight loss between each diet.

Just like there is no one true religion, there is no one true diet. So why do so many dieters believe that there is?

"The short answer is that people believe what they want to believe," Fitzgerald wrote in Diet Cults. "The complete answer is that people want to believe that a certain way of eating is the best way because it gives them a sense of identity and a feeling of belonging. It's the work of that old, no-saying human impulse to eat according to the rules of a special group, which is often much stronger than the reasoning faculties."

"It feels good to believe in something."

(Image: AP)

How to Make Scientists Publish the Truth

John Ioannidis is (in)famous in the scientific community. Using straightforward logic and statistics, he convincingly demonstrated that most published research articles are wrong. This is not because scientists are liars and crooks, but because studies often do not have large enough sample sizes or are testing unlikely hypotheses. Ioannidis's revelation sent a shock wave through the biomedical community. Partially in response to his findings, biomedical scientists began to embrace reforms in scientific publishing, such as using more open access journals and publishing replications and negative data.

Now, in a new paper in PLoS Medicine, Dr. Ioannidis proposes additional reforms. Some of the more interesting ones include:

Registration of studies. Clinical trials already do this. Registration would allow researchers to monitor ongoing studies. Others have proposed that all registered studies should be accepted for publication upon their completion, regardless of the outcome of the experiment. This would eliminate "publication bias," the phenomenon in which only "sexy" results are published and negative (or uninteresting) results are ignored.

Adoption of better statistical methods. It is not a big secret that biologists are bad at math. (Well, except for John Ioannidis.) Papers with a lot of mathematical equations are avoided by biologists. The heavy mathematical lifting required in some biomedical fields, such as epidemiology and genomics, is outsourced to biostatisticians. Dr. Ioannidis suggests more stringent thresholds for statistical significance. That is certainly necessary, but there also should be a requirement for all biomedical PhD students to take courses in biostatistics.

Improvements in peer review. Though Dr. Ioannidis does not offer specific details, one group is strongly advocating post-publication peer review. F1000Research publishes papers along with their expert reviews, which are not allowed to be anonymous.

Consideration of stakeholder interests and modification of incentives. Members of the scientific community place different values on research. For example, professors want research that is publishable, while industry wants research that is profitable. Dr. Ioannidis identifies four interests that need to be considered: publishability, fundability, translatability, and profitability. Furthermore, he proposes a radical change in incentives, such as eliminating academic ranks (e.g., tenure).

Scientific publications are on the cusp of a dramatic shift. Thanks to the efforts of Dr. Ioannidis and others like him, the monolithic biomedical establishment is beginning to embrace change. If only the rest of academia were so reflective and self-critical.

Source: Ioannidis JPA (2014). "How to Make More Published Research True." PLoS Med 11(10): e1001747. doi:10.1371/journal.pmed.1001747

Hilariously Stupid Science Questions: Not Another One... Yes Another One!

Back in April, we made a subtle threat: to continue "sharing hilariously stupid science questions until they stop being funny." Granted, it could just be because we have a poor sense of humor, but we're still laughing. And so, today, we make good on that threat.

This will be the fifth time we've shared questions like these. For some of the past lists, click here, here, here, or here. But not here.

It takes some serious forethought to evade logic so skillfully. And so, we tip our hats once again to the brilliant jokesters over at Reddit who thought up these jocular, reason-defying queries.

If Catholics only have mass on Sundays, do they cease to exist the rest of the week?

How can I access my Daylight Savings account?

Why are red-handed people more genetically predisposed to crime?

If 200,000 people die every year from drowning and 200,000 people have already drowned this year, does that mean I can breathe under water?

I just bought a Prius. At what point do I develop a sense of superiority, and will I still be able to eat gluten?

Can we achieve higher education by building taller schools?

If the body replaces all of its cells every 7 years, shouldn't we release all inmates after 7 years as they're not the same person anymore?

How come some mountains look like presidents?

Since humans share 50% of their DNA with bananas, can scientists merge two bananas to create a human?

Why do meteorites always land in craters?

If electricity comes from electrons, does morality come from morons?

Is the Islamic State solid, liquid or gas?

via Reddit

(Image: Secret Ingredient via Shutterstock)

Meet the 'Terror Birds'

Often, when explaining how birds are related to dinosaurs, people compare a picture of a Tyrannosaurus rex with that of a chicken. It's an apt juxtaposition, but it would be more elucidating with another animal wedged in between. I nominate Kelenken.

Standing almost 10 feet tall with a beak 18 inches long, Kelenken was a member of the extinct phorusrhacid family, otherwise known as the "terror birds."

By most accounts, the birds earned their ominous nickname. The vast majority of scientists believe that the "terror birds" were carnivores, and brutal ones at that. Kelenken and its stockier brothers Brontornis and Titanis were something like avian jackhammers, likely using their massive, bone-reinforced beaks to peck their prey to death. Since the birds mustered a top speed of around 27 miles per hour, the pummeling could have been preceded by a chase. It seems more likely, however, that the feathered predators ambushed their prey from forest brush or long savannah grass. Though it's enticing to picture Kelenken taking down and devouring animals the size of deer or antelope, its regular fare probably fell in the size range of modern day dogs or cats.

Fortunately for the mental health of our children, these big birds predated appearing on Sesame Street by a couple million years. Living between 62 and 2 million years ago, the "terror birds" primarily populated South America back when it was still an island, though a few fossils have been unearthed in Europe, prompting paleontologists to scratch their heads and draw arrows on maps.

Though dominant on their South American island paradise, the "terror birds" would eventually meet their downfall when faced with competition from hardy North American animal rivals. When the Panama land bridge formed between South and North America some 2.5 million years ago, it was the beginning of the end for the "terror birds." The best explanation for their demise is that competition from prehistoric dogs and saber-toothed cats drove them to eventual extinction.

(Images: Shutterstock, FunkMonk, Degrange et al.)

Why Laser Fusion Power Is So Difficult

Is the National Ignition Facility's goal of generating practical electrical power realistic? How long would it take? Having learned the history lesson on inertial confinement fusion and its many acronyms, we know how we got here. Now we can cover the fun stuff: will we ever get it to work as a power plant?

Two distinct types of challenges face ICF "laser fusion" as a commercial power source. Engineering problems involve transforming established scientific principles into working machines. First, though, science has to know what will work: the question of what is possible and, if it is, how to attempt it.

When we imagine an Apollo project or a Manhattan project for nuclear fusion, we frame the problem somewhat incorrectly. The basic science of using liquid propellant rocket engines and multi-stage rocket platforms had already been figured out well before Apollo began. American Robert Goddard, in parallel with German and Russian rocket scientists, had worked it out decades before. At the end of WWII, the US claimed V2 rockets and their designers from Germany; we already had a working small-scale liquid-fueled rocket capable of reaching the edge of space. Improving and supersizing rockets enough to blast people into space was, by this point, primarily an (admittedly enormous) engineering problem. A huge infusion of money and brilliant mindpower accomplished it in a matter of a decade.

So too with the Manhattan project. The underlying science was already trusted: a sufficient mass of fissile material could attain a runaway chain reaction. Einstein had already supplied the matter-to-energy conversion factor. Brilliant physicist Enrico Fermi built the world's first nuclear reactor under the bleachers at the University of Chicago football field, demonstrating the energy output from a neutron chain reaction in fissile material. Manhattan was all about finding an engineering method to get enough uranium and plutonium to build a handful of bombs, and to design a bomb mechanism capable of assembling the necessary critical mass instantly and perfectly. This was a tremendously hard problem on its own, even with most of the science previously resolved.

NIF is in a more difficult position. We don't know the science well enough to reach the engineering phase.

Scientific unknowns abound in laser fusion. First, our models of the appropriate conditions needed for useful extraction of fusion energy have not been tested by experiment. Currently, we think that we need about 10 megajoules (MJ) of laser energy (i.e., about 138,000 professional tennis serves or a pickup truck hitting a wall at 223 mph) to get net power out of NIF. The facility's record-breaking laser produces only about 1.8 MJ.
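Those everyday equivalences are easy to sanity-check with the standard kinetic energy formula. In this sketch, the 2,000 kg truck mass and 57 g tennis ball are illustrative assumptions of ours, not figures from the article:

```python
def kinetic_energy_joules(mass_kg, speed_m_s):
    # Classical kinetic energy: KE = (1/2) * m * v^2
    return 0.5 * mass_kg * speed_m_s ** 2

MPH_TO_M_S = 0.44704

# An assumed ~2,000 kg pickup truck at 223 mph carries roughly 10 MJ.
truck_ke = kinetic_energy_joules(2000, 223 * MPH_TO_M_S)
print(truck_ke / 1e6)  # ~9.9 MJ

# 10 MJ spread over 138,000 serves implies ~72 J per serve, which is
# plausible for a 57 g ball served at roughly 180 km/h.
per_serve = 10e6 / 138_000
print(per_serve)  # ~72 J
```

Both comparisons check out to within rounding, so the 10 MJ target really is about a truck's worth of crash energy delivered in a laser pulse.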

Furthermore, 10 MJ from the pellet is just the energy break-even point. To see practically useful gain instead of eking out almost no power, we think we need a 10-fold increase in power: a 100 MJ laser. Then figure that most heat is lost when being converted to power. So perhaps we need another factor of 10: a 1000 MJ (1 gigajoule) laser. Most historical predictions of the energy required for energy gain with ICF have been vast underestimates too.

The trouble is that designing entirely new lasers is more of a science question. Entirely new laser gain materials such as specialized doped glass that costs thousands of dollars per inch need to be invented. New concepts for generating bursts of photons and controlling their conversion from one frequency to another must be found. The field of optics needs time to develop and understand ideas of how to make more energetic laser pulses. Perhaps better fuel design could lessen the need for laser advances.

The moment of the pellet implosion is another deep science question. At laser impact, a chaotic ball of x-ray photons, nuclear ions, gamma rays and energy is created, which is extremely hard to analyze. An entire field, hydrodynamics, studies these messy situations. We've been working on hydrodynamics for centuries, but the models we use don't do a good job of predicting what happens at pressures and temperatures as extreme as these. The poor projections of these models are part of why we are so far behind our goals with ICF.

Say we invent a laser 1000 times more powerful, design new pellets, and new methods to perfectly crush them. We're still not done. Engineering challenges will be an enormous obstacle. Essentially, we'll need a Manhattan Project for fusion.

Currently, NIF fires at a maximum rate of roughly once per hour. For commercial energy production, blasting a pellet roughly ten times per second is required. This means we'll need to figure out a way to run the entire experiment 36,000 times more often and also 36,000 times more quickly. Engineering and science are at play here. Powering up the laser to fire this quickly is a problem for both scientists and engineers: it needs to charge up with power 600 times faster than the current model.
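The 36,000 figure above is simply the ratio of the required firing rate to the current one:

```python
# Current: roughly one shot per hour; commercial: ~10 shots per second.
SECONDS_PER_HOUR = 3600

current_rate_hz = 1 / SECONDS_PER_HOUR
target_rate_hz = 10

speedup = target_rate_hz / current_rate_hz
print(speedup)  # ~36,000
```

Every subsystem, from laser charging to pellet handling, has to keep pace with that factor, which is why the repetition rate is such a central obstacle.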

More traditional engineering challenges are also manifold. A method is needed to collect the heat from the target chamber with reasonable efficiency. The fusion energy is currently not collected at all. Then we need a system capable of feeding in a pellet, triggering the laser, entering the chamber, removing the remnants of the blasted target, replacing it with a fresh one, and resealing the chamber. This needs to happen in 100 ms or less, while also not leaking significant heat out of the chamber.

The fuel pellet too will take some engineering. Roughly 100,000 pellets will have to be blasted every day for every plant. This means we'll need to produce fuel pellets by the millions and at low enough cost not to ruin the economics of the machine. Large and stable sources of deuterium and tritium will need to be found.

These are just the foreseen challenges. Entirely unforeseen complications will almost certainly arise. While it is foolish to say that we will never see consistent electrical production from this method, it will be an enormous struggle. The fight is utterly impossible to win within a decade. A more realistic expectation is probably 100 years for a commercially viable powerplant.

That's a shocking number. However, it says more about the difficulty of the project than the quality of the scientists involved and their work. For fusion power to be successful, we must plan for the long, long haul.

The author would like to gratefully acknowledge discussion with former NIF director Mike Campbell for insight into NIF and ICF energy projects.

(AP photo)

Ebola, Marburg, and a Real Life Cave of Death

The expansive mouth of Kenya's Kitum Cave can appear out of nowhere. Mount Elgon National Park's jagged landscape, coupled with a dense cover of lush, green plant life, conceals the cave surprisingly well from human wanderers until they're right on top of it. Animals, however, have no problem finding Kitum. Fruit bats, insects, and especially elephants are frequent loiterers. The large pachyderms shelter within the 700-foot system, and also seek out a surprising snack. Using their hardened ivory tusks, the elephants dislodge stones from the cave wall and grind them to bits with their teeth, sucking up the stores of salt within.

Kitum Cave is, in fact, a petrified rain forest. Seven million years ago, the volcanic Mount Elgon erupted and buried the surrounding rain forest in ash. Kitum is deep within what was once a molten sea. Mineralized logs jut out from the cave walls. So do all sorts of bones, from animals like crocodiles, hippos, and elephants. It's eerily beautiful.

In 1980, a 56-year-old Frenchman living in Kenya entered Kitum Cave and may have been entranced by that beauty. After a few hours of exploration, he left spellbound and in awe. But something evil left with him.

Seven days later, the headache started. Fever and nearly nonstop vomiting arrived three days after that. Then the Frenchman's face became droopy and lifeless, and the skin turned yellow. His whole personality changed. Friends helped him board a plane so he could go to a hospital in Nairobi. Onboard, he continued vomiting, now heaving up a black liquid. He filled up a couple of barf bags. His nose started bleeding... and it wouldn't stop.

When the plane landed, he stumbled off in a stupor, finding the nearest taxi and mumbling the words "Nairobi... Hospital" to the driver.

The taxi driver was kind enough to help the Frenchman into the hospital when they arrived, and made it very plain to the attending nurse that this man was in dire need of help. Assurances were made that a doctor would see him promptly.

It was already too late.

While sitting in the waiting room, the Frenchman suddenly leaned over, vomited up an immense quantity of blood, then fell unconscious. Blood began to seep from every orifice and creep along the floor. The evil entity that originated in Kitum Cave had destroyed its host. Now, it was leaving its mangled, corporeal home in search of a fresh one.

The entity that infected the Frenchman was Marburg (seen right). A filovirus like Ebola, Marburg's symptoms are identical to those of its more notorious sibling. Genetic differences distinguish the two viruses. To date, there have been 467 documented cases of Marburg, resulting in 377 deaths. That's a kill rate of 80.7%. (For reference, the mortality rate of the current Ebola epidemic is around 50%.)
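The quoted kill rate is simply deaths divided by documented cases:

```python
def case_fatality_rate(deaths, cases):
    # Case fatality rate as a percentage of documented cases.
    return 100 * deaths / cases

# Marburg: 377 deaths out of 467 documented cases.
marburg = case_fatality_rate(377, 467)
print(round(marburg, 1))  # 80.7
```

Note that case fatality rates computed this way depend heavily on how many mild cases go undocumented, so they tend to be upper bounds.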

The death of the Frenchman, whom science journalist Richard Preston dubbed "Charles Monet" in his 1994 book The Hot Zone, was a landmark case for the scientific study of Marburg. Not only was it the first time that Marburg had surfaced since 1975, when it infected three people in South Africa, but it also allowed investigators to try and nail down the source of the infection. Since Monet was a bit of a loner, the investigation proved difficult. Seven years later, however, Marburg reared its gruesome head again, infecting and killing a young Danish boy in Africa. When it was discovered that the boy had visited Kitum Cave with his family 11 days before his death, investigators decided they had to pay the site a visit.

In Spring 1988, a joint U.S. and Kenyan operation led by the United States Army Medical Research Institute of Infectious Disease's Gene Johnson began. Decked out in full biohazard gear, the team members scoured the cave, capturing bats, birds, and insects, sampling bat guano and elephant feces, and scraping slime off walls. At first, Johnson was sure he'd come to the home of one of nature's most ancient and proficient killers. But when all was said and done, he left empty-handed. If Marburg lurked in Kitum Cave, it had vanished without a trace.

"It must have been a bitter disappointment for Gene Johnson," Preston wrote in The Hot Zone, "so disheartening that he was never able to bring himself to publish an account of the expedition and its findings."

Two decades later, a cave in neighboring Uganda would divulge some of Marburg's secrets. In the wake of an outbreak among miners working at Kitaka Cave, scientists detected the virus in local Egyptian fruit bats, the same species of bat that occasionally shacks up in Kitum. The researchers estimated that 5% of the 100,000-animal colony at Kitaka was infected. It was the first time that hard data implicated bats as a major natural reservoir of the virus.

Scientists still aren't sure precisely how the Ebola strain currently terrorizing Western Africa made the leap into humans. Ebola and its sibling Marburg are as mysterious as they are insidious.

(Images: Richard Preston, Wilson Disease)

Getting Cancer at San Francisco Airport

Last week, on my way back to Seattle, I came across a peculiar sign posted next to Gate 62 at San Francisco Airport. (See photo above.)

I refer not to the maximum occupancy sign, which is itself a bit strange considering there are no well-defined boundaries for Gate 62, but the one next to it: This area contains chemicals known to the State of California to cause cancer and birth defects or other reproductive harm.

So, I looked around. There was a sandwich shop, a few trash cans, and several tired passengers in the waiting area. No cataclysmic carcinogens there. The carpet looked clean. Was the sign referring to carpet cleaner? Or was it referring to the jet bridge, where you might get a brief whiff of jet fuel? The absurdly ominous and vague sign left the threat entirely to your imagination.

Who put the sign there? The people of the State of California. Back in 1986, they passed Proposition 65, an attempt by environmental do-gooders to create a carcinogen-free utopia. In accordance with the law, the governor must publish a list of "known" carcinogens or chemicals capable of causing birth defects. The 23-page-long list (PDF) includes such terrifying molecules as aspirin, alcohol, ganciclovir (an antiviral), metronidazole (an antibiotic), nickel, rifampin (another antibiotic), testosterone, tetracycline (a very common antibiotic), and wood dust. The fact that testosterone made the list is particularly problematic, since every human being alive produces testosterone. According to California, we're all doomed.

However, properly informed citizens know that the dose makes the poison. Carcinogens are everywhere. Many of them are natural compounds. But the vast majority of them are not at concentrations high enough to worry about.

The foolishness in California is the inevitable consequence of a chemophobic society that wields the evil twins of regulation and litigation as weapons in a fruitless effort to achieve the impossible: a life completely free of any risk whatsoever. And while it's easy to point and laugh at California, the truth is the vast majority of Americans favor another equally absurd policy: The labeling of GMOs.

But, just like Proposition 65, the outcome of any GMO labeling law is entirely predictable: Because most (up to 75% of) food at the grocery store contains at least one genetically modified ingredient, the GMO warning label will appear everywhere. And warning labels that appear everywhere are meaningless and absurd, just like the airport cancer sign.

Yet, there are two other pernicious effects to a proliferation of warning labels: First, people start ignoring them. That is not a good thing. Some products, such as cigarettes and drain cleaners, really are toxic and dangerous. Will people ignore warning labels on all products? Second, warning labels encourage manufacturers to seek alternatives to alleged "carcinogens." Unfortunately, the alternatives are often less studied and potentially more carcinogenic.

The irony, of course, is that in their struggle to make people safer, the world's chemophobes may be achieving the exact opposite.

(Photo: Alex Berezow)

New Fusion Reactor Cheaper than Coal? BS.

Inventor: "I've just created my most perfect work: a new type of paper airplane."

Funding agency: "Wow great but what does it do?"

Inventor: "Oh, right now it's useless, but soon I'll just scale up the concept and we'll have a cheaper space shuttle!"

Shame on the University of Washington for hyping its research with this exact logic. Touting the "great potential" of a new cheaper-than-coal fusion plant, they see reality and choose to look the other way. Or maybe they're just incredibly naive.

To quote the press release: "Perhaps the biggest roadblock to adopting fusion energy is that the economics haven't been penciled out". BS! Let's talk about the real obstacle to fusion power. The report itself actually leads us to the culprit.

Among many statements ranging from meaningless to wrong, this is the one that dodges the heart of the matter: "They [researchers] have designed a concept for a fusion reactor that, when scaled up to the size of a large electrical power plant, would rival costs for a new coal-fired plant with similar electrical output." (Emphasis added.)

What is keeping fusion energy from reaching market? The fact we don't understand how to do it well yet. It's impossible with current science. Why? Precisely because we don't know how to scale it up. We can make little demonstration fusion reactors like the one in this report, but expanding them to become large enough to produce useful power eludes our grasp. The entire problem of current fusion devices is scaling them up.

The basic idea in this proposal is a magnetically confined plasma device that uses a geometrical plasma configuration called a spheromak. It's a magnetic field bottle built to trap plasma inside. It's a sibling of the tokamak, the best known and best working of the current fusion devices. The idea is quite old, dating to the 1950s. Several of these machines were built in the 1970s and 1980s, notably by the lead investigator of this work.

The press report claims that the spheromak device is simpler than the tokamak. This is only true to a point. There are fewer external magnets in such a device. Enormous electromagnets require very high electrical current, so the spheromak needs less power. However, part of the magnetic field confinement of the plasma is performed by the magnetic field produced within the plasma itself. (Travelling charged ions produce magnetic fields calculable with the laws of electrodynamics.) While this idea sounds simpler, it's actually more difficult in many ways: you have fewer magnets for external control, and the internal plasma configuration is actually much more complicated.

This very difficulty is why the world's biggest and best fusion projects are tokamaks and not spheromaks. The extra control magnets and simpler plasma configuration inside the machines have allowed them to be scaled up to larger sizes much more easily.

So, not only is scaling up the entire problem with magnetic fusion, this device is probably even more difficult to scale up than current tokamaks, which have not yet been economically scaled up and may not be for several decades.

RCS enthusiastically supports fusion research and increased funding for fusion projects. However, we do not tolerate misleading information being reported.

(AP photo)

Tropical Cave Art Alters Origins of Creativity

The tropical karst landscape of Southern Sulawesi in Indonesia is dotted with sinkholes and caves, forming, in many places, a vast, underground world. You can thank soluble rocks like limestone, dolomite, and gypsum for that. The caves afforded our ancient ancestors cover from the torrential rains that define the island chain's climate. The walls inside also granted them easels to express their creativity.

Archaeologists have known about the wondrous rock art in the Maros–Pangkep caves of Southern Sulawesi for over half a century. What they didn't know was how old it was. A team primarily based out of the University of Wollongong in Australia has just found out, and the answer has altered the timeline and geography of human creativity.

Dr. Anthony Dosseto and his colleagues dated the art to roughly 40,000 years ago. Among the dozen drawings analyzed are a hand stencil that -- at a minimum age of 39,900 years old -- is now the oldest known in the world, and a drawing of a half-deer, half-pig-looking animal called a babirusa, which may be the earliest figurative depiction in the world.

Dating cave art is easier said than done. Often, due to weathering, the pigments in the paintings themselves don't contain enough carbon for typical dating methods. Contamination can also lead to inaccuracies. So instead, scientists may rely on clues in the immediate vicinity. For example, they might date nearby artifacts or remains and apply those ages to the artwork. Occasionally, scientists will bring out their magnifying glasses and attempt to sleuth out a painting's age by examining its depictions. A drawing of a mammoth, for example, would lend a rough estimation because we already know when mammoths existed.

To gauge the paintings' ages in this case, the researchers dated mineral deposits called speleothems that had grown on top of the drawings. When formed, the crystal-like deposits contain small amounts of uranium, which slowly decays to thorium. Since the scientists knew the rate of decay, they could extrapolate backward and determine the rough age of the paintings.
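The arithmetic behind that extrapolation can be sketched in a few lines of Python. This is a deliberately simplified model of uranium-series dating (it assumes the uranium isotopes are in secular equilibrium and that no thorium was present when the speleothem formed), and the thorium-230 half-life is the standard literature value, not a figure from the article:

```python
import math

# Half-life of thorium-230, the decay product measured in
# uranium-series dating (standard literature value, in years).
TH230_HALF_LIFE = 75_400
DECAY_CONSTANT = math.log(2) / TH230_HALF_LIFE

def age_from_activity_ratio(th_u_ratio):
    """Age (years) implied by a measured 230Th/234U activity ratio.

    Simplified model: with no initial thorium, the ratio grows as
    1 - exp(-lambda * t), so t = -ln(1 - ratio) / lambda.
    """
    return -math.log(1.0 - th_u_ratio) / DECAY_CONSTANT

# A measured ratio of roughly 0.31 corresponds to an age of about
# 40,000 years -- the ballpark of the Sulawesi hand stencil.
print(round(age_from_activity_ratio(0.307)))
```

Under this toy model, a crystal whose thorium has grown to about 31% of the uranium activity has been accumulating it for roughly 40 millennia; real analyses correct for initial thorium and uranium-isotope disequilibrium.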

For a long time, human creativity was thought to have been born in Europe in a sort of prehistoric Renaissance. Only from there did it truly flourish. A great many cave paintings dating to more than 30,000 years ago have been discovered in France and Spain, including the amazingly preserved and transcendent artwork in Chauvet Cave and the oldest known artwork in the world, a simple red "disk," found in El Castillo Cave. (The latter may actually have been painted by Neanderthals!) The current finding may require anthropologists to rethink that Eurocentric narrative.

"I think this suggests that modern humans had this creativity, this artistic expression, with them when they spread out of Africa," Chris Stringer, a paleontologist at the Natural History Museum in the United Kingdom, told Nature.

“Europeans can’t exclusively claim to be the first to develop an abstract mind anymore. They need to share this, at least, with the early inhabitants of Indonesia,” Dr. Dosseto said.

The discovery was published October 8th to the journal Nature.

(Images: Kinez Riza)

Source: Dosseto et al. Pleistocene cave art from Sulawesi, Indonesia. Nature 514, 223–227 (9 October 2014). doi:10.1038/nature13422

Will Laser Fusion Power Work? Part I

Crushing tiny capsules of matter into oblivion...

Turning matter into energy via nuclear fusion with massive lasers is a dramatic and beautiful way to create energy. Problem is, it's not working very well. In fact, it's kind of a mess. What happened, and why is it failing? Will laser inertial confinement fusion ever become a viable energy source?

The Past and Present is NIF

Recently, I called the world's biggest and best laser fusion experiment a failure. There is no doubt that the National Ignition Facility (NIF) failed to meet its stated goal of reaching fusion "ignition." Misleading press releases attempted to breathe life into this stone-cold failure: their best shot produced 100 times less energy than was fired in. Worse, the real efficiency as measured from the electrical input was just one unit of energy out for every 28,000 in. It's not pretty, but there is more depth to the story. Examining the history and current status of NIF, the premier laser fusion project in the world, tells us where that depth lies.

Simulating the heart of the atom bomb

"Laser fusion" is more properly called inertial confinement fusion (ICF). The 1970s and '80s saw a series of big laser programs run at US national labs. These facilities were essentially miniature, primitive NIFs. They tested the preliminary feasibility of ICF: using a massive laser pulse to crush a pellet of hydrogen atoms with extra neutrons until the atoms fuse. Most of the fused mass forms a helium atom, but some is directly converted from matter to energy. The NOVA laser at Lawrence Livermore National Laboratory and other projects hinted at the requirements for performing ICF on an energy-efficient scale. It was clear, despite early over-optimism, that vastly more powerful and advanced efforts would be needed to come anywhere close to a viable power source.

Still-secret tests, carried out under the Nevada desert in the 1980s, crushed deuterium (H + neutron) pellets with nuclear explosions to explore how much energy was needed to produce fusion. Whatever the results were, they were so encouraging that laser fusion began to look much more promising. Physicists decided that a laser 25-250 times more energetic than NOVA could achieve success in a lab, without the A-bombs. However, the billion-dollar price tag and the as-yet undeveloped technology required made such a project a very hard sell.

Things changed completely in 1992, when the U.S. stopped all live tests of nuclear weapons. Suddenly, the country faced the prospect of designing, building and maintaining weapons that had never been tested, and running weapons programs with personnel who had never conducted any experimental nuclear test work. That's clearly a scary proposition, even with the goal of reducing nuclear proliferation. The entire concept of deterrence rests upon the expected ability of the weapons to work; weapons that don't deliver upon their threat upset the balance of power.

It was clear that facilities to test and simulate conditions found in nuclear fission and fusion reactions were necessary if we were going to design and maintain weapons and train future weapons program technicians. A massive laser system like the one proposed for NIF was seen, correctly, as the only facility capable of producing the heat and pressure conditions to simulate a nuclear or thermonuclear blast. Abruptly, NIF became much more feasible.

Something for Everyone

The real trick with getting NIF started was that it was pitched from several angles simultaneously. To the Department of Defense and more conservative politicians, the project was sold as a testing bed for weapons technology and designer training. To the DOE, liberals, the press and the public, the fusion power aspect was highlighted.

Winning funding to build the facility was a brilliant piece of negotiation and bipartisanship: the Department of Defense and the weapons program supported a facility that would pursue peaceful civilian energy generation, while the anti-weapons lobby and the press supported a facility that would, at its core, be focused on nuclear weapons. NIF weathered changing political and public sentiments and construction setbacks because it won such broad political, scientific, defense and press support.

Unfortunately, several aspects of the fusion program were botched from the beginning. The project leadership lacked experimental experience. The designers placed too much confidence in numerical models extrapolating from previous experiments into unknown parameter space. These turned out to be wildly optimistic in projecting the capabilities of NIF. Management failed to listen to criticism.

The marketing of the facility as a fusion testbed, combined with the mistakes made in pursuing that goal, laid the groundwork for embarrassment. The majority of public, press and scientific support (and even the name: Ignition Facility) was based on the possibility of a fusion breakthrough. To the world at large, it is a fusion experiment.

So, when the fusion results came back late and wildly underwhelming it suddenly looked as though the entire project was a bust. But this isn't entirely true.

Is NIF a success despite never approaching fusion ignition?

How is NIF doing as a platform for weapons testing and research? That's classified! Due to the sensitive nature of nuclear weapons data and testing, it's very unclear. It is easy to believe that data gathered at extremely high temperatures and pressures not available anywhere else on the planet is invaluable to nuclear weapon design. Designers must extrapolate out from available data to the unknown conditions they anticipate; having data that is closer to real conditions can only help tremendously.

I have heard from sources familiar with the field that there have been some major weapons results from NIF. Unfortunately, we members of the public aren't privy to them, so we have to take the vague word of those involved.

Another important consequence of NIF: we now know far more about how to build a successful laser fusion facility in the future. In the same way that tests shed light on nuclear explosions, they also illuminate the correct conditions for a laser fusion facility. For instance, it's now clear that a still larger laser is needed to make any realistic simulacrum of a power plant. We are also learning how to design the fuel, the crushing system and the laser pulses experimentally instead of relying upon speculative simulation.

This brings us back to the big question: can we make laser fusion work? NIF is revealing clues about what it will take to reach practical inertial confinement fusion. In part II, I'll explain this in detail.


The author gratefully acknowledges discussion with the former director of the NIF facility, Bruce Campbell, for insight into the NIF facility and ICF research.

The Graying of Obama's Hair: A Scientific Analysis

On April 28th, 2012, President Barack Obama donned a tuxedo for the annual White House Correspondents' Dinner, and standing in front of a ballroom brimming with journalists, celebrities, and politicians, he made a bold prediction.

"Four years from now, I will look like this," he said, as a photo of the suave, white-haired Morgan Freeman appeared on the screen behind him.

President Obama wasn't, of course, insinuating that he would somehow morph into the Academy Award-winning actor, but that his hair, which began dark and vivacious at the start of his tenure in the Oval Office, would be gray and listless by the time he left.

A recent study, however, finds otherwise.

Researchers perused hundreds of photos of President Obama taken indoors with comparable lighting, then used Photoshop to focus in on his hair. What they were left with was a collection of 68 photos of Obama's hair -- one for every month of his presidency. They then measured the median gray value of each.

Though the two researchers found a clear graying trend over the course of Obama's presidency -- on average, his hair color grew roughly 0.452% closer to matching Morgan Freeman's each month -- they noted that his hair almost certainly will not achieve the whiteness of Morgan Freeman's by May 2016, four years after he made his prediction.

"If we extrapolate this trendline to the 89th month of his presidency, we estimate that his hair will only be about 61.7% similar to Morgan Freeman’s."
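The two figures the researchers report (a 0.452% monthly graying rate and 61.7% similarity at month 89) are enough to pin down the linear trendline they extrapolated. A quick sketch of that arithmetic; note the baseline similarity is derived here from those two numbers, not stated in the study:

```python
MONTHLY_RATE = 0.452        # percentage points closer to Freeman per month
MONTH_89_SIMILARITY = 61.7  # reported extrapolation at month 89

# Work backward to the similarity implied at month zero of the presidency.
baseline = MONTH_89_SIMILARITY - MONTHLY_RATE * 89
print(f"implied starting similarity: {baseline:.1f}%")

def similarity(month):
    """Linear trendline: baseline plus the monthly graying rate."""
    return baseline + MONTHLY_RATE * month

# Extrapolating forward from that baseline recovers the reported figure.
print(f"month 89: {similarity(89):.1f}%")
```

In other words, the trendline implies Obama's hair started out only about 21.5% "Freeman-similar," and even seven-plus years of steady graying leaves it well short of a match.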

However, the researchers added, there is a coin flip's chance that on at least one day, President Obama's hair color may seem to match Morgan Freeman's. Hair color can vary depending on the time of the year. Sunlight destroys melanin, the pigment in hair follicles, bleaching hair a blondish-white. If Obama spends a lot of time in Hawaii or playing golf during his two lame duck years in office, those odds may go up. Our perception of hair color can also change depending upon how short the hair in question is and the ambient lighting.

One thing that probably won't affect Obama's hair color is anxiety. Contrary to popular belief, it is not proven that stress rapidly grays hair; genes most strongly influence when and how fast that happens. Cells in the skin called melanocytes produce the pigment melanin that colors hair, and there's no direct evidence that stress limits their productivity or reduces their lifespans.

The current study was published in the brand spanking new Proceedings of the Natural Institute of Science (PNIS), an open-access journal that publishes satirical Onion-style articles and genuine research on humorous topics. Other topics tackled in the journal so far include an assessment of how often men think of chicken wings and an analysis of how costly it would be to raise a child like Calvin from the comic strip Calvin and Hobbes.

PNIS editor Matt Michael started the journal because he saw a lack of quality in the cacophony of scientific publishing.

"In 2014, it is estimated that there are now 29,147 active scientific journals, publishing roughly 700,000 papers every year, contributing to the estimated 55 million papers to ever have been published. Yet all of them suck," he joked in the journal's introductory editorial, before predicting that PNIS will crush leading journals Nature and Science in two years.

Michael's actual goals with PNIS are quite worthwhile:

We feel (and we are not alone in this) that science has a problem with effective communication. We believe that part of this problem stems from the view that science is an exclusive club that only a few can participate (and, thus, lacks transparency) and that current scientific studies are beyond the understanding of most people.

To change this view, PNIS uses satire, humor and its open publication policy to demonstrate that: 1) people use scientific concepts (most especially the scientific method) every day, frequently without even realizing it, 2) scientific discoveries are not limited to scientists (much like how playing sports is not limited to professional athletes), 3) most scientific concepts are easy to understand and 4) scientists are all too willing to laugh at themselves.

Source: K. Hernandez and C. Drexler (2014) "Yes We Canities! A quantitative analysis of the graying of Barack Obama's hair." Proceedings of the Natural Institute of Science | Volume 1 | HARD 3

(Images: AP, PNIS)

Women's Farts Smell Worse, and Five More Facts You Need to Know About Flatulence

Flatulence is a fact of life. Americans collectively break wind to the smelly tune of up to 6.3 billion times each day. That's a lot of hot air. For such a ubiquitous activity, it's amazing how taboo it is. Face palms and pinched noses mark the passing of gas in most social settings. Science, however, has no ingrained distaste for flatulence. Here are six facts we've learned about farting.

1. There are three main fart smells. Hydrogen sulfide produces the signature "rotten eggs" note, methanethiol produces hints of "decomposing vegetables," and dimethyl sulfide adds a hint of "sweetness."

2. The average fart is roughly 100 milliliters in volume and lasts approximately two seconds. More interesting than the statistic itself is how it was calculated. Basically, it involved subjects farting into specially designed, airtight, gas-collecting underwear.

3. There's a way to make your farts (mostly) odorless. Marketed as the only "internal deodorant," the over-the-counter drug Devrom, with its active ingredient bismuth subgallate, reduces almost 100% of the odor caused by sulfur gases, the primary contributors to smelly farts. Bismuth is an interesting metal -- it's extremely dense yet surprisingly nontoxic. The only known side effects of taking bismuth subgallate are a harmless darkening of stools or the tongue, which the user's friends and family undoubtedly describe as "well worth it."

4. Women's farts smell worse. In studies conducted by eminent flatulence researcher Michael Levitt, women's farts consistently sported significantly greater concentrations of hydrogen sulfide. Odor judges have confirmed that -- at similar volumes -- this translates to a noticeably worse odor compared to men's farts.

5. Red meat kicks up a stink. Sulfur compounds contribute the most to flatus malodor, and a class of them called thiols royally reek. Methanethiol is one of the worst. Naturally found in blood, and, in turn, red meat, it can be released via the digestive process and eventually off-gassed via the anus.

6. Holding in your farts won't kill you, but it won't be comfortable either. As Tara from D-News explained, "When we hold farts in, the gas retreats back into our body and gets absorbed into the intestinal walls where it eventually mixes in with our blood. At best, that can cause bloating, abdominal pain, and constipation but if you do it repeatedly it can lead to a distended bowel."

(Image: Shutterstock)

Primary source: Gulp: Adventures on the Alimentary Canal, Mary Roach, 2013

The Rudest Space Cloud in the Known Universe

In 1999, astronomers controlling the Hubble Space Telescope zoomed in on a section of the Carina Nebula, approximately 8,000 light years away, and snapped the picture shown above. The image depicts a dense cloud of gas roughly two light years in length that has broken off the greater nebula. The cloud is "striking," NASA noted in 2003, "because its clear definition stimulates the human imagination."

"It could be perceived as a superhero flying through a cloud, arm up, with a saved person in tow below."

That's not what I see...

While NASA deserves respect for keeping things G-rated, that cloud can only be accurately described as the grandest middle-finger salute in the known universe; a giant, cosmic "%@#$ you."

A perceptive redditor spotted the vulgar resemblance two weeks ago.

The rude stellar object is a molecular cloud of gas and dust, dense enough and large enough to permit molecules to form. In the image above, the bright red specks shining to the left are young stars that grew inside.

What I will term the "%@#$ you" cloud is part of the vastly larger Carina Nebula (seen above), which spans over 300 light years. The nebula was discovered by French astronomer Nicolas Louis de Lacaille in 1751. Larger and brighter than the more famous Orion Nebula, the Carina Nebula is home to Eta Carinae, the best-studied luminous hypergiant star. Eta Carinae is 100-150 times as massive as our sun and four million times brighter! Because the star is so huge, it flirts with the Eddington limit, the point where a star's radiation is powerful enough to overcome the gravity that keeps it together. In other words, the star is precariously walking the line of exploding as a supernova or even a hypernova. Astronomers actually expect this to occur within the next million years.
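The Eddington limit has a simple closed form, L_Edd = 4πGMm_p c/σ_T, and plugging in a mass in Eta Carinae's quoted range shows just how close the flirtation is: the result lands right around the star's quoted four million solar luminosities. A back-of-the-envelope check using standard physical constants (the 120-solar-mass figure is simply a mid-range choice from the 100-150 quoted above):

```python
import math

# Standard physical constants (SI units).
G       = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M_P     = 1.673e-27   # proton mass, kg
C       = 2.998e8     # speed of light, m/s
SIGMA_T = 6.652e-29   # Thomson scattering cross-section, m^2
M_SUN   = 1.989e30    # solar mass, kg
L_SUN   = 3.828e26    # solar luminosity, W

def eddington_luminosity(mass_kg):
    """Luminosity at which radiation pressure on free electrons
    balances gravity, for a pure hydrogen plasma."""
    return 4 * math.pi * G * mass_kg * M_P * C / SIGMA_T

# For a star of 120 solar masses:
l_edd = eddington_luminosity(120 * M_SUN)
print(f"Eddington limit: {l_edd / L_SUN:.2e} solar luminosities")
```

The limit comes out near four million suns, which is why a star of Eta Carinae's mass shining at four million solar luminosities is teetering on the edge of tearing itself apart.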

Should that happen, we here on Earth will, in all likelihood, be just fine. The "%@#$ you" cloud, however, may suffer a brute form of stellar censorship.

(Images: Hubble Heritage Team (STScI/AURA), N. Walborn (STScI) & R. Barbá (La Plata Obs.), NASA & ESO)

How to Spread Misinformation

With the advent of the Internet and social media, the average person is afforded unprecedented power to consume and spread information. But with great power come great responsibilities -- to be skeptical, to seek out facts and evidence, to restrict, or at least not aid, the spread of rumors. Of course, one also has the ability to completely ignore all those responsibilities. When that happens, you may discover that you're a purveyor of misinformation.

And if you're going to spread misinformation, you might as well do it right. To all the devout Natural News readers, anti-vaxxers, and alternative medicine scam artists out there, this guide's for you! If truth and logic are your thing, feel free to read these tips as well, then forget them, or better yet, do the exact opposite.

Embrace your biases. Whatever you're into, chances are you can find an advocacy group, corporation, or political party whose tweets and Facebook posts you can pass on without question! We humans have a tendency to favor information that confirms our beliefs -- give in to it! Seek out sources that share your stances and like-minded individuals to revel with. Turn your life into a sounding board.

Don't stop. Don't think. According to psychologists, assessing the veracity of a piece of information requires both motivation and brain power. You certainly don't want to waste either of those precious resources. Better to stick to your carefully tailored and guarded mindset. Avoid visiting pesky websites like Snopes or TruthOrFiction. Don't bother searching out conflicting sources or reading the entire article. Let the mass media be your guide: if you want to be the most popular, you have to share the information first!

Repeat, repeat, repeat! Your friends, family, and followers need to know that juicy bit of information or controversial discovery! So what if all the "facts" don't line up? Facts don't change minds! Opinions are formed and molded by a barrage of information. Whether that information is true or not doesn't matter.

Ignore the Fallout. Who cares if vaccine-preventable diseases like measles, mumps, and whooping cough are on the rise because you've convinced everyone that vaccines are evil? So what if you blindly shared a fake story that an asteroid was set to wipe out all life on Earth? Who cares if you fueled a fire of false allegations that nearly destroyed somebody's life? If you do start to feel guilty, just take a look at that insane photo that's too unbelievable to be true... then hit "share."


Whatever you do, don't read these: "Digital Wildfires in a Hyperconnected World." World Economic Forum. 2013

Stephan Lewandowsky, Ullrich K. H. Ecker, Colleen M. Seifert, Norbert Schwarz, and John Cook. "Misinformation and Its Correction: Continued Influence and Successful Debiasing." Psychological Science in the Public Interest December 2012 vol. 13 no. 3 106-131

"I Don't Want to Be Right." Maria Konnikova. The New Yorker. May 2014.

Can You Poop Via Your Mouth?

Last week, with the help of Mary Roach's fascinating book Gulp, I tackled a question surely in the back of everybody's mind: Can you eat via your anus?

This week, again guided by Gulp, I do the disgusting and flip the question. Ladies and gentlemen, restrain your gag reflexes.

Can you poop via your mouth?

Sketchy websites, questionable YouTube videos, and a hilarious episode of South Park all present evidence pointing to "yes." What's more, the American Heritage Medical Dictionary defines a condition dubbed fecal vomiting: "The vomiting of fecal matter that has been drawn into the stomach from the intestine by spasmodic contractions of the gastric muscles." The term is even used various times in the medical literature.

The answer to the question, however, seems to be a nuanced "no."

Indeed, fecal vomiting is a genuine condition, occurring in rare cases of severe constipation in which the colon is completely full of feces. The vomit, however, doesn't actually contain what we recognize as poop, which comes from the colon. It contains liquid from the small intestine, ejected with the help of powerful, reverse contractions of muscles in the small intestine and esophagus.

"A well-formed stool does not exit the upper end of the colon," Roach writes.

To the few unfortunates afflicted with fecal vomiting, the difference may seem borderline semantic. After all, liquid from the small intestine can be darker in color and doesn't exactly smell like roses.

Strangely enough, there are a couple dozen reports of actual feces being expelled from the mouth; however, all of them come from the early days of modern medicine and are highly dubious. The reports prompted one skeptical physician, Dr. Gustav Langmann, to put a claimant to the test. In 1889, he undertook the care of a female patient who witnesses reported had passed stools via the mouth. At some point during the woman's hospital stay, nurses discovered "some hard feces, wrapped in paper" under her pillow. Ewwww. Case closed.

Source: Gulp: Adventures on the Alimentary Canal, Mary Roach, 2013

(Image: South Park Studios)

Why Eating McDonald's Is Completely 'Natural'

To a great many ardent nutrition purveyors, the Big Mac is the embodiment of evil: greasy, loaded with hard-to-pronounce chemicals, and entirely unnatural.

Packing in 550 calories, 29 grams of fat, and 65% of your daily value of sodium, it's easy to see why you probably shouldn't eat a Big Mac at every meal. However, with that said, caricaturing the act of eating McDonald's and other fast food as unnatural is completely unwarranted. (It is also, of course, fallacious to assume that what is "natural" is always better.)

Humans are perhaps the ultimate omnivores. We evolved to eat whatever is around, be it plant, animal, or burger. Before the rise of agriculture roughly 10,000 years ago, our ancestors were predominantly hunter-gatherers. Spread far across the globe, their diets matched their surroundings. For examples, we can look to modern hunter-gatherers: 95% of the Inuit diet is composed of meat and fish; the !Kung of southern Africa eat mostly seeds, nuts, fruits, and vegetables; farther north, the Hadza predominantly consume meat, fish, and roots.

There is no specific "natural" human diet, evolutionary biologist Marlene Zuk wrote in 2009. What's "natural" for us is to eat whatever we can. Over time, our bodies may adapt to take advantage.

"Take dairy products, one of the classic modern foods we supposedly aren’t meant to eat. Most people who can’t tolerate them lack a gene that confers the ability to break down lactose, the sugar in milk, after the age of weaning. Our Stone Age ancestors couldn’t digest milk as adults either, but a recent study shows that about 5,000 years ago, mutations that keep that gene switched on spread throughout Northern Europe. That’s also when cattle began to be domesticated; being able to drink milk as well as lower-lactose cheese would have been advantageous as a source of nutrition and fluids."

So then what about the epidemic levels of obesity? Our Paleolithic ancestors certainly weren't as voluptuous as we are. That's true, but their average lifespan was also only around forty years. Moreover, obesity levels aren't sky high because humans have strayed from our natural dietary path. They're sky high because food is much easier to attain than it used to be. In the developed world, the question now is not "Will we eat?" It's "How much will we eat?" If an "unnatural" McDonald's diet made us fat, then you wouldn't be able to lose weight eating there. But you can!

Writing at Science-Based Medicine, Harriet Hall dismantles the notion of a natural human diet:

Arguments that we should eat what we evolved to eat are undercut by three facts: humans have continued to evolve since the Paleolithic, humans evolved to be adaptable and to thrive on a wide variety of dietary intakes; and we evolved to have the survival advantage of intelligence and inventiveness to develop technology to improve our access to food (for instance, cooking). In other words, technology is “natural” for humans, and eating in a variety of ways is natural for humans too.

Imagine if humans weren't naturally flexible eaters.

"If our ancestors had been less adaptable and if there was a single healthy diet, humans could not have spread to new continents or survived the climate changes of the Ice Age," Hall notes.

Humans' natural ability to thrive on almost any diet is a key to our species' success. So go ahead. Have that Big Mac.

Changing the World, One Tweet at a Time

It is easy for journalists to succumb to the notion that we make very little difference in this world. According to Pew, only 28% of Americans believe journalists contribute "a lot" to society, while 27% believe journalists contribute "not very much" or "nothing at all." And a Gallup poll showed that only about 20% of Americans believe that reporters are honest and ethical. On the bright side, at least we beat out car salesmen (9%), Congressmen (8%), and lobbyists (6%). Huzzah!

Many young people go into journalism because they "want to change the world," but by that, they often mean engaging in advocacy journalism. I believe that is the wrong approach. Too much of it, in my opinion, comes across as unobjective and even dishonest, far more akin to propaganda than to journalism. That is because advocacy journalists -- by definition -- are wedded to an ideological worldview and, hence, are not particularly amenable to changing their opinions if the facts change. Instead, they resort to cramming and distorting evidence to fit into their worldview.

We manifestly reject that sort of journalism at RealClearScience. Like most journalists, we also want to change the world, but we do not want to do so by embracing political or social causes. Instead, we just want to tell the truth to the best of our ability. And when the facts change, our opinions change. Those are two of the guiding principles upon which this website is based.

A logical corollary to those principles is that we should hold influential people accountable for their actions and words. When we come across scientific misinformation, we believe it is our duty to correct the record. We believe the traditional "watchdog" role of journalism is important for keeping the public properly informed and maintaining a healthy democracy.

To that end, Todd Myers -- the environmental director of the Washington Policy Center -- recently brought to my attention a curious tweet posted by the "Ecoconsumer," a King County (Seattle and vicinity) employee whose job is to promote various environmental causes. (No, I don't think that's a legitimate use of taxpayer money, but that's a subject for another day.)

His tweet, since removed (but forever memorialized here), advertised a new "documentary" about how cell phones cause cancer. Loyal readers of RealClearScience, as well as most reasonably educated people, know that is complete rubbish. Because cell phones operate using microwaves, it is not physically possible for them to cause cancer. As Michael Shermer explains, microwaves do not have enough energy to break chemical bonds, which is a requirement for something to be carcinogenic. Besides, we are constantly bathed in electromagnetic radiation -- from the sun, wi-fi devices, laptop computers, radio broadcasts, etc. If all electromagnetic radiation caused cancer, none of us would be alive.

Disturbed that a government employee would post such nonsense, I contacted the Ecoconsumer. He sounded rather flustered by my phone call, so I reached out to his superior. Within mere moments following a brief conversation, in which he agreed with me that King County should be disseminating accurate information, the offending tweet was removed.

I must admit that I was pleasantly surprised. Encounters with bureaucrats rarely end so quickly and favorably. I felt that, for the first time, my job as "watchdog" actually paid off. And though removing a single post from Twitter hardly constitutes an Earth-shaking victory, maybe it is quite possible for journalists to make a difference in this world -- albeit, for some of us, just one tweet at a time.

(Photo: Make a Difference via Shutterstock)

The NFL Has a Lower Rate of Domestic Violence Than the General Population

This year, three National Football League players -- Adrian Peterson, Ray Rice, and Greg Hardy -- have either admitted to or been convicted of domestic violence. Their stories coalesced into a storm this past week with the release of a damning new video of Ray Rice punching his wife (then fiancée) and the indictment of Adrian Peterson, debatably the NFL's best running back, for child abuse.

The media onslaught of updates, analysis, and opinion on what has been called the National Football League's "worst week ever" leaves a distinct impression: the NFL is a league stocked full of criminals.

Evidence, however, doesn't bear that out.

Back in 1999, leading criminologist Alfred Blumstein teamed up with author Jeff Benedict, who has written five books on crime and athletics, to compare rates of criminal violence among NFL players to those of the general population. Controlling for age, they found that the annual rate of assault and domestic violence among NFL players was less than half that of the general population.

But Blumstein and Benedict's analysis is now fifteen years old. Perhaps things have changed in that time?

It doesn't appear they have. Back in July, FiveThirtyEight's Benjamin Morris tallied up the incidents in USA Today's NFL Arrests Database to discern crime rates among NFL players. He then compared those numbers to the national averages among 25-29 year olds, and found the rate of domestic violence in the NFL to be 55.4% that of the general population. And the overall crime rate was a mere 13% of the national average.

So why then do 69% of Americans believe that the NFL suffers a "widespread epidemic of domestic violence problems"? The answer is rooted in how we think. Humans are prone to rely on examples and experiences that can be easily recalled. The idea is that if we can remember it, it must be important. This mental shortcut is termed the availability heuristic. A key drawback of the heuristic is that it leads us to overestimate the prevalence of memorable events. Here, you can legitimately blame popular media. Because plane crashes are widely covered, many erroneously view flying as more dangerous than driving. Thanks to Shark Week, people are warier of sharks than deer. Because 91% of people have seen, read, or heard something about Ray Rice's domestic violence, they overestimate the problem of domestic violence in the NFL.

That's not to say that domestic violence isn't a problem in the NFL. By type of crime, domestic violence is the closest the NFL comes to the national average. Moreover, Morris noted that NFL players do seem to commit acts of domestic violence at a higher rate than individuals with a similar socioeconomic status, though a direct comparison wasn't available.

As public figures, football players must hold themselves to higher standards, and be punished appropriately when they fail to meet them. But more fundamentally, as human beings, they need to recognize that unprovoked violence against others, particularly those unable to defend themselves, is utterly reprehensible.

(Image: AP)

Should Macho Men Shave Their Legs?

Every man comfortable enough with his masculinity to squeeze into performance-enhancing lycra athletic body wear has sooner or later confronted the next frontier: whether to shave his legs!

Aside perhaps from staunch feminists, female athletes don't face this social conundrum. Among male athletes, however, the otherwise socially uncool body-smoothing "manscaping" has long been a tradition. Cyclists will tell you that it might improve recovery from road rash or make leg massages better. They'll also admit that it's fashion. Tough, big-deal bike riders do it. Nobody races the Tour de France with hairy legs. For male cyclists, shaving is a form of machismo.

It also has negligible performance benefits, or so the conventional wisdom goes. Now, new data from men on bikes in wind tunnels contradicts this view: cyclists measurably moved faster with shaved legs! In theory, this makes sense. Generally, smooth surfaces are more aerodynamic than rough or uneven ones.

Aerodynamics matters so much to cyclists because the practical limit of their speed is not muscle power, but the aerodynamic drag of their ride: bike and body. Top speed on level ground (on a properly geared bike) is determined by how cleanly the forward-facing shapes cut into the wind. The more carefully a surface cleaves the oncoming air without disturbing it into a chaotic turbulent mess, the faster the rider goes.

An everyday pleasure rider may hit 15 mph on a brisk ride, a commuter may cruise at 16-18, and a professional racer can hold speeds in the mid 20s. A rider on a bike with an extremely aerodynamic fairing like the nose of a rocket can reach speeds of more than 80 mph!

"Aero" has become a huge buzzword and selling point in the cycling industry. Most competitive races have actually banned certain bike designs for being too fast. Within a limited bicycle geometry range, the next gains to be made are those from the other half of the aerodynamics of the system: the rider himself. Riders often employ a hunched position, with the arms out and the head tucked down, to reduce aerodynamic profile. They may smooth even their natural body profiles with seamless skinsuits.

Here's where the hairy legs come in. Smooth legs should be slightly more aerodynamic than hairy or, heaven forbid, "stubbly" legs, right?

Previous tests said no, there was no measurable effect. Leg-shaving was just machismo. This new test says otherwise. A cyclist going into the wind tunnel for aero testing at the bike manufacturer Specialized forgot to shave his legs first. His test showed significantly higher drag. Surprised, he returned days later with legs as smooth as a baby's cheek -- and measured a 7% gain in aerodynamic slipperiness.

It was a repeatable result, too. Several more cyclists tested in the same wind tunnel gained a similar aerodynamic advantage. A 7% gain doesn't sound massive, but it can mean finishing more than a minute faster in a one-hour race against the clock. That is a huge competitive advantage. A similar aerodynamic gain might otherwise require hundreds of dollars of specialized bike parts.
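Here is a rough sketch of where that minute comes from. It assumes (my assumptions, not the wind-tunnel report's) that the 7% figure is a reduction in aerodynamic drag, and that drag dominates at racing speeds, so the power needed scales with the cube of speed; at constant rider power, speed then scales with drag to the -1/3 power.

```python
# Back-of-the-envelope estimate: time saved in a one-hour time trial
# from a 7% reduction in aerodynamic drag, assuming drag-dominated riding
# (power ~ drag * speed**3, so at constant power: speed ~ drag**(-1/3)).

drag_reduction = 0.07                               # 7% lower drag (shaved legs)
speed_gain = (1 - drag_reduction) ** (-1 / 3) - 1   # fractional speed increase, ~2.4%

# Over a fixed distance, elapsed time scales inversely with speed.
baseline_seconds = 3600                             # a one-hour time trial
new_seconds = baseline_seconds / (1 + speed_gain)
saved_seconds = baseline_seconds - new_seconds      # ~86 seconds

print(f"Speed gain: {speed_gain * 100:.1f}%")
print(f"Time saved: {saved_seconds / 60:.1f} minutes")
```

Under these assumptions, a 7% drag reduction buys roughly a 2.4% speed increase, which works out to about a minute and a half over an hour -- consistent with the "more than a minute" claim above.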

These results fly in the face of the last serious study of the subject, conducted in the 1980s. The big lesson: verifying previous results is really important. Also, men, get to work with those razors. I'll no longer be making fun of you for your "macho fashion."