Astronomers Find First Earth-Sized Exoplanet in Habitable Zone

Our sun is not the only star in the Milky Way Galaxy; that goes without saying. In fact, it has at least 400 billion brothers and sisters! So if our treasured sun has such an extended stellar family, surely Earth is not alone...

Thus far, astronomers have spotted around 1,800 exoplanets. Some are big. Some are small. Some are even similar in size to Earth. Most are gassy. A few are rocky. And a select bunch, 20 or so, even reside within the prized "Goldilocks zone," the region around a star within which planetary objects with sufficient atmospheric pressure can support liquid water on their surfaces.

But so far, astronomers haven't discovered a planet combining the best of both worlds: Earth-sized and inside its star's habitable zone. Such a planet would surely be a prime candidate for supporting life as we know it!

Well, that planet-finding drought is over. With a paper in the prestigious journal Science, a team primarily based out of NASA's Ames Research Center and the SETI Institute has announced the discovery of such a planet.

Ladies and gentlemen, meet Kepler-186f. With a radius roughly 1.11 times Earth's, it could be a slightly bigger sibling to our home planet. The outermost planet in a system with four others, Kepler-186f orbits its star every 130 days at a distance of roughly 30 million miles, much closer than Earth's 93 million miles. One might worry that Kepler-186f would be baked to a crisp, but that's not a concern -- the planet's sun is a little less than half the size of ours. So rather than smoldering, Kepler-186f receives just 32% of the stellar energy that Earth receives from the Sun.
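(For the curious: that 32% figure falls out of the inverse-square law. Here's a back-of-the-envelope sketch in Python; the stellar luminosity value is our own assumption -- roughly 4% of the Sun's output, plausible for a small M dwarf -- not a number from the paper.)

    # Back-of-the-envelope insolation estimate via the inverse-square law.
    # ASSUMPTION: Kepler-186 shines at ~4% of the Sun's luminosity -- a
    # plausible figure for a small M dwarf, not a number from the paper.
    EARTH_ORBIT_MILES = 93e6      # Earth-Sun distance, from the article
    PLANET_ORBIT_MILES = 30e6     # Kepler-186f's distance, from the article
    STAR_LUMINOSITY = 0.04        # in solar units (assumed)

    # Flux scales with luminosity and falls off with the square of distance.
    distance_ratio = PLANET_ORBIT_MILES / EARTH_ORBIT_MILES
    relative_flux = STAR_LUMINOSITY / distance_ratio**2

    print(f"Kepler-186f receives ~{relative_flux:.0%} of Earth's insolation")
    # Lands in the same ballpark as the article's 32% figure; the exact
    # value depends on the assumed luminosity and orbital distance.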

"Despite receiving less energy than Earth, Kepler-186f is within the habitable zone throughout its orbit," lead author Elisa Quintana, an astrobiologist at NASA's Ames Research Center, reassuringly writes.

However, if there's liquid water on the surface, it may be in danger of freezing.

"It is... slightly larger than the Earth, and so the hope would be that this would result in a thicker atmosphere that would provide extra insulation," San Francisco State University astronomer Stephen Kane, also a member of the team, said in a press release.

The chances of life on Kepler-186f are hampered by a glaring fact: M-class stars like Kepler-186 have a bad habit of emitting flares, flares that are proportionally more powerful than those emitted by our sun. And as Kepler-186f orbits much closer to its star than Earth does to the Sun, the planet might be periodically hit by a flare, which could wreak all sorts of havoc.

Quintana and her compatriots are very certain of Kepler-186f's size, but they are less certain about some of the planet's other features, like its atmosphere, mass, and composition. Sometimes, astronomers can analyze the spectrum of starlight that filters through a planet's atmosphere during a transit, allowing them to determine the elements present. But sadly, Kepler-186f's sun is far too dim for spectroscopy to be feasible. However, given its radius, it is highly unlikely that the planet has a hydrogen-rich atmosphere like Jupiter, Saturn, or other gas giants.

Kepler-186f could be composed of pure ice, pure rock, or even pure iron, yielding a range of possible masses from 0.32 times Earth's to 3.77 times as much. If it features an Earth-like composition, it would be about 44% more massive than Earth.

The team also isn't sure of Kepler-186f's rotation. For instance, if it's tidally locked, one side would always face its sun.

The discovery was made using the Kepler Space Telescope. The stargazing machine primarily locates planets by observing faraway stars. As planets pass in front of these stars, they block a fraction of the starlight. By measuring these periodic dips in brightness, astronomers can glean all sorts of information, like a planet's size, density, and sometimes even the content of its atmosphere.
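To see how the numbers shake out, here's a toy sketch of the transit math (not Kepler's actual pipeline, and the dip and star size below are hypothetical): the fractional dimming roughly equals the square of the planet-to-star radius ratio.

    import math

    # Toy sketch of the transit method (not Kepler's actual pipeline).
    # A transiting planet blocks a fraction of the stellar disk equal to
    # (R_planet / R_star)**2, so the brightness dip reveals the radius ratio.

    def planet_radius(transit_depth, star_radius):
        """Planet radius implied by a fractional brightness dip."""
        return star_radius * math.sqrt(transit_depth)

    # Hypothetical numbers: a 0.05% dip around a star half the Sun's radius.
    star_radius_earths = 0.5 * 109.2   # the Sun is ~109 Earth radii across
    depth = 0.0005

    print(f"Implied radius: {planet_radius(depth, star_radius_earths):.2f} Earth radii")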

If you're hankering to travel to Kepler-186f, you're sadly out of luck. The planet is roughly 500 light years away. But look on the bright side: we've just found a planet that's a lot like ours! Earth is not alone! Who knows, maybe we're not either!

Source: Elisa V. Quintana et al. "An Earth-Sized Planet in the Habitable Zone of a Cool Star." Science Vol. 344. 18 April 2014.

(Top Image: Danielle Futselaar)

Harnessing Earth's Massive Wave Power Resources

We have solar panels to absorb the sun's energy, giant propellers to collect the wind, and even machines to extract power from the tides. Short of wide-scale implementation of nuclear fission or fusion power, there are few other options for generating power without burning hydrocarbons. None of these methods is yet cost-efficient on the market, but development continues.

Another renewable resource may soon be entering the competition: the energy carried in the rising and falling of ocean waves.

The U.S. consumes between roughly 300 and 1,000 billion watts of electricity at any given time, varying by season, weather, and many smaller factors. According to U.S. government scientists, several times this much power exists in the form of ocean waves. Only a fraction (an estimated 625 billion watts of equivalent conventional energy) can be easily extracted with current technology. That's a lot of power!
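Where do estimates like that come from? Oceanographers often start from a standard deep-water idealization: power per meter of wave crest P = ρg²H²T/(64π), for wave height H and period T. A rough sketch, with an illustrative sea state:

    import math

    # Idealized deep-water wave power (a textbook formula, not a device
    # model): P = rho * g**2 * H**2 * T / (64 * pi), in watts per meter
    # of wave-crest length, for wave height H and period T.

    RHO_SEAWATER = 1025.0   # kg/m^3
    G = 9.81                # m/s^2

    def wave_power_per_meter(height_m, period_s):
        """Energy flux (W per meter of crest) for deep-water waves."""
        return RHO_SEAWATER * G**2 * height_m**2 * period_s / (64 * math.pi)

    # Illustrative sea state: 2 m waves with an 8 s period.
    p = wave_power_per_meter(2.0, 8.0)
    print(f"~{p / 1000:.0f} kW per meter of wave crest")   # roughly 16 kW/m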

Several types of machines are being built to tap this potential.

The simplest conceptual design is a floating buoy, tethered to the sea floor below. The tether is of a fixed length, with a magnet at the very top, inside the buoy. As waves rise and fall, the magnet's position within the buoy oscillates up and down. This motion generates electric current in coils of wire surrounding the magnet, through Faraday induction.
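In highly idealized form (every number below is hypothetical), the physics reduces to Faraday's law: if the wave makes the magnetic flux through the coil oscillate sinusoidally, the peak voltage is simply the number of turns times the peak flux times the angular frequency.

    import math

    # Idealized buoy generator: treat the magnetic flux through the coil
    # as a sinusoid driven by the wave, phi(t) = phi0 * sin(2*pi*t / T).
    # Faraday's law (emf = -N * dphi/dt) gives a peak EMF of N * phi0 * omega.

    N_TURNS = 500          # coil turns (hypothetical)
    PEAK_FLUX = 0.01       # peak flux per turn, in webers (hypothetical)
    WAVE_PERIOD = 8.0      # seconds, a typical ocean swell

    omega = 2 * math.pi / WAVE_PERIOD
    peak_emf = N_TURNS * PEAK_FLUX * omega
    print(f"Peak EMF: ~{peak_emf:.1f} volts")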

Another design relies on a floating buoy to act as a sort of pump handle. Recognizing that waves move objects not only up and down but side to side, this machine uses the lateral motion to derive power. As the buoy moves back and forth in the waves, it pumps sea water through a pipeline to the shore and back. A station at the shore converts the push of this pressurized water to electricity.

An air-column turbine is another clever idea. The bottom of the machine remains underwater. As a wave crest passes across, the rising water is forced up into the hollow interior of the device, compressing the air above it. The pressurized air passes out of the machine through the blades of a wind turbine, and the spinning blades turn an electrical generator, just like an on-shore wind generator. As the wave troughs, air is sucked back in through the turbine, generating more current.

A fourth type of machine is very different: it's named after a mythical sea-snake. The Pelamis system consists of several tubes, hinged together in a line. The line faces into the oncoming waves. Waves moving down the line raise and lower the segments at angles to one another. Hydraulic pistons connecting each segment are powered by the bending; their forced motion pushes fluid through an electrical generator.

Whether these designs will be able to scale to create larger power stations remains to be seen. Even then, it is unclear if they will have economic viability. A few systems have been built and installed, but these amount to test programs, which are not designed for commercial use.

Will wave power ever be practical? We'll have to wait and see.

(AP photo)

Religion Didn't Kill Science in the Middle East

Science in the Middle East isn't dead, but it isn't exactly alive, either. According to Thomson Reuters' Science Watch, the Arabian, Persian and Turkish Middle East produces only 4% of the world's scientific literature. Paltry by almost any standards, that value is even more diminutive when paired with the fact that the Middle East, at one time, led the world in science.

Between the dawn of the 9th Century and the middle of the 13th Century, a time when Europe was languishing in the Dark Ages, Islamic scholars were taking monumental strides in mathematics, medicine, and physics. Thinkers of all religions and ethnicities gathered in cosmopolitan cities like Baghdad and Damascus to discuss the latest discoveries and theoretical concepts. Ideas became so highly valued, they were almost a form of currency. Observatories were built to study the sky. Algebra was born. The use of Arabic numerals -- 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 -- which were originally devised in India, became widespread.

So what happened?

It's easy to point to modern fundamentalists in the Middle East and utter a single answer: "religion." But most historians of science dismiss this oversimplified explanation. Instead, a confluence of factors ended science's golden age in the Muslim world, and created a mire in which science has been bogged down ever since.

War was perhaps the biggest reason for the decline. In the 11th and 12th Centuries, crusading Christian armies from Europe invaded the Middle East in order to reclaim the Holy Land. The attacks left the Islamic Empire severely weakened. When the Mongols invaded from the east some years later, they were met with meager resistance. Ultimately, Baghdad was put to the torch in 1258, along with a great many priceless books and manuscripts.

Fast forward to the 1400s. The printing press is beginning to revolutionize the spread of ideas. Sadly, the Muslim world is left out for a crucial two hundred years. The Arabic language, which in the past served science incredibly well due to its precision, proved unwieldy for typesetters. While ideas flowed in Europe, mostly through books printed in Latin, their spread stagnated in the Middle East.

Christopher Columbus' discovery of "The New World" was another nail in the coffin of Islamic science. Suddenly, trade routes changed, and money started pouring into Spain, Italy, and England instead of the Middle East. In turn, wealthy benefactors began bankrolling scientific endeavors in Europe. Concurrently, squalor began seeping into the Muslim world.

The Middle East would eventually be united under the banner of the Ottoman Empire between the 15th and 19th Centuries, and though society saw a bit of resurgence during the time, science and technology did not. The lead in that category had been ceded to Europe, and Europe wasn't going to give it up.

As the Ottoman Empire declined through the 1800s and finally collapsed in the early 20th Century, much of the Middle East came to be occupied by European powers, primarily France and Britain. Under such control, science could not grow. Religion, however, grew more entrenched.

Though Islam can be interpreted as condoning, even compelling, the study and exploration of the natural world, that view has been in the minority among those in power. Thus, it is political autocracy and theocracy that has likely held science back in the Middle East for the last century or so. Science appears to be germinating in parts of the Islamic world -- in Iran and Turkey, for example -- but whether the trend will continue remains to be seen.

(Image: AP)

Unlike Voters, Fish Make Better Group Decisions

The entire idea of democracy rests upon the notion that large groups of people will, more often than not, make prudent decisions. In theory, all the stupid voters will cancel each other out, and society's collective intelligence will result in the best candidates getting elected. However, American voters, particularly since 1992, have almost single-handedly challenged the idea of the "wisdom of crowds."

Still, collective intelligence exists, despite Americans' vigorous attempts to disprove it. Large groups of animals are known to make synchronized, life-or-death decisions with great rapidity. Think of a school of fish evading a predator, or a flock of birds finding refuge in a storm. How are those coordinated decisions made? Scientists are just now beginning to unravel how such complex behavior works.

New work from a group led by Angelo Bisazza sheds more light on this decision-making process. But instead of studying large groups (10 or more individuals), which is customary for this type of research, Bisazza's team looked at how pairs (called "dyads") of fish made decisions. Specifically, they wanted to determine if a pair of fish was better at making basic numerical calculations than a single fish alone.

Fish can't count, but they do have a vague sense of relative magnitude known as "numerosity." For instance, if school of fish "A" is twice as large as school of fish "B," a fish will be likelier to join the larger group, presumably because it is safer for a fish to swim around with a larger group than a smaller one. The researchers took advantage of this natural instinct in their experiments. (See figure.)

As shown in figure A, Bisazza's team placed either a single fish or a dyad into a middle tank. Flanking it were two more tanks, one holding 4 fish and the other holding 6. The test was straightforward: Which tank would the fish swim toward? Figure B depicts the result: Single fish were indecisive, spending roughly equal amounts of time with both the group of 4 and the group of 6. Dyads were much smarter; they spent more time adjacent to the group of 6. (The results were statistically significant.) Dyads of fish that had spent time together in the same tank ("familiar") yielded similar results to dyads of fish that were not previously acquainted ("unfamiliar").

Further experiments by Bisazza's team hinted that the smarter fish led the way. In other words, merit-based leadership was responsible for dyads making better decisions than single fish alone.
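A toy simulation captures the intuition (the numbers are invented for illustration and this is not the team's model): give each fish a random "numerosity skill," let dyads follow their more skilled member, and pairs outperform singletons on average.

    import random

    # Toy model of merit-based leadership (invented numbers, not the
    # team's analysis). Each fish gets a "numerosity skill": its chance
    # of picking the larger shoal. A dyad follows its more skilled member.

    random.seed(42)
    TRIALS = 100_000

    def single_fish():
        skill = random.uniform(0.45, 0.75)   # hypothetical skill spread
        return random.random() < skill       # True = chose the larger shoal

    def dyad():
        leader_skill = max(random.uniform(0.45, 0.75) for _ in range(2))
        return random.random() < leader_skill

    singles = sum(single_fish() for _ in range(TRIALS)) / TRIALS
    pairs = sum(dyad() for _ in range(TRIALS)) / TRIALS
    print(f"singles: {singles:.1%} correct, dyads: {pairs:.1%} correct")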

If only humans were so smart.

Source: Angelo Bisazza, Brian Butterworth, Laura Piffer, Bahador Bahrami, Maria Elena Miletto Petrazzini & Christian Agrillo. "Collective enhancement of numerical acuity by meritocratic leadership in fish." Scientific Reports 4: 4560. 2 April 2014. doi:10.1038/srep04560

Hilariously Stupid Science Questions: Yes, Again!

It's never stupid to ask a question about science, but that doesn't mean there aren't hilariously stupid science questions! One, two, three times already, we've shared selections of them. (Wow, RCS, run stuff into the ground much?) We'd now like to share twelve more. And we'll continue sharing them until they stop being funny. As always, our hats are tipped to the esteemed panel of "logic-dodging" jokesters over at Reddit who came up with most of these zany, thigh-slapping queries.

Why would string need a theory? (from RCS reader David Eisenberg)

Where on the periodic table is the element of Surprise? (from RCS reader Nemo_of_Erehwon)

How did the thesaurus survive the dinosaur extinction?

We've long known the speed of light, but what is the speed of heavy?

My neighbor said he's an "acidic Jew". Are there basic Jews? What happens if you combine one of each?

Why does the amount of people required to change a light bulb vary so greatly between cultural groups?

Do hydrophobic objects yell slurs at water when they see it?

Is a right angle 90° Celsius or 90° Fahrenheit?

If you put a vial of Germanium (Ge) next to a vial of Francium (Fr), will the Ge occupy the Fr?

How did humans reproduce before the discovery of alcohol?

Looking at a map of the US, I noticed that the states all perfectly fit together with no gaps. How is this possible?

If Pluto is a dwarf planet, shouldn't we try to contact the Dwarves living there?

via Reddit

(Image: Secret Ingredient via Shutterstock)

Why Physics Teachers Preach Fiction

Have you ever wondered why hurricanes spin in only one direction in each hemisphere? How about why you get thrown against the side of your seat if you careen around a corner in your car?

You’ve probably heard the answers: the Coriolis and centrifugal forces, respectively. Trivia, yes, but there is more here than mere academic facts.

Physicists will name these forces to explain events, but then call them “fictitious.” Why? Well, partly because we’re a mathematically pedantic lot. Mostly though, it’s because, at the root of things, these forces don’t exist as physical pushes. They’re actually gateways from the very complicated real world to the slightly less complicated world of Isaac Newton.

Newton’s laws govern sets of objects found in common “reference frames.” A reference frame is like the backdrop for a play, the setting for a novel, or the gridwork of a graph. The gridiron yard lines of football are a reference frame to describe the position of the ball and the players. Latitude, longitude and altitude are a (spherical) reference frame for places on earth.

Mathematically, a reference frame means a set of 3D coordinates that can be used to describe all of the objects and forces under consideration. The most difficult aspect of a physics problem can be choosing these coordinates in the way that makes the problem simplest to solve.

Newton’s laws live and work in reference frames that are inertial. That is, the frames do not accelerate. A parked car is an inertial reference frame. Taking off, speeding up, and braking are non-inertial. Put the car on cruise control, and you are back in an inertial reference frame.

Acceleration is a change in velocity. Speeding up or braking in a straight line is a change in the magnitude of velocity (speed). However, turning is also a change in velocity: a change in its direction. An object going in circles is doing nothing more than constantly changing the direction of its velocity, while staying at the same speed. Yet this still counts as acceleration, and hence a frame attached to the object is non-inertial.
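A quick worked example: for circular motion, the acceleration has magnitude v²/r and points toward the center of the turn.

    # Circular motion is acceleration even at constant speed:
    # a = v**2 / r, directed toward the center of the turn.

    speed = 15.0    # m/s, about 34 mph (illustrative)
    radius = 50.0   # meters, a fairly tight highway ramp (illustrative)

    centripetal_accel = speed**2 / radius
    print(f"a = {centripetal_accel:.1f} m/s^2 (~{centripetal_accel / 9.81:.2f} g)")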

Here’s where the fictitious forces fit in. They allow you to fudge a non-inertial frame to look like an inertial frame. Then you can use Newton’s laws to understand what is happening, instead of resorting to much harder physics.

Centrifugal and Coriolis forces are the fictitious forces for translating rotating frames to inertial ones. This comes in handy not just for cars and merry-go-rounds, but for everything we do. Living on a giant revolving orb makes all of our frames slightly non-inertial.
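For the mathematically inclined, the standard textbook recipe (a sketch of the general result, not anything specific to this article) adds two correction terms in a frame rotating with angular velocity Ω: a centrifugal acceleration -Ω × (Ω × r) and a Coriolis acceleration -2Ω × v.

    import numpy as np

    # Fictitious accelerations in a frame rotating with angular velocity
    # omega (standard results): a_cf = -omega x (omega x r), a_cor = -2 omega x v.

    omega = np.array([0.0, 0.0, 7.292e-5])  # Earth's spin, rad/s, along z

    # An object on the equator (6.371e6 m from the axis) moving east at
    # 250 m/s -- airliner-ish numbers, purely illustrative.
    r = np.array([6.371e6, 0.0, 0.0])
    v = np.array([0.0, 250.0, 0.0])

    a_centrifugal = -np.cross(omega, np.cross(omega, r))
    a_coriolis = -2 * np.cross(omega, v)

    print("centrifugal:", a_centrifugal, "m/s^2")   # ~0.034 m/s^2, outward
    print("Coriolis:   ", a_coriolis, "m/s^2")      # ~0.036 m/s^2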

The centrifugal force explains why you get thrown against the car door in a sharp turn. That’s because you’re rotating with the vehicle in a non-inertial frame. To someone standing on an overpass watching you pass under, there is no centrifugal force at all; inertia simply carries you in the direction you’d have been going before the turn. The door pushes you away from your straight-line path.

Again, because we live on the surface of an enormous spinning object, all of our reference frames rotate just a little bit. Most of the time the effect is so small that you can neglect it. But for things travelling a long way, it matters.

Airplane flights, hurricanes, missiles, artillery shells and other large things moving long distances appear to experience the Coriolis force. As the object travels through the atmosphere in a straight line, the surface spins underneath it. From the ground, it looks like the plane is turning. The pilot sees his plane flying a straight line and the earth below sliding away.

The Coriolis force also dictates the direction of cyclonic rotation in a hurricane. Air is attracted to the low-pressure center of the storm, but while the air travels in a straight line toward the center, the earth beneath rotates away. Missing and passing by the low-pressure center, the air is pulled back in again, forming a spiraling vortex.

Meteorologists must always account for the effects of the fictitious forces not only for hurricanes but all low-pressure systems. The rest of us only have to worry about them when playing racer on our commutes. Or when asked to solve a physics problem in college.  

(AP photo)

Deadlier Than Sharks: The Science of Deer in the Headlights

Spring: a beautiful time of year. Temperatures creep above freezing, rains refresh the landscape, and flowers start to spurt from the ground, puncturing the drab earth uncovered as the snows of winter recede. Wildlife also starts to emerge. For drivers, that means headaches.

As animals take to the skies and pastures, many sidetracked by the primordial urges of mating, they also inadvertently make their way onto roadways, waterways, train tracks, and runways, and thus into the path of oncoming vehicles. Estimates suggest that billions of vertebrates are killed each year in these collisions.

Deer are the primary concern. Up to 1.5 million cars collide with deer annually in the United States, killing more than 200 Americans, causing approximately 10,000 injuries, and resulting in almost a billion dollars worth of damages.

By those numbers, deer are roughly two hundred times deadlier than sharks (though the comparison isn't exactly apples to apples).

On average, more collisions with deer occur in November, during mating season, but springtime is up there, too. Starting in May, mother deer will be tailed by one to three fawns. Problems arise when the families decide to cross treacherous rural roadways, especially in the evening.

Why are deer the poster children for animal-automotive collisions in the United States? The simplest explanation is that they are large, abundant, and widespread. As many as 30 million of the 100- to 300-pound mammals reside in the U.S., ranging from Maine in the Northeast, to Florida and Texas in the South, to Idaho in the West.

For a more nuanced explanation, we can look to their behavior. According to Purdue University ecologist Esteban Fernandez-Juricic, when confronted with oncoming vehicles, deer rely on anti-predator instincts. First and foremost, that means freezing -- the stereotypical "deer-in-the-headlights" response. In a normal predatory situation, this would allow them to avoid detection and gauge the situation. But while such a reaction is well suited to a gun-toting human or a lurking wolf, it is not at all useful when confronted with a one-ton hunk of metal traveling at 60 miles per hour. Often, a deer's decision to flee comes too late, if at all. The consequences to animal, car, and driver can be dire.

Yet despite the clear scope of the problem presented by the unfortunate interactions of deer and machine, the whole situation is relatively unstudied. Of course, a key problem is the question of how to study it.

Dr. Bradley Blackwell, a research wildlife biologist with the United States Department of Agriculture, plans to examine the problem using variables that we can control, namely, a vehicle's approach and speed. In a recently published review article, Fernandez-Juricic also noted that humans on foot tend to provoke greater alarm in deer than approaching vehicles do. Perhaps there's a way to make our cars more startling?

Until scientists discover a way to deer-proof our roadways, the best advice for avoiding them is to take it slow in rural, wooded areas in the evening hours, especially on winding roads with blind approaches.

Source: Lima SL, Blackwell BF, Devault TL, Fernández-Juricic E. "Animal reactions to oncoming vehicles: a conceptual review." Biol Rev Camb Philos Soc. 2014 Mar 25. doi: 10.1111/brv.12093.

(Images: Public Domain, Wikimedia Commons)

Dear Potheads, Stop Poisoning Your Pets

Prior to the election last November, our assistant editor Ross Pomeroy reminded voters, before they pulled the lever to legalize marijuana, to consider the negative effects that pot has on animals. Many people did not listen to his advice. A recent article in USA Today reports that incidents of dogs suffering from marijuana toxicosis in Colorado are on the rise.

Dogs won't eat pot plants or bags of weed, but they will happily eat brownies, butter, and any other culinary delights into which a little bit of the ganja has been added. But unlike most humans, dogs don't know when to stop and will continue eating until it's all gone.

Consider the humble pot brownie. The first big problem for your dog is the chocolate, which is itself highly toxic to dogs. If the massive intake of chocolate doesn't make Fluffy violently ill, then pot's active ingredient, tetrahydrocannabinol (THC), certainly will. As with chocolate, dogs don't metabolize THC the way humans do, so the outcome isn't pretty. As Pomeroy wrote previously, symptoms can include "anxiety, hallucinations, severe lethargy, unconsciousness... coma... drooling, vomiting, and loss of bladder control."

"We see dogs stoned out of their minds for days," Colorado State University veterinarian Dr. Tim Hackett told USA Today. "The dogs are terrified," chimed in Denver veterinarian Kevin Fitzgerald.

Dogs aren't the only innocent bystanders, either. Cats can also be harmed by your wayward desserts, and there is at least one reported case of a ferret going into a coma. And of course, children occasionally get into their parents' cannabis, as well, with similarly terrifying results.

What should be done about this? It's far too early to label the issue an "epidemic," but the dangers of marijuana need to be made explicitly clear to the American public. It is not a harmless drug, no matter what the neighborhood flower child says.

Yet, I am deeply sympathetic to the libertarian argument that people should be able to do whatever they want, as long as it doesn't harm anybody else. That's why I voted to legalize marijuana in my state of Washington, though I do not partake of the wacky tobacky myself. I appreciate my neurons, and I prefer keeping my IQ eight points higher than it would be if I regularly puffed the magic dragon.

But, I get very angry when allegedly responsible weed-loving adults live up to the stereotype of being irresponsible potheads. If you keep marijuana in the house, you have the responsibility to keep it locked up and safe -- just like prescription drugs, alcohol, or guns. If you do not, and your pet gets stoned because of it, you should be charged with animal cruelty and lose the right to own pets. If your children get stoned, you should lose custody and go straight to jail.

Maybe then America's stoners will finally get the message that their newfound freedom comes with the price of greater responsibility.

(AP photo)

Time to Bring Pseudoscience into Science Class!

Pseudoscience is a "claim, belief, or practice which is presented as scientific... but lacks supporting evidence or cannot be reliably tested." America is awash in it.

"Roughly one in three American adults believes in telepathy, ghosts, and extrasensory perception," a trio of scientists wrote in a 2012 issue of the Astronomy Education Review. "Roughly one in five believes in witches, astrology, clairvoyance, and communication with the dead. Three quarters hold at least one of these beliefs, and a third has four distinct pseudoscientific beliefs."

Who can we blame for setting us adrift in such hogwash? Thank popular TV hosts like Mehmet Oz, who's touted more than 16 weight-loss miracles on his show, none of which has yet resolved America's obesity epidemic. Thank celebrities like Mayim Bialik and Jenny McCarthy, both anti-vaccine advocates. Thank TLC for giving "Long Island Medium" Theresa Caputo a medium with which to popularize her charlatanism. Thank New Age guru Deepak Chopra, who pushes all sorts of ineffectual alternative medicine through books and media appearances, while collecting a tidy fortune.

By stressing the importance of critical thinking and reasoned skepticism, groups like the New England Skeptical Society, the James Randi Educational Foundation, and the Committee for Skeptical Inquiry constantly battle these forces of nonsense, but their labor all too often falls on deaf ears. It's time to take the problem of pseudoscience into the heart of American learning: public schools and universities.

Right now, our education system doesn't appear to be curbing pseudoscientific belief. A survey of over 11,000 undergraduates, conducted over a 22-year period and published in 2011, revealed that nonscientific ways of thinking are surprisingly resistant to formal instruction.

"There was only a modest decline in pseudoscientific beliefs following an undergraduate degree, even for students who had taken two or three science courses," psychologists Rodney Schmaltz and Scott Lilienfeld said of the results.

In a new perspective published Monday in the journal Frontiers in Psychology, Schmaltz and Lilienfeld detail a plan to better instruct students on how to differentiate scientific fact from scientific fiction. And somewhat ironically, it involves introducing pseudoscience into the classroom.

The inception is not for the purpose of teaching pseudoscience, of course; it's for refuting it.

"By incorporating examples of pseudoscience into lectures, instructors can provide students with the tools needed to understand the difference between scientific and pseudoscientific or paranormal claims," the authors say.

According to Schmaltz and Lilienfeld, there are seven clear signs that mark a claim as pseudoscientific:

1. The use of psychobabble -- words that sound scientific and professional but are used incorrectly, or in a misleading manner.
2. A substantial reliance on anecdotal evidence.
3. Extraordinary claims in the absence of extraordinary evidence.
4. Claims which cannot be proven false.
5. Claims that counter established scientific fact.
6. Absence of adequate peer review.
7. Claims that are repeated despite being refuted.

They recommend incorporating examples of pseudoscience into lectures and contrasting them with legitimate, groundbreaking scientific findings. These examples can be tailored to different classes. For example, in physics classes, instructors can discuss QuantumMAN, a website where people can pay to download digital "medicine" that can supposedly be transferred from a remote quantum computer directly to the buyer's brain. (Yes, that's a real website.) Or in psychology classes, professors can expound upon psychics and the tricks they use to fool people.

But teachers need to be careful, the authors warn.

"Research suggests that the use of pseudoscientific examples enhances scientific thinking, but only if framed correctly."

Teachers must stress the refutation of pseudoscientific claims more than the claims themselves. Otherwise, their worthy efforts to instill critical thinking could backfire. Prior research has shown that repeating myths on public fliers, even with the intention of dispelling them, can actually perpetuate misinformation.

"The goal of using pseudoscientific examples is to create skeptical, not cynical, thinkers. As skeptical thinkers, students should be urged to remain open-minded," Schmaltz and Lilienfeld say.

But when claims are revealed to be specious, students should also be prepared to discard them.

(Image: Shutterstock)

Source: Schmaltz RM and Lilienfeld SO (2014). Hauntings, homeopathy, and the Hopkinsville Goblins: Using pseudoscience to teach scientific thinking. Front. Psychol. 5:336. doi: 10.3389/fpsyg.2014.00336

Correction 4/7: An earlier version of the post mistakenly referred to the Committee for Skeptical Inquiry as the Center for Skeptical Inquiry.

CrossFit Almost Definitely Won't Kill You

LAST DECEMBER, Outside Magazine pointedly raised a question that may have crossed the mind of many a curious exerciser: "Is CrossFit Killing Us?"

Translated from the language of over-the-top, misleading headlines to something normal, the query roughly meant, "Is CrossFit -- an exercise program that emphasizes a blend of high-intensity interval training, weighted movements, and a 'push yourself to the limits' mindset -- more dangerous than more traditional modes of exercise?" To answer that, the article employed anecdotes, a single small study, and an interview with a chiropractor. The response, returned by author Grant Davis, could best be paraphrased as "Mmmm, potentially."

Just three months earlier, Eric Robertson, an assistant professor of physical therapy at Regis University, exposed "CrossFit's Dirty Little Secret." Rhabdomyolysis, a disease in which overexertion prompts the body's muscles to break down rapidly, potentially causing kidney failure, is a common occurrence in CrossFit, he warned, and its everyday practitioners are "largely unaware" of the risk. Except that among CrossFitters, rhabdomyolysis isn't a secret at all. In fact, as Robertson himself admitted, CrossFit's unofficial mascot is a clown suffering from the condition. Moreover, CrossFit has been warning its users about rhabdomyolysis since the company was founded back in 2000.

In fact, Robertson's accusations were based entirely on anecdotal evidence. But that didn't matter, because he had the power of foresight.

"My prediction: in a few years, the peer-reviewed scientific literature will be ripe with articles about CrossFit and rhabdomyolysis," he wrote.

WHAT DOES THE evidence really say on CrossFit's safety? A study published in the December 2013 issue of the Journal of Strength and Conditioning Research attempted to ascertain just that. The researchers submitted an online questionnaire to national and international CrossFit forums. To mask the survey's true purpose -- determining CrossFit's injury rates -- the researchers included general questions about training programs, demographics, and supplement use. 132 anonymous responses were collected, with 186 total injuries reported. That may sound like a lot, but the associated injury rate came out to 3.1 incidents for every 1,000 hours of activity. Moreover, no cases of rhabdomyolysis were reported. Mr. Robertson's crystal ball may need to be checked.

"Injury rates with CrossFit training are similar to that reported in the literature for sports such as Olympic weight-lifting, power-lifting and gymnastics and lower than competitive contact sports such as rugby union and rugby league," the researchers reported.

And that rate is positively puny compared to sports like soccer, skiing, and football. Even running may be more dangerous. A 2010 study followed recreational runners for eight weeks as they trained for a 4-mile race. 30.1 injuries were reported for every 1,000 hours of running.
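The arithmetic behind such rates is simple exposure accounting -- injuries divided by total hours of activity, scaled to 1,000 hours. Working only from the figures quoted above:

    # Injury incidence is just injuries per 1,000 hours of exposure:
    #   rate = injuries / total_hours * 1000
    # Working backward from the figures quoted above:

    injuries = 186
    crossfit_rate = 3.1    # injuries per 1,000 hours, from the survey

    implied_hours = injuries / crossfit_rate * 1000
    print(f"Implied training time: ~{implied_hours:,.0f} hours")
    # ~60,000 hours across 132 respondents, roughly 450 hours each.

    running_rate = 30.1    # injuries per 1,000 hours, from the 2010 study
    print(f"Running vs. CrossFit: ~{running_rate / crossfit_rate:.0f}x the rate")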

The CrossFit injury study suffers from potentially crippling limitations. For one, it's based on recall, and human memory is notoriously fallible. Since it was posted to online forums and open for anybody to take, sampling bias is also an issue. It's highly unlikely that the study group is totally representative of the CrossFit population.

WHILE THE CURRENT study is by no means exemplary (frankly, it sucks), it is still far preferable to the anecdotal evidence rampant across the Internet. What's truly needed is a longitudinal study that tracks CrossFitters from various demographics and levels of experience over a long period of time -- at least six months.

"This should not be difficult to do, given the vast popularity of the sport," Strength and Conditioning Research's Chris Beardsley wrote.

While we wait for that, both CrossFit's opponents and proponents need to be more reasonable. CrossFit, like any form of exercise, is not without risks. But the benefits far, far outweigh them. CrossFit coaches and trainers need to look out for the health of their athletes, and take care not to push them beyond the bounds of what is safe. That means dialing back intensity when necessary, and ensuring that participants always utilize proper lifting form. Dying for fitness just doesn't make much sense. Getting fit and living to one's fullest potential; now that does.

(Image: AP)

End the Hype over Epigenetics & Lamarckian Evolution

You might recall from high school biology a scientist by the name of Jean-Baptiste Lamarck. He proposed a mechanism of evolution in which organisms pass on traits acquired during their lifetimes to their offspring. The textbook example is a proposed mechanism of giraffe evolution: If a giraffe stretches its neck to reach higher leaves on a tree, the giraffe would pass on a slightly longer neck to its offspring.

Lamarck's proposed mechanism of evolution was tested by August Weismann. He cut off the tails of mice and bred them. If Lamarck were correct, then the next generation of mice should be born without tails. Alas, the offspring had tails. Lamarck's theory therefore died and remained largely forgotten for over 100 years.

However, some scientists believe that new data may at least partially resurrect Lamarckian thinking. This recent resurgence is due to a new field called epigenetics. Unlike regular genetics, which studies changes in the sequence of the DNA letters (A, T, C, and G) that make up our genes, epigenetics examines small chemical tags placed on those letters. Environmental factors play an enormous role in determining where and when the tags are placed. This is a big deal because these chemical tags help determine whether or not a gene is turned "on" or "off." In other words, the environment can influence the presence of epigenetic tags, which in turn can influence gene expression.

That finding is certainly intriguing, but it isn't revolutionary. We've long known that the environment affects gene expression.

But, what is potentially revolutionary is the discovery that these epigenetic tags, in some organisms, can be passed on to the next generation. That means that environmental factors may not only affect gene expression in parents, but in their yet-to-be-born children (and possibly grandchildren), as well.

Yikes. Does that mean Lamarck was right? That question was addressed by Edith Heard and Robert Martienssen in a detailed review in the journal Cell.

Of particular concern is the idea that mammalian health can be affected by epigenetic tags received from parents or grandparents. For example, one group reported that pre-diabetic mice have different epigenetic tag patterns in their sperm and that their offspring have a higher chance of contracting diabetes. (Virginia Hughes has written an excellent article summarizing this and other related epigenetic studies.) A flurry of other biomedical and epidemiological research has strongly hinted that a susceptibility to obesity, diabetes, and heart disease can be passed on through epigenetic tags.

However, Heard & Martienssen are not convinced. In their Cell review, they admit that epigenetic inheritance has been demonstrated in plants and worms. But, mammals are completely different beasts, so to speak. Mammals go through two rounds of epigenetic "reprogramming" -- once after fertilization and again during the formation of gametes (sex cells) -- in which most of the chemical tags are wiped clean.

They insist that characteristics many researchers assume to be the result of epigenetic inheritance are actually caused by something else. The authors list four possibilities: Undetected mutations in the letters of the DNA sequence, behavioral changes (which themselves can trigger epigenetic tags), alterations in the microbiome, or transmission of metabolites from one generation to the next. The authors claim that most epigenetic research, particularly when it involves human health, fails to eliminate these possibilities.

It is true that environmental factors can influence epigenetic tags in children and developing fetuses in utero. What is far less clear, however, is whether or not these modifications truly are passed on to multiple generations. Even if we assume that epigenetic tags can be transmitted to children or even grandchildren, it is very unlikely that they are passed on to great-grandchildren and subsequent generations. The mammalian epigenetic "reprogramming" mechanisms are simply too robust.

Therefore, be very skeptical of studies which claim to have detected health effects due to epigenetic inheritance. The hype may soon fade, and the concept of Lamarckian evolution may once again return to the grave.

Source: Edith Heard and Robert Martienssen. "Transgenerational Epigenetic Inheritance: Myths and Mechanisms." Cell 157 (1): 95–109. (2014). DOI: http://dx.doi.org/10.1016/j.cell.2014.02.045

(AP photo)

Quantum Mechanics Could Yield Ultimate Privacy

Code-makers and code-breakers are locked in an eternal conflict. Thus far, they've matched each other pretty much blow for blow, with triumphs by one side followed by resurgences from the other. But when quantum computing arrives, that balance could be forever altered.

RSA, a widely used public-key cryptosystem developed back in the 1970s, allows for secure data transmission. It's based on the practical difficulty of factoring the product of two large prime numbers. But while RSA proves difficult to crack for both modern-day computers and our meager human minds, it could very well be child's play for a quantum computer. In the next few decades, RSA may be obsolete.
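To make the factoring point concrete, here's a deliberately tiny RSA round trip using classic textbook numbers -- hopelessly insecure, purely to show the mechanics. Real keys use primes hundreds of digits long.

    # Toy RSA with textbook-sized numbers -- insecure by design, purely
    # to show the mechanics. Security rests on how hard it is to factor n.

    p, q = 61, 53                  # two "large" primes (laughably small here)
    n = p * q                      # 3233: the public modulus
    phi = (p - 1) * (q - 1)        # 3120
    e = 17                         # public exponent, coprime with phi
    d = pow(e, -1, phi)            # 2753: private exponent (Python 3.8+)

    message = 65
    ciphertext = pow(message, e, n)       # encrypt: m^e mod n
    recovered = pow(ciphertext, d, n)     # decrypt: c^d mod n
    print(ciphertext, recovered)          # 2790 65

    # Anyone who can factor n = 3233 back into 61 * 53 can compute d and
    # read the message. For a 2048-bit n, no classical computer can --
    # but Shor's algorithm on a quantum computer could.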

"Confidence in the slowness of technological progress is all that the security of our best ciphers now rests on," laments Artur Ekert and Renato Renner. But the scientist duo hasn't given up hope for privacy just yet. In a new paper published to the journal Nature, Ekert, a Professor of Quantum Physics at the University of Oxford, and Renner, a Professor of Theoretical Physics at ETH Zurich, Switzerland, elucidate how the very same system that gives rise to quantum computers -- quantum physics -- can be blended with a dash of free will to generate a form of privacy so flawless that not even the NSA could eavesdrop.

According to Ekert and Renner, the method for achieving perfectly secure communication is as simple as constructing a cipher, basically a key. Instead of unlocking doors and being composed of metal, however, this key takes the form of an algorithm, one that can transform a jumbled mass of meaningless information into a clear and precise message.

"It is vital though that the key bits be truly random, never reused, and securely delivered..." The researchers say. "This is not easy, but it can be done."

To satisfy these requirements, particles of light -- a.k.a. photons -- can be utilized. Governed by quantum theory, polarized photons can be used to generate random, yet counterintuitively correlated, outcomes. Using two matched devices designed to read those outcomes, two people can transmit a cipher. If an outsider were eavesdropping, they would just see randomness.
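The classical half of such a scheme is the venerable one-time pad: once two parties share a truly random, never-reused key, encryption is a simple XOR, and the ciphertext looks like noise to everyone else. A minimal sketch, with the operating system's random generator standing in for the photon-derived key:

    import os

    # One-time pad: with a truly random, never-reused key as long as the
    # message, XOR encryption is information-theoretically unbreakable.
    # Here os.urandom stands in for the photon-derived shared randomness.

    message = b"meet at dawn"
    key = os.urandom(len(message))   # must be random and used only once

    ciphertext = bytes(m ^ k for m, k in zip(message, key))
    decrypted = bytes(c ^ k for c, k in zip(ciphertext, key))

    print(ciphertext.hex())   # looks like pure noise to an eavesdropper
    print(decrypted)          # b'meet at dawn'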

But surely the makers of such a device would be able to listen in on conversations? Not so, says Renner. With a little bit of freedom, randomness can be amplified.

"As long as some of our choices are not completely predictable and therefore beyond the powers that be, we can keep our secrets secret."

"It all looks bizarre and too good to be true," the researchers admit. "Perfect privacy, secure against powerful adversaries who provide us with cryptographic tools and who may even manipulate us? Is such a thing possible? Yes, it is, but ‘the devil is in the detail’ and we need to look into some practicalities."

Vitally, the photon detectors within the devices would have to be keenly tuned. A load of theoretical observations about quantum physics would also have to manifest in the real world. Even if ultimate privacy is possible, it's likely decades away. Still, it is a very intriguing possibility, especially today, when secrecy is the exception, not the rule.

(Image: Shutterstock)

Was Robert Hooke Really as Big of a Jerk as Shown on Cosmos? And Why Didn't He Have a Face?

Viewers of last Sunday's episode of Cosmos were treated to an empowering, true story: of comets and intellectual brilliance, and learned knowledge conquering blind fear. We learned how one of the greatest scientific works of all time -- Isaac Newton's Principia Mathematica -- came to be published. We also learned how it almost wasn't; how the shuttered and mercurial Newton nearly withheld his work from the world, afraid to face the critical scorn of his colleagues.

Every story needs a hero. This one graced us with two: Isaac Newton, of whom you are no doubt aware, and Edmond Halley, a wonderful and worldly thinker who actively encouraged and funded Newton's revolutionary work. (You may know him for his comet.)

Every story also needs a villain. Playing the role: Robert Hooke. Though on paper he may not sound like it. Hooke discovered the cell, inspired the use of microscopes for scientific endeavors, mathematically described how springs work, deduced that light travels in waves, and, through his correspondence with Newton, helped his colleague to formulate the law of universal gravitation -- which, at the most fundamental level, describes how everything in the universe affects the movement of everything else.

But in Cosmos, Hooke was depicted as crooked and dark, with a wicked and raspy voice, and a vindictive character; a demented mix between Ebenezer Scrooge, a scarecrow, and the Grinch Who Stole Christmas. And oddly, we never saw his face. Was the harsh portrayal merited?

Perhaps. Hooke was short-tempered and fiercely protective of his ideas. To his intellectual rivals, like Isaac Newton, he could be petty and vindictive.

"Hooke has been going around London, saying that you got the Law of Gravity from him," Edmond Halley told Newton in Cosmos. To which Newton replied, "That litigious little..."

Did Hooke tell his friends and colleagues that he was the discoverer? Almost certainly. But likely not in the outright mean-spirited manner in which Cosmos portrayed it. Hooke wasn't lying about originating the law of gravitation; he genuinely believed that he did. And he had a case. He was hypothesizing on such a law as early as 1665. But Newton undeniably beat him to the finished product.

As io9's Alasdair Wilkins pointed out, history remembers winners, and so we recall a tale in which a bright Isaac Newton overcame the oppression of an underhanded Robert Hooke. Whether or not that telling is completely true will never be conclusively known. There are also legitimate questions of whether or not Hooke really was a huge jerkwad. It's possible that the popular assessment of his demeanor may simply be a case of historical heaping, where historians primarily base their descriptions of his character on the works of their predecessors.

It's also easy to cast shadows upon a man without a face. No direct portraits of Robert Hooke exist. The most popular image was painted in 2004 by Rita Greer, based on descriptions from his colleagues John Aubrey and Richard Waller.

Faceless villains also make for captivating storytelling.

(Images: FOX, Wikimedia Commons)

Ibn al-Haytham: The Muslim Scientist Who Birthed the Scientific Method

If asked who gave birth to the modern scientific method, how might you respond? Isaac Newton, maybe? Galileo? Aristotle?

A great many students of science history would probably respond, "Roger Bacon." An English scholar and friar, and a 13th century pioneer in the field of optics, he described, in exquisite detail, a repeating cycle of observation, hypothesis, and experimentation in his writings, as well as the need for independent verification of his work.

But dig a little deeper into the past, and you'll unearth something that may surprise you: The origins of the scientific method hearken back to the Islamic World, not the Western one. Around 250 years before Roger Bacon expounded on the need for experimental confirmation of his findings, an Arab scientist named Ibn al-Haytham was saying the exact same thing.

Little is known about Ibn al-Haytham's life, but historians believe he was born around the year 965, during a period marked as the Golden Age of Arabic science. His father was a civil servant, so the young Ibn al-Haytham received a strong education, which assuredly seeded his passion for science. He was also a devout Muslim, believing that an endless quest for truth about the natural world brought him closer to God. Sometime around the dawn of the 11th Century, he moved to Cairo in Egypt. It was here that he would complete his most influential work.

The prevailing wisdom at the time was that we saw whatever our eyes themselves illuminated. Supported by revered thinkers like Euclid and Ptolemy, emission theory held that sight worked because our eyes emitted rays of light -- like flashlights. But this didn't make sense to Ibn al-Haytham. If light comes from our eyes, why, he wondered, is it painful to look at the sun? This simple realization catapulted him into researching the behavior and properties of light: optics.

In 1011, Ibn al-Haytham was placed under house arrest by a powerful caliph in Cairo. Though unwelcome, the seclusion was just what he needed to explore the nature of light. Over the next decade, Ibn al-Haytham showed that light travels only in straight lines, explained how mirrors work, and argued that light rays bend when moving between different media, like air and water.
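That bending is captured today by Snell's law, n₁ sin θ₁ = n₂ sin θ₂ -- a later formalization, not Ibn al-Haytham's own notation, but a quick sketch shows the effect he was wrestling with:

    import math

    # Snell's law: n1 * sin(theta1) = n2 * sin(theta2). A ray entering
    # water from air bends toward the normal, because water is denser.

    def refraction_angle(theta1_deg, n1=1.000, n2=1.333):
        """Angle of refraction (degrees) for a ray crossing a boundary."""
        sin_theta2 = n1 * math.sin(math.radians(theta1_deg)) / n2
        return math.degrees(math.asin(sin_theta2))

    # A ray hitting a pond at 45 degrees from the vertical:
    print(f"{refraction_angle(45):.1f} degrees")   # ~32 degrees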

But Ibn al-Haytham wasn't satisfied with elucidating these theories only to himself; he wanted others to see what he had done. The years of solitary work culminated in his Book of Optics, which expounded just as much upon his methods as it did upon his actual ideas. Anyone who read the book would have instructions on how to repeat every single one of Ibn al-Haytham's experiments.

"His message is, 'Don’t take my word for it. See for yourself,'" Jim Al-Khalili, a professor of theoretical physics at the University of Surrey noted in a BBC4 Special.

"This, for me, is the moment that Science, itself is summoned into existence and becomes a discipline in its own right," he added.

Apart from being one of the first to operate on the scientific method, Ibn al-Haytham was also a progenitor of critical thinking and skepticism.

"The duty of the man who investigates the writings of scientists, if learning the truth is his goal, is to make himself an enemy of all that he reads, and... attack it from every side," he wrote. "He should also suspect himself as he performs his critical examination of it, so that he may avoid falling into either prejudice or leniency."

It is the nature of the scientific enterprise to creep ahead, slowly but surely. In the same way, the scientific method that guides it was not birthed in a grand eureka moment, but slowly tinkered with and notched together over generations, until it resembled the machine of discovery that we use today. Ibn al-Haytham may very well have been the first to lay out the cogs and gears. Hundreds of years later, other great thinkers would assemble them into a finished product.

(Image: Wikimedia Commons)

China's Disastrous One-Child Policy

Since 1979, China has engaged in a gigantic social experiment the likes of which humanity has never seen. Stemming from fears of overpopulation and an inability to feed its own people, the communist Chinese government imposed a one-child policy. Recently, China announced an easing of the policy: any couple can now have two children, provided that one of the parents is an only child.

What lessons should the world take away from China's experiment? An analysis by Michael Gross in Current Biology is worth reading, though it misses one key point.

First, the part that Gross misses: He claims that the one-child policy was "successful in averting an imminent population disaster." That certainly might be true, but China's government bears an enormous responsibility for putting the country in such a wretched state to begin with. Communism has not been good to China. The following figure compares the GDP per capita of the United States and China for the years 1960 to 2012 (adjusted to U.S. dollars in the year 2000):

(Via Wolfram Alpha)

As shown in the figure, China was an almost completely impoverished nation until very recently, when GDP per capita started to climb. Today, the Chinese economy is doing much better, largely thanks to reforms that helped liberalize the economy. Yet, hundreds of millions of Chinese citizens are still desperately poor. That is one reason why China's GDP per capita remains far lower than that of the U.S.

So, while it may be true that the one-child policy averted immediate disaster, it was a bad government solution to a problem largely created by bad policies. The far better option would have been to implement even greater economic reforms, encouraging faster growth. A growing economy will create wealthier and more educated people, and those types of people tend to have fewer children. Europe, for instance, has a very low fertility rate, and Europeans have chosen to do this of their own volition; no population control policies were required.

Gross goes on to discuss three important demographic challenges that the one-child policy created: A generation of "little emperors," an inversion of the age pyramid, and skewed gender ratios.

The "little emperor" generation refers to the fact that so many young Chinese people grew up as the only child. The effect of having a society full of people without any siblings is just now beginning to be understood. According to Gross, the "little emperor" generation is "less altruistic, less trusting, less trustworthy, more risk-averse and less competitive than the generations born before 1979."

The inversion of the age pyramid is the second major demographic problem that Gross identifies. Basically, it means that there are too many elderly people and not enough young, working people to support them.

Finally, Gross touches upon a third, and perhaps China's biggest, demographic challenge: There is a "shortage" of women due to the one-child policy. Typically, normal biology produces about 105 to 107 human males for every 100 females. But, in China, that ratio is skewed to 115 males for every 100 females. This is because Chinese parents prefer to have boys, and they selectively abort baby girls. In some regions, the ratio is as lopsided as 130 to 100. How Chinese society will respond to this problem remains to be seen.
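Some simple arithmetic on those figures shows why the skew stings: at a given birth ratio, the fraction of men with no same-cohort female counterpart is (ratio - 100) / ratio.

    # What a skewed birth ratio implies for the marriage market, using
    # the figures above (simple arithmetic, not a demographic model).

    def surplus_male_share(males_per_100_females):
        """Fraction of men with no same-cohort female counterpart."""
        return (males_per_100_females - 100) / males_per_100_females

    for ratio in (107, 115, 130):   # biology's high end, China, worst regions
        print(f"{ratio}:100 -> {surplus_male_share(ratio):.0%} of men unmatched")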

Gross correctly concludes that "the best hope for tighter population control is probably that development will naturally reduce the family size everywhere." It's too bad that the Chinese government didn't figure that out 35 years ago.

Source: Michael Gross. "Where Next for China's Population Policy?" Current Biology 24 (3): R97-R100. 3 February 2014. doi:10.1016/j.cub.2014.01.035

The Female Psychopath

BETWEEN DECEMBER 1989 and November 1990, Aileen Wuornos killed seven men in Florida. Multiple shots from a .22 firearm did each of them in, and their bodies were left in secluded areas off of main roads. The "who," "what," and "when" of the murders are well known. The "why" is not. During her decade-long stint in prison after being apprehended, her accounts vacillated, from self-defense, to robbery, to cold blood. A few months before her execution by lethal injection, she seemed to settle on a story:

“I killed those 7 men in 1st degree murder and robbery.... Not so much for thrill kill, I was into the robbery biz. I was into the robbery and to eliminate witness[es].... I pretty much had ‘em selected that they were gonna die...there was no self-defense [sic].”

Aileen Wuornos was a psychopath. Cold and callous, she also exhibited most of the other symptoms of the personality disorder. She was impulsive and irresponsible, borderline mentally deficient, aggressive and unremorseful. When examined by psychiatrists from the University of Florida, Wuornos scored a 32 on the Hare Psychopathy Checklist, placing her in the 97th percentile of North American female offenders.

"The severity of psychopathy in this case would provide Aileen Wuornos with the emotional callousness and cruel aggression to carry out a series of murders," the Florida psychiatrists would later report in a 2005 issue of the Journal of Forensic Sciences.

AILEEN WUORNOS was actually not your typical psychopath. For one, it's very rare for a psychopath to be a serial killer. Most aren't even criminals. There are around 3.2 million psychopaths in the United States, and around 500,000 of them are incarcerated. Yes, that's a notable percentage, especially when compared to the much lower incarceration rate among the mentally healthy, but it's plain that the majority of psychopaths are law-abiding members of society. Though these people may show up as a psychopath on the Hare Checklist, they retain key mental faculties that render them functional.*

Wuornos also differed from most other psychopaths in one glaring category: she was a woman. Though the exact rates are in question, almost all psychologists agree that male psychopaths clearly outnumber female psychopaths. The difference has resulted in a dearth of research on female psychopathy.

Like their male counterparts, female psychopaths are egocentric, manipulative, lack empathy and guilt, and are often grandiose. These broad traits manifest in different ways, however. Norwegian scientists Rolf Wynn, Marita H Høiseth, and Gunn Pettersen explained some of the distinctions in 2012:

Women who are manipulative more often tend to flirt, while manipulative men are more likely to run scams and commit fraud. In women, the tendency to run away, exhibit self-injurious behavior, and manipulation, all characterize impulsiveness and behavioral problems. Moreover, their criminal behavior consists primarily of theft and fraud. In men, however, the criminal behavior often includes violence. Indeed, the form of aggression that is displayed appears to differ between the sexes. Although the results are divergent and inconclusive, some studies have suggested that while men more often show physical aggression, women more often display a more relational and verbal form of aggression. This may, for instance, occur through manipulation of social networks in attempting to exclude the victim from a community. Alternatively, it may take the form of threats of self-injury, with consequences for family and friends.

Just like in men, psychopathy in women develops as a result of "complex interactions between biological and temperamental predispositions as well as social and environmental influences." Female psychopaths may have been abused as children, subjected to abandonment, or perhaps suffered some sort of head trauma.

WHO ARE FEMALE psychopaths? That's a tricky question to answer -- demographic studies on female psychopaths are scarce to the point of nonexistence. We do know that roughly 16% of incarcerated women are psychopaths, while rates among incarcerated men are much higher. Clearly, a higher proportion of female psychopaths are functional, perhaps even quite successful, members of society. Remember that psychopathy is a continuum. In minor doses, certain psychopathic traits can be beneficial. Some of the most talented female CEOs, lawyers, media personalities, actors, journalists, and politicians would likely display detectable levels of psychopathy. While psychopaths like Aileen Wuornos get all the attention, they're almost certainly the exception, not the norm.

(Image: Shutterstock)

*Section updated 3/20 to clarify the statement that "most psychopaths are law-abiding..."

Waiting Totally Sucks. Here's How to Do It Better

Cue Jeopardy music.

Waiting. We've all done it, and pretty much all of us hate it. Can science help us do it better?

Sadly, when it comes to waiting in line at Disneyland, McDonald's, or the DMV, you're at the mercy of the machine. All you can really do is think of sunny, sandy beaches and steer clear of anything potentially antagonizing.

But when it comes to another ubiquitous form of waiting, anticipating uncertain news or outcomes, Kate Sweeny has you covered. Waiting on information regarding your health, relationships, professional prospects, or academic outcomes can be torturous. Sweeny wants to alleviate the agony.

An assistant professor of psychology at the University of California-Riverside, Sweeny has extensively explored the psychology of waiting, with a specific goal of minimizing any associated stress and anxiety. In 2012, she developed a model of "uncertainty navigation" to depict the process people go through during difficult waiting periods and to help them healthily soldier through it. Her strategy can be broken down into three broad categories: mitigating consequences, reappraising the outcome, and regulating emotions.

Sweeny is currently testing her Uncertainty Navigation Model in two longitudinal studies. "Study 1 will examine the experiences of people taking the California bar exam during the several months while they await their exam results, and Study 2 will examine the experiences of students in an upper-division psychology course over the several days while they await their midterm exam grades," she explained.

As we wait for those results, we can all benefit from five tips she offered up on how to wait well:

1. Distract yourself from uncertainty. Read an enthralling book, watch a captivating movie, play a video game that transports you to another realm. In essence, find ways to minimize your anxiety in ways that are totally irrelevant to the situation.

2. Manage your expectations. There are two ways to do this: brace for the worst or hope for the best, and both have their merits. Yes, the former is basically adopting a pessimistic outlook, but it also means you may not be disappointed if the news is sour. On the other hand, hope offers tangible, immediate benefits. According to Sweeny, "research supports a number of benefits of maintaining hope under difficult circumstances, such as better adjustment to breast cancer, reduced risk of hypertension, increased immune functioning, and faster recovery from illness."

3. Look for the silver lining in all outcomes. It may surprise you to know that people with chronic and deteriorating diseases do not often report worse quality of life compared to their healthy counterparts. Expectation plays a huge role in life satisfaction, so generally, when people come to terms with their new predicament, they're able to redefine their personal measures of happiness. Therefore, while waiting for potentially bad news, you can take solace in knowing that, though your life might have to change, you'll still be just as happy. "People who find potential benefit in possible bad news will likely respond with less distress should the negative outcome actually occur," Sweeny says.

4. Keep perspective regarding the news. Consult with friends, family, and experts to ascertain the ramifications of potentially bad news. Evaluate how important the moment truly is in the grand scheme of things.

5. Plan ahead for the consequences of bad news. Take steps to make your life easier should the disastrous outcome you're dreading actually come to pass. For example, if you're waiting on news from the doctor about whether or not surgery is required for some malady or injury, contact your employer to secure time off from work. Or, say you're waiting for the results of a consequential exam. Start planning and taking actions to improve your score on your next potential exam. Design new study habits, or begin searching for tutors. Sweeny hypothesizes that these costs and efforts are worthwhile. "Consequence mitigation serves not only to prepare for the future, but also to manage anxiety in the present."

(Images: Shutterstock, Sweeny & Cavanaugh)

The Incredible Shrinking Planet Mercury!

29 million miles away from the Sun at its closest, Mercury is the nearest of the eight planets to the burning center of our Solar System. It's also the smallest.

And it's getting smaller.

Back in the 1970s, the Mariner 10 spacecraft swung by Mercury on three occasions, photographing about 45% of the planet's surface in the process. Examining those images, planetary scientists uncovered telltale signs of shrinkage: lobate scarps, geological structures where crustal rocks had been pushed up and over each other, sinking down in the process. They estimated that Mercury had lost one to two kilometers of its global 2,440-kilometer radius since forming and hardening approximately 4.6 billion years ago. (For comparison, Earth's radius is 6,371 kilometers.)

According to a new report, however, Mercury has shrunk more than we thought: as much as 7 kilometers! The new finding, published in the journal Nature Geoscience, comes courtesy of a team led by Paul Byrne, a planetary scientist at the Carnegie Institution for Science in Washington, D.C. But Byrne and his colleagues couldn't have succeeded without a little help from a newfound mechanical friend.

In August 2004, the Messenger spacecraft blasted out of Earth's atmosphere. Armed with an array of cameras and fortified with special shielding to guard against the Sun's damaging radiation, it set out to become the first spacecraft to orbit Mercury, and help us Earthlings learn about our planetary neighbor. Three years ago, Messenger entered orbit around Mercury. Since then, it's buzzed around the planet 2,886 times. A dedicated and busy bee, Messenger has put those orbits to good use, imaging the entirety of the planet's surface.

Armed with this unprecedented view, Paul Byrne and his team found that the planet was littered with the aforementioned scarps. They also found lots of wrinkle ridges. Resembling veins on skin, wrinkle ridges are another telltale sign of contraction; the Moon's aged surface is covered with them.

Surveying 216 ridges and scarps, the researchers arrived at a new estimate of Mercury's shrinkage: 5 to 7 kilometers radially.

Referred to as a "Land of Confusion," Mercury is an enigmatic planet -- its day is actually longer than its year! While Mercury zips around the sun every 88 Earth days, it completes one rotation every 59 Earth days.* Thus, to a fictitious observer standing on Mercury's inhospitable surface, a solar day -- from one sunrise to the next -- would last the equivalent of 176 Earth days.
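
For the curious, the arithmetic behind that 176-day figure is simple. Here's a minimal sketch in Python, using the commonly cited rotation period of 58.65 Earth days (the "59" above, rounded):

```python
# Mercury's solar day (one sunrise to the next), computed from its
# sidereal rotation period and its orbital period. For a planet that
# rotates in the same direction it orbits:
#   1/solar_day = 1/rotation - 1/orbit
rotation = 58.65  # sidereal rotation period, in Earth days
orbit = 88.0      # orbital period, in Earth days

solar_day = 1 / (1 / rotation - 1 / orbit)
print(f"Solar day on Mercury: {solar_day:.0f} Earth days")  # ~176
```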

"That's twice the percentage of our own Earth," NASA's Charlie Plain wrote.

Mercury's oversized core is likely to blame for its dwindling radius. The iron is likely still cooling, compacting in the process. William B. McKinnon, a planetary scientist at Washington University in St. Louis, summed up the paper succinctly and poetically.

"As Mercury’s interior cools and its massive iron core freezes, its surface feels the squeeze."

Source: Paul K. Byrne, Christian Klimczak, A. M. Celâl Şengör, Sean C. Solomon, Thomas R. Watters and Steven A. Hauck. "Mercury's global contraction much greater than earlier estimates." Nature Geoscience. DOI: 10.1038/NGEO2097

(Images: NASA/Johns Hopkins University Applied Physics Laboratory/Carnegie Institution of Washington, NOAA)

*Correction 3/16: An earlier version of this article incorrectly stated Mercury's rotational period as six Earth months.

Will Fukushima Radiation Poison California?

This week marked the three-year anniversary of the massive Tohoku earthquake off the Japanese coast. This magnitude-9.0 tectonic event and the 100-foot tsunami it triggered killed six times more people than the 9/11 attacks. It also resulted in the meltdown of three reactor cores at the Fukushima-Daiichi nuclear plant.

Radioactive isotopes released during the meltdown have drifted along the Pacific currents and are now approaching the US Pacific Coast. Should Californians worry about this? Not at all. Fukushima radiation levels in seawater will be so low that you could drink gallons of ocean water every day and be at no health risk -- except dehydration and vomiting. 

Here’s why. 

Fukushima released three radioactive isotopes into the ocean, each with a different half-life. Half-life is the amount of time it takes for a quantity of radioactive material to drop by 50%. After four half-lives, radiation is about 94% lower; after seven, it's reduced by 99.2%.

Iodine-131 has a half-life of just eight days; all released I-131 vanished long ago. Cesium-134 has a half-life of about two years, so three years after the meltdown, only around a third of it remains. Cesium-137, with a half-life of just over 30 years, is mostly still intact.

Roughly 11.3 kg (25 lb.) of Cs-137 was released into the Pacific Ocean, and about 93% of it (around 23 lb.) remains. Similarly, roughly 729 grams (26 oz.) of Cs-134 were released, and around 265 grams (9 oz.) remain.
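
Those remaining amounts fall straight out of the half-life formula described above. Here's a minimal sketch, assuming three years have elapsed since the meltdown and using half-lives of 2.06 and 30.2 years (close to the round figures quoted here):

```python
# Fraction of a radioactive isotope remaining after elapsed time t:
#   remaining = 0.5 ** (t / half_life)
def remaining_fraction(t_years, half_life_years):
    return 0.5 ** (t_years / half_life_years)

elapsed = 3.0  # years since the March 2011 meltdown

# (isotope, half-life in years, grams released)
for name, half_life, released_g in [("Cs-134", 2.06, 729.0),
                                    ("Cs-137", 30.2, 11300.0)]:
    frac = remaining_fraction(elapsed, half_life)
    print(f"{name}: {frac:.0%} remains (~{released_g * frac:,.0f} g)")
```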

The Pacific Ocean has a total mass of around 6.6x10^20 kg. That's more than 1.4 billion trillion pounds.

The point of these mind-blowing numbers is that the concentration of leaked cesium in the ocean is unimaginably small -- so small that the amount of Cs in the water is far below the amount deemed harmful to humans, even if they were drinking it.

Scientific models have predicted the worst-case concentration of Cs-137 in ocean water reaching the US West Coast. They find this level to be between 0.002 and 0.03 Bq/L (becquerels per liter of water). How many Bq/L is safe?

Japan certifies water with up to 200 Bq/L as safe to drink. The US FDA considers it safe to consume a liter of water with up to 1,200 Bq, so long as you don't do it often. The US EPA has a stricter limit (7.4 Bq/kg), but this is based upon the idea that such water is consumed every day for 70 years.
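
To make those margins concrete, here's a quick sketch comparing each limit to the worst-case modeled concentration of 0.03 Bq/L (at the 0.002 Bq/L lower bound, every ratio grows another fifteenfold):

```python
# How far below various drinking-water limits the worst-case modeled
# Fukushima Cs-137 concentration off California falls.
worst_case = 0.03  # Bq/L, upper end of the model predictions

# The EPA figure is quoted per kilogram; a liter of water weighs
# about a kilogram, so the units are interchangeable here.
limits_bq_per_liter = {
    "US EPA (70-year daily consumption)": 7.4,
    "Japan drinking water standard": 200.0,
    "US FDA single-liter guideline": 1200.0,
}

for name, limit in limits_bq_per_liter.items():
    print(f"{name}: {limit / worst_case:,.0f}x the worst case")
```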

The bottom line about cesium radiation on California beaches: the water will likely contain hundreds to hundreds of thousands of times less radiation than safe drinking water. Bathing is even less risky, as skin exposure limits are much higher still. Dental X-rays, air travel, tube televisions, and even bananas are more worrisome than Fukushima isotope levels in our seawater.

What Would Happen if You Stuck Your Head in the Large Hadron Collider's Particle Beam?

The Large Hadron Collider (LHC) is the world's largest and most powerful particle accelerator. Within its 17-mile loop, beams of particles are slammed into each other at speeds just 3 meters per second shy of the speed of light. By observing these collisions, physicists may be able to explore some of the most nagging questions in the universe, like, "What is the nature of dark matter?" and, "Are there additional dimensions?"
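
As an aside, that "3 meters per second shy" claim checks out with a little special relativity. A back-of-the-envelope sketch, assuming protons at the LHC's 7 TeV design energy (the exact shortfall depends on the operating energy):

```python
# Speed deficit of an ultra-relativistic proton. For a proton of total
# energy E, the Lorentz factor is gamma = E / (m_p * c^2), and when
# gamma is large, c - v is approximately c / (2 * gamma**2).
c = 299_792_458.0               # speed of light, m/s
proton_rest_energy_gev = 0.938  # proton rest mass energy, GeV
beam_energy_gev = 7000.0        # LHC design energy per proton, GeV

gamma = beam_energy_gev / proton_rest_energy_gev
deficit = c / (2 * gamma**2)
print(f"c - v = {deficit:.1f} m/s")  # roughly 3 m/s
```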

As a machine constructed on the forefront of possibility, the LHC has also raised questions of its own, like, "Will it spawn micro black holes that could swallow the Earth?" or, "Will it create strange matter particles that could overtake the planet?" or, "Will it open a gateway to Hell?" Obviously, the safety concerns at the LHC are a tad more cataclysmic than you'll find at a run-of-the-mill manufacturing plant or science lab. Thankfully, these doomsday scenarios are also unfounded, fomented in the first place by misunderstanding and fear of the unknown, not any sort of fact.

The LHC isn't completely harmless, mind you. All particle accelerators release DNA-damaging radiation during their operation, and those who work in close proximity run the risk of exposure. CERN maintains rigorous procedures to minimize the danger to its employees.

But let's say, completely hypothetically, that all of CERN's stringent safety mechanisms failed (which is unlikely to the point of impossibility), that someone managed to climb inside the LHC as it was turned on, and that this person was subsequently struck by the particle beam. What would happen?

"I certainly wouldn't advise doing that," CERN scientist David Barney told the University of Nottingham's Sixty Symbols. "The beam itself is focused down very tightly to less than a millimeter across, extremely intense. The actual energy carried by the beam is like an aircraft carrier in motion."

Another scientist at CERN, Steven Goldfarb, was more blunt and to the point. "It would burn right through you."

Barney explained that a much wider halo of radioactive subatomic particles, mostly electrons and muons, accompanies the "extremely" intense proton beam.

"Your whole body would be irradiated. You'd die pretty quickly."

Barney and Goldfarb's learned estimations are likely the closest we'll ever get to knowing what would happen if a person directly encountered the LHC's particle beam. Quite understandably, nobody with access to CERN and their sanity intact is especially keen on finding out firsthand. But, incredibly, real-world, experimental evidence exists!

Back in 1978, Russian physicist Anatoli Bugorski was struck in the head with a particle accelerator's beam. Moscow journalist Masha Gessen chronicled the event for Wired Magazine in 1997.

Bugorski was working with the U-70 synchrotron at the time (for reference, the U-70 operated at a comparatively measly 1% of the LHC's maximum beam energy) and stuck his head into the accelerator tube, obviously thinking that the machine was off. It wasn't. He saw "a flash brighter than a thousand suns," but felt no pain. Bugorski was taken to a hospital, where he was expected to die of radiation poisoning over the ensuing two to three weeks. Gessen describes what happened next:

Over the next few days, skin on the back of his head and on his face just next to his left nostril peeled away to reveal the path the beam had burned through the skin, the skull, and the brain tissue. The inside of his head continued to burn away: all the nerves on the left were gone in two years, paralyzing that side of his face. Still, not only did Bugorski not die, but he remained a normally functioning human being, capable even of continuing in science. For the first dozen years, the only real evidence that something had gone neurologically awry were occasional petit mal seizures; over the last few years Bugorski has also had six grand mals. The dividing line of his life goes down the middle of his face: the right side has aged, while the left froze 19 years ago.

If there's a lesson to be learned, it's that charged beams of particles should be smashed into other particles, not people.

(Images: CERN, Wikimedia Commons via Today I Found Out)