When did human ancestors first learn to count? Well now that is a tricky question to answer. Rudimentary counting likely began with our fingers and may have advanced to dividing out pebbles into ordered groups. But there's no way to really know this. Archaeologists would be hard-pressed to find an ancient human fossilized in the act of counting his or her digits, and they'd be equally unlikely to find pebbles neatly piled into ordered groups, untouched and unaltered for tens of thousands of years.
There is, however, an artifact that hints at prehistoric mankind's counting skills. A 43,000-year-old baboon fibula, recovered in the Lebombo Mountains between South Africa and Swaziland, has, on one side, 29 carved markings arranged in a row. Scientists surmise that, at some point, one of our ancestors took a tally of something, notching the count into the bone, but it's anyone's guess as to what the individual or individuals were keeping a record of. Artifacts like this one, called tally sticks, are relatively common in the archaeological record and have been unearthed in Europe and Africa.
One of these tally sticks, in particular, stands out from the rest. In 1960, while exploring the Semliki Valley in what is today the Democratic Republic of the Congo, Belgian geologist Jean de Heinzelin de Braucourt discovered a strange bone amongst the remains of an ancient settlement buried in a volcanic eruption. The Ishango bone, as it is now called, was thinned down, polished, and engraved, with a piece of quartz protruding from the top. What really set the artifact apart, however, were the defined markings along its three sides. There were 168 notches in all, arranged into sixteen groups and three rows. Were these more than mere tallies? Did they constitute an understanding of mathematics? If so, that certainly would be amazing, especially considering that the Ishango bone is roughly 20,000 years old!
The bottom row is arranged into four groups of 11, 13, 17, and 19 notches, collectively totaling 60. Fascinatingly, all of these numbers are prime numbers between ten and twenty.
The middle row is composed of eight groups of notches. The first six give a number followed by its double: 3 and 6, 4 and 8, 5 and 10. The last two numbers are 5 and 7, which don't follow the pattern. The row sums to 48.
The final row is arranged into four groups of 11, 21, 19, and 9 notches. Heinzelin suggested that they all relate to the number ten (11 = 10 + 1, 21 = 20 + 1, 19 = 20 - 1, 9 = 10 - 1). Like the bottom row, the notches total 60.
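The arithmetic in the three rows is easy to verify. A quick sanity check in Python, with the groupings taken directly from the descriptions above:

```python
# Notch counts for each row of the Ishango bone, as described above.
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

bottom = [11, 13, 17, 19]
middle = [3, 6, 4, 8, 5, 10, 5, 7]
final = [11, 21, 19, 9]

# Bottom row: four primes between ten and twenty, summing to 60.
assert sum(bottom) == 60
assert all(is_prime(n) and 10 < n < 20 for n in bottom)

# Middle row: the first six groups are number-then-double; the row sums to 48.
assert all(middle[i + 1] == 2 * middle[i] for i in range(0, 6, 2))
assert sum(middle) == 48

# Final row: each count is one away from ten or twenty; the row sums to 60.
assert all(min(abs(n - 10), abs(n - 20)) == 1 for n in final)
assert sum(final) == 60

print("All of the notch patterns check out.")
```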
Many have agonized over the significance of the notches and numbers. Do the notches constitute an attempt at a rudimentary numeral system? Is the bone, perhaps, a basic calendar, potentially used to track the lunar phase in relation to the female menstrual cycle?
Mathematician Vladimir Pletser of the European Space Research and Technology Centre is particularly enamored with uncovering the artifact's secrets.
"The...Ishango bone has notches that seem to form patterns," he writes, "making it the first tool upon which some logic reasoning seems to have been done." Pletser suggests the bone may evince that ancient humans in the region used a base twelve numbering system.
"The Ishango bone is certainly the best early example of precounting," Amir Aczel, a historian of mathematics, writes in his latest book Finding Zero.
Of course, for all this searching for meaning, the notches may in fact be meaningless, simply scratched in to create a better gripping surface.
Regardless, the Ishango bone regularly draws crowds at the Royal Belgian Institute of Natural Sciences in Brussels, where it is on permanent display.
It starts with a quick, sideways glance at your neighbor; a harmless motion. The eyes lock on to the paper she's hovering over, then to a question number, and finally -- and most importantly -- to the answer. After a mere second your head snaps back to your own test. Nobody noticed. You jot down the stolen answer that had previously eluded you and move on to the next question. You know what you did was cheating, but it was only once... It's not a big deal...
Adulthood presents its own array of misdemeanor dishonesty. Maybe it's sneaking in a few extra items on your property insurance claim or neglecting to report some minor income on your taxes. But that's okay. Everyone does it...
In fact, a lot of people do do it. In 2003, the Insurance Services Office reported that 10% of total payments for property and casualty insurance, roughly $24 billion annually, are for fraudulent claims. Tax cheating is even more prevalent. Each year, there's more than a $300 billion gap between what people should pay according to the law and what is actually paid.
For many, minor dishonesty is a daily occurrence, yet they are able to shrug off their sins and live knowing that they are generally good and honest.
Duke University psychologist and behavioral economist Dan Ariely has spent a lot of time studying dishonesty. He and his colleagues have tested thousands of subjects from various countries under controlled conditions, and have arrived at an intriguing conclusion.
"We find that almost everyone cheats but only by a limited amount."
The specific source for this sweeping statement is a subtle test of dishonesty called the matrix task. Ariely recently described it in an op-ed published in the Journal of Clinical Investigation:
"Participants are presented with a test sheet with 20 matrixes, with each matrix consisting of 12 numbers. Participants’ task is to find and circle in each matrix two numbers that add up exactly to 10 (e.g., 1.53 and 8.47). For each correctly solved matrix, we pay $0.50. After five minutes, participants count the number of correctly solved matrixes on their test sheet and write down their performance on a collection slip. In our baseline control condition, participants submit both the test sheet and the collection slip. The experimenter then checks each participant’s performance and pays them accordingly. In our treatment condition, we instruct participants to shred their test sheet and only submit the collection slip to the experimenter. In this latter condition, participants face a conflict of interest at the end of the task: they know that they can either be honest or overstate their performance to earn more money.
What Ariely has consistently found is that virtually everyone given the opportunity to cheat does, but only by a little. Out of a possible twenty matrices, they usually "solve" between two and three more than participants in the control condition. Even with no risk of being caught, they still hold back. Why?
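To make the task concrete, here is a minimal sketch of the matrix task's mechanics. Only the find-a-pair-summing-to-10 rule and the $0.50 payout come from Ariely's description; the sheet generator (`make_matrix`) and its number ranges are illustrative assumptions:

```python
import itertools
import random

def make_matrix(rng):
    """Generate 12 two-decimal numbers like a matrix-task sheet (assumed format)."""
    nums = [round(rng.uniform(0.01, 9.99), 2) for _ in range(12)]
    # Plant one pair that sums to exactly 10, as in the example (1.53 + 8.47).
    a = round(rng.uniform(0.51, 9.49), 2)
    nums[0], nums[1] = a, round(10 - a, 2)
    return nums

def solve(matrix):
    """Find two numbers that add up to exactly 10, or None."""
    for a, b in itertools.combinations(matrix, 2):
        if round(a + b, 2) == 10:
            return (a, b)
    return None

rng = random.Random(0)
sheet = [make_matrix(rng) for _ in range(20)]
solved = sum(solve(m) is not None for m in sheet)
print(f"Solved {solved} of 20 matrices at $0.50 each: ${solved * 0.50:.2f}")
```

The `round(..., 2)` comparison sidesteps floating-point error when checking the pair sum; a real sheet would of course be solved by hand, not by brute-force search.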
"People behave dishonestly enough to profit but honestly enough to delude themselves of their own integrity. A little bit of dishonesty gives a taste of profit without spoiling a positive self-view," Ariely explained in 2008. He added more detail in his recent op-ed. "In other words, if people cheat too much, it becomes hard to rationalize away their immoral conduct so that they can continue feeling good about themselves in terms of their morality."
If Ariely's finding were a glass of water, it could be viewed as either half empty or half full. Though many choose to cheat when given a clear opportunity, they do hold back. Humans may not be angels, but at least they're not demons.
There are two varieties of flamingo in the world: The first is a majestic bird that makes its home in warm coastal waters all over the world. The second is a plastic ornament -- and official city bird of Madison, Wisconsin -- that deeply troubled and misguided souls believe will add a touch of class to their lawns. Fortunately, this article is about the former, not the latter.
Flamingos are known for their beautiful red and pink plumage, which comes from pigments called carotenoids. (Carotenoids come in a variety of shapes and colors. Red, orange, and yellow vegetables, such as carrots and peppers, contain carotenoids.) Flamingos are unable to produce their own carotenoids. Instead, they acquire them from their diet. Once in the bloodstream, the carotenoids make their way to follicle cells and then into the feathers.
Diet, however, appears not to be the only source of coloration. In a new paper in the journal Scientific Reports, Korean researchers show that flamingo feathers are covered with haloarchaea, a type of extreme salt-loving microbe. Haloarchaea also come equipped with carotenoids, which they use to produce energy in a rather bizarre way that is more akin to solar power than to photosynthesis.
The authors studied captive flamingos that had been fed an artificial diet that lacked carotenoids and subsequently examined the birds' feathers. They isolated 13 different types of haloarchaea and confirmed the presence of bacterioruberin, a red-colored carotenoid. Thus, these extreme salt-loving microbes were responsible for the eye-catching color of the captive flamingos.
Of course, wild flamingos are not fed an artificial diet that lacks carotenoids. The natural coloration of wild flamingos, therefore, would be the result of both diet and environmental factors, such as the decorative microbes described above.
The harmonious symbiosis between bird and bug is almost enough to make me purchase a plastic knickknack of my own. Almost.
Source: Kyung June Yim, Joseph Kwon, In-Tae Cha, Kyung-Seo Oh, Hye Seon Song, Hae-Won Lee, Jin-Kyu Rhee, Eun-Ji Song, Jeong Rae Rho, Mi Lyu Seo, Jong-Soon Choi, Hak-Jong Choi, Sung-Jae Lee, Young-Do Nam & Seong Woon Roh. "Occurrence of viable, red-pigmented haloarchaea in the plumage of captive flamingoes." Scientific Reports 5, Article number: 16425. Published online: 10-November-2015. doi: 10.1038/srep16425
For more than 50 years, the trail of our progress in understanding the universe has been blazed by a series of spectacularly complex particle accelerators. Each new machine was built bigger, badder, and more powerful than the last. The Berkeley Bevatron was succeeded by the Fermilab Tevatron, which itself was succeeded by the LHC.
These machines were built in service of what might be called "fundamental" science: not to directly design a new bomb or a new rocket or a new source of power. Their mission -- and they have been wildly successful at it -- has been to experimentally verify the standard model of particle physics. Their purpose was almost purely to gain and test knowledge about the universe for its own sake. Let's look at everything, smaller and smaller, to see what's there and how it works!
With the announcement of the Higgs boson discovery in 2012, the final and most elusive important speck of subatomic matter was nailed down. In some ways, this capped the achievements of particle physics as a science. We now have the last major particle needed to verify the Standard Model. What comes next? The answer comes from two considerations: one scientific, the other monetary.
Each machine in this succession has cost an order of magnitude more than the last. The U.S. famously axed its own Superconducting Super Collider, which might have found the Higgs boson ten years earlier, over fiscal concerns, even after spending $2 billion and building 15 miles of circular tunnel. If these things were cheap, we could just go on building more of them. Unfortunately, a machine only one step larger would probably cost tens of billions of dollars. A machine powerful enough to directly look for the strings of string theory would need to be roughly the diameter of our entire galaxy. Good luck funding that.
Given that physicists can no longer point to any major missing subatomic particles, there simply is no argument strong enough to command the funding necessary to build bigger particle smashers.
Current facilities like SLAC at Stanford, RHIC at BNL, and the LHC at CERN are clearing up certain areas of physics outside of fundamental particles. RHIC, for example, is currently looking to measure the shape of atomic nuclei. A few new machines of similar scale may be built in the near future. But colliders in the mold of the LHC, only bigger and more powerful, are not coming, barring a massive and unforeseen new discovery. For just this reason, we should be skeptical of sensationalized results that carry requests for more money for just such projects.
From here, two things are likely to happen, barring that unexpected momentous discovery.
A few clever experiments, such as ACME's search for an electron electric dipole moment, will find ways to test certain advanced theories with one thousandth the money and resources of a particle collider. These sorts of experiments have a reasonable shot at gradually confirming or ruling out many speculative fundamental physics ideas. Some of the wilder predictions of the more bizarre theories, such as string theory, may never be tested. This of course brings us to a fair but separate debate: What's the point of a theory that makes no testable predictions?
Mostly, however, I believe that the budget for and interest in big physics projects are going to transition to applied science research. That is to say, work undertaken largely for the sake of bombs, rockets, energy sources, computers, medicines, and so on. And that's not a bad thing at all. Applied physics brought us our electricity, our phones and internet backbones, our televisions and engines and airplanes, and the standard litany of modern wonders that we take for granted.
The largest physics project under construction in the world today is the fusion power experiment ITER, which has burned through $14 billion though it will not begin operating until at least 2027. New discoveries in the science of plasmas will surely be made, but the purpose and drive of the project is to directly build a useful device. ITER is planned to eventually transform itself into a working prototype for a real electrical plant.
The NIF laser implosion facility and something called the Spallation Neutron Source are two of the next most expensive recent physics facilities. These projects too can illuminate new science. But NIF was both sold and built upon the principles of its applications: nuclear weapons simulation and possibly fusion power generation. The Spallation Neutron Source provides neutron beams for literally hundreds of experiments. But its purpose is not so much about learning more about neutrons themselves as doing something useful with them.
These projects with direct applications for our society are capable of winning multi-billion dollar funding for the future.
But what, you ask, of discovery for discovery's sake? Will we be abandoning the spirit of scientific inquiry because it's not easily affordable?
We might worry that fundamental science fizzles in applied science projects like these. But what really happens is that projects of such enormous scale venture into unheard-of realms of high power, enormous heat, crushing pressure, pure vacuum, or other unknown territory. These frontiers, never before reached in a lab, require and push the development of new fundamental science to describe them. If anything, this may be better for fundamental science. Physical theories developed in a vacuum, so to speak, without experiments to push and guide and test them, often wander into untestability or even uselessness.
Today, we marvel at the enormity, complexity and genius of the LHC and similar large projects. I suspect that in the future, we will look back at them as the last and most amazing of a lineage of extinct creatures. We will marvel at the precision and intellect that designed and ran them, that drove them to produce fantastic fundamental scientific data. Then we'll turn back to the work of that future time: directly applying the physics that those machines learned to address human problems and demands.
On July 30th, 2014, 46-year-old Wayne Wade broke into a home in Hollywood, Florida. Unluckily for Wade, as he was casually loading a number of stolen valuables including a television and a coffee table into his pickup, the owner returned home and confronted him. Flustered, he quickly jumped into his pickup and drove off, but in his understandable haste to depart the scene, he left his cell phone on the owner's bed.
Some time later, Wade did what a lot of people who lost their smartphone would probably do: he called it in the hopes that someone would answer and return it. A police officer answered. Wade told the officer his name. He was arrested shortly thereafter.
When Wade appeared in court, Judge John Hurley was in awe of the burglar's stupidity.
"So, the allegation is you burglarized the home, you left your cellphone, you realized you left your cellphone and then called back, and the police answered the phone and you told them what your name was over the cellphone," Hurley said. "I'm just trying to absorb that."
"Because it was stolen, sir," Wade explained.
From criminals and politicians to friends and family, all sorts of people do stupid things every single day. But what exactly makes an act "stupid"? It's a harder question to answer than you might think. Rather than formulate a textbook-style response, many might proffer a simple reply: "You know stupidity when you see it."
While probably true, that's simply not a good enough explanation for inquisitive scientists. In the latest issue of the journal Intelligence, Balazs Aczel and Bence Palfi of Eotvos Lorand University in Budapest, Hungary teamed up with Baylor University's Zoltan Kekecs to come up with a more complete and empirical answer.
Their quest wasn't only fueled by curiosity.
"Studying the attribution of stupid should have psychological interest for several reasons," the trio wrote. "Firstly, it is a frequent everyday behavior and our knowledge of its social, affective and cognitive roots and consequences is scarce. Secondly, our behavior is often guided by the aim of avoiding actions that we might label ‘stupid’... Thirdly, if calling one's actions stupid is a sign of interpersonal conflict, then understanding what people mean by this label can bring us closer to discovering the roots and, thus, a potential dissolution of the conflict."
To uncover what defines an act as "stupid," the researchers analyzed real-life examples of stupidity. They first built a formidable assortment of 180 stories describing stupid actions collected via the Internet and from daily reports provided by a group of 26 college students. All of the stories were reviewed by a group of seven raters to ensure that they described a "stupid" action, were comprehensible, and were relatively brief.
The next phase of the study called for a large group of participants to assess the stupidity of each story on a 1-10 scale, determine from a list of thirty psychological factors what precisely contributed to the stupidity, rate the responsibility of the actor and environment in the situation, and gauge any associated consequences. 154 undergraduates (122 females) took part, completing one of twelve questionnaires each containing fifteen stories from the large pool.
Importantly the undergraduate subjects showed wide agreement on what was stupid and what wasn't, hinting at a common definition. From the collected data, the researchers distilled three key categories that make an action stupid:
"The first situation in which people call an action stupid is when the actor takes high risks while lacking the necessary skills to perform the risky action. A typical story for this is when burglars wanted to steal cell phones, but instead stole GPS navigation devices. They didn't switch them off so the police were able to track them easily. We named this category ‘Confident ignorance’. The second cluster consisted cases of ‘Absentmindedness – Lack of practicality’... A typical story here is when someone inflates more air in the car tires than allowed. Here the person either forgot to pay attention to the action or he or she doesn't know something essential about tire inflation. The third category is ‘Lack of control’. Cases here are thought to be the result of obsessive, compulsive or addictive behavior. For example, one of the stories in this category described a person who canceled a meeting with a good friend to instead continue playing video games at home."
The researchers also found that a stupid action is judged more harshly when the person committing the act bears a high level of responsibility or when the act itself results in serious consequences.
A big limitation of the study is the homogeneity of the study group, which was composed of Hungarian college students, roughly 79% of them female. What they consider stupid might be decidedly different from what people of other cultures and age groups consider stupid.
Despite that obvious limitation, the researchers should be applauded for blending an unlikely cocktail of science and stupidity. Now, the next time you call a politician stupid, you can explain to your peers exactly why your assessment is correct, and provide a dash of evidence to back it up.
Source: Balazs Aczel, Bence Palfi, Zoltan Kekecs. "What is stupid?: People's conception of unintelligent behavior." Intelligence. Volume 53, November–December 2015, Pages 51–58. doi:10.1016/j.intell.2015.08.010
Last year, my home state of Minnesota became the first state in the country to ban the use of triclosan in most consumer retail hygiene products, an act that quickly piqued my skeptical senses. At the time, I knew triclosan was a widely used antimicrobial agent in hand soaps, and that it is generally regarded as safe by the FDA and the scientific community. Thus, I shrugged off the ban as yet another example of excessive, chemophobia-driven policymaking.
However, after reviewing the scientific evidence, I've changed my mind. While I believe the decision in Minnesota was probably more political than evidence-based, I also think there is now enough of a scientific case for the FDA to prohibit the use of triclosan in consumer products nationwide. I have arrived at this conclusion not out of fear, but by a nuanced exploration of the evidence.
Triclosan is an intricately woven chemical of double and single carbon bonds, oxygen, chlorine, and a hydroxyl group. But don't let its pristine structure fool you; its design is deadly. Triclosan binds to a specific enzyme in bacteria, an act that ultimately renders the bacteria unable to build cell membranes. Without cell membranes, the bacteria fall apart at the seams.
Triclosan is used in roughly 75 percent of all antimicrobial liquid hand soaps, as well as toothpaste and body wash, among many other products. Basically, it's everywhere. A 2012 systematic review showed that triclosan was present in 62% of all freshwater environments exposed to wastewater treatment plant effluent in the United States, at roughly 130 nanograms per liter, a minute but disconcerting amount.* Why disconcerting? Because we really don't know what it will do to ecosystems as it continues to accumulate. At present levels, triclosan can kill aquatic bacteria and may potentially disrupt algal colonies, but there really hasn't been a smoking gun that would decry triclosan as harmful. For now, all scientists can do is continue to monitor the situation.
Triclosan's ubiquity extends to humans as well. Studies have found the chemical to be present in roughly 75% of urine samples in the U.S. But that's actually a testament to how often we come into contact with triclosan, not to its ability to persist within the human body. In fact, triclosan is rapidly metabolized and excreted, meaning there's little to no reason to worry that triclosan is inherently harmful to humans.
Claims to the contrary exist, but they are often fueled by preliminary or tenuous research. Stanford health researchers may have been a tad quick on the trigger when, in 2013, they announced that they had uncovered a link between higher levels of triclosan in urine and increased body mass index in humans. Earlier this year, Tulane researchers examined the exact same data set -- but over a longer time duration and with more subjects included -- and came to the opposite conclusion! Animal studies have demonstrated that triclosan can disrupt proper functioning of the endocrine system in rats, but only at concentrations far higher than anything a human would experience.
Perhaps the most widely covered study showing potential harm from triclosan came out late last year. It concluded that triclosan could promote liver tumor growth in lab mice. The claim seems to hint at a link to cancer, but any supposed link falls apart in the eyes of those who read the study with a skeptical eye, as Scientific American blogger Hilda Bastian did:
[The researchers] did two experiments - all in male mice, because they judged them more likely to get liver cancer. One involved only 12 mice - and its findings did not suggest that triclosan could cause cancer.
The other experiment had more mice, but was still small. The mice weren't just exposed to far higher amounts of triclosan than would normally be encountered: cancer was induced separately. And the cancer in the ones who had triclosan progressed further - it promoted, but didn't cause cancer.
Once again, don't let the scaremongers frighten you; triclosan probably isn't harmful to humans. However, there are downstream effects that merit consideration. There is growing evidence that triclosan use is associated with an increase in allergies. Antimicrobial resistance is also a concern. Exposure to triclosan may cause bacteria to develop resistance. A focused review of the literature published earlier this year concluded that "the risk of potential antimicrobial resistance outweighs the benefit of widespread triclosan use in antimicrobial soaps."
As for the benefit of triclosan, there actually doesn't seem to be much. Consumer soaps with triclosan aren't any more effective than plain soap at preventing infectious illness and reducing bacteria counts on the hands! Evidence does suggest that triclosan in toothpaste may reduce plaque and gum inflammation, as well as caries, compared with normal toothpaste. However, the authors of an extensive review on the topic noted that the "reductions may or may not be clinically important."
Ultimately, a cost-benefit analysis of triclosan's use in consumer products yields a number of as yet unproven costs, but almost no benefits.
The FDA is currently in the process of reviewing its policy on triclosan. When the agency concludes its deliberations next year, it would be wise to err on the side of caution and follow Minnesota's lead in banning the chemical's use in consumer settings.
*Update 11/16: This sentence used to read, "triclosan was present in 92% of all freshwater environments tested in the United States, at roughly 11,270 nanograms per liter." That is incorrect. The paragraph has been updated with the correct statistics.
The United States has the best higher education system in the world. That isn't a statement of flag-waving patriotism; it's simply a matter of fact. According to Times Higher Education, 39 of the top 100 schools in the world are located in the United States. There are many reasons for that.
First, the British bequeathed to us (and the world) a strong tradition in higher education. It is no accident that 69 of the top 100 universities in the world are in former British colonies (United States, Canada, Australia, Singapore, Hong Kong). Second, U.S. education, despite its decidedly mediocre K-12 system, has focused on encouraging creativity among students. Third, despite all the political grandstanding and endless griping by teachers, U.S. education is incredibly well-funded. Americans spend more per student than any other country in the world. Finally, U.S. universities eagerly accept the world's best and brightest, no matter where they come from.
Yet, despite these advantages, there is no reason to believe that U.S. dominance of higher education will continue indefinitely. A frightening and unprecedented two-front assault on academia could topple the American juggernaut.
The first assault comes from outside academia. Scientists are being harassed by political activists who are abusing FOIA requests. Instead of using the law to hold public officials accountable, activists are using it to dig through private emails to fabricate controversies and concoct self-incriminating statements, often crafted from passages taken entirely out of context. Both climate and GMO researchers have been targeted.
The second assault is far more disturbing because it originates within academia itself. A long-standing tradition in academia is the rigorous defense of the freedom of thought. Over the last several years, however, this freedom has eroded. An era of "microaggressions," "trigger warnings," and "safe spaces" has led to a mass silencing of dissent. Some professors have grown afraid of teaching classes for fear of offending students. (An anonymous article in Vox titled "I'm a liberal professor, and my liberal students terrify me" ought to be read in its entirety.)
Alarmingly, professors themselves sometimes enthusiastically participate in the mass silencing, as did a communications instructor at the University of Missouri who tried to intimidate a student journalist covering campus protests. Such behavior is not simply intolerant; instead, it is intended to exterminate any deviation from majoritarian orthodoxy.
This is toxic and dangerous. Research cannot thrive in the face of anti-intellectual aggression.
The ultimate impact of the two-front assault on higher education will be to jeopardize America's lead position in the world. If free speech, free press, and free thought continue to be attacked — from both inside and outside academia — then we should not be surprised if the world's best and brightest choose to go elsewhere.
Physics is built out of philosophically fascinating ideas. Or, at least, ideas that fascinate us as physicists. We are often moved to reverentially proclaim the beauty of various concepts and theories. Sometimes this beauty makes sense to other people (we're made of star stuff) and other times it's opaque (Frobenius manifolds in pseudo-Euclidean spaces).
I have my own personal favorite idea. It arises from the philosophically fantastic (but mathematically moderate) workings of Einstein's relativity theory. The theory of special relativity holds that time and space are not separate entities, each operating on its own; rather they are intimately and inextricably codependent. We are born, live, and die along "world-lines" through a four-dimensional spacetime.
Here's what awes me: we travel through this 4-D spacetime always at a constant speed: c, the speed of light.
No matter what we do in our momentary lives, we are always truly traveling through our universe in time and space together, always at at the same rate. Let's consider a few facts that follow from this realization.
A man who sits still uses none of his lightspeed to travel through space. Instead he is travelling in time at the speed of light. He ages--in the view of those around him--at the fastest rate possible: light speed. (How's that for a philosophical argument against sloth?)
As we travel about in our daily lives, we use up a minuscule amount of our allotted light speed to move through the spatial dimensions surrounding us. We borrow that speed from our travel forward in time, and thus we age more slowly than our sedentary neighbors. You've probably never noticed that fact, and there's a clear explanation why: it's only when you travel at unimaginably high speeds that the weirdness of time becomes large enough to notice. The mathematical reason is that the size of the time dilation effect at a particular speed "v" is only about (v/c)².
Try putting the fastest you've ever traveled into the top of that equation and then dividing it by the 671 million miles per hour that light travels. Then square that tiny number to make it vastly smaller.
Imagine a strange jet-setter who spends an entire 80-year lifespan cruising at 500 mph on a Boeing 747. When his long flight finally touches down, the watch on his wrist, set to match the airport clock at takeoff, will be only one millisecond behind. However, we can watch a subatomic particle live five times longer at 98% of light speed than sitting still.
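Both of those figures can be reproduced with a few lines of arithmetic. The sketch below uses the standard Lorentz factor γ = 1/√(1 − (v/c)²); the fractional slowing of a moving clock, 1 − 1/γ, is about ½(v/c)² at everyday speeds, which is the order-of-magnitude estimate given above:

```python
import math

C_MPH = 670_616_629  # speed of light in miles per hour

def gamma(v_over_c):
    """Lorentz factor: moving clocks run slow by this factor."""
    return 1.0 / math.sqrt(1.0 - v_over_c ** 2)

# The jet-setter: 80 years cruising at 500 mph.
beta = 500 / C_MPH
seconds = 80 * 365.25 * 24 * 3600
lag = seconds * (1 - 1 / gamma(beta))  # time "lost" relative to the ground
print(f"Watch lags by about {lag * 1000:.2f} milliseconds")

# A subatomic particle at 98% of light speed lives ~5x longer in our frame.
print(f"Lifetime stretch at 0.98c: {gamma(0.98):.2f}x")
```

Running this gives a watch lag of roughly 0.7 milliseconds over the 80-year flight and a lifetime stretch of about 5 at 0.98c, matching both figures in the text.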
Maybe the strangest case of this phenomenon is light itself, the sole thing capable of traveling at c. From our point of view, then, a photon is using the entirety of its spacetime velocity to travel through space. It never ages (from our watching frame of reference)! That's why photons will fly through space in a straight line from one side of the universe to the other for all of eternity without changing in any way unless externally influenced. This imperviousness makes them excellent historical records. And here, the deeper general theory of relativity (also courtesy of Einstein) leads us to something more bizarre.
Many of the photons generated at nearly the beginning of the universe are still travelling through space in their birthday suits. But, over the course of their billions of years in transit to us, the space they inhabit along their path through the stars has grown more than 1000 times bigger since they were born. This expansion of spacetime has stretched the wavelength of the photons along with it, like an enormous slinky being pulled apart. Now they are a thousand times longer but still timeless to us.
Spacetime physics, adhering to relativity as we know it, reveals utterly surreal truths. Many of these are posed as famous puzzles and arguments, such as the twin paradox, the ladder paradox, and the relativity of simultaneity. But the mere fact that we always travel through spacetime at the speed of light never ceases to stop me in my tracks (metaphorically speaking). I believe it is the most stunning thing I've ever absorbed in a physics class.
Here at RealClearScience, we comb the Internet each and every day to bring you the best, most interesting, and topical science stories out there. The ongoing search places us in the perfect position to gauge the quality of science writing and journalism. One thing we've learned is that the landscape is constantly changing. As writers come and go and editorial practices change, the quality of science journalism rises and falls.
Amidst this constant flux, these are the media outlets that ascended to the top in 2015.
While science reporting is growing stagnant in many mainstream newspapers, Britain's The Guardian bucks the trend with an excellently produced science section. It features a repertoire of well-written blogs, daily news coverage that's generally on point, and a diverse opinion section.
Wired Magazine has long been held in high regard for its business and technology reporting, but the hip periodical is now bringing its style and storytelling skill to science journalism. Wired's editors and writers have a tremendous skill for spotting "what's next." Their reporting imparts awe without embellishing and is highly accessible to everyday readers. Don't forget to check out Wired's science blog section, which has remained strong despite the departure of a number of high-profile writers.
In 1970, Smithsonian Magazine's founding editor Edward K. Thompson envisioned a publication that -- in his words -- would "stir curiosity in already receptive minds. It would deal with history as it is relevant to the present. It would present art, since true art is never dated, in the richest possible reproduction. It would peer into the future via coverage of social progress and of science and technology. Technical matters would be digested and made intelligible by skilled writers who would stimulate readers to reach upward while not turning them off with jargon."
In 2015, Smithsonian Magazine's website is a terrific extension of its print publication. Featuring a marvelous medley of culture, history, nature, and science articles, it's a wonderful place for a curious mind.
In 2013, its opening year, Nautilus earned an honorable mention on our list of top science news sites. This year, it firmly cracked the top ten. With one of the most beautiful and innovative designs of any website on the Internet, Nautilus is easy on the eyes. But the website isn't all style; it's got loads of substance. Featuring an array of academics as well as some of the best science writers out there, every monthly issue of Nautilus is both entertaining and elucidating.
The Atlantic is not typically known for its science coverage, but talented young writers like Olga Khazan, James Hamblin, and Adrienne LaFrance placed the more than 150-year-old publication on our top ten in 2015. Veteran science writer Ed Yong has also recently signed on to augment The Atlantic's coverage. It's heartening to see such a well-known media outlet making a renewed push to publish excellent science content.
Astrophysicist Ethan Siegel's tagline for his fantastic blog, Starts with a Bang!, is "The Universe is out there, waiting for you to discover it." Siegel serves as an able guide on that empowering road to discovery. Each week, he translates the leading space news with depth and grace. He also frequently answers readers' burning questions about the often mind-numbing intricacies of the universe and applies needed skepticism to questionable claims. In life, Siegel is delightfully outlandish in personality while remaining grounded in sense. Both Siegel and his blog (now hosted at Forbes) provide a dose of awesome reality.
For well over a hundred years, National Geographic's monthly print magazine has allowed millions of people to explore our breathtaking world in vibrant detail. Its website accomplishes the same feat on a daily basis. For particularly enlightening science coverage, check out Nat Geo's "Phenomena." There, you'll find blogs by science writing giants like Maryn McKenna, Carl Zimmer, Robert Krulwich, Ed Yong, and Brian Switek. Up-and-comers Erika Engelhaupt and Nadia Drake round out the talented cast.
In an era of digital journalism where outlets leap at low-hanging fruit in the form of click bait and often skimp on accuracy and details in the effort to "get there first," Quanta does exactly the opposite. Focusing on complex topics like mathematics, theoretical physics, theoretical computer science, and basic life science, the team at Quanta covers the difficult science others might ignore, translating it with graceful poise and an unrelenting eye for accuracy.
The BBC ranked #1 on our previous list of top science news sites, but their fall to second in 2015 doesn't indicate a drop in quality. Their science coverage is plentiful and regularly sticks to truly relevant stories. For the extra curious mind, the BBC also maintains BBC Earth and BBC Future, two sites which offer extra, fascinating helpings of all things interesting.
No fluff and no BS; Nature News reports what's relevant and reports it right. Their website covers breaking news, puts the latest research in perspective, and offers unbiased takes on the times when politics and science meet.
Ahh, Portland: The city that shakes its fist in defiance at the 21st Century by stubbornly refusing to fluoridate its water supply and believing that wi-fi is killing its children. Given that Portland's citizens have a troubled relationship with reality, it perhaps shouldn't come as a surprise that the city's congressman does, too.
Earl Blumenauer, who represents the City of Roses in Washington, D.C., has some rather unconventional priorities for a federal politician. Making his list of the most pressing issues facing the nation is marijuana reform. Like geeky PBS traveler Rick Steves, Mr. Blumenauer has devoted a significant portion of his existence to the legalization of pot smoking.
Not everybody in government shares his adoration of the wacky tobacky. According to CBS, the Chief of the Drug Enforcement Administration, Chuck Rosenberg, recently said:
"What really bothers me is the notion that marijuana is also medicinal -- because it's not... We can have an intellectually honest debate about whether we should legalize something that is bad and dangerous, but don't call it medicine -- that is a joke."
From a scientific standpoint, the word joke is a bit strong. Marijuana has legitimate medicinal uses, although the benefits are regularly hyped and exaggerated. The drug is not a magical cure-all for the world's pain and suffering, though it may help reduce the number of deaths from overdoses of opioid painkillers. Mr. Rosenberg's claim that marijuana is bad is certainly backed up by plenty of scientific evidence, though it is probably not as unhealthy as tobacco. And while his claim that pot is dangerous is slightly preachy, it is still defensible. Certainly, teenagers should not use the substance. Overall, Mr. Rosenberg earns a B-/C+ on scientific accuracy.
Congressman Blumenauer, however, sees things differently. In a peculiar outburst on Facebook, he writes:
Let's examine some of his specific claims in more detail.
"Chuck Rosenberg's views don't represent those of the Administration."
Well, actually they do. The Drug Enforcement Administration is part of the Department of Justice, which answers directly to President Obama. The Attorney General, Loretta Lynch, appointed Mr. Rosenberg to his current role as DEA Chief.
"He is completely out of step with... growing scientific and overwhelming testimonial evidence."
Wrong. The scientific consensus has been very measured and cautious in its assessment of the benefits of medical marijuana. A literature review published in JAMA merely concludes the existence of "moderate-quality evidence to support the use of cannabinoids for the treatment of chronic pain and spasticity." All other supposed medicinal uses for marijuana were supported by "low-quality" evidence. That is hardly a ringing endorsement. The existing data is anything but "uncontestable," as the congressman says.
And who cares about testimonial evidence? Flimsy anecdotes might pass as deep wisdom for a lawyer like Mr. Blumenauer, but they mean absolutely nothing to a scientist. His comment betrays a fundamental misunderstanding of the scientific method. It gets worse.
"I've met with countless people whose lives have been transformed because of the relief it has offered."
Now, this is getting obnoxious. Testimonials mean nothing. Period. Anybody with a modicum of scientific training understands that. For an excellent example of how dangerous and misleading anecdotal evidence can be, watch the appallingly bad movie Erin Brockovich, which glorified the pseudoscientific meanderings of a legal clerk who helped scam Pacific Gas & Electric out of $333 million. How? By using anecdotal evidence to accuse the company of causing cancer in the residents of a small California town. This and other cancer-scare stories like it have been thoroughly debunked.
Thus, the congressman's reliance on anecdotes to hype the medicinal properties of marijuana lands him a solid F on scientific accuracy.
"This guy doesn't have a clue."
Those who live in glass houses should not throw stones. Being stoned, however, ought to be the congressman's legal choice.
"There are in fact a hundred billion other galaxies. Each of which contains something like a hundred billion stars."
Carl Sagan was quite accurate in this learned approximation when he uttered it in the early 1980s. The galactic and stellar populations of the Universe are staggering! But if the eloquent astrophysicist were alive today, he would be eager to share an equally fascinating development: that there may -- in fact -- be countless galaxies essentially devoid of stars!
Dark galaxies, as they are fittingly called, are still in the realm of hypothesis, but observational evidence is increasingly weaving them into reality. Composed of only hydrogen gas, dust, and dark matter, and without visible stars, they are nearly impossible to detect. The best evidence for their existence arrived just three years ago. Astronomers aligned with the European Southern Observatory spotted a quasar -- an extraordinarily luminous galactic nucleus powered by a supermassive black hole -- illuminating twelve large collections of hydrogen gas, causing the gas in each to faintly fluoresce. The astronomers interpreted those gas collections as dark galaxies.
Without the quasar's natural flashlight, the dark galaxies would have remained unseen. Zeroing in on them, the team of astronomers saw low-mass regions of space roughly 16,500 to 19,800 light years across (roughly one-fifth the size of the Milky Way), each holding an amount of gas (mostly hydrogen) about one billion times the mass of our Sun.
As dark galaxies reside in our roiling universe, they are not destined to remain hidden, or even around at all. In fact, they may serve as fuel for more conventional, star-studded galaxies.
"In our current theory of galaxy formation, we believe that big galaxies form from the merger of smaller galaxies. Dark galaxies bring to big galaxies a lot of gas, which then accelerates star formation in the bigger galaxies," Sebastiano Cantalupo, a Research Team Leader at the Institute for Astronomy, ETH Zurich, told the Kavli Foundation.
Martin Haehnelt, a Professor of Cosmology and Astrophysics at the University of Cambridge, concurs.
"We expect the precursor to the Milky Way was a smaller bright galaxy that merged with dark galaxies nearby. They all came together to form our Milky Way that we see today."
Considering that dark galaxies have been, for the most part, observed billions of light years away, and thus billions of light years back in time, there is some question as to their prevalence today. It's entirely possible that they formed only in the wake of the Big Bang, and since then have been swallowed up by larger galaxies.
But a dark galaxy could also be next door. In 2009, Matthew Nichols and Joss Bland-Hawthorn of the University of Sydney, Australia, suggested that Smith's Cloud (pictured below), a cloud of hydrogen gas 9,800 light years long and 3,300 light years wide located 40,000 light years away from Earth just outside of the Milky Way, may be a dark galaxy. Regardless, in approximately 27 million years it will merge with the Milky Way, lending its bounty of stellar fuel to our galactic home.
(Images: Shutterstock, ESO/M. Kornmesser, Bill Saxton/NRAO/AUI/NSF)
As Alex Berezow wrote in his piece yesterday, psychology is not a science, and statistics in and of itself is not science either. Then again, lots of useful and worthwhile things are not science. So, that's not necessarily a problem. What is a problem is that poor statistical methods and irreproducibility damage not just the validity of any one study or one theory, but the rigor and quality of two-thirds of all studies in psychology.
Alex and I have previously detailed what we believe are the requirements for calling a field of study science: clearly defined terminology, quantifiability, highly controlled conditions, reproducibility, and finally, predictability and testability.
The failure of psychology (and indeed many other so-called social "sciences") to meet these criteria often manifests as an obvious symptom: lousy statistics. Statistics is just a language. Like other languages it can be harnessed to express logical points in a consistent way, or it can demonstrate poorly reasoned ideas in a sloppy way.
Statistical studies in psychology limp off the runway wounded by poor quantifiability, take further damage from imprecise conditions and measurements, and finally crash and burn due to a breakdown of reproducibility.
The strengths of hard sciences often shine through their statistical conclusions (although studies performed in hard science disciplines are certainly not immune to poor practices). Statistics underlies some of the most important and empirically successful chemistry and physics ever discovered.
Albert Einstein once said of thermodynamics, a field that can be theoretically derived using advanced statistics:
"It is the only physical theory of universal content concerning which I am convinced that, within the framework of applicability of its basic concepts, it will never be overthrown."
If you took high school chemistry, you may remember Boyle's Law, Gay-Lussac's Law, or the Ideal Gas Law. You might have read about the concepts of equilibrium, the ever-increasing entropy of the universe, or the "five-sigma significance" discovery of the Higgs Boson. All of these things are directly derived through statistics performed on atoms or particles.
But statistics is problematic when it comes to the social sciences. The first key issue is sample size.
Think of a political poll. Every one of these polls states a margin of error; surveys with a larger number of respondents have a correspondingly smaller margin of error. Most social research studies use sample sizes of tens, hundreds, or occasionally thousands. That may sound like a lot, but remember that statistical physics deals with sample sizes that can be described in unimaginable ways like this: one thousand trillion times more than the total number of stars in the Universe. Or, enough sample atoms that if each one were a grain of sand, they could build a sand castle 5 miles high. Or, a number of molecules greater than the number of milliseconds since the Big Bang.
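The gap in statistical power is easy to visualize with the textbook margin-of-error formula for a proportion (a simplification, assuming a simple random sample at 95% confidence). Compare an ordinary poll to a mole of molecules:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Typical social-science sample sizes vs. Avogadro's number of molecules.
for n in (100, 1_000, 6.022e23):
    print(f"n = {n:.3g}: margin of error ~ +/- {margin_of_error(n):.2e}")
```

A 100-person survey carries a margin of error near ten percentage points; a thermodynamic sample of 6×10²³ molecules drives it down to roughly one part in a trillion, which is why gas laws behave like exact statements.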
The next big difference is a bit more subtle: quantifiability.
Working with such variables as awareness, happiness, self-esteem, and other squishy concepts makes quantifiability hard. This is the sloppy language problem. Even when these ideas are translated into some more concrete measure (say how long it takes a test subject to push a button or eat a marshmallow), the simplicity and truth of this transformation are far from crystal clear or rock solid.
Precision of measurement is another big issue. A social science survey may measure ten subjects with a stopwatch for a handful of seconds and produce an error of a second or two. They may ask people to rate things on a 1-10 scale. How sure are you that your "8" is not another person's "6.5"? The sorts of measurements chemists make have no such wiggle room. They ask molecules questions that have exact answers that cannot be fudged. What's your temperature? How much kinetic energy do you possess? A scientist in Texas and a scientist in Alaska and a scientist on the moon and a scientist at the bottom of the sea and a scientist on poor icy demoted dwarf planet Pluto could all measure the same molecule under the same experimental conditions and get the same answer to five decimal places.
Even the supposedly concrete measurements often fall vastly short of the rigor of true science. Photon-counting experiments often measure times in the range of nanoseconds. Timing subjects by hand with a stopwatch is quite literally one billion or even one trillion times less precise.
Finally, the issue of reproducibility.
While a study of human sexual practices conducted with 44 undergraduate college students may never be reproduced, the predictions of statistical physics will give you a correct answer to 10 decimal places. What's more, thanks to the enormous sample sizes involved, taking a verifying measurement every minute of every hour of every day for the rest of your life, you'd more likely be struck by lightning than detect any deviation from the theory a single time.
These distinctions only scratch the surface of the vast gulf in rigor and objective truth between hard science and soft, fuzzy social science. Statistics aren't the only problem. While academics may politely demur from judgment, when only 39% of studies chosen from a particular field hold up under scrutiny, the public wises up. They stop believing findings and start ignoring every study. The public may just be right to be skeptical.
Sometimes, the most elucidating questions are the most outlandish. Take, for example, the one posed by Roy Plotnick, Jessica Theodor, and Thomas Holtz in a September publication of the journal Evolution: Education and Outreach:
Horrible puns aside, the question is an intriguing one. If a devout Jew, keeping Kosher, were to travel back in time, what exactly would they be able to eat?
The Jewish holy book, the Torah, permits the consumption of select animals. Only land animals that both "chew the cud" and have cloven hooves are allowed. That means ruminants like cows are on the menu, while pigs are not. Fish with both fins and scales are also permitted. Birds of prey are not to be eaten, along with "creeping things" that crawl the earth and "flying creeping things" (commonly taken to mean "insects" and "flying insects"). However, grasshoppers, beetles, and certain types of locusts are permitted.
But, of course, the animals around today aren't the same as they were thousands or millions of years ago. The kosher animals of today may not have been around in the ancient past!
So what was? The authors explored numerous eras in a literary piece of time travel. Their first stop was 120,000 years ago:
This was the peak of the last interglacial warm period before the most recent ice age and before humans first reached the New World or Australia. It was thus prior to the extinction event that wiped out many of the large animals in the world, such as mammoths and giant ground sloths. Many of these extinct animals are either ancestral to or related to animals that we would recognize as kosher. In Eurasia, we would find the aurochs, Bos primigenius, the ancestor of today’s domestic cattle... There would also have been the wild ancestors of modern fowl, sheep, and goats. In the Americas we would be able to feast on abundant ancestral bison, as well as a variety of deer, pronghorns, and turkey. Acceptable fish would be found in fresh and marine waters throughout the world.
So a pretty decent selection for our theoretical Jewish time traveler! But what if we venture farther back, say, 52 million years, to the shores of Fossil Lake in what is now Wyoming?
The lake teems with kosher fish, including abundant perch, herrings, and bass. Crickets chirp along the shore. There are also abundant modern-looking birds including relatives of modern land and water fowl. Mammals are present, but the Ruminantia have yet to appear on Earth. Thus it would seem that an observant Jew would be limited to a non-mammalian diet, consuming only fishes, birds, and orthopterans (grasshoppers and crickets).
The number of choices is dwindling, but it's still large enough to sate most picky eaters. The same cannot be said of the fare present 67 million years ago in the Cretaceous Period. Dinosaurs abound, but as reptiles they are obviously off the menu, and might even try to dine on you. Many of the early birds of the period would not be considered Kosher, nor would a good amount of the fish, as they would be "creeping" bottom-dwellers and might lack clearly defined scales. There likely would have been grasshoppers to munch on, though!
Finally, the authors travel back 310 million years, where a Jew keeping Kosher would have a difficult time finding any animal sources of food:
Although vertebrate life exists on land, the reptile-like tetrapods of this period predate any mammal or bird and would certainly “swarm” upon the land. Scales and finned bony fish are known, but none have cycloid scales. There are possible ancestors of crickets and grasshoppers, but their jumping legs were not strongly developed.
Our Kosher time traveler could simply eat vegetarian, of course. But even consuming plants gets a lot harder the farther back we go. Before humans domesticated plants, they were not as nutritive as they are now. Dating back as far as 55 million years ago, a forest might contain plenty of nuts, fruits, and seeds courtesy of angiosperms -- flowering plants. But before then, angiosperms were a relative rarity, and their seeds too small and few in number to be of much value.
Aside from serving as a useful guide for Kosher travelers to the past should time travel ever enter into reality (unlikely), the authors see their piece as a humorous, yet excellent example of science outreach.
"'Kosher paleontology' could be used for conversations about evolution that are lighter and more entertaining," they write.
"Discussing how paleontologists would address such a query illustrates how they think about the morphology, ecology, and relationships of extinct animals and thus gives an opportunity to introduce broader concepts from paleontology and evolutionary biology to a more general audience."
Source: Roy E. Plotnick, Jessica M. Theodor, and Thomas R. Holtz. "Jurassic Pork: What Could a Jewish Time Traveler Eat?" Evolution: Education and Outreach 2015, 8:17 doi:10.1186/s12052-015-0047-2
A few years ago, I caused considerable weeping and gnashing of teeth among psychologists for a piece I wrote explaining why psychology isn't science. It was predicated upon a lengthier argument, which I co-authored with physicist Tom Hartsfield, on the difference between science and non-science. RCS Editor Ross Pomeroy followed up with his own haymaker, explaining why Sigmund Freud's ideas -- from penis envy to psychoanalysis -- were not just whacky but unscientific and wrong.
Many psychologists take exception to our criticism, feeling disrespected by the notion that psychology isn't science. Perhaps they suffer from the academic equivalent of penis envy? (I keed, I keed.) In all seriousness, being contemptuous is certainly not our intention.
Psychology is very important. We make use of its insights in our daily interactions with other people. Anybody who has taken a management class or read Dale Carnegie's masterpiece How to Win Friends and Influence People appreciates the power in understanding what makes other people tick. But, of course, psychology isn't the only field in which we gain important insights on human behavior; indeed, one can learn just as much about human behavior by reading Shakespeare, studying religious texts, or contemplating art. Nobody, however, would consider these academic areas to be a form of science.
Why does this matter? you might be wondering. Isn't this just a food fight between academics? No, it most certainly is not. This discussion matters because epistemology, the study of knowledge, matters. The question, "How do you know what you claim to know?" is one of the most important questions in both science and philosophy. And in 21st Century America, in which people have their own news sources complete with their own set of (usually unverifiable) facts, we are in need of a very serious discussion on what constitutes genuine knowledge.
The liberal arts, which traditionally include fields like psychology and economics, offer penetrating insights into human behavior. But they simply do not measure up to the scientific method, the most powerful pathway to secular knowledge that humanity has ever invented. While psychology uses the scientific method in its experiments, limitations inherent to the field (such as difficulty in properly defining terms and quantifying data) may forever prevent it from joining the ranks of the hard sciences.
Again, that is not meant as an insult. It is meant only as a reminder that the truth claims made by the liberal arts are not as strong as those made by physics, chemistry, and biology. I would bet my house on the discovery of the Higgs boson, the accuracy of the periodic table, or the efficacy of vaccines. Yet, there is not a single fact in psychology upon which I would be willing to make a similar wager. Even the supposedly time-tested concept known as priming may be wrong.
Another popular misunderstanding is over the status of math and statistics. Math is the language of science. Undeniably, science would not be possible without statistics since it is necessary to discern significant from insignificant results, as well as to help determine cause-and-effect relationships. But, this in itself does not make statistics a science. Science would also not be possible without language, but nobody considers language to be a form of science.
Put simply, mathematicians and statisticians do not perform experiments. As a beautiful example of the power of logic and deduction, mathematical proofs may represent the highest form of knowledge attainable by man. Yet, that does not make them equivalent to science. Instead, proofs help develop tools. And these tools, gifted to us by the sheer brain power of mathematicians, have given scientists the ability to perform experiments. Statistics, therefore, should be thought of as the key facilitator of the scientific method.
Of course, though powerful, the scientific method is not infallible. The reproducibility problem in biomedical science attests to that. But a world without the scientific method is one that would never have advanced beyond the primitive technology of the 16th Century. Fully appreciating the Scientific Revolution begins by understanding what science actually is.
We have heard it many times. We in the mass media are ignorant and deceptive partisans, propagandists, and shills. The rest of us are just mindless puppets. The 24-hour news cycle is responsible for dividing America and dumbing down political discourse. In short, modern journalism is the worst thing to happen to the U.S. since Justin Bieber crossed the border.
True, journalism isn't perfect. But such widespread disdain toward journalists is predicated upon exaggerations and misconceptions. A recent, eye-opening literature review by a professor from Stockholm University in Annual Review of Economics examined the impact of the mass media on society and politics. Its findings were far less grim than what is commonly believed. Below are the most notable highlights from the paper:
1. The media makes people better informed and more politically active. However, it rarely changes voting intention. This is because people rarely change their voting intention in the first place and also because they tend to get their news from sources that share their viewpoint. Though it is widely believed that this is a modern phenomenon in an era dominated by partisan outlets such as Fox News and MSNBC, that is simply not true. Research in the 1940s noted the pattern back then.
2. Most large media outlets in the U.S. are centrist. The author cites a 2005 study that concluded that, while most newspapers are center-left, 18 of the 20 media outlets it examined held political positions that were in between Joe Lieberman (a centrist Democrat) and Susan Collins (a centrist Republican).
3. Yes, a partisan media contributes to polarization. But it should be kept in mind that partisan media simply strengthens preexisting divisions in society.
4. Newspaper endorsements only matter if they are unexpected. If the New York Times endorsed a Republican, it would matter. People would notice, and many would change their voting intention. But since the New York Times has endorsed the Democratic presidential candidate in every single election since 1960, nobody will care when it endorses the Democrat in 2016.
5. The "Fox News Effect" is small. Liberals love to blame Fox News for putting Republicans in government. But research shows that, as Fox News was slowly introduced in homes during the 1990s, the boost to Republicans was merely 0.4 to 0.7 percentage points.
6. Newspaper bias is driven more by consumer demand than by owners' interests. This finding probably comes as a surprise to those who claim that the Wall Street Journal's editorial page is dictated by Rupert Murdoch. Indeed, the author writes, "Two newspapers belonging to the same chain are not ideologically closer than two randomly chosen newspapers, once geographical factors are taken into account."
7. Partisan media outlets report on scandals, even when they involve politicians the outlet likes. However, the outlet won't cover them to the same extent. If a centrist paper covers a Republican scandal with four articles, a left-wing paper would write five, while a right-wing paper would write only three.
8. It is not known if partisan media causes the government to enact bad policy. No empirical data exists. One might predict that partisan media would lead to fewer voters being properly informed, which itself would lead to less political accountability and worse policy outcomes. Yet, that's not necessarily true. Partisan media outlets can become specialized in those issues that its readers and viewers care about. For instance, conservatives, many of whom are mostly concerned with business and economic policy, can read the Wall Street Journal; liberals, many of whom are mostly concerned with social policy, can read the New York Times.
9. The media is inconsistent in what it considers to be "newsworthy," and this adversely affects policy. Case in point: The author states, "46 times as many people must be killed in a disaster in Africa to achieve the same probability of being covered by the television network news as an otherwise similar disaster in Eastern Europe." When journalists turn a blind eye to suffering in Africa, so do our politicians.
10. High levels of media coverage tend to make politicians better behaved. The author cites research that showed that Congressmen from districts with a lot of media coverage tended to be less extreme and more focused on their constituents. (This is a rather counterintuitive finding, considering that the national media rewards the loudest and most obnoxious partisans with television time.) Additionally, greater media coverage often translates into better policy.
Clearly, the American media has a net positive effect on society. And the media, as a whole, is more centrist and fair than the talking heads would have you believe. Still, if you disagree, you always have the freedom to change the channel.
Source: David Strömberg. "Media and Politics." Annual Review of Economics Vol. 7: 173-205. Published online 18-Mar-2015. DOI: 10.1146/annurev-economics-080213-041101
The metric system is great, but sometimes it simply doesn't do justice to what you're trying to describe. These highly specialized units of measurement might be of use!
Sagan. Inspired by and in tribute to astrophysicist and science communicator Carl Sagan, the "Sagan" plays off Sagan's tendency to say "billions and billions," often in reference to stars, galaxies, or planets in the Universe. It's defined as a "large quantity of anything."
Banana equivalent dose. A little-considered fact: bananas are radioactive. The potassium-40 in your average banana imparts roughly 0.1 microsieverts of ionizing radiation when consumed. "Banana equivalent dose" (BED) refers to that amount. For reference, it would take about 35,000,000 BEDs to kill a human.
Barn. If you can't hit the broad side of a "barn," don't feel too bad about yourself. Nuclear physicists regularly use the term to describe a cross section of 10⁻²⁴ cm², roughly equal to the cross section of an atomic nucleus. The term originated in the Manhattan Project back in 1942, and was actually kept classified until 1948. It's been widely used by nuclear and particle physicists ever since.
Foe. A stellar supernova is one mighty blast. In an instant, a dying star explodes, expelling its guts out into space at up to 10% of the speed of light. As you might imagine, the process releases an awesome amount of energy, so much, in fact, that astrophysicists Gerald Brown and Hans Bethe created a unit of measurement for it. Your average supernova releases about one "Foe" of energy, equal to 10⁴⁴ joules.
Beard-Second. The light-second is defined as the distance a particle of light travels in exactly one second, and is equal to roughly 186,282 miles. That's quite a distance. As Kemp Bennet Kolb espoused in the humorous book This Book Warps Space and Time, there ought to be a comparable unit for defining things on the small-scale. He proposed the beard-second: "the distance that a standard beard grows in one second." It's equal to roughly ten nanometers.
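The figures above make for a quick sanity check. As a minimal sketch, assuming the stated 0.1 microsieverts per banana, an acute lethal dose of roughly 3.5 sieverts (an assumed ballpark consistent with the 35,000,000-BED figure), and an assumed world annual energy consumption of about 6 × 10²⁰ joules:

```python
# Back-of-the-envelope arithmetic for the whimsical units above.
# Assumed figures: 0.1 microsievert per banana (BED), an acute lethal
# radiation dose of ~3.5 sieverts, and world annual energy use of ~6e20 J.

BED_SIEVERTS = 0.1e-6          # one banana equivalent dose, in sieverts
LETHAL_DOSE_SIEVERTS = 3.5     # rough acute lethal dose for a human

# How many bananas would it take to deliver a lethal dose?
lethal_bananas = LETHAL_DOSE_SIEVERTS / BED_SIEVERTS
print(f"Lethal dose: {lethal_bananas:,.0f} bananas")  # 35,000,000 bananas

# One foe is 10**44 joules. Compare with humanity's yearly energy budget.
FOE_JOULES = 1e44
WORLD_ANNUAL_ENERGY_JOULES = 6e20   # assumed ballpark figure
years_of_energy = FOE_JOULES / WORLD_ANNUAL_ENERGY_JOULES
print(f"One supernova ~ {years_of_energy:.1e} years of world energy use")
```

The division confirms the article's 35,000,000-BED figure and gives a feel for just how enormous a foe is.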
A lot of Americans seem to be under the impression that there is something unique and (wonderfully) different about Vermont Senator Bernie Sanders. Yet, other than the fact that he identifies as a socialist -- in a world where capitalism brought one billion people out of poverty in just the last 20 years -- his other views, particularly on science, are predictable and banal. Time and again, he has planted his flag firmly in the camp of the anti-scientific left.
Take his recent debate performance as an example. He stated:
"Today, the scientific community is virtually unanimous: climate change is real, it is caused by human activity, and we have a moral responsibility to transform our energy system away from fossil fuel to energy efficiency and sustainable energy and leave this planet a habitable planet for our children and our grandchildren." (Emphasis added.)
Actually, the scientific community does not believe that the habitability of the planet is in jeopardy. Not even the most extreme climate models predict that Earth will someday become uninhabitable to humans. It is this sort of careless, hyperbolic, and unscientific rhetoric -- most often spouted by politicians -- that has caused the climate science community a lot of heartburn.
Additionally, it should be noted that while human activity is largely responsible for climate change, the IPCC AR5, which is seen as the global consensus on climate change, is more measured in its conclusion. It writes (PDF, page 5): "More than half of the observed increase in global mean surface temperature (GMST) from 1951 to 2010 is very likely due to the observed anthropogenic increase in greenhouse gas concentrations." (By "very likely," the IPCC means 90+% confident.) More than half is certainly a lot, but it also implies that a substantial proportion of climate change is due to other factors.
Mr. Sanders' apocalyptic view of climate change makes his opposition to nuclear power particularly inexplicable. One would think that if climate change really is the "most significant planetary crisis that we face," as Mr. Sanders once said, then the most efficient solution to that crisis should be warmly embraced. Instead, he rejects it because "we do not know how we get rid of the toxic waste from the [nuclear plants] that already exist."
Wrong again. We do know what to do with nuclear waste: Store it in a stable, long-term facility, such as the one at Yucca Mountain. However, the Obama Administration, for purely political reasons, mothballed the project. (Instead, waste is being stored on-site at nuclear plants all over the country, an irresponsibly dangerous option.)
Mr. Sanders, predictably, endorses solar, wind, and geothermal as the answer to our "moral responsibility to transform our energy system away from fossil fuel," despite research that suggests that nuclear power could replace fossil fuels in 25 years. Belief that renewable energy is the sole solution to the world's energy demand is nothing more than magical thinking.
On GMOs, Mr. Sanders is yet again opposed to mainstream science policy. His endorsement of GMO food labels is in direct opposition to the policy stance of the American Medical Association and the American Association for the Advancement of Science.
Finally, Mr. Sanders, notwithstanding his lack of credentials, appears to enjoy playing the role of a medical doctor in public. In February, he said that rich people have "psychiatric issues" and are "addicted to money" in the same way as the "people who are addicted to alcohol or drugs." Setting aside the absurdity of his comment, it is discomforting to know that Mr. Sanders believes that people who oppose his policies suffer from mental illness.
Ironically, at the beginning of the year, the Vermont Senator told CBS News, "It is hard to do serious and important things if you reject science."
On that point, we are in agreement. That is why Mr. Sanders will never reside in the White House.
Running may be the quintessential human activity. Eons ago, our ancestors ran to stay alive, whether by escaping from predators or by chasing prey over long distances. Persistence hunting, in which man chases down beast on foot over distances as long as twenty miles until the animal is too exhausted to continue, may have been the earliest form of human hunting. Today, it is practiced by only a handful of peoples across the globe.
Most modern humans run not to eat, but to exercise. Picturesque plains and African scrub have been replaced by parkways and city sidewalks. Runners embark on their journeys armed with iPods and earbuds rather than spears. And before they jog out the door, many of them wonder, "What is the best way to run?"
There are a number of ways to explore this question, and that's a big reason why science has yet to provide a clear answer. Let's start by looking down at our feet and asking, "Is it better to run barefoot or with shoes?"
In this day and age, most skeptics might understandably ask how this is even a question. After all, shoes cushion our feet and guard against the rigors of the road, like rocks, trash, and cracks. Moreover, the barefoot notion seems to be a clear offshoot of the naturalistic fallacy, as many proponents argue that since our ancestors ran barefoot, it must be better.
In reality, the barefoot discussion is not about shoes at all, but about how we use our feet. The most prominent scientist in support of barefoot running, Harvard paleoanthropologist Dan Lieberman, contends that the barefoot style tends to make runners land more on the front or middle of the foot, rather than the heel. This, he writes on his Harvard website, is the real key.
"In heel striking, the collision of the heel with the ground generates a significant... large force. This force sends a shock wave up through the body via the skeletal system. In forefoot striking, the collision of the forefoot with the ground generates a very minimal impact force... Therefore, quite simply, a runner can avoid experiencing the large impact force by forefoot striking properly."
Lieberman is not the only scientist espousing this view. As an international team of scientists reported in a 2014 issue of the Journal of Sport and Health Science:
"From a mechanical perspective, the two most important anatomical springs in the human leg are the Achilles tendon and the plantar arch; together, these structures store and return roughly half of the potential and kinetic energy lost each step during running. These anatomical springs are most effective when runners land on the middle or front of the foot, allowing the Achilles tendon and plantar arch to stretch as the foot is loaded during early stance phase."
Lieberman hypothesizes that this more economical style of running may reduce injury rates. Scientific research addressing that notion is now starting to trickle in. One study examining injuries among cross-country runners found similar rates of traumatic injury between forefoot and rear-foot strikers, but significantly higher rates of repetitive stress injury among rear-foot strikers. Another focusing only on shod versus barefoot runners found no difference in injury rates.
With a paucity of studies performed thus far, the jury is still out on Lieberman's hypothesis. It may be that the barefoot style will only shift the location of running injuries, rather than reduce the incidence. Forefoot and mid-foot striking puts more pressure on the ankle, while rear-foot running puts more pressure on the knees and upper leg muscles.
Most casual or inexperienced runners typically strike with their heels first, as has been encouraged by shoe design over the years. But now, more and more shoes are being designed with a mid-foot strike in mind, so one doesn't need to go barefoot to try out a different style of running. Switching your running style shouldn't be done too rapidly, however. Runners accustomed to rear-striking will be adapted to running in that fashion, and switching too quickly will almost certainly result in injury. Luckily, science-based guides exist that can help runners navigate the process safely.
If you're not interested in changing your foot-strike habits, there is one simple adjustment that seems to universally benefit all long-distance runners: take more, shorter strides. A systematic review published in the journal Sports Health in 2012 found consistent evidence in the scientific literature that taking more and shorter strides alleviated pressure at the hip, knee, and ankle joints.
While most humans no longer need to literally run for their lives, running is one of the easiest forms of exercise to extend lives. On that point, science is nearly unanimous.
The interplay between science and religion never strays far from popular discussion. Since the issue is so charged, articles espousing varying views on the matter prompt both criticism and praise across the Internet. Particularly popular is the question of God's existence or nonexistence.
Last Christmas, Eric Metaxas penned a widely-shared piece for the Wall Street Journal arguing, in essence, that research shows that Earth-like life is so rare and so special that scientific evidence now increasingly favors the notion that some sort of creator put us here.
Metaxas is not a scientist, of course, and unsurprisingly, his article completely misrepresented how science works. Just because science has shown thus far that intelligent life is relatively rare throughout the universe, such evidence does not say anything about the existence of God. It simply says that life is relatively rare. That's all.
Even if Metaxas presented stronger evidence, his argument still would have failed spectacularly. We can use Star Trek to elucidate why.
Perhaps the strongest evidence possible for the existence of God would be if some being came down from the sky and repeated all the feats performed in the Bible. But, as one enlightened listener of the Skeptics' Guide to the Universe (SGU) podcast recently pointed out, "What could a God do that couldn't either be done by a being such as Q or faked with high technology? Even if the Bible were 100% true, how could we be sure it wasn't just some Q-like being screwing with us?"
The listener was referencing one of the most powerful and memorable characters in Star Trek, an eccentric, apparently omnipotent entity with a flair for the dramatic and a tendency to cause mischief.
Yale neurologist Steven Novella, host of the SGU, concurred with the listener's logic.
"If you hypothesize a being that is outside of the laws of the universe, how can a being like a human who is constrained by those laws ever be able to evaluate evidence that demonstrates the existence of the being outside those laws? The answer is you can't... There is no experiment we can do, no observation that we can do... to distinguish between an advanced alien and God."
For those unacquainted with Star Trek, Rabbi Alan Lurie stated the "Q Problem" in layman's terms over at Huffington Post. Describing a debate between theists and atheists over the existence of God, he wrote:
Michael Shermer said that he'd find convincing proof [of God's existence], "if you could have God grow new limbs on amputees from the Iraq war, Christian soldiers, praying for them to be healed."
But that was actually poor logic from Shermer. As Lurie pointed out:
Even if this did happen, it would not prove the existence of God but would instead prove that there is some kind of regenerative force or energy that responds to the right kind of conscious thought. Likewise, a glowing presence and booming voice appearing on the White House lawn proclaiming "I am the Lord your God, who took you out of the land of Egypt, the house of bondage" as the waters of the Potomac part, would prove that there is an entity with powerful technology, and would be no more a proof of God than an airplane to a cave man.
Science is a far-reaching enterprise, but its purview is constrained to the natural world. Seeing as how God is undeniably a supernatural concept, the question of his existence falls outside science's reach. No scientific evidence will ever conclusively prove the existence of God.
Arguably, it is more difficult to be a scientist today than ever before. Faculty positions are few and far between. A mere 20% of federal grant proposals are actually funded, and the biomedical professors lucky enough to score an R01, the granddaddy of NIH grants, are usually not awarded their first until the ripe old age of 42. In short, there are too many PhDs and not enough jobs and money to support them.
As if that weren't bad enough, now scientists must also avoid being killed -- metaphorically speaking. The power of the Internet has fanned the flames of ideology, allowing activists to unite in an unprecedented assault upon our nation's scientists in the form of harassment, intimidation, and character assassination.
The strategy involves the Freedom of Information Act, a law meant to facilitate government transparency, which can easily be abused by troublemakers intent on digging through a scientist's emails in the hope of finding an embarrassing comment or a statement that can be taken out of context. The tactic was famously deployed by climate change skeptics in an attempt to discredit Michael Mann, and it has been perfected by anti-GMO activists bent on destroying an entire generation of biotechnology scientists. According to an editorial in the latest issue of Nature Biotechnology, 40 scientists have been targeted by "the activist organization US Right to Know (USRTK), bankrolled largely by a $47,500 donation from the Organic Consumers Association." (The Genetic Literacy Project claims the figure is at least $114,500.)
Kevin Folta, a plant scientist at the University of Florida, has become Public Enemy #1 for the crazed and implacable anti-GMO movement. An outspoken defender of GMOs, he has become the face of the biotech resistance. For his bravery, he has paid a dear price: His reputation has been dragged through the mud, despite the fact that he has never done anything unethical, let alone illegal. The aforementioned editorial summarized the situation eloquently:
The tragedy is the harassment that [Folta] and his family have experienced in recent weeks will cause many potential researcher/communicators to duck back under the parapet.
This is how demagogues and anti-science zealots succeed: they extract a high cost for free speech; they coerce the informed into silence; they create hostile environments that threaten vibrant rare species with extinction.
So, what can be done about it? The editorial suggests more funding for science communication and that the major journal publishers engage in more public outreach. These are a good start, but they do not go far enough. The solution will require at least three major changes.
First, FOIA must be amended. Unless there is evidence to suggest that a scientist has engaged in wrongdoing, there is no reason to allow a group of activists to blatantly violate his or her privacy by plowing through archives of email. (Alison Van Eenennaam, one of the targeted scientists, wrote on Science 2.0 that "there is something deeply intrusive about a third party requesting years' worth of email correspondence.")
Second, politicians from both sides of the aisle should publicly defend the integrity of the scientific community. Nothing can change the tone of public dialogue -- and bring much-needed attention to a pressing issue -- like a well-placed comment from the President of the United States.
Third, media outlets like Mother Jones, which lead with ideology rather than reason, ought to be publicly shamed by science journalists and the scientific community for giving a prominent voice to pseudoscientific malcontents and their anti-corporate conspiracy theories.
As a society, we must choose to protect our scientists from harassment. They deserve a peaceful atmosphere in which they can safely do their important work. If we do not, we risk not only further damaging an already fragile and demoralized scientific community but also America's standing as the world's leading innovator.
Source: "Standing up for science." Nat Biotech 33: 1009. Published online: 08-Oct-2015. doi: 10.1038/nbt.3384