Ethan Siegel is one of our very favorite authors here at RCS. In 2013 we named him our top science blogger. His first book, Beyond the Galaxy, covers a broad range of material right in his wheelhouse: astronomy, astrophysics, and cosmology. You can grab a copy at the world's largest bookstore.
The premise is to teach students in an introductory college course the state of the universe as we've come to understand it through the history of astronomy and physics. Basically, you're taking an Intro Astronomy for Non-Science Majors course with Professor Siegel. While the book is well suited -- if read cover to cover -- to that task, I suggest reading it in a slightly different manner.
Siegel's writing here is as entertaining and well-pitched for an enthusiastic layman as ever. The text brims with entertaining historical anecdotes and intuitive explanations; it illuminates the scientific process instead of dryly stating facts.
What Beyond the Galaxy really reads like, however, is a huge collection of articles. Fun, educational, lucid blog posts.
Accordingly, I think the real fun lies in reading it in bits and pieces. Each chapter is divided into a series of sections, most of which stand alone as self-contained stories of a particular discovery, idea, or person. This granularity allows the book to meander into many entertaining corners of the historical progression of science and to dive into explanations of competing cosmological theories, rather than cutting back these details to streamline the whole story. From my perspective, that characteristic is not a flaw. A trimmed, focused storyline written more like a novel's plot would undercut Siegel's strength for digging into informative details and making them entertaining. It would make the text a lot less fun and probably less educational.
One further aspect of this book stands out among popular science volumes. Fans of the abundant imagery in Siegel's blogging will enjoy the many pictures found throughout the text. Open it to nearly any page and you'll find at least one high-quality color image with a full, well-written explanatory caption. These alone are a wealth of information even without the main text.
So, I highly recommend keeping this book to read in bits and pieces at leisure or as a reference to learn particular concepts in astronomy as needed.
Siegel, Ethan. Beyond the Galaxy: How humanity looked beyond our Milky Way and discovered the entire Universe. Singapore: World Scientific, 2015. Print.
Superlatives are far overused when it comes to cancer. Though "breakthroughs", "miracles", "cures", and "marvels" are regularly reported in the popular press, more and more people continue to die of the disease. An estimated 600,000 will succumb this year.
But don't be disheartened. Through the superb efforts of dedicated researchers and hard-working medical professionals, we are slowly but surely winning the "war on cancer". Early detection, preventative measures, and improved treatments have reduced the cancer mortality rate from a peak of 215 deaths per 100,000 people in 1991 to 172 deaths per 100,000 people in 2010, no "revolutionary miracles" required.
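Those figures are worth pausing on. A quick back-of-the-envelope calculation, using only the rates quoted above, shows the scale of the improvement:

```python
# Back-of-the-envelope: relative decline in the U.S. cancer mortality
# rate, using the figures quoted above (deaths per 100,000 people).
peak_1991 = 215  # peak rate, 1991
rate_2010 = 172  # rate in 2010

relative_decline = (peak_1991 - rate_2010) / peak_1991
print(f"Mortality rate fell by {relative_decline:.0%}")  # -> 20%
```

A roughly 20 percent drop in two decades -- achieved through steady, incremental progress rather than any single miracle.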
Looking to lower the rate even further, many cancer researchers are calling for a widespread effort to analyze genuine cancer miracles. Yes, they do exist, but in a decidedly less hyped fashion. In a significant portion of cancer drug trials, there are patients who exhibit incredible responses to the treatments they receive. While the average effect of a certain drug might be middling overall, these patients will take the drug and experience miraculous results -- their cancers might even disappear entirely for a time. Such rare survivors are called "exceptional responders."
Over the past decades, these exceptional responders have been mostly ignored, cast aside as amazing, yet irreproducible, anecdotes. Intriguing oddities to be published in case reports, perhaps, but not studied empirically. That may soon change. Last year, the National Cancer Institute (NCI) announced an ambitious plan to transform these cases from anecdotes to evidence, calling for any and all exceptional responses to be reported and rigorously investigated.
"Tissue samples will be obtained and molecularly profiled via whole-exome, transcriptome, and deeper targeted sequencing. All clinical and genomic data will eventually be available to interested investigators through a controlled-access database," Alissa Poh reported in the journal Cancer Discovery.
When researchers have taken steps like this in the past, they've gleaned some remarkable insights. A couple of particularly glowing examples are tied to the drug everolimus. During clinical trials, two different patients saw their cancers almost entirely disappear for 14 and 18 months before returning. Subsequent analysis turned up mutations in their tumors that rendered their cancers uniquely susceptible to the drug. With that information, cancer researchers can design clinical trials that specifically target patients whose tumors have those mutations.
Examining exceptional responders particularly excites Vivek Subbiah and Ishwaria Mohan Subbiah, a husband and wife duo at The University of Texas MD Anderson Cancer Center in Houston.
"Scientists and physicians are detectives at heart," they wrote last year in the journal Future Oncology. "The in-depth analysis of these n-of-1 outlier responders calls for an approach worthy of Sherlock Holmes, where 'the grand thing [is to be able] to reason backward' with the hope of unraveling unique insights into the disease that may help the current patient and future patients with the same disease or same aberration."
They offered a suggestion to make this happen.
"There has to be a real-time, open access online registry that stores the data relating to all of these ‘miracle’ patients and all of the data that has been deposited so that all of this investigative work is accessible and useful."
The Subbiahs' recommendation has just been echoed in an editorial published in Science Translational Medicine. Harvard Medical School's Eric D. Perakslis and Isaac S. Kohane call for establishing an Exceptional Responder Network, complete with a network of clinical sites that provide free testing for verified exceptional responders, a massive online registry, and a policy of open data sharing.
If this approach is widely adopted, researchers may be able to manufacture a bounty of "breakthrough" cancer treatments truly worthy of superlatives.
(Image: AP Photo)
The idea that our Universe is filled with dark matter has been around for nearly a century. When astronomers noticed that orbital speeds towards the edges of spiral galaxies remain the same or even increase slightly, rather than decrease, they surmised that either there must be some huge unseen mass driving the rotation, or that the laws of gravity given by Einstein's General Relativity need to be changed. They opted for the first.
Over that time, cosmologists have accumulated boatloads of evidence in favor of the notion that this invisible, "dark" matter -- which neither interacts with nor emits light -- comprises roughly 84% of the mass of the Universe. So compelling is this story that millions and millions of dollars have been spent on ingenious experiments to actually detect the stuff, but thus far, the particles have remained elusive.
It is partly because of dark matter's inherent ability to not be found that, in 1983, Israeli physicist Mordehai Milgrom proposed an upstart theory to challenge its dominance. Modified Newtonian dynamics, or MOND for short, dares to go where physicists of the past dared not: it slightly tweaks the laws of gravity. While the changes are subtle, kicking in only at very low accelerations, the ramifications are massive; the account of gravity culminating in Einstein's General Relativity has remained essentially unscathed for over a century.
And yet MOND matches its audacity with surprising veracity. It successfully accounts for galaxy rotation curves just as well, and in some cases, even a little bit better than dark matter. Moreover, no evidence has come to light that conclusively disproves MOND. That's quite an accomplishment, as the annals of physics are littered with the corpses of theories that challenged General Relativity and failed.
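For readers who want the tweak itself, MOND's core idea can be stated compactly. In the deep-MOND regime, where the Newtonian acceleration $a_N$ falls below a characteristic scale $a_0 \approx 1.2 \times 10^{-10}\ \mathrm{m/s^2}$, the effective acceleration becomes (this is the standard statement of the theory, not a formula drawn from the articles discussed here):

```latex
a \simeq \sqrt{a_N\, a_0}, \qquad a_N = \frac{GM}{r^2} \ll a_0 .
```

Setting this equal to the centripetal acceleration $v^2/r$ of a circular orbit gives $v^4 = G M a_0$: the orbital speed stops depending on radius, which is exactly the flat rotation curve astronomers observe at galactic outskirts.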
"The idea is sound," cosmologist Ethan Siegel writes in his recent book Beyond the Galaxy. "Surely hypothesizing that 80-85% of the matter in the Universe is of some hitherto undiscovered type... represents a greater leap than making a tweak to our theory of gravity. After all, tweaking our theory of gravity to explain Mercury's orbital motion was what led to General Relativity in the first place!"
But, as Siegel notes, full-fledged cosmological theories built from MOND cannot fully account for many findings arising from the theory of dark matter.
"Gravitational lensing, the cosmic web of structure, and cosmic microwave background observations all go unexplained in all the modified gravity theories put forth so far."
"A compelling physical basis for MOND is still lacking. But then, it took Newton twenty years to realize there was a good geometric reason for the inverse square law, and centuries to develop our modern understanding of gravity. These things only seem crystal clear with the benefit of hindsight. So it no doubt shall be with MOND, whatever the underlying physics."
As Sabine Hossenfelder reported last year, one potential way to test MOND could soon become available. According to one modified gravity theory, an offshoot of MOND, a black hole's shadow should be ten times larger than what General Relativity predicts. The Event Horizon Telescope aims to image a black hole and its shadow for the first time in 2017.
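For context, General Relativity's benchmark is sharply defined. For a non-rotating (Schwarzschild) black hole of mass $M$, the shadow's radius is (a standard GR result, included here for scale rather than taken from Hossenfelder's report):

```latex
r_{\mathrm{shadow}} = 3\sqrt{3}\,\frac{GM}{c^2} \approx 2.6\, r_s,
\qquad r_s = \frac{2GM}{c^2},
```

where $r_s$ is the Schwarzschild radius. A shadow ten times that size would be hard to miss in Event Horizon Telescope images.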
Siegel succinctly sums up MOND's current scientific standing in his book.
"MOND remains an attractive avenue of investigation, as it is still more successful at predicting the rotation curves of individual galaxies, overall, than the theory of dark matter is. But its failure to meet the criteria of reproducing the successes of the already-established leading theory means that it has not yet risen to the status of scientifically viable."
When it comes to MOND, astronomer Stacy McGaugh of Case Western Reserve University is a strict adherent to empiricism, but he also has a flair for the philosophical.
"Is our universe an unfamiliar darkness filled with invisible mass, with the 'normal' matter of which we are composed no more than a bit of queer flotsam in a vast sea of dark matter and dark energy? Or is our inference of these dark components just a hint of our ignorance of some deeper theory?"
"Ultimately, what we want is irrelevant. Science is not a consensus endeavor: the data rule."
Primary Source: Siegel, Ethan. Beyond the Galaxy: How Humanity Looked Beyond Our Milky Way and Discovered the Entire Universe. Singapore: World Scientific, 2015. Print.
As medicine has advanced over the centuries, diseases have come and gone, but not always because they've been eradicated. Many times, widely diagnosed maladies -- some of them supposedly debilitating or deadly -- turned out not to exist when new technologies allowed a closer look. Other times, diseases simply vanished when rigorous skepticism was dutifully applied.
Here are five historical diseases that were eliminated by scientific scrutiny.
Suppressed Gout. Throughout Charles Darwin's adult life, he was plagued by sometimes debilitating health issues. Turning to a variety of doctors for help, he received a menagerie of diagnoses, one of which was "suppressed gout." Gout, of course, is a genuine condition, characterized by severe pain, redness, and swelling in joints, often the joint at the base of the big toe. Suppressed gout, however, was completely fabricated. In the 19th century, many doctors believed gout was caused by an accumulation of toxic substances, and some further blamed those substances for a host of other discomforting symptoms. Suppressed gout thus became a diagnosis of convenience. Not everyone was fooled; as one exasperated contemporary complained:
"What the devil is this 'suppressed Gout' upon which doctors fasten every ill they cannot name? If it is suppressed how do they know it is gout? If it is apparent, why the devil do they call it suppressed? I hate the use of cant terms to cloak ignorance."
Railway Spine. Train travel was common in the 19th century, as, unfortunately, were train collisions. Rickety rails, coupled with shoddy construction of passenger cars, rendered rail travel a decidedly more hazardous form of transportation than it is today. This situation resulted in a number of injuries, but it also brought forth opportunists hunting for an easy profit. Doctors all over the world found themselves listening to patients claiming they had been injured in crashes, yet showing no signs of actual ailment. The term "railway spine" was coined for these cases. Railroad companies adamantly denied the malady's existence, yet were forced to pay thousands of dollars to supposed sufferers. Railway spine created quite a controversy in certain circles of the medical community. Now defunct, the disorder could very well have existed; today the symptoms might be classified as whiplash or PTSD.
Status Lymphaticus. In the early 1900s, status lymphaticus reportedly killed thousands of children and was even regarded as "the most important problem in medicine." Today, most doctors have never heard of it, and that's for a good reason: it never existed.
The supposedly deadly disease was linked to a tiny gland nestled near the heart and lungs: the thymus. Now known as a key part of the immune system, the gland was not always held in such high regard. As the cause of status lymphaticus, the thymus was thought to occasionally grow out of control, pressing upon the heart and the lungs until the victim suffocated from the inside. Closer, skeptical scrutiny eventually disproved the condition in 1931.
Ovariomania. Commonly called "Old Maid's Insanity," ovariomania was a condition usually diagnosed in women at the early stages of menopause, though sometimes even earlier. Some doctors believed that tumors in the ovaries prompted bouts of insanity. As influential Scottish psychiatrist Sir John Batty Tuke described:
"Women who for years have been carrying tumours, when they arrive at the change of life develop aberration of intellect, and not unfrequently the character of their illusion is marked by sexuality and erotomania; they think that they are pregnant, or that they are visited at night by men."
These tumors never actually existed, but that didn't stop 19th century surgeons from performing as many as 150,000 oophorectomies -- surgical removal of the ovaries -- on women.
Intestinal Autointoxication. There's poop inside you. Right now. But though disgusting when outside the body, feces are fairly benign inside. Tucked away within the colon awaiting excretion, there's little harm that poop can do. Physicians of the past weren't so sure, however.
Dating back to ancient Egypt, medical "professionals" once entertained the notion that putrefaction of feces inside the body causes disease. The toxins produced supposedly shortened lifespan and sparked a host of maladies. While this hypothesis was firmly debunked decades ago, the premise still fuels a cornerstone of the natural health industry: the colon "cleanse."
(Image: Maull and Polyblank)
Occupied by re-tweeting an emoji in response to a selfie posted on Instagram, a distracted driver rams your car off the road. You awake in the hospital that night and meet the doctor in charge of repairing your fractured legs and piecing together your damaged spine. Whose eyes would you prefer to look up into from your hospital bed?
Doc A: received top-of-the-class A's in biology, chemistry, physics, biochemistry, organic chemistry, medical imaging, electronic devices, and neuroscience. He overcame coursework pressure and scored higher on science-based entrance exams than 98% of other medical school applicants.
Doc B: wrote well-marked sociology papers in Comparative Perspectives on U.S. & European Societies: Inequality; Institutional Underpinnings of the Arts & Media; Sexual Cultures; and Virtual Communities/Social Media. He aced the portions of the entrance exam focusing on social studies and psychology to gain admission to medical school.
In a column written for Scientific American, Nathaniel P. Morris explains why he dislikes the pre-med undergraduate track and wants to tone down the tough science courses in favor of more social studies education. He points out that competition is hard and stressful, basic science is difficult, and who uses that stuff anyway?
Plus, lots of students who want to help other people don't do well enough in science classes and aren't accepted to medical school.
I disagree with his view. The Hunger Games atmosphere for pre-med students is a good thing.
That's not just because I choose Doc A, every time, over Doc B. (So would you -- be honest.) I've also taught pre-meds and watched the pre-med system at work.
Pre-med coursework is much like coursework for any other difficult and technical professional occupation. These students take a broad range of tough classes in which they must excel. They are often graded against their peers and ranked. This can raise the competition to a frenzy and cause some pre-meds to break down and fail. It's heartbreaking to see a student overcome with anxiety and fear for their future after receiving a single low assignment score. It's not necessarily a pretty system.
But if preparing a ten-line calculation for Chemistry I Lab at home is overwhelming for a student, how would they fare in a ten-hour surgery in the operating room? Is that a good environment for anyone but a tremendously driven person who has learned to cope with extreme pressure and make crucial analytical decisions? Those skills are what a pre-med major is really about.
While the level of stress is very high for pre-meds, they aren't going through anything that most other science majors don't face too. There is one major difference between those science majors and pre-med majors. In return for the increased breadth of study and higher competition, pre-meds are relieved of the very hardest parts of a science major: the most advanced courses.
A pre-med major largely consists of a broad survey of introductory level science courses: biology, chemistry, physics. It then includes one or two mid-level chemistry courses including organic chemistry. It often includes calculus as well as biochemistry.
That's well short of the requirements for a full degree in chemistry or biology, much less physics or math. Some pre-meds moan about the difficulty of their physics and math coursework. Yet, they are taking only the classes that are considered the barest low-level introduction to these areas for most scientists. Pre-meds face stiff competition, but they don't face unduly difficult courses.
My observation -- in the lab courses I've taught -- is that the students who will be accepted to medical school buckle down and earn A's on their basic science coursework. They learn to cope with whatever is thrown at them and hone a mentality to rise to the difficulty of their work. The students who aren't the most successful at working hard, under pressure, on problems that are fundamentally scientific in nature don't go on to be doctors.
That's tough love. But reality is tough love. Shouldn't we teach that in college?
In the early 1900s, a strange disease was killing thousands of infants. Termed "status lymphaticus", it was blamed on the thymus, a small, grayish-pink gland weighing no more than 37 grams nestled between the sternum and the pericardium, the heart's reinforced sac. At the time, doctors could only hypothesize on the thymus' function, but many knew that in some cases, this little organ grew out of control, pressing upon the heart and the lungs until the young victim suffocated from the inside.
Yet, in 1931, status lymphaticus was conclusively shown not to exist. Highlighting new research and boldly admitting the medical community's collective "ignorance of the anatomy of the normal healthy human body", the editors of the respected medical journal The Lancet declared "The End of Status Lymphaticus." The article consigned thousands of infant deaths to uncomfortable mystery and served as yet another defeat in anatomists' attempts to ascertain the function of the thymus.
Dating back to antiquity, the little gland had defied explanation. Despite excising and examining the thymus from a variety of animals, physicians had collectively arrived at just two facts by the turn of the 20th century: It seemed to be involved in the immune system, and it dwindled in size as subjects grew to adulthood. The latter finding was particularly perplexing. Why, as the body grows, would an organ so dramatically shrink?
It wasn't until the 1960s that the thymus' function was finally revealed. Australian immunologist Jacques Miller removed the thymus from newborn mice and found that they had drastically reduced populations of white blood cells called lymphocytes compared to normal controls. The mice also had reduced lymphoid tissues, impaired immune responses, and suffered from higher rates of infection. Six years later, Miller teamed up with Graham Mitchell to discover that the thymus produces lymphocytes called T-cells, which aid in the production of antibodies.
To this day, however, it remains unknown why the thymus shrinks as we age. Active and growing during infancy and childhood, the thymus starts to atrophy during puberty and, over time, much of it converts to fatty tissue. In fact, by the age of fifty, only about fifteen percent of the thymus remains. By age seventy-five, the average human thymus weighs just six grams and is yellow in hue, a sickly-looking shadow of its former self.
The tendency for the thymus to shrink in vertebrates is known as thymic involution. As the process plays out, the immune system weakens, and rates of cancer, infection, and autoimmune disorders increase. Thymic involution is not directly caused by aging, so anatomists are at a loss to explain why the gland decays. A leading hypothesis suggests that thymic involution is an evolutionary trade-off. The immune system is physiologically expensive, so as complex organisms mature, immunity gets deprioritized.
No doubt much remains to be learned about this most mysterious organ.
Source: Liu, D & Ellis, H. "The Mystery of the Thymus." Clin Anat. 2016 Apr 2. doi: 10.1002/ca.22724. [Epub ahead of print]
We label these questions "hilariously stupid," but in truth, a question sparked by scientific curiosity is never stupid. A couple gems contained within today's crop of queries especially demonstrate that though a question may seem daffy at first glance, further consideration can reveal a hidden brilliance.
Do math majors in college graduate with a degree or with a radian?
My kid asked for a Pb and Jam sandwich in his lunch tomorrow. How much lead is appropriate for a 10 year old boy to consume?
If I flip a coin 1,000,000 times, what are the odds of me wasting my time?
Is Black Lives Matter similar to dark matter?
If parallel universes exist, is there a parallel universe in which parallel universes don't exist? (One for the philosophers out there...)
How did humans evolve to fit so perfectly into clothing?
If particles do not exist unless observed, why can't I close my eyes and walk through walls?
How many milligrams are in a telegram?
The Solar System has five Dwarf planets, but why doesn't it have any Elf or Orc planets?
Why can't I weigh the earth by putting a scale upside-down? (Actually, you can! Sort of.)
Around 13.8 billion years ago, the Universe began with a bang. In less than a second, the four fundamental forces -- electromagnetism, gravitation, the weak nuclear interaction, and the strong nuclear interaction -- which initially were joined as a single, even more fundamental force, separated. Suddenly, the Universe started to expand at an exponential rate: cosmic inflation had begun. A little later, but still within this initial second, tiny particles called hadrons formed, and neutrinos ceased to interact with other particles. For the next ten seconds, particles called leptons dominated. Their short-lived rule gave way to photons, which governed for approximately 380,000 years. Newly formed atoms of hydrogen and helium took over next, but it wasn't until around 559 million years later that stars began to shine by fusing hydrogen into helium.
The Big Bang is the best theory we have to explain the birth and existence of the Universe. As astrophysicist Ethan Siegel wrote in his recent book Beyond the Galaxy:
"To this very day, there is no other model that is both consistent with General Relativity and explains the Hubble expansion of the Universe, the abundances of the light elements and the existence and properties of the cosmic microwave background; the Big Bang is the only one."
But while satisfying and substantially supported by the weight of scientific evidence, the defining theory of cosmology is not perfect. There remain three key problems.
The first is the Horizon Problem. If we look far out into space, billions of light years away, we see photons with the same temperature -- roughly 2.725 kelvin. If we look in another direction, we find the same thing. What a coincidence! In fact, when astronomers look in all directions, no matter how distant, they find that all regions have the same temperature. This is incredibly puzzling, Siegel says, "since these regions are separated by distances that are greater than any signal, even light, could have traveled in the time since the Universe was born." The Big Bang offers no explanation for this fascinating quirk.
Yet another quirk unexplained by the Big Bang is the Flatness Problem. Almost all the evidence collected by cosmologists indicates that the Universe is flat. Like a sheet of paper on a desk, spacetime shows almost no curvature whatsoever. Within the context of the Big Bang, this seems extremely unlikely.
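"Flat" here has a precise meaning. In the Friedmann equations, the total density parameter $\Omega$ is tied to the spatial curvature $k$ by (standard cosmology, not a quotation from Siegel's book):

```latex
\Omega - 1 = \frac{k c^2}{a^2 H^2}
```

Observations pin $\Omega$ at 1 to within about a percent, implying $k \approx 0$. The puzzle is that any early deviation from $\Omega = 1$ grows as the Universe expands, so the initial conditions had to be tuned extraordinarily close to perfect flatness.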
Lastly, we arrive at the Monopole Problem. The immense energies produced by the Big Bang should have created a magnetic particle that breaks the mold. All magnets have two poles, a north and a south. Even when a magnet is snapped in half the two poles remain. But this particle would effectively be a magnet with only one pole: a magnetic monopole!
In 2014, researchers created a synthetic magnetic monopole in the laboratory, but physicists have yet to find one of these particles in nature. Blas Cabrera, a researcher at Stanford and the current leader of the Cryogenic Dark Matter Search experiment, detected a candidate monopole back in 1982, but no other experiments have replicated his results.
So what do we do with these three puzzling problems? Do we simply ignore them?
"It is tempting to look past these three problems as not problems at all, but rather as simply the conditions that the Universe started off with..." Siegel writes. "The Big Bang has enough successes that it is easy enough to sweep these problems under the rug and not be bothered by them."
"But that would be a terrible, non-scientific attitude to take... As soon as we convince ourselves that something is a question that science cannot answer, it becomes a self-fulfilling prophecy."
Scientists do not turn away from problems because they are difficult. To the contrary, scientists tackle problems precisely because they are difficult! To the discoverer goes the notoriety, and to the world goes something far more important: knowledge.
Primary Source: Siegel, Ethan. Beyond the Galaxy: How Humanity Looked Beyond Our Milky Way and Discovered the Entire Universe. Singapore: World Scientific, 2015. Print.
(Images: AP Photo/Elise Amendola, NASA / WMAP Science Team, Sbyrnes321)
The pharmaceutical industry undoubtedly ranks near the top of the world's most vilified businesses. Earning a combined $1 trillion in revenue, companies like Pfizer, Bristol-Myers Squibb, Roche, Novartis, and countless others are easy targets of populist outrage.
Some of it, of course, is warranted. Various companies have been found guilty of fraud under the False Claims Act. Most notably, in 2012, GlaxoSmithKline was ordered to pay $3 billion for failing to report safety data, bribing doctors, and promoting medicines for unlicensed uses. According to numerous reports, these unsavory acts are widespread throughout the industry.
If Edward Robinson Squibb, the pioneering medical inventor who founded and lent his name to pharmaceutical giant Bristol-Myers Squibb, were aware of this unscrupulous situation, he would not be impressed. Squibb died more than a century ago, but he should serve as a role model for the pharmaceutical industry today.
As a young student at Philadelphia's Jefferson Medical College in the 1840s, Squibb became enraptured with the seemingly magical skills of early surgeons, particularly those of his professor: Dr. Thomas Dent Mütter. Squibb watched in awe as Mütter repaired all manner of maladies, despite being handcuffed by inconsistent means of inducing anesthesia. Many times, Mütter would be forced to perform complicated operations on patients who were wide awake, making each surgery a delicate dance between subject and physician. Squibb resolved to remove that handicap.
"His vision was to provide doctors and surgeons with stable, constant chemicals for their work, and thereby make ether surgeries safer, more popular, and even more widely accepted," Cristin O'Keefe Aptowicz wrote in her book Dr. Mütter's Marvels.
Squibb labored for long hours in isolation both to concoct a perfect ether compound to induce anesthesia and to create a consistent way to manufacture it. In 1854, after years of work, he finally succeeded, crafting a still that used steam to produce a uniform ether gas. The discovery was guaranteed to be a goldmine, but Squibb gave it all away for free.
"Instead of rushing to patent either the process or the still -- both of which were conceived, created, tested, and perfected by Squibb alone -- he gave them to the world for free, publishing an article on the apparatus, including a detailed diagram of the design, in the American Journal of Pharmacy," Aptowicz wrote.
Four years later, Squibb founded the company that would become Bristol-Myers Squibb. But though he was now able to profit from his efforts, Squibb did not lose his altruistic mentality.
"Through his company and through his personal work, Squibb would become an advocate for transparency between patient and health-care provider and between doctor and medicine supplier," Aptowicz recounted. "He was instrumental in launching the movement that produced... the first of a series of consumer protection laws that, among other things, required drugs to be labeled with their active ingredients..."
While many today would undoubtedly agree that the pharmaceutical industry has strayed from Squibb's salubrious ideals, the reality isn't so cut and dried. The modern pharmaceutical industry owes much of its maligned reputation to system and circumstance. Game-changing advances that occurred regularly decades ago are today not so readily attained. Breakthrough treatments of the past have essentially eliminated a number of medical conditions, leaving fewer, and more difficult, health problems to tackle. Yet the economic pressures from outside investors remain as strong as ever. This leads pharmaceutical executives to inflate prices and disguise slightly tweaked drugs as innovative new products.
Moreover, pharmaceuticals are a high-risk industry. New drugs cost billions of dollars to research and produce, with no guarantee that they will survive the FDA's rigorous review process. However, critics reasonably counter that average pharmaceutical company profits remain extremely high.
We cannot expect the modern pharmaceutical industry to completely emulate the altruism of its long-dead forefather, but one would hope that the executives behind Big Pharma could learn a thing or two from Squibb's honorable example.
(Image: Rept0n1x / Wikimedia Commons)
Scientists have discovered a diet that ensures you will live cancer free for the rest of your life. It's being hailed as a "miracle," a "marvel," a "breakthrough," and even a "quantum leap in nutrition."
Past studies have indicated that pretty much everything in life causes cancer. To circumvent this unpleasant truth, scientists had to think outside the box. The diet they created is nothing short of revolutionary.
There's no cooking, no grocery shopping, and no annoying delivery drivers. In fact, there's no food or water whatsoever! You don't consume anything!
Nine subjects took part in the study, which was not published in a peer-reviewed journal. All of the participants abstained from eating or drinking for the duration of the study. When examining the results, the researchers were utterly amazed to find that none of the subjects showed any signs of cancer.
"The 'no-nutrient' diet was not associated with any form of cancer," the researchers reported. "Moreover, the three children that took part in the study showed no signs of autism."
Vindicated by the new research is popular health blogger, Vani Hari, also known as "The Food Babe." For years, Hari has urged her followers to avoid all toxins and chemicals (the 'no-nutrient' diet has none), an effort that has provoked harsh scrutiny from the scientific community. That independent scientists have now proven her correct beyond a shadow of a doubt is a delightful piece of irony.
The researchers don't plan to perform a follow-up study, but they do intend to lobby Congress to revise the recently-released 2015-2020 U.S. Dietary Guidelines to include the 'no-nutrient' diet.
"We expect the Guidelines to be updated very quickly in light of our incontrovertible findings," they said in a press release.
The researchers admitted one minor side effect of their diet: All of the subjects passed away after five days.
Author's Note: This article is satire and all of its content is completely fabricated. The author does not actually recommend depriving yourself of food and water. Nor does he seek to trivialize the plight of cancer patients or denigrate those diagnosed with autism. The article is intended to highlight -- in an utterly absurd fashion -- the often ridiculous nature of nutrition science and how it is reported in the popular press.
(Image: Salimfadhley/Wikimedia Commons)
5:30 A.M., Monday July 16th, 1945: The day dawned brighter than ever before over the New Mexico desert. But it was not the Sun's soothing rays that set the landscape alight; it was the radiant flash of the very first atomic bomb.
Trinity, the nuclear offspring of the Manhattan Project, detonated with the force of 21,000 tons of TNT. The accompanying fireball reached temperatures of 8,430 kelvins, hotter than the surface of the Sun, and sent a mushroom cloud of smoke and debris soaring more than seven miles into the sky.
That day, every human on the planet was reborn into a nuclear era, one where mankind now held the power to end its existence. Also born that day was an otherworldly, greenish glass, a physical reminder of the cataclysmic explosion. Scientists dubbed the strange material trinitite.
The ghostly glass littered the ground for hundreds of meters around the blast site, though it might be more accurate to say that it "transformed" the ground. The sand, which blanketed the desert the day before, had been replaced by this new material. Walking on it was like setting foot on the surface of an alien world.
Trinity's atomic blast catalyzed the transformation. Amidst destructive turbulence and searing heat, sand was thrown up into the fireball, where it melted, reformed, and rained down upon the ground. Scientists discovered proof for this storm strewn all over in the form of trinitite beads -- molten drops that solidified before they hit the ground. Years later, these pebbles are still surfacing as ants excavate them from their tunnels.
Despite its distinctly eerie appearance, trinitite really isn't that much different from sand. The glass is composed of silicon dioxide, better known as quartz, the second-most abundant mineral in Earth's continental crust. Closer inspection, however, reveals a material tainted with trace amounts of forty different elements, many of them radioactive.
In fact, to this very day, trinitite remains radioactive, buzzing with activity from isotopes of cobalt, barium, europium, uranium, and plutonium. It's safe to handle, but one would be ill advised to make jewelry out of it.
Much of the trinitite created on that fateful July day more than seventy years ago has now been bulldozed and buried, but rare specimens do reside in the hands of collectors. Rarer still is red trinitite, which gets its color from the presence of copper. When scientists examined samples of red trinitite under a microscope, they found metallic, round blobs within the glass. These "chondrules" were melted pieces of iron and lead from the bomb itself, mementos encased in an atomic glass.
Primary Source & Images: Eby, N., Hermes, R., Charnley, N. and Smoliga, J. A. (2010), Trinitite—the atomic rock. Geology Today, 26: 180–185. doi: 10.1111/j.1365-2451.2010.00767.x
(Top Image: Shaddack)
Most everyone has a pretty good idea of what an atomic explosion looks like. Through images and video, we know the flash, the fireball, the mushroom cloud. Seeing it all in person is quite different, however.
One of the few firsthand accounts immortalized to paper comes courtesy of the inimitable Richard Feynman, who was present for the very first detonation of a nuclear weapon. The test, codenamed "Trinity," was carried out on July 16, 1945 in the Jornada del Muerto desert of New Mexico. The 20-kiloton blast was the culmination of years of work by the scientists of the Manhattan Project. One of those scientists, the 27-year-old Feynman, sought to view his handiwork with his own eyes:
They gave out dark glasses that you could watch it with. Dark glasses! Twenty miles away, you couldn't see a damn thing through dark glasses. So I figured the only thing that could really hurt your eyes (bright light can never hurt your eyes) is ultraviolet light. I got behind a truck windshield, because the ultraviolet can't go through glass, so that would be safe, and so I could see the damn thing.
Time comes, and this tremendous flash out there is so bright that I duck, and I see this purple splotch on the floor of the truck. I said, "That's not it. That's an after-image." So I look back up, and I see this white light changing into yellow and then into orange. Clouds form and disappear again--from the compression and expansion of the shock wave.
Finally, a big ball of orange, the center that was so bright, becomes a ball of orange that starts to rise and billow a little bit and get a little black around the edges, and then you see it's a big ball of smoke with flashes on the inside of the fire going out, the heat.
All this took about one minute. It was a series from bright to dark, and I had seen it. I am about the only guy who actually looked at the damn thing--the first Trinity test. Everybody else had dark glasses, and the people at six miles couldn't see it because they were all told to lie on the floor. I'm probably the only guy who saw it with the human eye.
Actually, Feynman wasn't the only person who chose not to don their safety glasses that day. Ralph Carlisle Smith, the future assistant director of Los Alamos Scientific Laboratory, also observed the explosion with the naked eye. Here's what he saw:
"I was staring straight ahead with my open left eye covered by a welders glass and my right eye remaining open and uncovered. Suddenly, my right eye was blinded by a light which appeared instantaneously all about without any build up of intensity. My left eye could see the ball of fire start up like a tremendous bubble or nob-like mushroom. I Dropped the glass from my left eye almost immediately and watched the light climb upward. The light intensity fell rapidly hence did not blind my left eye but it was still amazingly bright. It turned yellow, then red, and then beautiful purple. At first it had a translucent character but shortly turned to a tinted or colored white smoke appearance. The ball of fire seemed to rise in something of toadstool effect. Later the column proceeded as a cylinder of white smoke; it seemed to move ponderously. A hole was punched through the clouds but two fog rings appeared well above the white smoke column."
There are other accounts, of course, from those who did not actually see an atomic explosion, but felt its effects infinitely more than either Feynman or Smith. Over 100,000 people lost their lives when atomic bombs were dropped on Hiroshima and Nagasaki. Here are a few of their stories.
(Image: Jack Aeby)
Religion is declining in America.
This is actually something fairly new. For decades, religion has been on the wane in developed countries worldwide, with statistical models going so far as to predict its eventual extinction in nine countries: Australia, Austria, Canada, the Czech Republic, Finland, Ireland, the Netherlands, New Zealand and Switzerland. America was pretty much the sole country bucking the trend to nonbelief. No longer.
In 1998, 62 percent of Americans said they were “moderately” or “very” religious. In 2014, that number dropped to 54 percent. According to a recent study, irreligion is particularly pronounced amongst younger Americans.
"Nearly a third of Millennials were secular not merely in religious affiliation but also in belief in God, religiosity, and religious service attendance, many more than Boomers and Generation X’ers at the same age," the authors wrote. "Eight times more 18- to 29-year-olds never prayed in 2014 versus the early 1980s."
In light of the new data, it seems inevitable that as demographics change over a matter of decades, religious practitioners will become a minority group in the United States. What's driving the decline?
While a variety of factors are likely at play, I'd like to focus on what may be the most significant contributor: science.
We are perhaps the first generation of humans to truly possess a factually accurate understanding of our world and ourselves. In the past, this knowledge was only in the hands and minds of the few, but with the advent of the Internet, evidence and information have never been so widespread and accessible. Beliefs can be challenged with the click of a button. We no longer live in closed, insular environments where a single dogmatic worldview can dominate.
As scientific evidence questions the tenets of religion, so too does it provide a worldview to follow, one that's infinitely more coherent.
Sir James George Frazer, often considered one of the founding fathers of modern anthropology, wrote that -- when stripped down to the core -- religion, science, and magic are similar conceptions, providing a framework for how the world works and guiding our actions. He also noted that humanity moved through an Age of Magic before entering an Age of Religion. Is an Age of Science finally taking hold?
Bemidji State University psychology professor Nigel Barber expounds upon Frazer's thoughts even further.
"[He] proposed that scientific prediction and control of nature supplants religion as a means of controlling uncertainty in our lives. This hunch is supported by data showing that more educated countries have higher levels of non belief and there are strong correlations between atheism and intelligence."
Frazer's hunch is also supported by a recent study published in the journal Personality and Individual Differences. Querying 1,500 Dutch citizens, a team of researchers led by Dr. Olga Stavrova of the University of Cologne found that belief in scientific-technological progress was positively associated with life satisfaction. This association was significantly larger than the link between religion and life satisfaction. Moreover, using the World Values Survey, they extrapolated their findings worldwide. As Ronald Bailey reported in Reason:
Stavrova and company concluded that the "correlation between a belief in scientific–technological progress and life satisfaction was positive and significant in 69 of the 72 countries." On the other hand, the relationship between religiosity and life satisfaction was positive in only 28 countries and actually negative in 5 countries.
"Believing that science is or will prospectively grant... mastery of nature imbues individuals with the belief that they are in control of their lives," Stavrova concluded.
So not only does science dispel religious belief, it also serves as an effective substitute for it. Science will never drive religion completely extinct, but religion may be marginalized to a small minority bereft of influence.
One of science's primary aims is to seek out knowledge that will hopefully better our world and the lives of all who live on it. That's something we all can believe in.
At least 14 million people in the United States are currently diagnosed with cancer, and around half of them, at one time or another, have pursued alternative therapies for their disease. While some of these therapies can help alleviate the debilitating side effects associated with cancer, none are effective at treating the disease or curing it outright.
But facts and evidence haven't stopped snake oil salesmen from pushing their ineffective panaceas. Here are six of the strangest "cancer cures" ever sold:
1. Emu Oil. Some products that FDA regulators examine present a genuine challenge to classify as legitimate or bogus. "Pure emu oil" was not one of them. Its claim to "eliminate skin cancer in days" was particularly specious.
Harvested from the adipose tissue of the emu, a large flightless bird, emu oil may impart some medicinal benefits, but they are thus far unconfirmed.
2. Electrohomeopathy. In the 19th century, Count Cesare Mattei found a way to capitalize on the burgeoning practice of homeopathy. First, add "electro" to its name. Second, sell custom electric devices to bolster traditional homeopathic treatments. The scheme worked brilliantly. The practice, itself, did not. Despite its ineffectiveness, it is still practiced today, particularly in bastions of naturopathic medicine like India, Pakistan, and Bangladesh.
Homeopathy is bunk. Providing a spark of electricity doesn't change that.
3. The Grape Cure. Grapes make for a delicious, nutritious snack and even produce a remarkable burst of plasma when microwaved! But while the multifaceted fruit is good for a great many situations, it isn't effective at curing cancer.
Tell that to Johanna Brandt, who pioneered a grape-only diet for curing cancer. Dr. Stephen Barrett dispels her quackery.
"There is no scientific evidence that the Johanna Brandt's "Grape Cure" has any value. Even worse, her recommended diet is deficient in most essential nutrients and can cause constipation, diarrhea, cramps, and weight loss that is undesirable for cancer patients."
4. Germanic New Medicine. According to Ryke Geerd Hamer, the founder of Germanic New Medicine, severe diseases like cancer result from shocking events that trigger psychological conflict. This conflict manifests physically as disease. To cure the disease, you simply need to resolve the conflict.
Doubling down on crazy, Hamer claims that evidence-based medicine is a Jewish conspiracy designed to kill non-Jews. The German Cancer Society and the German Medical Association strongly disagree.
5. Zap Away the Parasites. For decades, Hulda Regehr Clark claimed to have "The Cure for All Cancers." The "cure" of which she spoke and wrote so glowingly was a "zapper" device that supposedly removed disease-causing parasites from the body and resulted in a 95 percent cure rate. Sales of her books and unfounded treatments earned her millions of dollars.
Of course, her cure has never been substantiated by any sort of evidence, nor could it save her. Clark died of cancer in 2009.
6. Venus Flytrap Extract. Due to its carnivorous nature and quirky looks, the Venus flytrap is one of the few plants that is actively poached, so much, in fact, that it is at risk of extinction. No doubt contributing to the plant's desirability are dubious claims that it can "eat cancer." Venus flytrap extract is sold in the form of an herbal remedy called Carnivora. Despite its fantastic name, no clinical studies have shown the supplement to be effective in the treatment of cancer.
IN 2005, one day before the comet Tempel 1 made its closest approach to the Sun, NASA scientists got a chance to embrace their inner Hulks. Like rambunctious schoolchildren giddy to cause a little mayhem, they smashed an 820-pound impactor into the comet at tremendous speed, and then -- undoubtedly with large grins plastered upon their faces -- watched what happened.
Almost instantly, a massive cloud of dust began spewing from the 72-trillion-kilogram comet. Subsequent analysis from the nearby Deep Impact probe revealed the presence of silicates, carbonates, metal sulfides, amorphous carbon, and hydrocarbons, as well as water ice, within the plume -- in short, the stuff that life is made of. When the enriched dust cloud dissipated, scientists were able to view their handiwork: a crater 328 feet wide and 98 feet deep.
In the wake of NASA's Deep Impact mission, interest in comets grew by orders of magnitude. Scientists had their first concrete evidence that the frozen hunks of water, rock, and various gases contained the building blocks of life. No longer mere objects to be charted by astronomers and ogled by sky watchers, comets now demanded an existential reverence.
TRAVEL BACK 4 billion years and you might find yourself in the middle of a storm of cataclysmic proportions. At this time, when the planets of the young Solar System weren't neatly synced into their elliptical orbits, it has been theorized that Uranus and Neptune rammed into a reservoir of icy comets, sending asteroidal and cometary debris raining down on the inner planets. During the Late Heavy Bombardment, as the event is called, the Earth was getting slammed, so much, in fact, that if life existed, the surface may have been sterilized. As many as 22,000 objects rocked our home over a period of 300 million years. However, in subsurface cracks created by the pummeling, life could have been boosted, or even seeded. Recent research suggests that impacts of comets containing organic compounds could generate peptides, the building blocks of proteins. The Solar System's most cataclysmic storm could very well have been a drizzle of life.
EVEN MORE ASTOUNDING, some of the comets that struck Earth could have already contained life. The chances are remote, but it is possible. According to recent research published in the journal Astrobiology, large comets with a radius of over 10 kilometers could contain liquid water at their cores. The decay of radioactive isotopes of aluminum or iron could supply the heat necessary to melt the inner ice. Katharina Bosiek, along with her colleagues Michael Hausmann and Georg Hildenbrand, suggest that a thick layer of dust could protect the core's liquid environment from solar radiation, echoing learned speculations found in prior research. Their findings make the hopeful words of Nalin Chandra Wickramasinghe, the Cardiff University astrobiologist who was one of the earliest proponents of panspermia, believable.
"Supposing comets were seeded with microbes at the time of their formation from pre-solar material, there would be plenty of time for exponential amplification and evolution within the liquid interior," he wrote in 2009.
In this view, large comets could be seen as enchanting snow globes just waiting to be smashed upon fertile ground, thus releasing the microbial life contained inside. It's not inconceivable. Some of Earth's extremophiles display surprising resilience to the inhospitable conditions of space, and they didn't even evolve there.
Skepticism is called for, however. Given the sometimes transient nature of comets and the harsh conditions of space, it's hard to imagine that life, if it ever existed inside them, could still exist today. Still, the tantalizing notion makes a mission to the Solar System's Kuiper Belt or Oort Cloud, where as many as 100,000 comets reside, that much more tempting.
Reference: Bosiek Katharina, Hausmann Michael, and Hildenbrand Georg. "Perspectives on Comets, Comet-like Asteroids, and Their Predisposition to Provide an Environment That Is Friendly to Life." Astrobiology. March 2016, ahead of print. doi:10.1089/ast.2015.1354.
It seems odd to say that scientists were ecstatic about the opportunity to shoot a critically endangered whale, but that was exactly how Katie Jackson and her colleagues at the North Atlantic Right Whale Program felt when they saw Whale 1334 on a mild February day in 2013 off the coast of Jacksonville, Florida.
The weapon of choice was a harmless one, of course. A bolt from the large crossbow would certainly harm or kill a human, but it would be little more than a pinprick to an animal the size of a school bus, and a valuable pinprick at that. A mechanism at the end of the bolt would collect a tiny piece of blubber from 1334, enough for biologists to sample and study her DNA. When Jackson's partner Tom Pitchford connected with the shot, the duo was elated.
For decades, 1334's genetic information had been prized more than any other right whale's. Over a timespan of thirty years, she had been the most productive mother of all North Atlantic right whales, giving birth nine times. Yet her comings and goings were puzzling to say the least. She did not show up in regions where the whales typically congregated, and she disappeared for years at a time. In her recent book Resurrection Science, journalist M.R. O'Connor expounded upon the mystery.
"She was first seen off the southern coast back in the early 1980s and reappeared there periodically. But unlike the others, 1334 did not show up in the Bay of Fundy [off the coast of Maine] in the summers with the rest of the right whales. No one saw her again, until she appeared in Florida three years later with a new calf. And then the same thing happened three years later... 1334 gave birth during years when biologists saw calving rates stall and even decline in the general population. In 2000, she was the only right whale to give birth to a calf."
Considering that just five hundred right whales remain in the world, 1334's mysterious, yet prolific procreating was instrumental in keeping the species alive. Could there be secrets in her DNA that might prevent their extinction?
As O'Connor described, a right whale pregnancy is a monumental task. Pregnant females must consume as many as 4 million calories a day in the form of minuscule zooplankton. The binging doesn't stop even when the calf is born after a yearlong gestation, for that's when the nursing begins, which lasts roughly another year. Due to the great expense of reproduction, female right whales are able to delay pregnancy until they've stored up enough energy in the form of blubber to afford giving birth.
Thus, when Trent University geneticist Brad White started examining 1334's DNA in spring 2014, he had a hunch that her genotype permitted her to birth calves regardless of good or poor nutrition. That hunch is still being explored.
"Nothing has jumped out yet about the DNA profile," he told RCS in an email.
Of course, 1334's success could be more attributed to her behavior than to her hard-wiring. The massive whale goes her own way, and it's entirely possible that she's stumbled upon more hospitable grounds to mate and rear a calf. Scientists still don't know exactly where she travels.
That an animal weighing well north of 100,000 pounds can disappear so easily is remarkable, especially considering that right whales were once hunted and killed like cattle.
O'Connor simultaneously laments and appreciates that the mystery of 1334 remains unsolved.
"As much as I want to know where 1334 goes, I cheered her elusiveness and hoped that the ocean is still big enough for her to escape the forces threatening her kind."
Primary Source: M. R. O'Connor. Resurrection Science: Conservation, De-Extinction and the Precarious Future of Wild Things, St. Martin's Press, 2015
Though difficult to fathom, just 1,500 years ago, English was a wisp of a language, spoken by a smattering of Germanic tribes as they migrated from mainland Europe to the island of Britain. Today, linguists whisper and wonder: will English eradicate all other languages?
To do so would be a tall task. English's 339 million native speakers are outnumbered by those who speak Spanish (427 million) and Mandarin Chinese (897 million).* What's more, English's native-speaking population has been decreasing steadily. While this situation seems to suggest that English is on the way out, globally it's actually ascending. That's because 510 million people from all over the world have elected to learn English as a second language, and more start learning every day. No other language comes close.
In science, business, and the media, English dominates. Learning the language is a cheap price of admission to join an increasingly interconnected world.
A side effect is that other languages are starting to fall by the wayside. Prominent linguist David Graddol estimates that as many as 90 percent of the world's 6,000 to 7,000 languages will go extinct this century. His learned guess is echoed by John McWhorter, a linguistics professor at Columbia University. Backing them both is evidence from a study published in 2014. Researchers modeled declines in hundreds of languages and found that, on average, a language is going extinct every two weeks. If this trend continues to play out over the next century, 2,600 languages will be gone. The researchers suggested that a burning desire to benefit from economic growth is what's causing lesser-spoken languages to go up in smoke. More and more, education and employment hinge upon being able to communicate in modern society. This means that parents are not passing on rarer, obsolete languages to their children.
Writing in the Wall Street Journal, McWhorter had this to say on the situation:
"It is easy for speakers to associate larger languages with opportunity and smaller ones with backwardness, and therefore to stop speaking smaller ones to their children. But unless the language is written, once a single generation no longer passes it on to children whose minds are maximally plastic, it is all but lost. We all know how much harder it is to learn a language well as adults."
So as esoteric tongues die, vastly fewer will remain. But will English emerge on top?
"Some may protest that it is not English but Mandarin Chinese that will eventually become the world’s language, because of the size of the Chinese population and the increasing economic might of their nation," McWhorter wrote. "But that’s unlikely. For one, English happens to have gotten there first. It is now so deeply entrenched in print, education and media that switching to anything else would entail an enormous effort... Also, the tones of Chinese are extremely difficult to learn beyond childhood, and truly mastering the writing system virtually requires having been born to it."
While Chinese may remain the most spoken language on account of the large and growing native population that speaks it, English certainly isn't going anywhere. One of the chief reasons is that it has cemented itself as the defining cosmopolitan language of our time. In a 2010 study, Gary Lupyan of the University of Pennsylvania and Rick Dale of the University of Memphis found data to suggest that as more and more non-native speakers learn a language, they inadvertently hack away at the extraneous edges. Over time, the language grows more streamlined and simple to learn. There's no question that English has evolved considerably over the years. Just compare the flowing prose of John Adams and Abraham Lincoln to the simplified rhetoric of Hillary Clinton or Donald Trump.
Of course, linguistic evolution could be completely shaken up by technological advancement. A Star Trek-style universal translator is one of the holy grails of science fiction, and companies like Google are hard at work trying to craft it. If such a device ever enters the realm of reality, it could dismantle the Tower of Babel for good.
*Sentence updated 3/21 to reflect 2016 statistics from Ethnologue.
Last week at the Newton Blog, my colleague Ross Pomeroy discussed a famous puzzle known as one of Zeno's Paradoxes. He presented the resolution of the problem of fitting infinitely many things into a finite space through the understanding of fractals: shapes that repeat the same pattern infinitely many times.
Such an approach might resonate with the original Greek mathematicians who worked on this very problem. They primarily solved physical problems through methods relying upon the geometry of shapes and lines; most high school geometry was discovered by famous Greeks such as Euclid.
Let's crack this venerable nut using the methods of a physicist.
First, a brief recapitulation of the problem: A sprinter completing the 100-meter race has finished one half of his race as he passes the 50 meter (m) mark. He's completed one half of the remaining distance at 75 m, and one half of the 25 m then remaining when he runs 12.5 m further and reaches 87.5 m. As he continues, he finishes one half of each remaining distance, but at each distance, a small amount further remains.
Continuing to divide each remaining half into another small half, we never find the final distance to finish equal to zero -- the point of the finish line. It just keeps getting smaller forever. How can the runner ever reach a line that requires him to run an infinite number of distances?
An answer is provided by the application of the calculus, as first described by the patron saint of inventing mathematics for physical problems -- and the greatest physicist of all time by count of hypothetical Nobel Prizes -- Isaac Newton.
The physical picture is simple: as the distance remaining gets smaller, so does the time the runner needs to cover it. The first 50 meters might take 5 seconds, the next 25 meters take 2.5 seconds, the next 12.5 m takes 1.25 s, the next 6.25 m takes 0.625 s, and so on. The smaller the distance, the less time it takes to cover. As distances approach infinitely small, the time it takes to run them approaches infinitely small as well.
The total time to run the 100 m is all of these smaller and smaller times, added together.
What's not clear, until we possess the powerful ideas of Newton's calculus, is whether all of those small times add up to some finite number (say 9.58 seconds for Usain Bolt, or 15.58 seconds for a more normal person) or whether they add up to infinity -- an infinitely long time to run the 100 m!
Calculus can handle this sort of problem by using a tool called a limit. Running at a steady and world-class speed of 10 m per second, we would find our times to cover each successive remaining half of the course to be 5 s + 2.5 s + 1.25 s + 0.625 s + ... and so on. You can check that this is equal to 10 s × (1/2 + 1/4 + 1/8 + 1/16 + ...). Zeno's paradox boils down to simply what the numbers in those parentheses add up to: infinity or some actual number.
Using the limit, we can subtract out the infinite number of terms, perform some mop-up algebra, and corner the resulting troublesome infinity under the bottom of a fraction where 1/infinity = 0 to kill it. Then we get our answer: 1/2 + 1/4 + 1/8 + ... = 1. (Here are more rigorous and simpler proofs if you're so inclined.) Those infinitely many fractions all add up to a small whole, giving us a finite time when we put the speed multiplier back in. This is why runners break the tape just like they expect to.
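For readers who want to see the algebra, that limit argument can be written out explicitly. This is a standard geometric-series derivation, sketched by the reviewer rather than quoted from the article:

```latex
S_n = \frac{1}{2} + \frac{1}{4} + \cdots + \frac{1}{2^n}
\quad\Longrightarrow\quad
2S_n - S_n = 1 - \frac{1}{2^n}
\quad\Longrightarrow\quad
S_n = 1 - \frac{1}{2^n}
```

As n grows without bound, the leftover term 1/2^n shrinks to zero, so the infinite sum equals exactly 1, and multiplying back by the 10 s factor gives a finite 10-second race.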
That may still sound a touch arcane, but the idea is right at the heart of much of physics. The calculus provides us with ways of analyzing the rate of change of many, many natural processes. It does so by fighting off the infinities that appear to stymie our understanding as distances and times grow smaller and smaller.
This summer, sprinters from around the world will gather in Rio to compete in the 100-meter dash. Should you choose to tune in, you'll be treated to electrifying race after electrifying race. With each crossing of the finish line, you'll also witness something seemingly impossible: a runner completing an infinite number of tasks in roughly ten seconds flat. Compared to such a monumental achievement, who cares about a gold medal? An athlete has just made infinity occur within a finite frame!
How is such a thing possible? To find out, we must first travel back to around 450 BC, when the great Greek philosopher Zeno of Elea was wowing his compatriots -- including a young Socrates -- with his keen intellect, and in particular, his playful paradoxes. In one of these paradoxes, Zeno described a race and a runner, noting that before the runner completes his goal, he must first travel half the distance. Once halfway, he must then travel halfway again, and again, and again. If this were applied to a 100-meter race, our sprinter would run 50 meters (1/2), 25 meters (1/4), 12.5 meters (1/8), 6.25 meters (1/16), and so on until he passes the finish line.
Since one can technically always travel half of some set distance, that would mean the sprinter completes an infinite number of tasks! Zeno argued that this is impossible, and thus concluded that movement must be an illusion.
Since each leg of the 100-meter dash covers exactly half the remaining distance to the finish line, it makes sense that the more legs we add up, the closer we'll get to the full 100 meters. Call the sum of the first n legs Sn. We would expect S1000, say, to be bigger than S10, and therefore closer to 100, but not quite equal to 100. Pushing this reasoning a step further, in some sense the sum of all the numbers in the sequence must be equal to 100.
Here we have found a point of contact between the finite and the infinite: the sum of infinitely many numbers adding up to something finite. In the right context it seems to make perfect sense: if you split up 100 meters into infinitely many shorter pieces, then of course the sum of the lengths of all the pieces should be equal to the total length of 100.
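We can watch those partial sums close in on 100 directly. The sketch below (the function name is my own) adds up the legs of the race one halving at a time:

```python
# Sum of the first n legs of the 100 m dash: 50 + 25 + 12.5 + ...
# Each leg is half the remaining distance, so S_n = 100 * (1 - 2**-n):
# always short of 100, but by an ever-shrinking amount.
def legs_sum(n):
    remaining = 100.0
    total = 0.0
    for _ in range(n):
        leg = remaining / 2  # run half of what's left
        total += leg
        remaining -= leg
    return total

print(legs_sum(10))    # just shy of 100 (about 99.902)
print(legs_sum(1000))  # 100 to within floating-point precision
```

S10 falls short of 100 by less than a tenth of a meter; by S1000 the shortfall is far below anything measurable, which is exactly the sense in which the infinite sum equals 100.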
Outside of fancy philosophical musings, there's a far simpler way to make something infinite fit within a finite space: make a fractal, a mathematical set that exhibits a repeating pattern at every scale, all the way down to infinity!
Perhaps the most basic example of a fractal is the Koch snowflake, an extrapolation of Swedish mathematician Helge von Koch's curve, in which a straight line is divided into three equal segments and the middle segment is replaced by two sides of an equilateral triangle of the same length as the segment being removed. This is then repeated for all of the straight lines an infinite number of times.
Zoom in on any edge of the fractal and you'll see the same jagged pattern repeating, no matter how far you magnify.
So as counterintuitive as it may sound, it is quite possible to contain an infinite number of things within a finite space!
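The Koch construction makes this concrete: each iteration replaces every segment with four segments a third as long, so the perimeter is multiplied by 4/3 each step and grows without bound, even though the whole snowflake fits inside a fixed circle (its enclosed area converges to 8/5 of the starting triangle's area). A minimal sketch, with a function name of my own choosing:

```python
from fractions import Fraction

# Perimeter of the Koch snowflake after n iterations, starting from
# an equilateral triangle with side length 1 (perimeter 3).
# Each step multiplies the perimeter by 4/3, so it diverges,
# while the curve itself never leaves a finite bounding region.
def koch_perimeter(iterations):
    return 3 * Fraction(4, 3) ** iterations

for n in (0, 1, 5, 50):
    print(n, float(koch_perimeter(n)))
```

After just 50 iterations the perimeter exceeds five million side-lengths, all packed into the same small patch of the page: an infinitely long boundary in a finite space.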
Most people spend between 30 and 47 percent of their waking hours spacing out or lost in thought. But for a small percentage of these daydreamers, their airy fantasies and idle ruminations transform into a powerful compulsion that crowds out reality.
Though not formally recognized as a psychological disorder, researchers are starting to study and characterize this potentially debilitating condition. Maladaptive daydreaming, they've dubbed it.
Eli Somer, a Professor of Clinical Psychology at the University of Haifa and the scientist who first reported on maladaptive daydreaming more than a decade ago, is not trying to pathologize everyday imaginations. Such reveries are normal and even beneficial. Somer simply wants to acknowledge and hopefully find ways to treat those daydreamers whose dreams literally dominate their days. To that end, he has spearheaded a sizable chunk of the thus far scant research on the condition, even helping to create a tool to diagnose it.
In light of his published efforts, hundreds of maladaptive daydreamers have contacted Somer volunteering to take part in research. With their help, he has just published a new study that describes the condition in empirical detail.
Somer and his colleagues, Liora Somer and Daniela Jopp, interviewed 21 maladaptive daydreamers from across the world. From these in-depth discussions, the team gleaned a number of commonalities.
First and foremost, subjects described maladaptive daydreaming as harmful, time-consuming, and isolating. All desperately sought information and support.
"I spend most of the day at home daydreaming... I feel like a ghost that misses out on life."
"Oh Gosh, nothing gets done! Homework, studying, cleaning, sometimes I would lie there, my stomach will be growling and hurting and I won't get out of bed because I'm trying to daydream. It's that bad..."
"For myself, I just want a life, not just stories about a life."
Somer also found that every single interviewee had a ritualized process to induce vivid daydreams. This process invariably involved listening to music while performing some sort of repetitive activity, such as rocking their head back and forth or pacing for hours on end.
"This set of conditions sounds similar to the focusing of attention described as an induction process for hypnosis and has been observed among indigenous communities as part of ritualized kinetic trance induction," he and his co-authors noted.
Almost every subject lamented that socializing was incompatible with daydreaming. To truly become immersed within their fantasies, they required solitude.
In this solitude, subjects said that their daydreams sprung to life in vibrant and vivid detail. Some described entering a dream-like state.
"It is visual like an actual dream with a tunnel vision on the person I am talking to and without much detail about the surrounding. I can hear the voices of the people in my daydream... When they talk to me it is not like my own voice coming back to me or voices from outside. It's theirs."
Others insisted that their daydreams seemed real.
"It is like a reality with colors, smells and tastes."
Inside their fantasies, subjects wove engrossing narratives that played out very much like parallel realities, in which they had enhanced social status and fulfilling relationships. The unfortunate irony is that these idealized visions inhibited their ability to achieve those things in real life.
Now that maladaptive daydreaming is gaining scientific legitimacy, Somer hopes that future research will explore what causes the condition. He and his co-authors suggest that irregularities with the neurotransmitter serotonin may be to blame. Latent obsessive-compulsive disorder may also factor in.
Once the causes are nailed down, and potential treatments are explored, many maladaptive daydreamers may find some relief and be able to live their lives fully awake.
Source: Eli Somer, Liora Somer & Daniela S. Jopp (2016): Parallel Lives: A Phenomenological Study of the Lived Experience of Maladaptive Daydreaming, Journal of Trauma & Dissociation