Ichthyosaurs were giant reptiles that lived in the seas while the dinosaurs ruled the land, and they gave live birth to their young. A fossil newly discovered in China, dating back almost a quarter of a billion years, shows the earliest known live birth in a reptile. The fossil captures the very moment of birth: an embryo was still inside the mother, a newborn lay just outside her, and a third baby was halfway between, in the process of exiting the pelvis. The headfirst posture of the second baby suggests to researchers that live birth in reptiles may have evolved on land, not in the water as previously thought.
Ichthyosaurs, although they lived in the same time period and could be mistaken for dinosaurs owing to their shared reptilian heritage, were not dinosaurs. (And neither were the plesiosaurs, which, unlike the more fish-like ichthyosaurs, look exactly like you’d expect water-dwelling dinosaurs to look.) Live birth is one of the things that distinguishes them.
Although not a first per se, it still amazes me that, through fossilization, we can look back at a birth in progress that started 248 million years ago.
If a person dies quickly, would he/she feel the pain before death, or would they just die without feeling anything since it was so quick?
This is an interesting question. A good example of rapid death is decapitation. For obvious reasons, one can’t do experiments on humans to figure out whether they remain conscious for any period of time after having their head chopped off. Going back to the time of the French Revolution, the heyday of the guillotine, there are anecdotes about people who apparently remained conscious and responsive to stimuli for a few seconds after the head was severed. There are also anecdotes about people who promised to give a sign after death to indicate awareness, and failed to do so. These anecdotes are impossible to verify: decapitation, or any other human death by extreme and rapid trauma, has never happened under scientifically controlled conditions. All we can say based on the anecdotes is that it’s possible some people remain conscious and feel pain for a short while after such a violent death. It’s also possible that they don’t, and that seems likely to be the case in the majority of instances.
Rats are frequently used as model animals in research, so scientists have taken some interest in the question: do rats feel pain during decapitation? Is decapitation a humane way to sacrifice animals in research? Monitoring brain activity in rats as they were killed, researchers found no brain activity normally associated with pain in rats that were awake while their heads were cut off. That suggests the rats were not in pain. Other scientists have calculated that it would take no more than 2.7 seconds for the rat brain to lose consciousness from lack of oxygen. Given the nature of the trauma, more intense brain activity would be expected, which would use even more oxygen, so unconsciousness would come even more quickly. Taken together, these data suggest that rats, at least, do not feel pain after decapitation, and that if they did, the pain would last no more than a couple of seconds.
We can’t directly extrapolate to humans, but we can speculate. It seems likely that humans, too, go unconscious more or less instantly, at least in the majority of cases. Given the anecdotes, it seems possible that some may retain some kind of awareness for an instant after trauma, but the evidence is weak, so we’ll have to call it an open question. It’s certain, however, that the brain cannot function without oxygen, and that oxygen would run out in a matter of seconds after the supply was permanently cut off, so at the most, one could hypothetically feel a few seconds of pain. But then again the rats’ brain activity didn’t suggest pain. So the best answer I can give is that most people will probably not have time to feel pain before they’re dead, and it remains an open question whether some rare exceptions may retain a few seconds of consciousness, but if so they wouldn’t necessarily be in pain for that time.
I believe that we survive death. I have read that the soul is made of matter with a higher frequency.
I have read that Yahweh created the world in six days, and on the seventh he rested. I have read that according to exact calculations of genealogies, this occurred less than six thousand years ago.
One should not believe everything one reads. This is not a question, but if it were one, it would not be well formed. Science, as far as it has been concerned with the matter of consciousness, has not found any evidence that consciousness continues past death. That is all we can say: nothing points to it. If by “the soul” you mean something that sustains our personality or consciousness, then the answer to your non-question is: no, that is not true. I can guarantee you the phrase “the soul is made of matter with a higher frequency” does not occur in any peer-reviewed scientific paper. Anywhere. Its likelihood of being true ranks up there with “the tooth fairy is made of matter with a higher frequency.”
Some of the most interesting things that happened in science and on this blog in 2013. Previous years: 2012, 2011, 2010.
Story of the year
The story of the year has to be the discovery of the elusive Higgs boson. The new particle’s discovery was announced in July 2012, but it wasn’t until March of this year that a full analysis confirmed its status as a Higgs boson. Either way, I didn’t pick a story of the year for 2012, so it can serve for both years, in lieu of any obvious competitors. (If you think there’s a more important science story for 2013, I’d love to hear it!)
You’ll notice that the headline above says the new particle is a Higgs boson, not the Higgs boson, because some models posit several Higgs bosons. The Large Hadron Collider shut down operations in February after a three-year run, but will restart in 2015 at higher energies, hopefully bringing us more useful data about the Higgs mechanism. In case you’ve lived under a rock for the past five years: the Higgs boson is important because it is the particle tied to the mechanism that gives elementary particles their mass in the Standard Model of physics. Its existence had been theorized for decades, but only in the past two years have we had experimental confirmation.
In chemistry: Martin Karplus, Michael Levitt, and Arieh Warshel, for the development of multiscale models for complex chemical systems. In practice, this means computer models that draw on quantum mechanics in the most important parts of a reaction and simplify to classical physics, which is less accurate but also less computationally expensive, in the less important parts. [Illustration from the Nobel Institute: Newton and his apple fighting, then reconciling, with Schrödinger’s cat.] Note that this research was done in the 1970s; although the Physics prize was highly topical, most Nobel prizes are awarded decades after a discovery. Indeed, the Higgs boson was first theorized in 1964.
We’re living in the future: scientists can now create mouse brains with human brain cells and measure the impact on learning. Earlier this year, a paper was published in Cell describing experiments in which scientists implanted human glial progenitor cells into the brains of newborn mice. Glial cells, roughly as numerous as neurons in the brain, support and nourish neurons. Until recently they were thought to play an exclusively supporting role, but we now know they can also influence neurotransmission. The mice implanted with human glial cells in the neonatal stage grew chimera brains: by adulthood, the human glial cells had spread through large parts of the mouse brain, retaining their human form and integrating themselves into the mouse neural network much as human glial cells do in a human brain, or mouse glial cells in a normal mouse. The partially human mice learned significantly faster than control mice and mice implanted with mouse glial cells on all behavioral tests, and the same could be seen on measures of long-term potentiation (LTP)—a molecular strengthening of the connections between neurons that is known to underlie some forms of memory and learning.
Science Behind the Factoid: Lottery Winners Are No Happier than Quadriplegics
Here’s a frequently repeated, counterintuitive factoid: people who win large sums in the lottery are no happier, over time, than people who become paralyzed in traumatic accidents. This “fact” comes from Brickman et al.’s 1978 paper, Lottery Winners and Accident Victims: Is Happiness Relative? The researchers interviewed 22 major lottery winners, 22 randomly selected controls from the same area, and 29 paraplegics and quadriplegics who had suffered their injuries in the recent past. The lottery winners had won sums ranging from $300,000 (more than a million in 2013 dollars) to $1,000,000. Here are some of the results:
The respondents rated their happiness and their enjoyment of everyday pleasures, such as hearing a good joke or receiving a compliment, on a scale from 1 to 5, where 5 was the happiest. As you can see, lottery winners were not significantly happier than controls, and they derived significantly less pleasure from everyday events. The victims were significantly less happy than the controls or the winners; however, one might have expected them to be even unhappier. After all, these were people who had suffered a life-changing, paralyzing injury less than 12 months earlier and were still engaged in extensive rehabilitation. The victims also reported slightly more enjoyment of everyday pleasures than the lottery winners. All ratings of past, present, and future happiness were made at a single moment, and so we can see that the victims idealized the past, which they rated as significantly happier than the controls or lottery winners rated the present. All groups reported similar expected future happiness levels.
The second study showed that the results were not due to preexistent differences between people who buy lottery tickets and those who don’t.
The results are surprising, but they aren’t particularly strong. The sample size is small, and the results have not been replicated since. It’s a long way from results such as these to Oprah self-help slogans like “major events, happy or not, lose their impact on happiness levels in less than three months.”
Does money increase happiness? Some real-world studies have attempted to look at this. One study interviewed average Joes and Forbes 500 multi-millionaires. The wealthy were happier than the average Joes, but only modestly so. A larger study of poorer people was undertaken as part of the Seattle and Denver Income Maintenance Experiments. Part of a large-scale experiment involving 4,800 families and negative income tax—a way of ensuring a minimum income regardless of work status—it provided fertile ground for investigating the question: does having a stable income increase happiness?
A three-to-five year study tackled the question. Household heads who received extra monetary support and controls who didn’t were queried for symptoms of psychological distress. The results were surprising: for most groups, a stable income did not have any impact on psychological distress. In some groups, psychological distress was increased.
In 2006, Jonathan Gardner and Andrew J. Oswald decided to take another look at the lottery winner findings. They had the benefit of longitudinal data. Instead of asking subjects to rate their past, present and future happiness, they used data from the British Household Panel Survey. This survey already asks participants to rate their happiness every year. Gardner and Oswald looked at participants who had won medium-sized lottery prizes, from £1000 to £120,000. The 137 winners—a small sample, but much larger than the 1978 sample of 22 winners—went on to improve a small, but significant amount on a scale of general happiness.
Does increasing everyone’s income increase happiness? Gross domestic product per capita has been steadily increasing for the last three decades. But are we happier? No, we’re exactly as happy or unhappy as we’ve always been.
What does this mean for happiness? Clearly, money isn’t everything; equally clearly, money is something. It’s easy to come up with a folk-psychology explanation for Brickman et al.’s findings. Let’s give it a go: in the moment, a major accident is a huge negative factor in your life, and winning a million dollars is a huge positive one. But as time passes, the effect fades. Lottery winners grow accustomed to their new wealth and no longer derive significant happiness from it; on the contrary, compared to the euphoric winning moment, everyday pleasures become duller. Quadriplegics, on the other hand, grow accustomed to their injury, and in contrast to it, the joy of everyday pleasures becomes greater. In time (say, three months, that sounds good in a soundbite), major life events don’t really affect your happiness level.
But as the data above shows, that’s an oversimplification. When it comes to our understanding of happiness, we may not have come much further than Socrates or Seneca. We lack historical data from antiquity, but it’s easy to imagine Socrates being a happier man than the man who won a million dollars in 1978.
The New Yorker has a long article about the development of a new insomnia drug. It starts with the discovery of a novel neurotransmitter—found independently by two different groups of researchers in the 1990s, and named by some hypocretin (because it is produced in the hypothalamus) and by others orexin (meaning appetite-stimulating, because of its observed effects). Orexin/hypocretin was first thought to regulate appetite, but attention soon turned to sleep. Orexin receptors are present in only a few thousand nerve cells in the hypothalamus, a tiny number compared to the billions of neurons across the brain. But these neurons have connections all over the brain, and they appear to act as an “awake switch.”
Orexin comes along and tells the brain, “hey, be awake, don’t fall asleep.” Soon after the discovery of orexin’s effects on rats’ appetite, it was found that rats lacking orexin receptors (the “keyholes”) acted much like human narcoleptics: they had disturbed sleep patterns and tended to fall asleep suddenly, or collapse in a heap with their muscles inert, at intervals during the day. Human narcoleptics, in turn, were found to lack orexin itself (the “keys”). This discovery opened the possibility of new medications. Orexin agonists (which activate the receptors) could become new treatments for narcolepsy or daytime sleepiness, and orexin antagonists (which block the receptors) could become new sleep aids. The search for the next blockbuster drug was on.
The article gives a fascinating look into the evolution of pharmaceutical research. Here is a description of how scientists came up with the zolpidem molecule, the active component in the popular sleep aid Ambien:
[Jean-Pierre Kaplan] and Pascal George—a younger colleague whom Kaplan described as “sympathetic and brilliant”—started by building wooden models, including ones for Valium, Halcion, and zopiclone. Colored one-inch spheres, representing atoms, were connected by thin rods, creating models the size of a shoebox. This was a more empirical, architectural approach than is typical in a lot of pharmaceutical chemistry. Kaplan and George tried to identify what these molecules had in common, structurally, that allowed them to affect the brain in the same way. Kaplan told me that their thinking wasn’t wildly creative, but it was agile: “You know, at that time it was maybe clever, because you have no computer. Now it’s routine work.”
Then a couple of decades later, pharmaceutical giant Merck is trying to find a drug to block orexin in order to help patients sleep:
Merck has a library of three million compounds—a collection of plausible chemical starting points, many of them the by-products of past drug developments. I saw a copy of this library, kept in a room with a heavy door. Rectangular plastic plates, five inches long and three inches wide, were indented with hundreds of miniature test tubes, or wells, in a grid. Each well contained a splash of chemical, and each plate had fifteen hundred and thirty-six wells. There were twenty-four hundred plates; stacked on shelves, they occupied no more space than a filing cabinet.
In 2003, Merck conducted a computerized, robotized examination of almost every compound in the library. At this stage, the scientists were working not with Renger’s animals but with a cellular soup derived from human cells and modified to act as a surrogate of the brain. Plate by plate, each of the three million chemicals in the library was introduced into this soup, along with an agent that would cause the mixture to glow a little if orexin receptors were activated. Finally, orexin was added, and a camera recorded the result. Renger and his colleagues, hoping to find a chemical that sabotaged the orexin system, were looking for the absence of a glow.
But drug development isn’t just science; politics and marketing also enter into it: everything from the color of the pills (“reds are culturally not acceptable in some places”) to the packaging (“the U.S. prefers everything in a thirty-count bottle”) to the dose. The final hurdle is approval by the Food and Drug Administration—America’s final arbiter on which drugs may be marketed and sold, and for which diseases—and other regulatory bodies in other parts of the world. It’s good that such hurdles exist, because otherwise dangerous drugs—such as thalidomide, which caused severe birth defects—would enter the market much more frequently. However, there is a question of balance: at what point do potential downsides outweigh the benefits? The FDA has taken a more conservative line: dosages should be as small as possible.
This poses a problem for Merck. Their orexin antagonist, suvorexant, their potential superstar new sleep medication, is effective (by objective measures) at a dose of 10 milligrams. However, at this dose, patients don’t experience any subjective improvement in sleep quality. At higher doses, both objective and subjective measurements agree that the drug is effective. But the FDA argues that higher doses carry a higher risk of side effects, and recommends the lowest dose, the dose that doesn’t make patients feel any better even though they’re getting better sleep by objective measures. This leads to an absurd situation in which the FDA is arguing for the drug’s effectiveness (at the lowest dose) while the drug company is arguing for its ineffectiveness. If the FDA will only approve the lowest dose, this poses a problem for marketers:
How successfully can a pharmaceutical giant—through advertising and sales visits to doctors’ offices—sell a drug at a dose that has been repeatedly described as ineffective by the scientists who developed it?
Regardless of marketing and backroom tactics and FDA meetings, the research into orexin continues on. And that’s the really interesting part from a scientific perspective. Just a little more than a decade ago we discovered a completely new piece of the brain puzzle. We still don’t understand sleep. That’s the big thing. More basic research—the kind of research that just tries to figure out how things work, without regard for practical applications such as drug development—is needed. We don’t know why we need to sleep, we don’t know the exact significance of the different sleep phases. We do know that sleep is vitally important, that a specific cycle of brain states throughout the night is needed to perform well the next day. But why must we sleep at all? Why is resting awake not good enough? We have some ideas—memory consolidation, ridding the brain of certain toxins that can build up during wakefulness—but we’re not sure.
Sleep remains a mystery. Orexin is likely to play some part in the solution. And that’s exciting, whether you have sleep troubles or not.
It isn’t always the case, but this time you should actually go to Wikipedia for a good explanation. Read about the development of the barometer and the theory of air pressure behind it. It’s rather fascinating that people were able to figure all this out 400 years ago. And how did they do it? By questioning a long-held assumption: that air is weightless. That’s a fine example of the scientific method.
Fundamentals of Computer Science (Some Math Ahead!)
I hate it when people choose nicknames that don’t work in conversation. A Photo of Dorian Grey, alternatively “50% physics, 50% mountains” writes:
Actually the first general-purpose digital computer was built by Tommy Flowers in 1943.
That computer was the Colossus, and the Colossus computer was not Turing complete. Therefore, it wasn’t the first general-purpose digital computer.
During the 1930s, the decade before the first modern computers were built, many mathematicians were working on the fundamentals of computation. Just what was computable? What problems could and could not be solved algorithmically, and how could this property be formalized? Several men solved this problem independently, in different ways which turned out to be equivalent.
Alan Turing was one of those men. He imagined a theoretical machine, later referred to simply by his name—the Turing Machine—which could compute anything that is theoretically computable. A Turing Machine consists of an infinite tape of cells, one after the other, filled with symbols. A read/write head moves along the tape, scanning the contents of one cell at a time and deciding, based on a small table of instructions, whether to change the symbol and whether to move forward or backward. A specific Turing Machine cannot be programmed; it can only solve the single problem for which its instruction table was designed. But Turing further imagined a Universal Turing Machine (UTM), which could simulate any other Turing Machine by reading not only the input but also the instruction table from the tape. In effect, this is how modern computers with programs stored in memory work.
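The machine just described is simple enough to sketch in a few lines of code. Here is a minimal Turing machine simulator in Python (my own illustration; the instruction-table format is an arbitrary choice), running a two-rule machine that appends a 1 to a unary number:

```python
def run_turing_machine(table, tape, state="start", halt="halt"):
    """Run a Turing machine until it halts; return the non-blank tape."""
    cells = dict(enumerate(tape))  # sparse tape; unvisited cells read as blank "_"
    head = 0
    while state != halt:
        symbol = cells.get(head, "_")
        # The instruction table maps (state, symbol) -> (new symbol, move, new state).
        new_symbol, move, state = table[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Unary increment: scan right past the 1s, write a 1 on the first blank, halt.
increment = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}

print(run_turing_machine(increment, "111"))  # -> 1111 (three becomes four)
```

Note that `increment` is a specific Turing machine: its table is fixed, so it solves exactly one problem. A universal machine would instead read a table like this one off the tape itself.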
The UTM is one way to formalize the concept of what can and cannot be computed. It’s a simple, elegant theoretical construct, and very clever people have managed to devise ingenious UTMs that use a tiny set of symbols and rules to compute anything that can be computed. A computer is said to be Turing complete if and only if it can compute exactly the same things a UTM can compute (ignoring the fact that non-theoretical computers have finite memory). If it cannot, it is not a general-purpose computer in the modern sense, since there are algorithmic problems it cannot solve. The Colossus was very useful for its purpose, but it was not Turing complete.
Around the same time, Alonzo Church attacked the problem of defining computability from a slightly different angle. He devised a formal calculus called the lambda calculus, which is also extraordinarily elegant and simple. It is built on anonymous functions and substitution. Here is how Wikipedia defines it:
A variable, x, is itself a valid lambda term
If t is a lambda term, and x is a variable, then (λx. t) is a lambda term (called a lambda abstraction)
If s and t are lambda terms, then (t s) is a lambda term (called a function application)
In effect, the lambda calculus is defined by anonymous functions that each take one named variable and can later be applied to one lambda term. Only the syntax above and a couple of rules about how to perform function application are needed. From these humble beginnings, some clever bootstrapping lets us construct the Y combinator, which allows us to use recursion in a calculus that has no native support for it. Numbers can be encoded using Church encoding. If you’re willing to do the theoretical groundwork, you can compute anything that is computable using the lambda calculus. As it turns out, Church’s lambda calculus and Turing’s UTM are equivalent; they define the exact same class of functions. And the Church-Turing thesis posits that anything that is computable is computable by both of these formal systems.
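To make this concrete, here is a sketch of Church numerals and the recursion trick in Python, whose single-argument lambdas map directly onto lambda terms (my own illustration; the Z combinator below is the standard variant of the Y combinator that works under Python’s eager evaluation, where the Y combinator proper would loop forever):

```python
# Church numerals: the number n is "apply f to x, n times".
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    # Decode a Church numeral by counting how many times f is applied.
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))  # -> 5

# The Z combinator builds recursion out of nothing but anonymous functions.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))  # -> 120
```

Everything above the decoder is pure lambda calculus in Python clothing: no names are strictly necessary, only abstraction and application.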
Now, these formal systems aren’t meant to be practical. When they were invented, they were intended only to define computability, not to provide a practical means of achieving it in a machine. Yet the Turing Machine formed the basis of the von Neumann architecture, which underlies almost all modern computers. And many modern programming languages—called functional programming languages, because they are based around functions in the mathematical sense—are based on the lambda calculus. The most basic form sketched above is called the untyped lambda calculus, because the variables have no type; any function can accept any argument. But typed variants of the lambda calculus are very important in type theory, a field at the intersection of computer science and pure mathematics that investigates ways to reduce errors in computer programs by more rigorously defining what kinds of operations can be performed on which values. For instance, it makes no sense to perform a “search for the letter ‘t’” operation on an integer, but it makes perfect sense to do so on a text string like “this is a string!” These are some of the errors typed programming languages can help catch before the program runs, during development.
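As a toy illustration of the kind of error a type system catches (the function and its name are mine, purely for this example), Python’s optional type hints let a static checker such as mypy reject the bad call during development, before the program ever runs:

```python
def find_letter(text: str, letter: str) -> int:
    """Return the index of the first occurrence of letter in text, or -1."""
    return text.find(letter)

print(find_letter("this is a string!", "t"))  # -> 0
# find_letter(42, "t")
# A static checker flags the commented line: argument 1 has type "int",
# but "str" was expected. Without the check, it would only fail at runtime.
```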
One of the programming languages strongly based on a typed lambda calculus is Haskell, named for Haskell Curry, another computer science pioneer. He is known, among other things, for the so-called Curry-Howard isomorphism, which states a formal equivalence between mathematical proofs and computer programs. This equivalence is useful in several ways. If one has a particularly gnarly mathematical problem, such as the four-color theorem, one can write a computer program that is equivalent to a proof of the theorem. Or, if one has a particularly gnarly computing problem, one can write a mathematical proof that an algorithm is correct which doubles as an executable program performing that algorithm.
If you’d like to learn more about the fascinating world of fundamental computer science, I highly recommend the book Structure and Interpretation of Computer Programs. It is a programming manual, but also the best introduction to computer science ever written. It’s practical, it’s theoretical, it will tax your brain, and if you read it and complete the exercises, you will become a zen master of computing. (I cheated: I read the book but skipped some exercises. Don’t be like me. Be a real man/woman, do the work, learn the good stuff.)
What are most scientists’ view on paranormal phenomena?
The simple answer is that most of them don’t believe in it. Paranormal research rarely intersects with mainstream science. Most so-called researchers into the paranormal have none of the rigor needed to perform real science, and their “experiments” usually have methodological flaws that can easily be spotted by a bright middle schooler. On occasion, actual scientists attempt to explore alleged paranormal phenomena, and sometimes there’s even a semblance of rigor to their investigations. Such is the case I’ll tell you about today.
Back in 2011, professor of psychology Daryl J. Bem turned a lot of heads when he published rigorous-looking experimental data that appeared to demonstrate a form of extrasensory perception (ESP): precognition and premonition, the ability of future events to influence an individual’s thoughts and feelings in the present. Clearly, such informational time travel would go against everything we know about physics. But Bem is no run-of-the-mill crackpot: he is a widely cited and influential psychologist, best known for his self-perception theory of attitude formation, which states that we form our attitudes by observing our behaviors, rather than the other way around. Although counterintuitive, the idea has found support in many studies. For instance, while we know that happy people smile and angry people frown, it has also been shown that people get happier by smiling and angrier by frowning. Bem’s smart move in his ESP research was to follow the established standards of psychological science. He also encouraged others to attempt to replicate his findings, correctly reasoning that replication is at the heart of science.
Bem’s experiments were rather clever. He simply took established psychological effects and time-reversed them. For instance, it is known that mere exposure to a word or concept can “prime” a person to more readily think of, or even take a liking to, that concept at a later time. If you read a list of words that includes the word table, and later do a word-completion task in which you are asked to make a word beginning with the letters tab, you are much more likely to go for table than if you had not been primed. This effect persists even after you have consciously forgotten the priming, or if you were never aware of it in the first place; the same mechanism is the premise behind subliminal advertising. Two of Bem’s experiments applied priming after the fact and appeared to show a “retroactive priming” effect. The setup resembled a typical priming experiment: subjects were asked to judge whether each in a series of pictures was pleasant or unpleasant. Usually, people respond faster when the word flashed before an emotionally charged picture is emotionally congruent with it than when the word carries the opposite emotional charge (e.g., a positive picture and a negative word). Bem observed that this effect persisted even when the word was flashed after the picture.
In total, Bem ran nine different ESP experiments, each with 100 or more participants, all time-reversed variants of known psychological phenomena. Eight of the nine appeared to show statistically significant evidence of ESP. Bem also appeared to find a link between stimulus seeking (a personality characteristic associated with extraversion) and ESP abilities: more stimulus-seeking individuals (as indicated by their answers to one or two questions) seemed to exhibit a stronger ESP effect.
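For a rough sense of why this drew attention, here is a back-of-the-envelope calculation (my own, not from Bem’s paper): if no effect existed and each experiment independently had the conventional 5% false-positive rate, eight or more significant results out of nine would be vanishingly improbable. (The independence assumption, of course, is exactly what critics of selective reporting would go on to question.)

```python
from math import comb

alpha = 0.05  # conventional per-experiment false-positive rate
n = 9

# P(at least 8 of 9 experiments come out "significant" purely by chance)
p = sum(comb(n, k) * alpha**k * (1 - alpha) ** (n - k) for k in (8, 9))
print(p)  # on the order of 3e-10
```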
Enter the scientific process. As Bem concedes in his paper, extraordinary claims require extraordinary evidence, and one batch of experiments isn’t enough to overturn a very stable theory of how the world works. If ESP exists, what we think we know about physics goes out the window. Because Bem, unlike most researchers into the paranormal, did not believe the paranormal was above the normal process of science, and because he has a good scientific track record, other researchers took his claims seriously and set out to replicate or discredit them by repeating his experimental procedure.
Most attention was given to Bem’s eighth and ninth experiments: according to Bem they were among the easiest to replicate, they had some of the largest effect sizes of all the experiments, and they provide the least amount of wiggle room. Either they show definite effects, or they don’t. If performed correctly, there is little room for observer bias, and there are also few points of contention (unlike some of the other experiments, which rely on participants’ subjective responses, where null results could conceivably be due to idiosyncrasies of the participants).
The eighth and ninth experiments investigated retroactive facilitation of recall. Participants were briefly shown 24 test words and 24 control words, unaware of which category each word fell into. They were then given as much time as they wanted to freely recall as many of the words as possible. Finally, they were given the 24 test words to practice. The results appeared to show that memorizing words after the fact could affect recall in the present: the words the participants would later practice were more readily recalled than the control words, despite the fact that the participants didn’t know which words they would later practice. The usual effect of practice, naturally, is that words you have previously attempted to memorize are more readily recalled than unfamiliar words.
Bem’s original paper was called Feeling the Future: Experimental Evidence for Anomalous Retroactive Influences on Cognition and Affect. It was published in the Journal of Personality and Social Psychology. In 2012, the same journal published a large study attempting to replicate Bem’s retroactive recall experiments, called Correcting the Past: Failures to Replicate Psi. As the title suggests, the study attempted to replicate the two retroactive recall experiments and found strong evidence for the null hypothesis (i.e., no ESP). It’s important to note that this is not the same as merely failing to find evidence for ESP. Bem’s hypothesis predicted certain effects, which Bem’s results appeared to support. However, a larger-scale attempt to replicate the findings did not yield those results. The failure to produce the expected results is not simply a lack of evidence for Bem’s hypothesis; it is strong evidence against it. Insofar as Bem’s hypothesis is falsifiable, it has been falsified. Retroactive facilitation of recall doesn’t exist. The authors ran seven different experiments with a sum total of 3,289 participants and also analyzed Bem’s own data and other independent attempts to replicate the eighth and ninth experiments. In total, more than 6,000 subjects participated in the analyzed experiments, and the effect was not replicated. Only Bem’s own data show evidence for ESP.
Now, that was a good-faith attempt to replicate Bem’s results. Others responded more negatively, analyzing Bem’s procedures and data and suggesting that he had deliberately withheld negative results, or stopped experiments at the point where his desired results appeared. One example was the finding that effect sizes were inversely proportional to the number of participants in the studies, certainly a red flag. Others went so far as to suggest that, to the extent that Bem’s experiments followed established procedure in psychological research, there’s something wrong with established procedure.
Now, you may think: perhaps the eighth and ninth experiments didn’t pan out, but what about the other experiments? But it doesn’t bode well for Bem’s hypothesis when his most rigorous experiments, the ones that most unambiguously showed results, cannot be replicated.
Nevertheless, Bem should be applauded for attempting to bring rigor and proper methodology to a field of study that is usually mocked—rightfully so—as pseudoscientific at best. Although some fellow scientists were immediately dismissive of Bem’s results, he was rigorous and forthcoming enough to warrant attempts at replication by other scientists. If their experiments prove him wrong, well, that’s science for you. At least it isn’t, as the theoretical physicist Wolfgang Pauli supposedly was fond of saying, so bad it’s not even wrong.
Answer: A new paper in the journal Astrobiology estimates the time various exoplanets—and Earth—will spend in the “habitable zone,” the band of distances from their star where liquid water can exist. In the case of Earth, they find a lower bound of 1.75 billion years into the future and an upper bound of 3.25 billion—quite a while, that is. At that point, Earth will have become unlivably hot, and the oceans will have boiled off. It’s important to note that this is the end for all life on Earth—complex life like humans will have died off long before. But then again, anatomically modern humans have only existed for about 200,000 years. 1.75 billion years is almost 9,000 times longer than humanity has existed.
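A quick back-of-the-envelope check on that comparison, using the figures above:

```python
# Ratio of the remaining habitable time (lower bound from the paper)
# to the span of anatomically modern humans.
habitable_years_remaining = 1.75e9  # years, lower bound
human_history_years = 200_000       # years of anatomically modern humans
ratio = habitable_years_remaining / human_history_years
print(round(ratio))  # 8750, i.e. almost 9,000 times longer
```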
Earth might become uninhabitable before that due to a runaway greenhouse effect. The current greenhouse effect is part of what keeps the planet habitable right now—without greenhouse gases trapping outgoing heat, the surface would radiate it back into space and be far too cold for life as we know it. Current anthropogenic climate change is increasing the greenhouse effect, but not to runaway levels.
Venus, it is believed, once had oceans just like Earth. But early in its history it underwent a runaway greenhouse. Water vapor is actually one of the most efficient greenhouse gases, but on Earth only a small portion of the upper atmosphere consists of water. On Venus, a large portion of the atmosphere at all heights was water. As more water evaporated, the greenhouse effect increased, trapping more heat, which caused more water to boil off, in a positive feedback loop that continued until all the water was gone. And unlike on Earth, the protective layer of other gases above the water vapor was much thinner. As a result, more water could be destroyed by photodissociation—broken down by light into hydrogen and oxygen. The hydrogen escaped into space while the oxygen reacted with materials on the surface, oxidizing them. Thus the oceans of Venus disappeared.
On Earth, this is unlikely to happen until the Sun becomes much hotter, billions of years from now. There are two possible scenarios for an out-of-control greenhouse effect: the runaway greenhouse (in which all the water disappears) and the moist greenhouse (which is not self-reinforcing, but stabilizes at a much hotter level than today). Could burning fossil fuels release enough carbon dioxide to create a runaway or moist greenhouse on Earth far sooner than the projected billion-year timeline?
A recent paper by Colin Goldblatt and Andrew J. Watson considers this question. They calculate that the runaway greenhouse is very unlikely. As pressure increases, the boiling point of water also increases. (This also works the other way: at the top of Mount Everest, where the pressure is low, water boils at around 70 C or 158 F.) As more water evaporates, atmospheric pressure increases, raising the boiling point of water. This means that the critical temperature at which all the water boils off the surface is not the familiar boiling point of water, 373 K or 100 degrees Celsius, but rather 647 K, which is 374 C or 705 F. (Needless to say, things would get rather difficult for humans long before surface temperatures reached 647 K. Neither the pressure nor the temperature would be survivable.) On Venus, this was less of an issue because the atmospheric pressure was lower to begin with.
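The temperature figures above can be verified with the standard unit conversions:

```python
# Standard conversions: Celsius = Kelvin - 273.15, Fahrenheit = C * 9/5 + 32.
def k_to_c(kelvin):
    return kelvin - 273.15

def c_to_f(celsius):
    return celsius * 9 / 5 + 32

critical_c = k_to_c(647)         # ~373.9 C, rounds to 374 C
critical_f = c_to_f(critical_c)  # ~705 F
everest_f = c_to_f(70)           # 158 F
print(round(critical_c), round(critical_f), round(everest_f))
```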
A moist greenhouse might be more likely, but Goldblatt and Watson estimate that it would take about 10,000 parts per million (ppm) of CO2 (assuming carbon dioxide is the only noncondensable greenhouse gas) to induce a moist greenhouse effect. Currently, we’re at about 395 ppm. In other words, it would take roughly 25 times the current level of carbon dioxide in the atmosphere to induce a moist greenhouse. Such levels are unlikely to be reached even if we burned all the fossil fuel in the world, although there is some uncertainty built into the calculation.
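The “25 times” figure follows directly from the two numbers in the text:

```python
# Ratio of the estimated moist-greenhouse CO2 threshold to current levels.
moist_threshold_ppm = 10_000  # Goldblatt & Watson's estimate
current_ppm = 395             # approximate level at time of writing
ratio = moist_threshold_ppm / current_ppm
print(round(ratio, 1))  # ~25.3, i.e. roughly 25 times current CO2 levels
```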
Interestingly, there is an area of Earth that displays the characteristics of a runaway greenhouse effect. This area is the Pacific warm pool, northeast of Australia. Seasonally, these waters get very hot—upwards of 29 degrees Celsius on the surface. This leads to lots of evaporation, and large volumes of water vapor in the upper reaches of the atmosphere. However, this is only a relatively small region of the planet, and through mechanisms not yet fully understood, the excess heat is distributed throughout the rest of the atmosphere, leaving the water in a state of equilibrium: it rarely exceeds 30.5 degrees Celsius, and thus doesn’t start a runaway greenhouse effect.
Despite the unlikelihood of a runaway or moist greenhouse, the more modest projected increase in average temperatures of 1.5 to 4 degrees Celsius, which we will likely reach within the next hundred years given current greenhouse emission rates, will still be hugely destructive to life on Earth. It won’t wipe us out, but it will destroy fragile ecosystems and make life unbearable for humans in many corners of the globe. Nature doesn’t let us off the hook. We shouldn’t aim merely to avoid the worst doomsday scenarios. That goal is far too modest, unworthy of a species as technologically and morally advanced as humanity. We should want something more for our children, grandchildren and future generations.
Extraversion and Introversion: What You Believe Is Probably Wrong
For some reason, during the month of August introversion exploded on the internet, reaching meme status. I don’t know why, but I figured I’d chime in with some scientifically backed data on this interesting personality category.
But first. How do you spell it? In psychological research, extravert is preferred, while in popular writing extrovert is commonly used. Either is correct, although perhaps appropriate for different target audiences. Luckily, both researchers and laymen call the opposite end introverts (intraverts are not a thing).
So is extraversion a thing? Yes, very much so. The extraversion-introversion axis is one of the most robust dimensions in personality research. Scientists in the field more or less unanimously agree that whatever categories best describe personality, the major component of one of them is the degree of extraversion-introversion (extraversion for short, although this refers to the whole spectrum, not just the one end of it).
Is it a Western invention? Nope, it has been found in a sample spanning 40 different countries across the world, ranging from very individualistic to very collectivist cultures.
Exactly what is it? This one’s trickier. The simplest folk idea is that extraversion is simply the degree to which a person is sociable: extraverted people are outgoing and engage in a lot of social interaction. According to the research, this is an oversimplification. A slightly more sophisticated folk-psychology idea is that extraversion is a question of whether a person draws “mental energy” from social interaction or from alone time. On this view, introverts may enjoy social interaction but need to be alone to recharge, while extraverts may enjoy alone time but need to spend time with others in order to recharge. Surprisingly, there is little scientific evidence for this idea either.
Scientists don’t agree exactly on what constitutes the core of extraversion. Here is a list of facets that have been included in models of extraversion (different authors will include different subsets of these):
These six facets are venturesomeness (excitement seeking and desire for change), affiliation (feelings of warmth and gregariousness), positive affectivity (feelings of joy and enthusiasm), energy (feeling lively and active), ascendance (feeling dominant or being an exhibitionist), and ambition (valuing achievement and endurance).
However, there seems to be broad agreement that there is some underlying principle that explains all or most of these and ties them together. Extremely robust evidence links positive affect and extraversion, both between subjects and within subjects (more on this later). Extraverts are happier than introverts.
Wait, what? Yes. This is undeniable. All the research agrees: positive emotions and extraversion are very strongly correlated. Extraverts are happier than introverts. This has led some to propose that the unifying theme underlying all or most of the extraversion facets is positive affect. In the aforementioned study spanning more than 6,000 subjects in 40 countries, the authors contrasted two hypotheses for explaining the link between extraversion and positive feelings. One says that sociability is the core trait of extraversion, and that the correlation between happiness and extraversion is indirect: either social interaction is highly pleasurable, and extraverts spend more time interacting, thus becoming happier; or extraverts and introverts spend equal time alone and together with others, but people on the extraverted end of the spectrum enjoy the together time more.
The other hypothesis is their novel reward-sensitivity model. According to this model, extraverted people are simply more sensitive to and prone to seek out rewarding stimuli. It just so happens that social interaction is especially rewarding for humans, and empirical studies show that both introverts and extraverts tend to report more positive affect in social situations; extraverts are also apparently happier than introverts even when alone. According to this hypothesis, extraverts should be more likely to seek out rewarding stimuli both in social and nonsocial situations, and their sociable behavior is simply an instance of a more general pattern of reward seeking. According to their statistical analysis, the other facets of extraversion correlated much more strongly with positive affect than with sociability. The statistical evidence suggests that the unifying phenomenon that underlies the complex set of behaviors and traits that make up extraversion is reward sensitivity.
How would this be expressed in the brain? Recent research has found connections between differences in the dopaminergic system and differences in extraversion. Dopamine is a neurotransmitter implicated in reward, motivation and the reinforcement of behavior. For example, one study published this year measured brain activity in people who scored very low or very high on a measure of extraversion, corresponding to extreme introverts and extreme extraverts. Subjects were given placebo or different doses of sulpiride, a drug that interacts with dopamine in a dose-dependent manner. At low doses, it is a partial agonist, meaning it slightly increases dopamine activity; at higher doses, it is an antagonist, meaning it decreases activity. At the higher dosages, introverts’ brain scans shifted in the direction of extraverts’ baselines and beyond, while extraverts shifted in the opposite direction. The authors suggest that a difference in the density of dopamine receptors might best explain their results. Basically, the hypothesis goes, introverts have fewer of certain kinds of dopamine receptors and more of a different kind, resulting in overall lower dopamine activity. Since dopamine is implicated in reward and motivation, this would lead them to be less reward seeking and to experience fewer rewarding, positive feelings in general.
Another 2013 study looked at this behaviorally. It administered methylphenidate (Ritalin), a dopamine reuptake inhibitor, to subjects in what could best be described as a recreational manner. The subjects were conditioned to associate the lab environment with reward by taking feel-good drugs while in the lab environment. But tests showed that this association between reward and the specific lab environment was only acquired by extraverts, not by introverts, nor by introverts/extraverts who had been conditioned to associate a different lab environment with the reward stimulus. This is further evidence that extraverts are more reward sensitive than introverts.
But wait: if I’m introverted, does this mean I’m genetically doomed to unhappiness, or that I will never learn to associate social contexts with pleasure? Here’s the twist. All of the above rather robust evidence could be turned on its head by a different line of research, because all the above data were gathered by comparing groups of individuals: on average, what are the differences between introverts and extraverts? A separate line of investigation instead seeks to understand the relationship between extraversion and other traits within the same person.
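The distinction is worth making concrete. Here is a minimal sketch, with invented numbers, of the difference between comparing people to each other (between-subject) and comparing one person’s moments to each other (within-subject), as in experience-sampling studies:

```python
# Hypothetical data: each tuple is (state_extraversion, state_happiness)
# for one sampled moment, on some self-report scale.

def pearson(xs, ys):
    # Plain Pearson correlation coefficient.
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

samples = {
    "subject_a": [(2, 3), (5, 6), (4, 5), (6, 7)],  # a trait introvert
    "subject_b": [(6, 6), (8, 9), (7, 8), (9, 9)],  # a trait extravert
}

# Between-subject view: compare subject averages with each other.
means = {s: (sum(e for e, _ in v) / len(v), sum(h for _, h in v) / len(v))
         for s, v in samples.items()}

# Within-subject view: correlate moment-to-moment extraversion with
# happiness separately for each person.
for subject, obs in samples.items():
    r = pearson([e for e, _ in obs], [h for _, h in obs])
    print(subject, round(r, 2))
```

In this invented data, subject B is more extraverted and happier on average (the between-subject pattern), yet both subjects show a strong within-subject correlation: each is happier in their more extraverted moments.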
A 2002 paper details experiments that probe the relationship between positive affect and extraversion within the same subject. In one experiment, subjects were asked to answer questions designed to assess their state extraversion (how extraverted they acted in the moment) within the previous hour and their state happiness (how happy they felt in the moment) during the same time period. This was done five times per day, using PDAs (this was before smartphones). In the second experiment, a single report was made for the previous week, for ten weeks. In the third, the findings from the first two experiments were tested in the lab. Subjects were randomly assigned to group discussions in groups of three. They participated in two group discussions, randomly assigned to act in an introverted manner in one discussion and in an extraverted manner in the other. Then they evaluated each other and themselves using the same metrics as in the other experiments.
Now, one might think that simply acting extraverted wouldn’t be as effective as being extraverted. But the experiments proved otherwise. Trait introverts (people who tended to be introverted) showed just as strong a correlation between positive emotions and acting extraverted as the trait extraverts did. In fact, in the first experiment, the one where emotional states were assessed five times per day, the introverts showed a stronger correlation between acting extraverted and happiness than the extraverts did! In the third experiment, the people who were told to act introverted or extraverted were consistently rated by others as introverted or extraverted, respectively. And yet again, the more extraverted they acted, the more they enjoyed the discussion.
Faking it till you make it, scientifically validated!
This suggests two things: first, that simply acting extraverted appears to make you happier. And second, that the neurological differences between extraverts and introverts may not actually explain much. After all, within subjects, most people had periods of very outgoing and very introverted behavior, and their mental state closely tracked the behavior. Your brain is malleable, but it doesn’t completely rewire itself multiple times a day. These results throw a wrench into the reward sensitivity / dopamine framework detailed above. If we can replicate and expand on these within-subject findings and reconcile them with the between-subject findings, we’ll have come a lot closer to understanding introversion and extraversion.
OK, I get it. There’s still much work to be done. But be that as it may, how did these two different temperaments evolve in the first place? Especially if one sort is much happier than the other, wouldn’t the other die out under selective pressure?
Good question. First of all, extraversion is a complex phenomenon, influenced by many genes, and it takes a long, long time for evolution to eliminate variation in all of them. Second, there is good reason to believe that introverts and extraverts evolved to fill different niches in human society. It has been empirically shown that extraverts have more mates, but also die younger than introverts. Perhaps extraverts’ social activity and prowess make them more attractive as mates, or give them more opportunities to meet mates, or some combination, raising their evolutionary fitness; but they may also be more prone to impulsivity and reckless behavior in their search for excitement and reward, resulting in more premature deaths.
Whatever happened to introvert/extravert pride? A common trend in the latest popular culture writings on introverts and “extroverts” is that whichever end of the spectrum the author identifies with is presented in a broadly positive light. The author takes pride in their orientation, whichever way it goes, often extolling its virtues in opposition to some perceived or real prevailing cultural current.
But science doesn’t function this way. It does not say that one kind of person is worth more than any other kind of person. That is well outside science’s scope.
However, the fact remains that the scientific consensus is that extraverted people are happier. And acting in an extraverted manner makes introverts and extraverts alike happier. And being happy, aside from being the higher-order goal of every human being, will make you more successful in almost every area of life—from income and marriage to mental health and longevity. It seems that nature has dealt introverts a losing hand. But hey, at least you’re less likely to die in a parachuting accident, or hunting sharks, or from some other dangerous, thrill-seeking behavior. Besides, chances are you can make yourself instantly happier by acting more outgoing.
But I like being introverted!
That’s perfectly fine. There is still a lot of work to be done on this personality dimension; the last word has not been said. But the data we do have are clear: happiness and high extraversion go hand in hand.
Let us clear up a few confusing things. When a Chinese philosopher talks about “chi,” which is often translated as “energy,” he is talking about something quite different from what a modern physicist is talking about when he talks about energy, as in, “energy can’t be created or destroyed.”
I just finished reading your answer to the “mind-body question”, and I have a question regarding your answer. You say there’s no way to scientifically prove consciousness survives us after death, but what about our energy? According to physics, energy can’t be created or destroyed. Could it be possible our consciousness is linked in with this energy (which some may call our soul)? This is actually an ongoing debate I’ve had for quite some time and I would love to hear your thoughts :)
In physics, energy is roughly the capacity to do work. Not the mental energy to show up for another grueling day of 9-5 at the office, but the capacity to apply force over distance. To push a particle in a certain direction, say. The international standard unit of energy, the joule, is defined in base units as J = kg · m²/s², where kg is the kilogram, m is the meter, and s is the second.
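To make that concrete, here’s a small illustrative calculation with hypothetical numbers: the kinetic energy of a moving mass, E = ½mv², comes out directly in joules when mass is in kilograms and speed in meters per second.

```python
# Kinetic energy E = (1/2) * m * v**2 of a thrown ball (invented values).
mass_kg = 0.5     # kilograms
speed_m_s = 6.0   # meters per second
energy_j = 0.5 * mass_kg * speed_m_s ** 2
print(energy_j)   # 9.0 joules: the capacity to do 9 joules of work
```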
Energy persists. The matter and energy that make up you once made up stars. To call it “your” energy is a gross injustice to the universe. You are simply borrowing the energy. In fact, even that is misleading. You are the universe. Not all of it, of course, but you and I are as intrinsic and inextricable parts of the universe as stars and planetary systems. There’s nothing special about us, which is either beautiful or terrifying, depending on how you look at it. We are the same matter and energy that was once in stars, and when we die, this matter and energy will simply pass into other forms, become other things.
But this continuity can hardly be appealed to for eternal life. When the atomic nuclei in your body were part of a star billions of years ago, were they conscious? Was the star conscious? When your body is dirt, will the dirt have thoughts? Will the dirt cry inwardly when its relatives, living humans, stand over its grave and mourn? If energy and consciousness are equivalent, consciousness must be omnipresent. Some have argued for this position, but we shouldn’t take that very seriously, because philosophers are a crazy bunch who will argue for anything, sometimes just for sport, sometimes because they’re not right in the head. If you don’t believe soil is conscious, there is no reason to think that the conservation of energy means the conservation of the soul.
This is known as the fallacy of equivocation: using the same word with different meanings, carelessly substituting one meaning for the other. It’s an honest mistake to make, but it’s a mistake nonetheless. Ancient philosophers had concepts they called by names like “chi” or the “life force,” which are nowadays often translated as “energy.” For this reason, modern spiritualists will often use the word energy to refer to such nebulous, spiritual concepts (for which there is little to no scientific support). The first law of thermodynamics is the one about conservation of energy. You know how it goes, at least in the layman’s version: “energy can be neither created nor destroyed, only converted from one form to another.” But the “energy” physicists talk about is the kind that is measured in joules. It has nothing to do with consciousness.
If, as most scientists now believe, consciousness arises out of a particular kind of arrangement of matter and energy in the brain, then obviously there is a connection between consciousness and energy. Obviously, if mind is physical, then it must be matter and energy. But the catch is that this matter and energy must be arranged in a particular way. A rock, for instance, is not conscious: the matter and energy that make it up aren’t arranged in the appropriate pattern for consciousness to arise. And when humans die, the same happens to “our” energy. It gets converted into other forms, takes part in other configurations of matter. Perhaps in a billion years, the same molecules that once made up you or me will, through an endlessly complicated series of events, again take part in a configuration of matter and energy sufficient for consciousness to arise. Does this mean your “soul” has reincarnated? I think not. Not in the sense that religious people mean when they talk about the soul surviving or reincarnating. But in a poetic sense, perhaps it has. That is a matter of interpretation, for you to decide.
What’s your opinion on the mind-body problem? Do you think the mind can survive after death and is a separate element?
It’s tempting to suggest that if science can’t explain the way consciousness arises from matter, then there might be something to the old myths about the afterlife and the survival of the soul or reincarnation anyway. But it’s very important to distinguish between these two questions: what is consciousness, and does consciousness continue after death?
There is no reason to suppose that the mind doesn’t die when the body does. Near-death experiences are just that: near death. They’re not experiences of death; otherwise the person who had the experience couldn’t have lived to tell the tale. Furthermore, it isn’t implausible that we might sometimes enter an altered state of consciousness in extreme conditions that bring us to the brink of death—after all, everyone experiences the altered state of consciousness we call dreaming almost every night. That these experiences might resemble myths or narratives prevalent in your society is hardly surprising. If your subconscious is creating a story, surely it must take the raw material from somewhere, and if you live in a traditionally monotheistic society where life after death is a common notion known to everyone, believers or not, then tunnels of white light or conversations with dead relatives during a hallucinatory near-death experience are almost to be expected. After all, people who ingest a lot of drugs or become psychotic have similar experiences.
What other reasons might one have for believing in the continuation of consciousness after the brain is dead? Excluding theoretical future inventions that facilitate “brain uploading,” and excluding religious scripture, what more is there to say? How would we ever know? If the mind is intangible, unobservable, at best we could say that the mind might survive death, but without any good reason to believe so, why would you? The only mind we have direct access to is our own.
The famous philosophical problem of other minds asks: how do you know your neighbor is conscious? How do you know your mailman isn’t an automaton, an assemblage of matter that behaves like a human but has no consciousness, for whom there is nothing it is like to be them? The answer, of course, is that we don’t, but they behave roughly like we do, they respond as if they are conscious, and besides, it seems hard to imagine how an automaton could accomplish what other people do without having sentience. Whether such automatons—identical molecule by molecule with humans, but lacking conscious experience—can exist is known as the philosophical zombie problem. Most of us who have common sense accept that other minds exist, that other people are conscious, because they appear to be. And we also accept that rocks and bacteria aren’t conscious, because they display no signs of being conscious. Of course, until we can prove definitively that the mind is purely physical, we can’t prove that rocks aren’t conscious, but not even the most hardcore treehugger argues that we shouldn’t tread on rocks for fear of hurting their feelings. The same argument applies to life after death: dead people don’t display any of the signs of being conscious. The most parsimonious explanation is that when the body dies, so does the mind. And even if dead people’s minds somehow were separated from their bodies and remained in existence after physical death, we would never know.
There are a lot of things we don’t understand about the brain. But of one thing we’re pretty sure: it isn’t breaking any of the laws of physics. Clearly mind and brain are intrinsically linked. If the mind were nonphysical but somehow capable of interacting with the physical brain in the way that a dualist account of consciousness requires, we would expect to see energy spontaneously come into existence as the nonphysical self pulls the strings. But we don’t. The first law of thermodynamics, the conservation of energy, holds as well for the brain as it does for everything else in the universe.
I believe the mind is physical. Exactly what that entails is a matter of both philosophy and science. The philosophical zombie thought experiment is troubling; not so troubling that I believe it proves the mind can’t be physical, for the reasons explained above, but troubling in a more fundamental sense. The fundamental question is: why does a certain configuration of physical, objective matter translate into subjective consciousness? Biology reduces to chemistry, which reduces to physics, and it would be logically contradictory for biological mechanisms to violate the laws of chemistry, or for chemistry to violate the laws of physics. It also works the other way around: physics necessarily gives rise to chemistry, which necessarily gives rise to biology. But with psychology, the same seems not to be true. We have yet to find the piece of the puzzle that guarantees that physics gives rise to consciousness. It seems entirely conceivable that it wouldn’t, in ways it’s inconceivable that physics wouldn’t give rise to chemistry.
I believe the mind-body problem is the hardest one in science, in part because the brain is so complicated, and in part because it is a problem that is at once philosophical, linguistic, and scientific, and it is hard to determine which aspect belongs to which discipline. Combining quantum mechanics and general relativity? I think we’ll do that long before we solve the mind-body problem. Perhaps, in the end, we must simply accept that there is no explanation. For no good reason, certain configurations of matter give rise to subjective experience, just as there is no reason for the fundamental constants of physics to be what they are. They just are. But this seems hard to accept, because the fundamental forces of nature are so elementary and the mind so complex, and it is hard to believe that elementary facts can be true for no good reason, yet still fail to explain more complicated facts that are entirely supervenient on them. (That, I’m afraid, is philosopher-speak. I have been damaged by reading too many philosophy papers, and this is a very complicated subject that is hard to write about without technical language, even though on this blog I try to explain technical terms the average reader would not be familiar with.)
Because it is so complicated—perhaps the hardest nut to crack in science—and because it is so essential to our lives—by definition, all our experience is, well, conscious experience—the study of mind and brain is probably my favorite part of science. You may have noticed there’s a lot of pharmacology and psychology on here. But it’s very hard to give a straightforward answer to a question like, “What’s your opinion on the mind-body problem?” Life after death though? Nah, old fairy tale.
Here’s a fun idea: what if we could improve on nature and create more efficient blood? Or at least, artificial blood that is as efficient as natural blood? The medical applications are too numerous and obvious to mention. There are two main approaches to artificial respiratory carriers—artificial ways of transporting oxygen and carbon dioxide around the body: one involves hemoglobin, the protein inside red blood cells that binds CO2 and oxygen. But hemoglobin becomes much less stable outside red blood cells, so various modifications or micro-encapsulations are required. The other approach involves emulsions of fluorocarbons. But both approaches have many inherent flaws.
In 1998, Robert A. Freitas proposed a radical idea: what if nanorobots could carry oxygen around, replacing red blood cells? Freitas’ design involves tiny nanoscale artificial cells fueled by blood sugar that store more than 200 times more oxygen per volume than red blood cells. A nanoengine drives a rotor that sorts molecules and stores the right kinds inside, while a tiny nanocomputer calculates when to release them. Augmenting your blood with a solution of Freitas’ “Respirocytes” could allow you to hold your breath for hours—useful for divers, patients who stop breathing away from hospitals, endurance runners, geriatric patients, and a whole lot more. Being nonbiological in nature, they could sit on the shelf indefinitely and still be ready to go, and once in the body could possibly last a lifetime.
Nanotechnology is far from the kind of precision and efficiency required to actually make respirocytes. And even if we could make them, no amount of theoretical calculations can tell us for sure how they’d behave in living humans. I’m not signing up to have my blood replaced by tiny robots anytime soon. But it’s certainly an interesting idea, if you can look at it with the appropriate amount of skepticism.
Gone are the days when we could comfortably assume that a single gene or hormone is responsible for a complex disease or behavior. As new data roll in, previously clear-cut cases turn out to be more complex. One such case is oxytocin, a neurohormone that decades of research have closely linked to prosocial, bonding behavior. This strong association has earned oxytocin the nickname “the love hormone,” and oxytocin nasal spray—currently used primarily for its effects on lactation in nursing women—has been proposed as a novel treatment for autism and social anxiety. But new data suggest that oxytocin also plays a role in human ethnocentrism, and in strengthening negative memories in mice.
Researchers at the University of Amsterdam investigated the effects of oxytocin on in-group vs. out-group bias in a series of trials. Ethnically Dutch males self-administered either oxytocin or placebo, double-blind, and then completed a series of computer tasks designed to measure implicit bias towards a perceived in-group (ethnically Dutch people) or against two perceived out-groups (Germans and people of Middle Eastern descent). The tasks included trolley problems, empathy exercises, and tasks where the participants had to group positive words together with names from either in- or out-group and negative words with the other, or vice versa (the speed with which this grouping takes place being a measure of implicit bias). The researchers found a significant increase in bias towards the in-group as compared to either out-group with oxytocin as compared to placebo. They also found limited out-group derogation, but the good news is that oxytocin appears to increase favoritism towards the in-group more strongly than it increases hate towards the out-group. Still, this research suggests the “love hormone” is implicated in ethnocentrism and xenophobia.
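The paper’s exact scoring isn’t reproduced here, but word-grouping tasks of this kind are commonly scored with something like the IAT’s D statistic: the difference in mean reaction time between the “incompatible” and “compatible” pairings, scaled by variability. Here is a toy sketch; the function and all the numbers are purely illustrative, not taken from the study:

```python
from statistics import mean, stdev

def iat_d_score(compatible_ms, incompatible_ms):
    """Toy IAT-style D score: mean reaction time in the 'incompatible'
    pairing (in-group names + negative words) minus the 'compatible'
    pairing (in-group names + positive words), divided by the standard
    deviation of all trials. Larger D = stronger implicit in-group bias."""
    all_trials = compatible_ms + incompatible_ms
    return (mean(incompatible_ms) - mean(compatible_ms)) / stdev(all_trials)

# Hypothetical reaction times (milliseconds) for one participant:
compatible = [620, 580, 640, 600, 610]    # in-group + positive words
incompatible = [720, 690, 750, 700, 710]  # in-group + negative words
print(round(iat_d_score(compatible, incompatible), 2))  # prints 1.77
```

A positive score means the participant was slower to pair in-group names with negative words, which is read as implicit in-group preference.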
This, however, contrasts with previous research, which has shown that oxytocin leads to prosocial behavior, decreases our aversion towards the unknown, and increases trust in economic games. Those earlier studies may not have controlled properly for in- versus out-group bias. For example, one study measured the brain activity of fathers as they looked at pictures of their own child, a familiar child, or an entirely unknown child, and found that oxytocin led to decreased activity in areas of the brain implicated in critical social evaluation. That study, however, did not control for the ethnicity of the children and fathers or other factors that might create an in- versus out-group dynamic, only for gender and whether the child was known to the father.
A recent study in mice found that oxytocin strengthens negative memories: rather than reducing future anxiety, it strengthened it. In one experiment, three groups of mice (without oxytocin receptors, with normal receptor densities, and with increased receptor densities) were exposed to aggressive mice in a cage, a stressful experience. When later reintroduced to the cage, the mice without oxytocin receptors did not appear to remember the aggressive mice and exhibited no fear responses. The mice with normal and high densities of receptors, however, exhibited typical and above-average fear responses, respectively. These effects were not limited to socially salient negative memories. In another experiment, the aggressive mice were replaced by startling but not painful electric shocks. When reintroduced to the cage where they had been shocked, the oxytocin-deficient mice showed no particular fear, while the oxytocin-boosted mice showed enhanced fear responses.
This new data certainly complicates the picture of oxytocin as a cuddly hormone involved with all that is good in human nature. Despite this, research into the possible therapeutic benefits of oxytocin remains promising, although perhaps not as unanimously cheery as it may once have looked.
At the time of writing, we have discovered 916 planets outside our own solar system, in 706 planetary systems.
After we got over the silly idea that we are Special, that the universe revolves around us, we have been smitten by the opposite idea: that we are typical, and that our home tract of the universe is broadly representative of the rest of existence (although we do know life is at least rare enough that we have yet to find it elsewhere). But as it turns out, the more planetary systems we discover, the more our own looks like an outlier. Not just because it has a habitable planet, but in more general terms that have nothing to do with the conditions for life.
But consider this, also: we know of less than a thousand exoplanets. There are likely billions out there in the unimaginably vast universe. The data point in one direction at the moment, but the data are also incredibly limited. We don’t even know what we don’t know.
It’s an interesting time to be alive. It’s been barely two decades since we discovered the first exoplanet. Who knows what we will discover in the next two?
if you're not using this url anymore could i have it?
Since 2008, I have made approximately 300 posts on this blog. Almost every one of them has taken extensive research. It has been one month since the last post, which, if you look at the archives of this blog, is not uncommon. I do not post simply to fill up space. I have 160,000 followers and never made a dime off this blog. I will update it when I have something interesting to write about.
I am not an expert in anything. I hold no scientific degrees, and there are certain topics I don’t write about simply because they are beyond my comprehension, and I detest lazy science journalism. Whenever possible, I read original research papers if I’m going to report on a story. This is not a job but a hobby, and unless some wealthy patron is willing to pay me to do it, it will remain as such. The fact that one month has gone by without a post means nothing except that you have been spared from one month of meaningless drivel. Rest assured, I will continue updating this blog in the indefinite future, but it will be at my own pace.
If you have an interesting question, tidbit or fact related to science, you are welcome to submit it. If it is interesting, if it is an area I am qualified to comment on, and if I can find reliable sources about it, I will make a post and credit you with the suggestion. For example, someone recently asked what, exactly, wormholes are, which is an interesting question, but not one I feel qualified to answer.
In this attention-deficient internet era, I want this to be a place to slow down, think, and marvel at the magnificence of nature, the ingenious mechanisms of life, the intricate network that connects the smallest quark with the largest supercluster. It is not a place for pretty pictures of cats, which would undoubtedly make my life easier. I’ve deliberately kept the meta-posts to an absolute minimum, because this blog is not about me or about my personal opinions, but this needs to be said. This blog’s ethos is that science is good, science should be shared with the people, and science should be reported accurately and in ways that laymen can understand without resorting to highly inaccurate simplifications. If you want more frequent content, you have two options: hire me for actual $$$, or be patient.
No. What is and is not science is a question that has vexed scientists and philosophers for hundreds of years. Many influential definitions have been put forth, but none are without weaknesses. I think it’s naive to suggest that such a heterogeneous body of practices and knowledge as 21st-century science can be defined in a few sentences. There is Popper’s falsifiability criterion; the “scientific method” as taught to high school students (hypotheses, predictions, experiments, confirmed theories); Kuhn’s “paradigm shifts”; and more besides. None of these general, relatively simple-to-define ideas manages to encompass everything that is reasonably regarded by scientists as science while also excluding everything that is reasonably regarded as non-science.
The general principles of science are curiosity, methodological rigor, repeatability, falsifiability, peer review, and full disclosure of both failures and successes—but these alone are not enough to define what science is. I could waste more words, but at the end of it I would have to admit that neither I nor anyone else has successfully defined science in blog-post length. That is not to say that anything goes, or that what is and isn’t science is completely subjective—only that, despite all our efforts, we have yet to find a clear line of demarcation that is simple to state, includes everything that is science, and excludes everything that is not.
Why are barns painted red? Because red paint used to be cheapest. But why is red paint cheaper than other colors? Ultimately, nuclear physics. Yonatan Zunger explains. For more on stellar nucleosynthesis, see: we are all made of star stuff.
The traditional method of extracting gold from other materials, first discovered more than two hundred years ago, involves cyanide. Cyanide is very toxic. But now, scientists have stumbled upon a method of extracting gold by using alpha-cyclodextrin, a carbohydrate that can be derived from corn starch. The gold forms tiny nanowires. From a press release:
The supramolecular nanowires, each 1.3 nanometers in diameter, assemble spontaneously in a straw-like manner. In each wire, the gold ion is held together in the middle of four bromine atoms, while the potassium ion is surrounded by six water molecules; these ions are sandwiched in an alternating fashion by alpha-cyclodextrin rings. Around 4,000 wires are bundled parallel to each other and form individual needles that are visible under an electron microscope.
This method can be used, among other things, to recycle gold from consumer electronics. Hopefully they can find a way to make it work for ores, since the initial mining still accounts for the majority of gold cyanidation, and consequent environmental danger. One of the co-authors of this paper, Dennis Cao, is on reddit answering questions about the research.
An interesting fact I learned on reddit today, and one that is supported by the scientific literature: schizophrenics can tickle themselves. Why is that?
The brain has a predictive model of the sensory results of different motor movements. For instance, if I move my hand in this way or that way, where is it going to end up? When you make the movement, this “predictor” is informed and forms a prediction about the sensation that will result—say, the sensation of a feather moving against your skin. When this sensory data comes in, it is compared to the prediction, and if they match up, the brain assumes that it was your self-generated movement, and thus can compensate for it. This is how we can distinguish between the exact same stimulus when applied by an external force or when applied by ourselves.
In an experiment, healthy subjects were instructed to tickle themselves by moving a piece of foam on their palm with their other hand. As predicted, the subjects did not find the sensation of their own tickling ticklish. Then their hand was hooked up so that, instead of directly tickling themselves, their hand movements were replicated by a robot hand. The researchers then introduced various delays or rotations of the movement’s trajectory—in effect increasing the difference between the brain’s prediction and the actual sensation. The subjects reported that the sensation grew increasingly ticklish as the delay or rotation increased. In another experiment, schizophrenics reported no difference in ticklishness whether the tickling was self-applied or applied by another person.
A current hypothesis about the origin of delusions of passivity, e.g., the idea that someone else is making you say or do the things you do, a classic symptom of schizophrenia, is that they are due to faults in this predictive model. Evidence suggests that schizophrenics fail to generate this predictive model of what their own movements will result in, which manifests as an inability to distinguish external and internally generated stimuli, as in the tickling example. In healthy non-schizophrenics, the predictive model and the sensory input that results from a movement reinforce each other: the brain predicts that the movement of a hand will result in a certain sensation, and the sensation confirms it. This also allows us to self-correct before we even begin a movement if we detect that the intended result and the predicted result don’t match up. But schizophrenics tend not to make such self-corrections. One can imagine how delusions of passivity could result if the “predictor” fails to make a prediction: when the brain then tries to compare the end result to a non-existent prediction, it is as if someone else had willed the action. After all, in healthy subjects, when we receive “unexpected” (i.e., not predicted by the model) stimuli, they are usually of external origin. As in, when another person tickles us.
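The comparator story above can be caricatured in a few lines of code. This is purely an illustration of the logic, not a model from the literature: a sensation that matches a self-generated prediction gets attenuated, while a missing or mismatched prediction leaves it at full, “external” strength.

```python
def perceived_intensity(actual, predicted=None, attenuation=0.2):
    """Caricature of the comparator model (illustrative only): a
    sensation matching the brain's own motor prediction is damped;
    anything unpredicted comes through at full strength as 'external'."""
    if predicted is not None and abs(actual - predicted) < 0.1:
        return actual * attenuation  # self-generated: attenuated, not ticklish
    return actual                    # unpredicted: full strength, ticklish

touch = 1.0
print(perceived_intensity(touch, predicted=1.0))   # healthy self-tickle -> 0.2
print(perceived_intensity(touch, predicted=None))  # no prediction made -> 1.0
```

The robot-arm delays and rotations correspond to growing the gap between `predicted` and `actual`; the schizophrenic case corresponds to `predicted=None`, so even self-applied touch arrives unattenuated.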
Extraordinary Vision: Polychromacy and Polarization in the Mantis Shrimp
You may have heard of mantis shrimp. They are members of the order Stomatopoda, a group of marine crustaceans with some extraordinary properties. They’re hard-hitting bastards, but the most interesting property about them is their magnificent vision. Their vision varies between different species, but some species are capable of seeing light that is invisible to us, distinguishing more shades of visible light than us, and detecting properties of light impossible for humans to comprehend.
In order to understand the mantis shrimp’s vision, we need to understand some fundamental properties of light. Humans can distinguish millions of colors, but these are all detected by just three kinds of photosensitive cone cells in the eye, each with peak sensitivity at a different wavelength. Sophisticated visual processing in the brain then combines the varying activations of these three cell types to distinguish the millions of shades we see. Our cone cells see only a very small sliver of the entire spectrum of light; most of the spectrum is invisible to us, either because the wavelengths are too short (ultraviolet) or too long (infrared), and beyond those lie even more extreme kinds of light, like X-rays and radio waves. In addition, we have rod cells, which are more sensitive to light than cones but cannot discriminate between colors. These are mainly responsible for low-light vision and for detecting movement; all cats are gray at night because we rely for night vision on cells that cannot discriminate color.
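To make the “three numbers per color” idea concrete, here’s a toy calculation. The Gaussian sensitivity curves and the spectrum below are made up for illustration, not real cone data: each cone type collapses the whole incoming spectrum into a single activation, and color is whatever the brain reconstructs from that triplet.

```python
import math

def cone_response(spectrum, peak_nm, width_nm=40.0):
    """One cone type collapses a whole spectrum into a single number:
    light intensity weighted by a (made-up) Gaussian sensitivity curve
    centered on the cone's peak wavelength, summed over wavelengths."""
    return sum(power * math.exp(-((wl - peak_nm) / width_nm) ** 2)
               for wl, power in spectrum.items())

# A hypothetical spectrum: wavelength (nm) -> relative power.
spectrum = {450: 0.2, 550: 1.0, 650: 0.4}

# Rough human cone peaks: S ~420 nm, M ~534 nm, L ~564 nm.
triplet = [cone_response(spectrum, peak) for peak in (420, 534, 564)]
print([round(r, 3) for r in triplet])  # prints [0.114, 0.855, 0.889]
```

However rich the incoming spectrum, a trichromat’s eye reports just these three numbers per patch of the visual field; a sixteen-channel mantis shrimp eye reports sixteen.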
So, we have three different kinds of cones and one kind of rod cell, for a total of four different photosensitive cells. Some women may be tetrachromats, that is, they have four different kinds of cones and can distinguish more colors than everyone else. That doesn’t even begin to compare to mantis shrimp. Some species have as many as sixteen different kinds of photosensitive cells! Here is a diagram:
Not only can this species see ultraviolet light that is invisible to us, it can also distinguish shades of visible light much more finely than we can! However, this is somewhat misleading. It’s true that the eyes of mantis shrimp have much more sophisticated color vision than human eyes; at the same time, however, they have much smaller brains, and the visual center of the human brain is far more sophisticated than the mantis shrimp’s. As a result, it’s hard to compare color vision directly. What we can say for sure is that, for color vision, the raw input from a mantis shrimp’s eyes is much richer than the raw input from human eyes.
But that’s not all. Mantis shrimp can also detect polarization of light, which is a property that is completely invisible to us, as difficult to imagine as color would be to someone with monochromatic vision. To understand polarization, we need to understand a few things about the wave nature of light. Waves can be transverse or longitudinal. Here is a diagram:
Sound waves propagating through air are longitudinal: the air compresses and expands along the direction of travel, carrying energy along. Light waves, on the other hand, are transverse; to simplify, they move “up and down.” In a longitudinal wave, by definition, the oscillation is back and forth along the direction the wave is traveling. The “up and down” movement of a transverse wave, however, can point in different directions perpendicular to the direction of travel. This means that transverse waves, such as electromagnetic (light) waves, can be polarized.
Unpolarized light basically waves up and down in random directions. Polarized light, however, “waves” in a specific pattern, for lack of a better description. Light coming from the Sun is unpolarized, but becomes partially polarized as it passes through the atmosphere. Thus most of the light we see is partially polarized, but we are completely blind to this: light of different polarizations looks the same to us, all else being equal. Some animals, however, can distinguish different polarizations.
It’s been well known for some time that some animals can perceive linear polarization. Here is an illustration of linear polarization (look at the blue line):
The waving is confined to a given plane along the direction the wave is traveling. However, this is not the only way for light to be polarized. There is also circular polarization. Until recently, it was believed that no animal could perceive circular polarization. Here is a circularly polarized wave that goes clockwise from our vantage point:
And here is one that goes the opposite way:
Stokes parameters are a mathematical construction that completely describes a state of polarization. If you know these parameters, you basically know all there is to know about the kind of polarization (or non-polarization) that characterizes whatever light you’re looking at. Scientists recently demonstrated that the Gonodactylus smithii species of mantis shrimp can perceive all the Stokes parameters, or, in other words, they have perfect polarization vision.
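For the curious, the Stokes parameters are easy to write down for a fully polarized wave. This sketch uses the standard textbook definitions in terms of the two transverse field amplitudes and their phase difference; it is generic optics, nothing specific to the mantis-shrimp paper:

```python
import math

def stokes(ax, ay, delta):
    """Stokes parameters of a fully polarized wave with x/y field
    amplitudes ax, ay and phase difference delta (radians):
    S0 = total intensity, S1 = horizontal vs. vertical preference,
    S2 = diagonal preference, S3 = circular handedness."""
    s0 = ax**2 + ay**2
    s1 = ax**2 - ay**2
    s2 = 2 * ax * ay * math.cos(delta)
    s3 = 2 * ax * ay * math.sin(delta)
    return s0, s1, s2, s3

print(stokes(1.0, 0.0, 0.0))  # horizontal linear: (1.0, 1.0, 0.0, 0.0)
# Equal amplitudes, quarter-wave phase shift: circular polarization,
# approximately (1, 0, 0, 1) up to floating-point rounding.
print(stokes(1 / math.sqrt(2), 1 / math.sqrt(2), math.pi / 2))
```

An eye with “perfect polarization vision” is one whose receptor channels, taken together, pin down all four of these numbers for incoming light.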
Why would this be useful? Well, polarization information can be used to navigate, or it can be used to identify prey that would be transparent to humans (the light doesn’t change intensity or color when it passes through, but it may change polarization). It could also be used to communicate in a way that other species couldn’t detect. Below you see a signalling structure (viewed through filters to make the direction of circular polarization visible to humans) on the mantis shrimp:
Voyager 1 is the man-made object farthest from the Sun, steadily traveling towards the edge of the solar system and beyond. A recent paper analyzing data from the probe, which is still sending data back to Earth, states that as of August 2012 it had entered a new region of space the authors dub the “magnetic highway.” Radiation levels from inside the solar system dropped to 1 percent of previous levels while radiation from outside increased dramatically in a matter of days, leading some to propose that Voyager 1 had finally, as the first man-made object ever, left the solar system for good. NASA, however, was quick to clarify that they do not consider the probe to have entered interstellar space quite yet, as they expect the magnetic field lines to change direction when that happens. The boundary of the solar system is fuzzy, but one thing is clear: this little piece of metal humanity sent out 35 years ago is really, really far away and traveling in unexplored territory. Here be dragons. Or rather, cosmic rays aplenty.
Miraculin is a taste-modifying protein found in the fruit of Synsepalum dulcificum, also known as miracle berries. By itself, miraculin doesn’t have much of a taste, but after dissolving miraculin on the tongue, sour things like lemons taste sweet. The effects last for up to an hour. Miracle berry pills are available online.
Miraculin doesn’t affect the brain; it operates directly on the taste receptors. Experiments have shown it to have an interesting property: at neutral pH, it acts as an antagonist of the sweet taste receptor, but in an acidic environment it turns into an agonist. This means it will actually block the taste of things that are usually sweet, like table sugar or artificial sweeteners such as aspartame. However, when you add something acidic (sour), the effect flips, strongly activating the sweet receptor, known as hT1R2-hT1R3. Sour things taste sweet, and sweet and sour together taste really sweet. Scientists hypothesize that the acid donates protons to the receptor-bound miraculin, causing it to change shape and switch from inactive to active.
A morpheme is the smallest unit of language that carries meaning. A cranberry morpheme is a morpheme that has no independent meaning. We know what “sub-” or “super-” mean regardless of what comes after, but the “cran-” in cranberry or the “-ceive” in “conceive”, “receive” and “perceive” have no meaning outside those words. Another example is the “cob-” in “cobweb”, which comes from an old word for spider, “attercop”, literally “poison-head”. (Cf. Norwegian “edderkopp.”)
Unpaired words are words that look like they should have a twin, but don’t. You can be unkempt, but hardly kempt, and one can be untoward but not toward, disgruntled but not gruntled. Frequently it’s the negative senses of such words that survive. Here is an article that explains the origins of many unpaired words. The word “ungainly”, for instance, ultimately comes from the adjective “gain”, meaning “straight, near”, which was used in the phrase “the gainest way”, meaning the shortest, most direct route. A word like “dishevelled”, on the other hand, comes from the Old French “deschevelé”, and was never the negative of a positive “shevelled.”
The Ganzfeld or “whole field” effect is a simple way to trick the brain and induce hallucinations. The most common setup is to cut a white ping-pong ball into two halves, place them over your eyes, and illuminate them from outside. The effect is that of staring into a uniform white field. In the absence of any structure to sensory input, or in the absence of any input whatsoever—as in sensory deprivation—the brain starts to amplify the noise inherent in perception, eventually producing simple or complex hallucinations. The effect can be extended to several senses, typically by adding headphones blasting white noise or other unstructured sound.
Ganzfeld experiments are darlings of “paranormal researchers”, who use the setup in attempts to prove extrasensory perception. (Using red light is popular, I suspect for the sole purpose of giving photographs of the seance an otherworldly vibe.) Unsurprisingly, the results of these experiments don’t improve on chance.
In case you missed it, the New Yorker published a great but terrifying look into science’s sordid past this last December. Operation Delirium and High Anxiety: LSD in the Cold War detail the US army’s experiments in mind control and chemical warfare during the Cold War, centered on Edgewood Arsenal in Maryland.
The efforts were led by Colonel James S. Ketchum, who wanted to develop a more humane approach to war: the enemy was not to be killed, only incapacitated using various chemical agents. Never mind that those agents might well be nerve gas, or that the mechanism of incapacitation might be extreme mental or physiological stress. His best bet for such an incapacitating agent was BZ, an anticholinergic drug which, like scopolamine or atropine, causes delirium. (These drugs work by blocking transmission of the important neurotransmitter acetylcholine.) At one point Ketchum’s team went so far as to build “an entire Hollywood-style set in the form of a makeshift communications outpost.” Soldiers were placed in the outpost and dosed with either placebo or varying doses of BZ. Then Ketchum set about trying to fuck with their heads in any way he could think of:
Two hundred phony tactical messages, warnings of chemical attacks, and intelligence were fed to the men in the room. At one point, Ketchum and the others ran out of script. “In an urgent brainstorming session, we put our heads together and came up with an agonizingly improvised scenario,” he recalled in his memoir. “We told the military communicators to start sending new intelligence to the group inside the room—in a simple code. The messages informed the men that enemy forces were planning to move a train loaded with chemical weapons along a certain route.” Eventually, Ketchum and the technicians resorted to gibberish, using poker terms, referring to “the dealer” and a “full house,” as the BZ-addled soldiers struggled to interpret their code.
Ketchum worked alongside Dr. Van Murray Sim, who had founded the Edgewood program on psychochemicals in 1956. Sim’s grand idea was to use LSD-25 to loosen the tongues of what might today euphemistically be called “enemy combatants,” or, if that didn’t work, to drive them so mad that they’d tell any secrets they had just to escape the torture. Maybe it was cruel, but surely, the logic went, the Communists were doing the same, and the US could not afford compassion. Besides, Sim figured, if he was willing to test the chemicals on himself first, and he was fine, surely he could test them on others.
One chilling story is that of Private James Thornwell. Throwing informed consent to the wind, Sim theorized that expectations about a drug’s effects would influence the intoxication, so it was vital that subjects not know what they were given, or even that they were given anything. When word got around about the kind of experiments going on at Edgewood, Sim was forced to relocate his experiments to Europe and, eventually, Asia. James Thornwell was the only African-American working at an American military-communications station in France. After a falling-out with his superior, Thornwell was accused of stealing classified documents. After three months of torturous interrogations, with Thornwell insisting on his innocence throughout, he was released—not to freedom, but into the hands of Sim’s special investigators, who repeatedly dosed the already half-mad man with LSD without his knowledge. Thornwell never recovered. The experiment was deemed a success.
It wasn’t only the military who were carelessly experimenting on humans. Under the innocuous-sounding title Effect of Some Indolealkylamines in Man, a 1959 medical paper details some rather cruel experiments. Ostensibly, the scientists set out to study the effects of a psychedelic snuff used across Central and South America. Failing to achieve any effect from the snuff itself, they isolated two pure chemical compounds for further study: N,N-DMT, a powerful psychedelic also found in the traditional brew ayahuasca, and bufotenine, a chemical cousin of the neurotransmitter serotonin. These chemicals were injected intravenously into schizophrenic patients.
Here is a description of one of the experiments:
In several subjects who had more than 10 mg of bufotenine injected quickly, there was intense salivation. The present subject could easily have drowned in her own saliva, and she had to be turned on her side. (…) Responsiveness returned at about 23 minutes, at which time the patient was entirely lucid and, in response to a query related to a preinjection question, spoke of a long-repressed memory from the age of three years, when she came into the bathroom and saw her mother die of a uterine hemorrhage.
But this revelation “had no therapeutic consequence.”
In further experiments, we read, three patients—as if they were patients undergoing treatment, and not guinea pigs for mad scientists—were injected with bufotenine after receiving reserpine or chlorpromazine (both antipsychotic drugs). “Each of these injections almost proved fatal in small amounts.” After one subject almost died, they repeated the experiment two more times, just to make sure. It’s the same absurd logic that prompted Sim’s LSD researchers to respond to adverse reactions by doubling the dose in the next trial.
The LSD trials were suspended in 1963, but the Edgewood experiments continued into the 1970s.
This large study, published earlier this year, tracked more than 400,000 participants over twelve years and looked at the correlation between coffee drinking and mortality. It found that, all else being equal, there is an inverse correlation between coffee drinking and mortality; in other words, people who drank a lot of coffee died less often:
In this large, prospective U.S. cohort study, we observed a dose-dependent inverse association between coffee drinking and total mortality, after adjusting for potential confounders (smoking status in particular). As compared with men who did not drink coffee, men who drank 6 or more cups of coffee per day had a 10% lower risk of death, whereas women in this category of consumption had a 15% lower risk. Similar associations were observed whether participants drank predominantly caffeinated or decaffeinated coffee.
However, all else is rarely equal. The study also found that coffee drinkers were more likely to have other habits, particularly smoking, that correlate with higher mortality. Without correcting for these factors, coffee drinkers were actually a bit more likely to die. In other words, coffee appears to be associated with better health, but if you drink a lot of coffee, you’re statistically more likely to have other, unhealthy habits that increase mortality.
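The confounding logic is easy to demonstrate with invented numbers (these are for illustration only, not from the study): suppose coffee drinkers smoke far more often, and smoking, not coffee, drives mortality.

```python
# Hypothetical cohort: deaths per group over the follow-up period.
# Within each smoking stratum, coffee drinkers fare slightly better,
# but far more coffee drinkers are smokers.
#                              (people, deaths)
cohort = {
    ("coffee", "smoker"):        (800, 80),  # 10.0% died
    ("coffee", "non-smoker"):    (200, 8),   #  4.0% died
    ("no coffee", "smoker"):     (200, 22),  # 11.0% died
    ("no coffee", "non-smoker"): (800, 40),  #  5.0% died
}

def crude_rate(group):
    """Unadjusted death rate for coffee drinkers or non-drinkers,
    ignoring smoking status entirely."""
    people = sum(n for (g, _), (n, _) in cohort.items() if g == group)
    deaths = sum(d for (g, _), (_, d) in cohort.items() if g == group)
    return deaths / people

# Crude rates: coffee looks harmful (8.8% vs. 6.2% dead)...
print(f"coffee: {crude_rate('coffee'):.1%}, no coffee: {crude_rate('no coffee'):.1%}")
# ...yet within every stratum, coffee drinkers die less often
# (10% vs. 11% among smokers, 4% vs. 5% among non-smokers).
```

The unadjusted comparison makes coffee look harmful, yet stratifying by smoking flips the sign, which is exactly the pattern the study describes: smoking is the confounder.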
Some of the more interesting things that passed through these pages in 2012.
The Brocken Spectre is an optical phenomenon in which the observer’s shadow appears to be magnified on clouds or fog below.
Psychedelics are back in science. After decades of little research due to drug hysteria, scientists have started exploring the therapeutic potential of psychedelics and similar drugs again. Here is the New York Times reporting on a promising study of MDMA-assisted therapy for PTSD.
On February 9, 1913, a unique procession of meteors was observed from North America. This meteor shower may have been caused by the breakup of a small, short-lived second moon.
Rare earth metals are important for a number of modern technologies. China has a near-monopoly on the world’s supply, and they’re prepared to use it for political gain.
Some people might point out that “conformations” and “configurations” are also concepts in organic chemistry that mean much the same thing, and that this post is a thinly disguised effort to teach concepts in organic chemistry through a discussion of cats. Horseshit. This is 100% cat content here people! This is a cat blog.