It’s no secret that this blog and I are excited about the return of psychedelic drugs to academic study. After four decades of being left out in the cold, labelled criminal and shunned, drugs such as LSD, magic mushrooms and DMT are finally being investigated by scientists again. Not only do these drugs hold great potential for helping us understand how the brain functions in both normal and altered states of consciousness, they also hold great promise as therapeutic aids in clinical psychiatry.
The Psychologist magazine currently has a free special issue on psychedelics in psychology. There’s lots of interesting stuff there, including an article on how psychedelics exert their effects in the brain. The classic psychedelics such as LSD and magic mushrooms in particular activate the serotonin 2A receptor, leading to a cascade of changes on many levels of the brain:
Much of brain activity is rhythmic or oscillatory in nature and electroencephalography (EEG), magnetoencephalography (MEG) and local field potential (LFP) recordings are techniques that measure the collective, synchronously oscillating activity of large populations of neurons. Studies in animals and humans have found decreases in oscillatory activity in the cortex after the administration of hallucinogens, and in one of our most recent and informative studies with psilocybin we observed a profound desynchronising influence on cortical activity (Muthukumaraswamy et al., 2013). (…)
To help illustrate this principle by analogy, the strength of cortical rhythms can be thought of as analogous to the rhythmic sound generated by a population of individuals clapping their hands in synchrony. The presence of an individual clapper among a population of clappers means that his/her rate of clapping becomes quickly entrained by the collective sound generated by the population as a whole. Now imagine that a number of mischievous ‘ticklers’ are introduced to the scene, inducing sporadic clapping by tickling individual clappers. Although the individuals targeted may be excited into clapping more often, there will be a disruptive effect on the regularity and volume of the sound generated by the population as a whole. The basic principle is that although hallucinogens excite certain excitatory neurons in the cortex to fire more readily, this has a disorganising influence on cortical activity as a whole.
And further, psychedelics have the potential to dissolve the ego, our perception of a continuous self. The mechanism seems to involve disrupting the so-called “default mode network,” a network of brain regions that is active in the background more or less all the time, helping to maintain our everyday sense of being a unitary self flowing through time:
Evidence has accumulated in recent years highlighting a relationship between a particular brain system and so-called ‘ego functions’ such as self-reflection (Carhart-Harris & Friston, 2010). This network is referred to as the ‘default mode network’ because it has a high level of ongoing activity that is only suspended or interrupted when one’s attention is taken up by something specific in the immediate environment, such as a cognitive task (Raichle et al., 2001).
It was a matter of great intrigue to us therefore that we observed a marked decrease in brain activity in the default mode network under psilocybin (Carhart-Harris et al., 2012) whilst participants described experiences such as: ‘Real ego-death stuff! I only existed as an idea or concept… I felt as though I was kneeling before God!’
The default-mode network is also called the “task-negative” network, because it is anticorrelated with the so-called task-positive network, a brain network that is highly engaged when our attention is on goal-oriented activity. The anticorrelation means that when one system is highly active, the other is not, and vice versa. Thus we can, to put it in terms perhaps a little too much like pop psychology, literally “lose ourselves” in a task or activity: the default mode or task-negative network is largely responsible—as far as we understand the brain at this time—for introspection and maintaining our sense of self, while the task-positive network, activated during goal-oriented activity, intrinsically suppresses this introspective network.
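To make “anticorrelated” concrete, here is a toy numpy sketch (synthetic signals of my own invention, not real imaging data) showing what a correlation near −1 between two networks’ activity would look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic "network activity" time series in antiphase:
# when one is up, the other is down.
t = np.linspace(0, 10, 500)
task_negative = np.sin(2 * np.pi * 0.2 * t)
task_positive = -np.sin(2 * np.pi * 0.2 * t) + 0.1 * rng.standard_normal(500)

# A Pearson correlation near -1 is what "anticorrelated" means:
r = np.corrcoef(task_negative, task_positive)[0, 1]
print(f"correlation: {r:.2f}")  # close to -1.0
```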
Thus there are similarities between the psychedelic state and flow states, when we are so engaged in an activity that everything else, including our sense of self, seems to fade away into the background. That, of course, doesn’t mean that flow states and being on LSD are exactly alike—there are many differences too obvious to point out. However, it does indicate that we do in fact enter altered states of consciousness all the time: when we’re deeply engaged in an activity, when we’re asleep or half asleep, and so on. Not just when we’re taking mind-altering drugs or engaging in ritualistic religious rites.
The typical Western account of why ayahuasca is consumed usually focuses on ‘getting in contact with the spirit world’, but this fails to capture either the cultural worldviews in which ayahuasca consumption is situated or the motivations behind the ceremonies. The first thing to note is that Amazonian people can differ greatly in how they understand reality in relation to themselves. For example, the Cashinahua, Siona, and Shuar peoples all use ayahuasca as a tool for revelation but differ markedly in how they understand the experiences it produces. The Cashinahua understand ayahuasca as causing hallucinations that provide guidance (Kensinger, 1973), the Siona believe that it allows access to an alternative reality (Langdon, 1979), while the Shuar take all normal human experience to be a hallucination and take ayahuasca as a way of accessing true reality (Obiols-Llandrich, 2009).
Perhaps, then, it’s fitting to end with a quote from one of the most famous writers on psychedelics, Dr. Gonzo himself, Hunter S. Thompson. Thompson regarded himself and his reckless drug use as embodying something of a national archetype:
I am the prototype, the perfect American. Half out of control, violent, drunk, high on drugs, carrying a .44 Magnum. Rather than being strange, I may be the embodiment of the national character…all the twisted notions that have made this country the beast it is.
Mirror neurons! These are neurons with a curious property: they fire both when you perform an action and when you observe someone else performing the same action. Much speculation surrounds the functional role of mirror neurons, in particular how they might factor into developing empathy, and whether defects in the mirror neuron system could contribute to autism spectrum disorders, which are characterized by poor cognitive empathy.
In this instance, we’re seeing a primitive kind of “motor empathy,” which might underlie cognitive empathy, our ability to understand others’ thoughts, feelings, motivations and so on from their outward behavior. Brodmann’s area 9, a part of the mirror neuron system in the brain, lit up when test subjects engaged in contagious yawning. This area of the brain has also been implicated in mentalizing, i.e., precisely in understanding other people’s mental states. Interestingly, in people with Major Depressive Disorder, researchers have found neurons in this area to be smaller, and glia—support cells roughly as numerous as neurons, and increasingly understood to play more than just a passive role in thought—to be fewer and further between.
It remains to be seen exactly what role mirror neurons play in human empathy, but they’re certainly interesting. It’s fascinating not only that we can automatically do something because we saw someone else do it, but that this automatic act is caused by parts of our perfectly healthy brain being unable to distinguish between ourselves and our fellow human beings.
Is it true we don't use all of our brain??? If so, why can't we. I mean, we have a brain, why not use it all to its superlative capability?
Why not indeed. The idea that we only use a small portion of the brain, usually quantified by a very specific number, is completely false. I don’t even have a guess as to where it originated, but it has since spread and infected the public consciousness. We do, in fact, use all of our brain.
Of course, this implies that we can’t just “switch on” the rest of the dormant brain and magically become smarter and more handsome, like Bradley Cooper’s character in Limitless. If it sounds too good to be true, it probably is.
However, that doesn’t mean that the way our brain operates is at all times completely optimal for our goals. Increasing or decreasing activity in certain parts of the brain, or certain neurotransmitter pathways, could plausibly make at least some of us happier or more productive. Which, of course, is nothing new, since we have been using psychoactive drugs for such purposes since the dawn of medicine. As we learn more about the brain, we will come closer to the level of understanding required to really mess with it in ways that can, possibly, make us smarter or happier without risking dangerous side effects. But we aren’t really there yet. Most current drugs, and non-drug methods of altering the brain, come with a long sheet of possible adverse reactions.
Obviously it would be easier if there really were large swaths of the brain going unused all the time, just hitchhiking on the evolutionary trail—a sort of parasitic neural network gobbling up nutrients and energy (the brain uses more energy relative to its size than any other organ)—that we could activate to become superhuman. But that really isn’t the case.
And if you think about it, that really makes no sense at all, on two levels. First, why would we have a huge organ that consumes huge amounts of precious (at least in prehistoric times) energy if we only used a small portion of it? If we could make do with the brain of a baboon, we would never have evolved, or retained, such a big brain in the first place. And secondly: consider the extremely implausible-even-for-a-hypothetical scenario that we were all actually carrying around a huge brain but only using a small portion of it. That would constitute normal experience. What would happen if we suddenly activated the rest? In the movies, the obvious answer is that we’d be superhuman. But maybe we’d actually become emotional wrecks, or maybe we’d become intellectually impaired because the mind could not integrate all the new activity into a coherent picture.
Luckily for us, no such dilemma faces us. The 10% or whatever number is making the rounds is completely fabricated.
However, while it’s not the case that ordinary healthy people go around not using a large chunk of their brain, it is possible to survive, and even thrive with minimal loss of cognitive function, with only half a brain. A procedure known as hemispherectomy involves removing or disconnecting one hemisphere of the brain. This surgery is only performed in extreme cases of epilepsy where the source of seizures has been localized to one hemisphere, given the obvious risks of removing or cutting off half of someone’s brain. Remarkably, the brain, especially if the surgery is performed at a young age, is able to adapt, with the remaining hemisphere taking over basically all of the functions of the one removed.
In March, a study reported an interesting finding: inside a diamond brought up from the depths of the Earth by a volcano in Brazil, researchers found a small piece of the mineral ringwoodite, with about one percent of its mass accounted for by water bound in solid form inside the crystalline structure. Now, a study bringing together evidence from an array of seismic sensors across the United States and laboratory work simulating the conditions of the transition zone between the Earth’s upper and lower mantle, at roughly 410 to 660 kilometers’ depth, suggests that this was no anomaly. The lab work suggests that, under the extreme pressures of the transition zone, ringwoodite can soak up more than one percent of its mass in water. When some of this ringwoodite is pushed down further into the lower mantle, it gets crushed into a different kind of mineral that can’t hold water. As a result, the rock “sweats” water, which is trapped in pockets deep beneath the surface.
The seismic observations found changes in wave velocity consistent with such subterranean water. If 1% of the rock in the transition zone is water, that would be equivalent to three times the mass of water in all the oceans on the surface.
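That “three oceans” figure is easy to sanity-check. Here is a rough back-of-envelope sketch in Python; the shell depths, rock density and ocean mass are approximate values I am supplying myself, not numbers from the study:

```python
import math

# Back-of-envelope: how much water could a 1%-hydrated transition zone hold?
R_EARTH = 6371e3            # m, mean radius of the Earth
TOP, BOTTOM = 410e3, 660e3  # m, approximate depth range of the transition zone
RHO_ROCK = 3500             # kg/m^3, rough density of mantle rock
OCEAN_MASS = 1.4e21         # kg, approximate mass of all surface oceans

def shell_volume(r_outer, r_inner):
    """Volume of a spherical shell between two radii."""
    return 4 / 3 * math.pi * (r_outer**3 - r_inner**3)

rock_mass = shell_volume(R_EARTH - TOP, R_EARTH - BOTTOM) * RHO_ROCK
water_mass = 0.01 * rock_mass  # 1% water by mass

print(f"water in the transition zone: {water_mass:.1e} kg")
print(f"roughly {water_mass / OCEAN_MASS:.1f} ocean masses")  # on the order of 2-3
```

The result lands in the same ballpark as the reported figure, which is all a calculation this crude can promise.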
Typically, planets much larger than Earth would be gas giants. That’s what we thought, anyway. But now astronomers have discovered an exoplanet seventeen times heavier than Earth, made of rock and other solids, some 560 light-years away. Not only is the planet exceptionally large for its composition, it’s also surprisingly old: its parent solar system is 11 billion years old. To make the heavier elements needed to build a rocky planet, you need stellar nucleosynthesis—stars fusing atomic nuclei into successively heavier elements until they explode, dispersing the mass, which can then form planets. There weren’t a whole lot of heavy elements present in the universe less than three billion years after the Big Bang, but apparently there was enough to create Kepler-10c. Fascinating.
Think of the implications for life elsewhere in the universe. Although we have yet to confirm that such life exists, the conditions conducive to it could have appeared much earlier than one would have thought.
Carbon dioxide isn’t inherently hot. Like any other gas, its temperature depends on how much you heat it; CO2 turns to gas far below freezing (at atmospheric pressure it doesn’t even pass through a liquid phase, but sublimates directly from dry ice at about −78 °C). But carbon dioxide and other greenhouse gases in the atmosphere absorb heat that would otherwise be radiated from Earth into space, thus increasing the average temperature on our planet’s surface. In absolute terms, global warming doesn’t amount to much warming at all—if you saw it on the weather forecast, you might shrug it off—but an increase in average temperature of only a few degrees can have dramatic and devastating consequences.
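You can see how much those greenhouse gases matter with a standard textbook estimate (my own back-of-envelope, not from any particular source): balance incoming sunlight against outgoing thermal radiation, and you get Earth’s temperature without a greenhouse effect.

```python
# Equilibrium temperature of an Earth with no greenhouse effect:
# absorbed sunlight S(1 - A)/4 must equal emitted radiation sigma * T^4.
SOLAR_CONSTANT = 1361  # W/m^2, sunlight arriving at Earth's distance
ALBEDO = 0.30          # fraction of sunlight reflected straight back to space
SIGMA = 5.67e-8        # W/(m^2 K^4), Stefan-Boltzmann constant

t_bare = (SOLAR_CONSTANT * (1 - ALBEDO) / (4 * SIGMA)) ** 0.25
print(f"no-greenhouse temperature: {t_bare:.0f} K")  # ~255 K, about -18 C

# The observed mean surface temperature is ~288 K (about +15 C);
# the ~33-degree difference is the greenhouse effect at work.
```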
Ah! Sometimes I need to be reminded why I love science in the first place. The answer is simple curiosity, and the extraordinary sensation of satisfying it. A child-like wonder at the world is a great thing. It can lead in two directions: either to the mystic, who so clings to that wonderful feeling that any attempt to dissolve it by explanation is seen as a threat; or to the scientist, who enjoys the wonder for what it is, but who sees it rather as a motivation to explore, invent, discover, and seek the equally extraordinary sensation of satisfying curiosity. That is the phenomenology of science in a nutshell—the study of what it feels like to do science, or learn science—or at least my idealized version of it.
Scott Aaronson is the kind of rigorous modern scientist who hasn’t lost touch with his child-like curiosity and wonder at the world. He asks an innocent question—who can name the bigger number in fifteen short seconds—and goes on to explore how this question connects to a series of incredible discoveries in the history of mathematics and science. And he’s funny, too. Read it. I’m surprised I haven’t linked this essay before.
Someone asked me to explain the Higgs boson. This will be the last question I answer for a while. I’ll let the ask function stay open, but I’ll be collecting the questions and answering some of them at a later date. To clarify my earlier post, I do not consider any of the questions I answered to be dumb. The comment about dumb questions was intended to discourage people from asking me to help them with their schoolwork. I have no interest in that. But if you have a question about science inspired by genuine interest and curiosity, that is the sort of question I am interested in answering on this blog. This blog isn’t just about Q and A, though: more posts about science inspired by my own curiosity are forthcoming.
So, the elusive and mysterious Higgs boson. On this subject I am hopelessly out of my depth, as acquiring a good understanding of it requires a background in mathematical physics which neither I nor most of my readers possess. I will try to defer to better informed authorities. Here is John Baez, a mathematical physicist I really admire for his productivity both in producing cutting-edge theoretical science and in cranking out one educational piece after another. This was written before CERN confirmed that the particle detected in 2012 was indeed a Higgs boson:
The Standard Model predicts the existence of a spin-0 particle called the Higgs boson, which comes in two isospin states, one with charge +1 and one neutral. (It also predicts that this particle has an antiparticle.) According to the Standard Model, the interaction of the Higgs boson with the electroweak force is responsible for a “spontaneous symmetry breaking” process that makes this force act like two very different forces: the electromagnetic force and the weak force. Moreover, it is primarily the interaction of the Higgs boson with the other particles in the Standard Model that endows them with their masses! The Higgs boson is very mysterious, because in addition to doing all these important things, it stands alone, very different from all the other particles. For example, it is the only spin-0 particle in the Standard Model. To add to the mystery, it is the only particle in the Standard Model that has not yet been directly detected! [ed. note: now it has]
On the 4th of July, 2012, two experimental teams looking for the Higgs boson at the Large Hadron Collider (LHC) announced the discovery of a previously unknown boson with a mass of roughly 125-126 GeV/c². Using the combined analysis of two interaction types, these experiments reached a statistical significance of 5 sigma, meaning that if no such boson existed, the chance of seeing what they saw would be less than 1 in a million.
However, it has not yet been confirmed that this boson behaves as the Standard Model predicts of the Higgs [ed. note: at this point, many signs point to the particle behaving roughly as predicted]. Some particle physicists hope that the Higgs boson, when seen, will work a bit differently than the Standard Model predicts. For example, some variants of the Standard Model predict more than one type of Higgs boson. LHC may also discover other new phenomena when it starts colliding particles at energies higher than ever before explored. For example, it could find evidence for supersymmetry, providing indirect support for superstring theory.
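A side note on that “5 sigma” figure: it refers to the probability of background fluctuations alone producing a signal at least five standard deviations above expectation. Assuming the usual one-sided convention particle physicists use, it’s a one-liner to compute:

```python
from scipy.stats import norm

# One-sided probability of a fluctuation >= 5 standard deviations
# above the mean, if only background processes exist.
p_value = norm.sf(5)  # survival function, i.e. 1 - CDF

print(f"p-value: {p_value:.1e}")         # ~2.9e-07
print(f"about 1 in {1 / p_value:,.0f}")  # ~1 in 3.5 million
```

So “less than 1 in a million” in the quote above is, if anything, an understatement.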
So what is up with this boson, anyway? Bosons are one of the two kinds of elementary particles, the other being fermions. You may have heard of the Pauli exclusion principle, which says that no two identical fermions (such as neutrons) can occupy the exact same quantum state. Fermions are particles that obey this principle, along with the statistical laws worked out by Paul Dirac and Enrico Fermi, after whom they are named. Bosons do not obey these laws. The Higgs particle is a boson.
As mentioned above, the Higgs is important for several reasons. For one, it was the missing piece in the Standard Model of physics. This is the model that constitutes what is colloquially known simply as quantum physics. Using a variety of laws operating on a variety of particles, it explains three of the fundamental forces of nature: the electromagnetic force, and the weak and strong nuclear forces. Einstein’s general relativity describes the last force, gravitation, and famously we have yet to find a good theory of quantum gravity—a theory that could explain all four forces in a common framework. Finding the Higgs, predicted more than forty years ago, goes a long way towards confirming the Standard Model.
But its most publicized property is its ability to give (certain) particles mass; without the so-called Higgs mechanism, we don’t know how to explain the mass of some particles. As it turns out, if you look closely enough—at the subatomic level, like the researchers at the Large Hadron Collider—you find that the electromagnetic and weak nuclear forces can be unified into one force. But why then do they behave as if they were two? Why does this one force act one way in certain circumstances and another way in others? This is where the Higgs boson comes in. It “spontaneously breaks symmetry” and cleaves the electroweak force into the weak and electromagnetic forces. Interacting with the Higgs field—a field permeating space that, so the theory goes, can be described by four numbers at every point—also endows the W and Z bosons, carriers of the weak nuclear force, with a mass they would otherwise not have. Thus we have in principle four particles involved with the electroweak force, produced by the Higgs symmetry breaking: the positively charged W+ boson, its antiparticle the negative W- boson, the electroneutral Z boson, and finally the familiar, massless photon.
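For readers who want a glimpse of the actual math, the standard textbook form of the story is compact (this is the generic treatment found in field theory texts, not anything specific to this post):

```latex
% The "Mexican hat" potential of the Higgs field \phi:
V(\phi) = \mu^2\,\phi^\dagger\phi + \lambda\,(\phi^\dagger\phi)^2,
\qquad \mu^2 < 0,\ \lambda > 0.

% Because \mu^2 < 0, the minimum is not at \phi = 0 but at a nonzero
% "vacuum expectation value"
v = \sqrt{-\mu^2/\lambda} \approx 246\ \mathrm{GeV},

% and it is this nonzero value filling all of space that breaks the
% electroweak symmetry and gives, e.g., the W boson its mass:
m_W = \tfrac{1}{2}\, g\, v .
```

The four numbers mentioned above are the components of this field, a complex doublet; three of them are “eaten” to give the W+, W− and Z their masses, and the fourth shows up as the Higgs boson itself.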
This symmetry breaking, the cleaving of one force into two apparent ones, can only be undone and observed at very high energies, hence the need to build the expensive particle accelerator at CERN.
Various analogies have been proposed to explain how particles gain mass by interacting with the Higgs field, but none of them really hit the mark; all of them, while partially accurate, are prone to misinterpretation. In reality, what goes on in the quantum realm has no easily explainable analog in the macroscopic world we live in. I won’t even pretend to make an analogy. To fully understand it, you need a physics background.
This has been a layman’s attempt at explaining the Higgs boson. It is undoubtedly not wholly accurate, precisely because I lack the necessary scientific background. Those of you who are physicists will likely object to some of what I’ve said, and that is fine. I am perfectly aware that I am not entirely qualified to speak about this, but for the sake of my own understanding and that of my readers, I’ve given it a shot. If someone reblogs this with a better, more accurate explanation that is nevertheless accessible to the layman, I’ll share it. Maybe a physicist among you would like to do a guest post? I haven’t forgotten the early days when this was a group blog.
Anyway, the takeaway, which is one thing I am fairly certain is entirely accurate, is this: the discovery of the Higgs boson is very important in that it presents strong confirmation of what we already suspected based on previous evidence, namely that the Standard Model is, while not perfect, a very good description of reality at the subatomic level, as good as any we have today. The Higgs mechanism also solves important problems with the Standard Model involving how certain particles gain mass, particles which without this mechanism would be massless—and we know from experiments that they do have mass.
Although I have never publicized this possibility, a number of you have taken the opportunity to send questions to this blog. Instead of responding to each individually, I will answer some in a bunch. It’s always encouraging when the silent masses make the effort to tell you, either directly or indirectly, that they appreciate something you have put a lot of effort into over the years. But before I answer any questions, let me just make one thing crystal clear: there is such a thing as a dumb question. Dumb questions are not what you think. You need not be dumb in order to ask an ignorant question. Ignorance is neither a sin nor a sign of low intelligence: it simply means that there is something you don’t know at this moment in time. We are all ignorant in one aspect or another. I myself have many embarrassing holes in my knowledge, and even top-flight scientists make their ignorance known on a daily basis when they entertain the misguided idea that expertise in their own field grants them expertise in another.
But some questions really are dumb. Particularly questions that amount to poorly veiled attempts to get someone else to do your homework for you. Now that’s dumb. The educational system has two purposes: it is there to teach you things you need to know—and as in any imperfect system, inevitably it will also attempt to force you to learn some things you don’t need to know, and it will occasionally even teach you things that are wrong—but it is also there to teach you how to learn things for yourself, without a curriculum, a required reading list, a teacher, and a confirmation that “this will be on the test.” Trying to cajole others to do your homework for you is self-sabotaging. It is a form of learned helplessness. Those are dumb questions.
With that out of the way, on to the questions.
Did giraffes come from dinosaurs?
No, giraffes are mammals. This is what some might call a dumb question, but I don’t consider it to be so dumb. Ignorant perhaps, but not dumb. I don’t know why you would come to think that giraffes evolved from dinosaurs, but I guess the long neck has something to do with it. Which leads us to an interesting facet of evolution: the fact that some traits have evolved independently numerous times. When a scientist sees the same trait in different living creatures, it is natural to ask whether they have a common descent. But sometimes, this is not the case. Similar evolutionary forces have led some traits to evolve independently in different lineages and at different times. Reaching high up in the treetops to feed was useful to long-necked dinosaurs, and it was useful to giraffes. The most prominent and interesting example, however, is the eye. Being able to respond to light is useful to any kind of life that lives, feeds and procreates in an environment filled with light, and the eye is believed to have evolved independently maybe fifty or even a hundred times.
Could it be that the universe will keep expanding so long as light keeps traveling?
Yes and no. Light is not the driving force of the universe’s expansion. In the extremely short period just following the Big Bang, the universe expanded at an incredible rate. This expansion happened in a time frame so short that even if we played it a trillion times slowed down, it would still be incomprehensible to humans except as numbers on a blackboard. The cause of this inflation is believed to be a field, usually called the inflaton field, analogous to the gravitational and electromagnetic fields. The exact nature of this period and of the inflaton is still unclear. After this initial period of cosmic inflation, the universe continued to expand at a much slower rate. But recent observations indicate that the expansion is accelerating, and the hypothetical force responsible for this acceleration is called dark energy. We have a lot to learn yet about the universe.
It’s worth noting that the universe is, as far as we know, everything that exists; there is nothing “on the outside” of the universe. The universe is not expanding like inflating a balloon; the balloon is only a small part of the universe expanding into a larger part of it. The universe itself is experiencing metric expansion: the fabric of space itself is stretching. Only on very large distance scales is this important: distant galaxies are getting further apart, not because they are moving in opposite directions, but because the space between them is stretching. This does not apply inside our own galaxy or even nearby galaxies, because our galaxies are dense (relative to the rest of the universe) collections of matter held together by their own gravitational fields. You and I are in no danger of expanding apart from ourselves anytime soon.
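To put a number on that stretching: Hubble’s law says recession velocity grows linearly with distance. Here’s a minimal sketch, using a roughly representative value for the Hubble constant that I’m choosing myself (measurements hover around 67-74 km/s per megaparsec):

```python
# Hubble's law: v = H0 * d. Galaxies recede faster the farther away
# they are, because the space in between is stretching.
H0 = 70.0  # km/s per megaparsec (approximate; pick your favorite measurement)

def recession_velocity(distance_mpc):
    return H0 * distance_mpc  # km/s

for d in (1, 100, 4300):  # Mpc; ~4300 Mpc is roughly where v reaches light speed
    print(f"{d:>5} Mpc -> {recession_velocity(d):>9,.0f} km/s")
```

Note that beyond a few thousand megaparsecs the formula gives speeds faster than light. That’s fine: nothing is moving through space that fast; the space itself is stretching, which relativity does not forbid.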
If a USB drive weighs 20 grams when empty, will it gain weight when you add files to it? And if so, do the files constitute weight?
This is the kind of question that sounds a little like a homework question, but I’ll give it the benefit of the doubt. The short answer is no. Here we need to get into the difference between representation and actuality. Information is weightless, because information is immaterial: it is imaginary. But to bring information out of the realm of imagination and into reality, we need to represent it by something physical. This distinction isn’t merely academic. If you have a box and you put a book full of information into it, it stands to reason that the box will get heavier. The book has mass. But since information is simply an abstract pattern, it need not be represented so crudely. We can just as well represent it by rearranging preexisting matter, without changing the object’s mass. Picture a Rubik’s cube. You could store information simply by twisting it into a particular configuration, one pattern distinct from all the others. The cube wouldn’t gain any weight.
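In fact, we can put a number on how much information a Rubik’s cube can “store” this way. A quick sketch (the state count is the well-known figure for a standard 3x3x3 cube):

```python
import math

# Number of reachable configurations of a standard 3x3x3 Rubik's cube.
STATES = 43_252_003_274_489_856_000

bits = math.log2(STATES)  # bits needed to single out one configuration
print(f"{bits:.1f} bits, about {bits / 8:.0f} bytes")  # ~65.2 bits, ~8 bytes

# Those ~8 bytes live entirely in the arrangement of the same pieces:
# scramble it however you like, the cube's mass never changes.
```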
In flash memory, the information is stored in the form of electrons. Now, electrons, unlike photons, aren’t massless, but they have less than a thousandth of the mass of the protons and neutrons, the building blocks of atomic nuclei, that make up most of the mass of a USB stick. So even if you had to add some electrons to the stick to add information to it, the weight increase would be negligible. But as it happens, that’s not how flash drives work. To read out USB memory, you don’t simply count how many electrons are in there. Instead, you measure the conductivity of channels within the stick, and whether or not they conduct a current you send through determines the readout. You might think that the default state of a new drive is all zeroes, 00000…, but it’s actually the opposite: the default state is on, or 1. When you want to write something, you apply a voltage that, through magical quantum trickery—any sufficiently advanced technology is indistinguishable from magic, and to us non-hardware-engineers it might as well be—moves electrons back and forth within the memory cells, altering the conductivity and therefore the readout.
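As a toy model of that convention (my own simplification; a real flash controller is far more involved), think of an erased byte as all ones, with writes only able to pull bits down to zero:

```python
# Toy model of the flash convention: erasing sets every bit to 1,
# and programming can only clear bits to 0. To set a bit back to 1,
# a real drive must erase a whole block at once.
ERASED = 0b11111111  # a freshly erased byte: all ones

def program(cell, data):
    # Programming pulls bits down, never up: AND the cell with the data.
    return cell & data

cell = program(ERASED, 0b10110010)
print(f"{cell:08b}")  # 10110010 -- charge shuffled within the chip, none added
```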
So just like rearranging a Rubik’s cube to encode a different pattern doesn’t change its mass, adding files to a USB drive doesn’t increase its mass.
Hey my name is Jason! I have a few questions. So I study musicology but my other love is astronomy. I have a pretty good telescope but I haven’t made any progress in the field of amateur astronomy other than watching the stars, planets and the moon with my telescope. I want to be able to do more, maybe even contribute in the community of astronomy. Where should I begin?
You probably possess a telescope far superior to Galileo’s. With the proper training, you could discover things with it he never saw. The problem is that all those things have already been discovered. Long gone are the days of the polymath: the man who makes significant advances in chemistry, linguistics, painting, astronomy, botany and mathematics all in the course of one life (it was usually a man, back then, which is not to say it couldn’t be a woman, today). We are so far advanced and specialized that if you seriously want to advance any scientific field, you need to dedicate yourself to it.
There is a good story somewhere in Surely You’re Joking, Mr. Feynman! about how Feynman spent time working in a biology lab, with cell cultures and such, if memory serves. But he was Richard Feynman, Nobel laureate in physics and one of the greatest scientists of the 20th century, and even he couldn’t just jump into a semi-related field willy-nilly. I’m afraid that if you actually want to advance astronomy scientifically, you need to lay off the musicology and dedicate yourself to years of study in physics.
But there are other ways to advance the cause of science, if not the state of scientific knowledge itself. I have no scientific training, but I like to think I play a tiny part in advancing science—although I have no pretensions of bringing any new knowledge into the fray—by running this blog. Educating, inspiring, spreading the Good Word of The Lord Science is one way you can contribute.
One word of advice though: never try to give the impression that you’re something you’re not. I’m not a scientist, and I don’t pretend to be one, and all my posts must be read with the caveat that I’m just an interested amateur who must defer to better qualified sources for more authoritative and trustworthy information. This blog, just like Wikipedia—did I just compare myself to Wikipedia, oh my Darwin, I did, didn’t I—is only a starting point, not an end point for knowledge.
How do you run a successful science blog?
This is a question that allows me to pat myself on the back and speculate like some sort of guru. That is not a role I’m comfortable in, but I rarely pat myself on the back anyway, so I’ll indulge myself slightly. I know most of you don’t want to hear about how great I’m doing here. Some of you may laugh at my puny follower count, others might be envious, but apropos of that: we just passed 200,000 followers a few days ago. I guess that is pretty successful. I certainly never expected this when I started this blog six years ago.
Snagging a good URL has probably helped a good deal in establishing a name and a following. But I won’t be so insecure as to give all credit to external factors. I have to believe the quality of the writing and the various other things I share has played a part in building a reputation and following. This is a labor of love, and I hope that shines through. I try to spend time preparing before I post; I don’t just go to Wikipedia and call it a day. Nor do I follow any set posting schedule: I think having a deadline, a set number of posts you feel you need in order to maintain your readers’ interest, is simply going to dilute the writing. Look at me: I can go months between posts, interspersed with periods where I’ll post almost every day, and people still follow along.
Patience and diligence pay dividends. As does humility: as I repeat to the point of nausea, I am no scientist, only an interested amateur and a decent writer after years of practice. The person who asked this question, whose name has been withheld, styled herself “a neuroscientist” who had been studying the subject since she was “like, thirteen?” She was fifteen and presumably in high school. Girl, you are not Richard Feynman. You are not a prodigy, you have not been studying neuroscience—reading Wikipedia and pop culture books, by the looks of it—and you are no scientist. Adjust your self-image a bit and you may go on to achieve great things. I wish you all the best, both in blogging and in science. If I am harsh, it is only because I’m speaking the truth, and for that reason I’ve withheld your name. I am not in the business of publicly shaming teenagers. Teens be teens. I was one not so long ago. I know what it’s like. You alternate between knowing everything, having the world at your feet, and soul-crushing despair. Both are normal. Both are expected. Neither, perhaps, is fertile soil for a good science blog.
So I have to make a presantation for chemistry class. I will choose the subject but I havent choose it yet. Do you know any subject that easy and easy to find gifs abou it. Could you recommend me?
If you limit yourself to things that are easy to learn enough about to repeat it half-convincingly to a teacher who will subsequently grant you a B+ out of a sense of obligation while sighing inwardly and mumbling under their breath about kids today who can’t learn or present anything without those fancy animated picture things, you will never learn anything truly worth learning.
I recommend trying to learn something about chirality, a very important concept in organic chemistry, and biochemistry in particular. In pharmacology, right- and left-handed molecules can have radically different effects. An interesting example is dextromethorphan, which is found in cough syrup sold over the counter in many countries, and which in larger doses has dissociative effects sometimes—by which I mean all the time—used for recreation, and levomethorphan, which is a scheduled narcotic owing to its potent opioid activity. My favorite part about not being a high school teacher who needs to give out B pluses while sighing inwardly and so on is that I get to talk about how to get high on cough syrup, and about the irony of scheduling one stereoisomer for its narcotic effects while selling the other, an equally powerful mind-altering drug, over the counter, because in lower-than-recreational doses it happens to reduce coughing.
Chiral molecules are named for the way they rotate light. Prepare to have your mind blown by the mere fact that light can be rotated. In fact, the polarization of light is a property to which we are as blind as someone who sees only in black and white would be to color—note that I didn’t say color-blind, as most color blindness leads to an inability to distinguish only certain colors, most commonly red and green, not to a complete lack of color vision—but to the magnificent mantis shrimp, light polarized in different directions looks as different as light of different colors looks to us.
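The naming convention is quantitative, too. In polarimetry there is a standard formula for a compound’s specific rotation (this is the textbook convention, not anything specific to this post):

```latex
% Specific rotation of a chiral compound in solution:
[\alpha]_\lambda^{T} = \frac{\alpha_{\mathrm{obs}}}{l \cdot c}
% where \alpha_{obs} is the observed rotation in degrees,
% l is the path length of the sample tube in decimetres,
% and c is the concentration in grams per millilitre.
```

A positive value means the compound rotates the plane of polarization clockwise (dextrorotatory, the d- or (+)-form); a negative value means counterclockwise (levorotatory, l- or (−)); hence names like dextromethorphan and levomethorphan.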
Chiral molecules are abundant in nature because life depends on carbon, which is tetravalent, meaning it has four valence electrons available for bonding. Most biological processes depend on molecules of a particular handedness—one apocalyptic science fiction scenario involves all the biological molecules of the correct handedness suddenly—by an impossible mechanism which is hand-waved away by the author for the sake of the story—being turned into their other-handed cousins, whereupon life as we know it on Earth couldn’t survive.
See what happens when you look further than what is easily available in GIF form? You just learned how to get high on cough syrup; that light can be polarized, and that polarized light can be rotated; that many important medicines and biological molecules come in right- and left-handed versions—frequently, but not always, on account of a “chiral center” around some carbon atom—and that this handedness is named after the direction in which the molecules rotate polarized light; and that the mantis shrimp can perceive this rotation, and in fact has vision superior to ours all around. Oh, and how to write a killer science fiction novel.
I read through previous posts about temperatures and pressure and got to thinking about something I’d never considered: why Celsius? And what’s up with Fahrenheit, anyway?
Anders Celsius, the man behind the name, was born on November 27, 1701 in Uppsala, Sweden. The son of an astronomer and grandson of two more, he unsurprisingly dedicated himself to the mathematical sciences. Before he created his famous temperature scale, he was involved in an equally interesting debate: the question of the Earth’s shape. At the time, there was wide agreement among scholars that our planet is basically round; there was equal agreement that it is not a perfect sphere. The question was about the nature and degree of deformation from that imaginary ideal. Newton had calculated that our planet is flattened at the poles; as a consequence, a degree of latitude would be longer near the poles than at the equator. On the other hand, measurements by the French astronomer extraordinaire Jean-Félix Picard indicated that the planet is more egg-shaped.
Anders Celsius helped resolve the dispute. The French Royal Academy of Sciences sent an expedition to Peru (in present-day Ecuador) to measure latitude near the equator. Celsius suggested that another expedition travel to the Torne Valley, on the border of Sweden and Finland, north of the Arctic Circle, to take corresponding measurements for comparison. Celsius imagined that this expedition, carried out in 1736-37, would be the final word in the dispute over the shape of the Earth. He couldn’t imagine, of course, that in 2014 we would all be carrying in our pockets navigation instruments anchored in satellite technology, accurate to a degree possible only in his fantasies. Nevertheless, he makes the following prescient remark in his “Letter to N. N.,” a pedagogical brochure written to explain the purpose of his expedition:
My Lord might be puzzled that Astronomy, which claims to know the length, shape and size of planets thousands of miles removed from us, still does not know the size and shape of the planet they walk upon daily. But that is not so strange; because one who observes our planet from e.g., the moon, can much easier observe her figure; and on the other hand, that we know the moon’s shape.
If only you knew about GPS, Anders! I’m sure he’d be smiling. The Torne Valley expedition was led by the French mathematician Pierre-Louis Moreau de Maupertuis of the French Academy of Sciences, and it was a success. Its measurements and those from Peru confirmed that Newton was right: we live on a spheroid flattened at the poles.
On to the thermometers. In the 18th century, dozens of different temperature scales were proposed and circulated in scientific circles. Newton had proposed a scale based on the fixed points of water’s melting and boiling points. Celsius used a Delisle thermometer to make measurements. The deciding factor, as it often is, was scientific rigor. Celsius recognized that in order to create an international temperature scale, equal amounts of heat must read the same number in Paris as in his hometown of Uppsala. He set out, as scientists do, to separate variables. The ideal is to keep conditions exactly the same in all matters that could conceivably affect the result of an experiment except one: if you vary many variables at once, you can’t separate them and find out which one gave rise to the effect. So to measure temperature, you want every condition except heat to be equal. Celsius made rigorous measurements of how the boiling point of water varied with pressure. Thus the boiling point of water at a specified pressure would have to serve as the fixed point.
Celsius fixed his scale at 100 degrees for the melting point of water and 0 for the boiling point.
Wait, what? Yes. Some early scales, such as Delisle’s, were reversed: the colder it got, the higher the reading in degrees. Perhaps influenced by his Delisle thermometer, Anders Celsius also employed this reverse scale. His scale was quickly picked up and spread, but people such as Daniel Ekström, maker of Celsius’s thermometers, and Carl Linnaeus, the botanist, inverted it and gave us the familiar Celsius scale. Measurements on this scale were long referred to as “degrees centigrade,” but the International Committee for Weights and Measures officially declared in 1948 that the term to be used was “degrees Celsius”—in recognition of the possible confusion with the centigrade as a unit of angle, one hundredth of a grade of arc.
As for Fahrenheit, the scale was defined by two fixed points: zero, the lowest temperature to which Daniel Gabriel Fahrenheit could cool brine, and ninety-six, his measurement of average human body temperature. Neither of these was rigorous enough, and today the scale is officially defined with reference to other temperature scales. That, perhaps, says enough, although cultural explanations would have to fill in the gap as to why a scientifically inferior—because less rigorous—scale has survived in North America, and to some degree Britain, while most of the world has long since switched to Celsius.
Come the 19th century, the need for an absolute temperature scale became apparent. Lord Kelvin introduced the unit later named after him, the kelvin, in 1848 (long written °K, though the degree sign has since been dropped: just K). It is based on the Celsius scale insofar as the temperature intervals are the same, but its zero point is defined as absolute zero, the point at which all particles are in their lowest energy state and no further cooling is physically possible: −273.15 °C, or 0 K. Shortly thereafter, the Rankine scale was proposed, which takes the same approach, defining its zero point at absolute zero, but takes its intervals from the Fahrenheit scale, that is, 9/5ths or 1.8 Rankine degrees for each degree Celsius.
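Keeping these scales straight is mostly a matter of simple linear conversions. Here is a small sketch covering every scale mentioned in this post (the formulas are the standard ones):

```python
# Conversions from degrees Celsius to the other scales discussed here.
def to_kelvin(c):     return c + 273.15
def to_fahrenheit(c): return c * 9 / 5 + 32
def to_rankine(c):    return (c + 273.15) * 9 / 5  # Fahrenheit-sized steps from absolute zero
def to_delisle(c):    return (100 - c) * 3 / 2     # note: runs backwards, boiling water = 0

for label, c in [("absolute zero", -273.15), ("water freezes", 0.0), ("water boils", 100.0)]:
    print(f"{label:>13}: {c:8.2f} C = {to_kelvin(c):7.2f} K = "
          f"{to_fahrenheit(c):8.2f} F = {to_rankine(c):7.2f} Ra = {to_delisle(c):7.2f} De")
```

Run it and you can see Delisle’s reversal at a glance: boiling water sits at 0 degrees and freezing water at 150.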
One could again turn to cultural explanations for why the Kelvin scale, which could be said to be scientifically superior to the Celsius scale, has not seen use outside scientific circles. I suppose in daily life the boiling and freezing points of water are more relevant to our interests than absolute zero, a physical curiosity that we never encounter naturally—even in deep space, the temperature is slightly above absolute zero. One thing we can say on the Kelvin scale, but not in Celsius or Fahrenheit, is that twice the temperature actually means twice the thermal energy: 20 K really is twice as hot as 10 K, which is not the case with 10 and 20 °C.
A little funny aside that I found while researching this article: remember Linnaeus, the botanist? You might know him as the father of modern taxonomy, the classification of living things. A century before Darwin introduced his theory of evolution, which would provide the right framework for such taxonomies, Linnaeus gave us essentially our modern way of classifying plants and animals. One part of that method is the so-called type specimen: the particular sample of an animal or plant that has been studied and fixes a name to that particular species. And the type specimen of Homo sapiens? Carl Linnaeus himself.
A wise man named David Hume once wrote that, in all the moral systems he had observed, at some point the author proceeds imperceptibly from “is” statements, statements about how things are, to “ought” statements, normative statements about how things ought to be. But they never seem to supply the necessary logical step in between: how one gets from statements about the natural world as it is to statements about how it ought to be. In other words, morality. What has become known as Hume’s Law states that one cannot logically derive an “ought” statement from a series of “is” statements.
This is a philosophical question, and as philosophers are wont to do, they still argue about it, almost three hundred years later. Some claim that Hume’s Law is false; many of them subscribe to a theory called moral realism. Others claim that it holds; they tend to be moral anti-realists. This whole philosophical field is called metaethics, and it concerns questions such as what moral statements really mean, whether one can derive normative “oughts” from facts about the natural world, and related issues.
Science is in the business of describing the world as it is. As such, scientists are rarely interested in questions about how it ought to be. Or they might be interested, but they can only offer proposals, not logically deduced demands about how people should treat one another. That is philosophy, not science. Science explores and teaches us how the world works, not how humans should behave towards one another.
In my personal opinion, science can certainly explore moral questions, but it cannot conclusively answer them. We can run polls about what people think, but is it a given that what the majority thinks is true? In any other field, one would say no. When people thought the world was flat, or that the Earth, not the Sun, was the center of the universe—later, of course, we realized that the Sun isn’t the center of the universe either, which has no center, but merely of the solar system, but that’s a tangent—did that majority opinion make it true? No.
Game theorists and others try to model how one can optimally behave in various situations. But if taken as a moral theory, that could easily lead to egoism.
Some claim that what is natural is right, but they too skip the necessary logical step between “is” and “ought”. Rape happens in nature. Does that make it right? No. This sort of reasoning is sometimes called the naturalistic fallacy, or the appeal to nature.
This is a super complicated issue that has been debated since Socrates. If you are interested, you can read Plato’s dialogue Euthyphro. Or you can read about the is-ought problem on Wikipedia. The best source, however, is the Stanford Encyclopedia of Philosophy, which is a free encyclopedia peer-reviewed by philosophers. See here, here, here and here. But the Stanford Encyclopedia is rather dense and technical, and perhaps hard to read if you have no previous experience reading analytic philosophy.
Personally, I subscribe to a theory called moral quasi-realism, inspired by Hume and Ludwig Wittgenstein and developed by Simon Blackburn. Blackburn has also written some books aimed at introducing people unfamiliar with philosophy to the field. Quasi-realism allows you to make moral statements without betraying Hume’s Law, though admittedly they have less force than if they could be claimed to be grounded in science.
In general, I have to say this is a very complex question to answer. It’s hard to answer properly without getting too technical, and I think most of the readers of this blog would lose track, or patience, or get bored quite quickly if I really got into it. Not because they’re dumb, but because this is Tumblr: the material is unfamiliar and technical, and they might just want to look at pretty pictures or hear the latest in science explained in an understandable, but not dumbed-down, way. That is my goal with this blog: to bring science to the people in a way that neither betrays the science—by explaining it with half-baked metaphors or overhyping findings that are really just small developments in a field—nor stops being readable and enjoyable for as many people as possible.
Science is fantastic, people! It’s not just pretty pictures of galaxies or neurons or puppies transplanted with genes so they glow in the dark.
But to conclude: No, science can’t answer moral questions. Only explore them.
The Golden Rule, advocated by such luminaries as Jesus and Buddha, is still a good rule of thumb. It’s not scientific, it’s just a basic test to see if you’re being an asshole or not.
This is not scientific advice grounded in peer-reviewed journals, but it’s still damn important: be kind to one another, and as long as people are not hurting anyone else, tolerate them, whether they have the same skin color or the same politics or religion or musical tastes as you or not.
Ichthyosaurs were giant reptiles that lived in the seas at the same time as the dinosaurs ruled the land. They gave birth to live young. A fossil newly discovered in China, dating back almost a quarter of a billion years, shows the earliest known live birth in a reptile. The fossil literally captures the moment of birth: one embryo was still inside the mother, a newborn lay just outside her, and a third offspring was halfway out, in the process of exiting the pelvis. The headfirst posture of the second baby indicates to researchers that live birth in reptiles may have evolved on land, not in the water as previously thought.
Ichthyosaurs, although they lived in the same time period and could be mistaken for them owing to their shared reptilian heritage, were not dinosaurs. (And neither were the plesiosaurs, which, unlike the more fish-like ichthyosaurs, look exactly like you’d expect water-dwelling dinosaurs to look.) Live birth is one of the things that distinguishes them.
Although not a first per se, it still amazes me that, through fossilization, we can look back at a birth in progress that started 248 million years ago.
If a person dies quickly, would he/she feel the pain before death, or would they just die without feeling anything since it was so quick?
This is an interesting question. A good example of rapid death would be decapitation. For obvious reasons, one can’t do experiments on humans to figure out if they remain conscious for any period of time after having their head chopped off. Going back to the time of the French Revolution, the heyday of the guillotine, there are anecdotes about people who apparently remained conscious and responsive to stimuli for a few seconds after the head was severed. There are also anecdotes about people who promised to give a sign after death to indicate awareness, and failed to do so. These anecdotes are impossible to verify; decapitation, or any other human death by extreme and rapid trauma, has never been observed under scientifically controlled conditions. All we can say based on the anecdotes is that it’s possible some people remain conscious and feel pain for a short while after such a violent death. It’s also possible they don’t, and that seems the likelier outcome in the majority of cases.
Rats are frequently used as model animals in research, so scientists have taken some interest in the question: do rats feel pain during decapitation? Is decapitation a humane way to sacrifice animals in research? Monitoring brain activity in rats as they were killed, researchers found no brain activity normally associated with pain in rats that were awake while their head was cut off, which suggests the rats were not in pain. Other scientists have calculated that it would take no more than 2.7 seconds for the rat brain to go unconscious from lack of oxygen; given the nature of the trauma, more intense brain activity would be expected, which would use even more oxygen, so unconsciousness would result even more quickly. Taken together, these data suggest that rats, at least, do not feel pain after decapitation, and if they did, the pain would last no more than a couple of seconds.
We can’t directly extrapolate to humans, but we can speculate. It seems likely that humans, too, go unconscious more or less instantly, at least in the majority of cases. Given the anecdotes, it seems possible that some may retain some kind of awareness for an instant after trauma, but the evidence is weak, so we’ll have to call it an open question. It’s certain, however, that the brain cannot function without oxygen, and that oxygen would run out in a matter of seconds after the supply was permanently cut off, so at the most, one could hypothetically feel a few seconds of pain. But then again the rats’ brain activity didn’t suggest pain. So the best answer I can give is that most people will probably not have time to feel pain before they’re dead, and it remains an open question whether some rare exceptions may retain a few seconds of consciousness, but if so they wouldn’t necessarily be in pain for that time.
I believe that we survive death. I have read that the soul is made of matter with a higher frequensy.
I have read that Yahweh created the world in six days, and on the seventh he rested. I have read that according to exact calculations of genealogies, this occurred less than six thousand years ago.
One should not believe everything one reads. This is not a question, but if it were one, it would not be well formed. Science, as far as it has been concerned with the matter of consciousness, has not found any evidence that consciousness continues past death. That is all we can say: nothing points to it. If by “the soul” you mean something that sustains our personality or consciousness, then the answer to your non-question is: no, that is not true. I can guarantee you the phrase “the soul is made of matter with a higher frequency” does not occur in any peer-reviewed scientific paper. Anywhere. Its likelihood of being true ranks up there with “the tooth fairy is made of matter with a higher frequency.”
Some of the most interesting things that happened in science and on this blog in 2013. Previous years: 2012, 2011, 2010.
Story of the year
The story of the year has to be the discovery of the elusive Higgs boson. The new particle’s discovery was announced in July, 2012, but it wasn’t until March of this year that a full analysis confirmed its status as a Higgs boson. Either way, I didn’t pick a story of the year for 2012, so it can serve for both years, in lieu of any obvious competitors. (If you think there’s a more important science story for 2013, I’d love to hear it!)
You’ll notice that the headline above says the new particle is a Higgs boson, not the Higgs boson. This is because some models posit several Higgs bosons. The Large Hadron Collider shut down operations in February after a three-year run, but will restart in 2015 at higher energies, hopefully bringing us more useful data about the Higgs mechanism. In case you’ve lived under a rock the past five years, the reason the Higgs boson is important is because it is the particle tied to the mechanism that gives everything mass in the Standard Model of physics. Its existence has been theorized for decades, but it is only in the past two years that we have had experimental confirmation of its existence.
In chemistry: Martin Karplus, Michael Levitt and Arieh Warshel, for the development of multiscale models for complex chemical systems. In practice, this means computer models that draw on quantum mechanics in the most important parts of a reaction and simplify to classical physics, which is less accurate but also less computationally expensive, in the less important parts. [Illustration from the Nobel Institute: Newton and his apple fighting, then reconciling, with Schrödinger’s cat.] Note that this research was done in the 1970s; although the physics prize was highly topical, most Nobel prizes are awarded decades after a discovery. Indeed, the Higgs boson was initially theorized in 1964.
We’re living in the future, when scientists can create mouse brains with human brain cells and measure the impact on learning. Earlier this year, a paper was published in Cell describing experiments in which scientists implanted human glial progenitor cells into the brains of newborn mice. Glial cells, roughly as numerous in the brain as neurons, support and nourish neurons. Until recently they were thought to play a purely supporting role, but we now know that they can also influence neurotransmission. The mice that had human glial cells implanted as neonates grew chimera brains: by adulthood, the human glial cells had spread through large parts of the mouse brain, retaining their human form and integrating themselves into the mouse neural network much as human glial cells do in a human brain, or mouse glial cells in a normal mouse. The partially human mice learned significantly faster than control mice and mice implanted with mouse glial cells on all behavioral tests, and the same could be seen on measures of long-term potentiation (LTP)—a molecular process that strengthens the connections between neurons and is known to underlie some forms of memory and learning.
Science Behind the Factoid: Lottery Winners Are No Happier than Quadriplegics
Here’s a frequently repeated, counterintuitive factoid: people who win large sums in the lottery are no happier, over time, than people who become paralyzed in traumatic accidents. This “fact” comes from Brickman et al.’s 1978 paper, Lottery Winners and Accident Victims: Is Happiness Relative? The researchers interviewed 22 major lottery winners, 22 randomly selected controls from the same area, and 29 paraplegics and quadriplegics who had suffered their injuries in the recent past. The lottery winners had won sums ranging from $300,000 (more than a million in 2013 dollars) to $1,000,000. Here are some of the results:
The respondents rated their happiness and their enjoyment of everyday pleasures, such as hearing a good joke or receiving a compliment, on a scale from 1 to 5, where 5 was the happiest. As you can see, lottery winners were not significantly happier than controls, and they derived significantly less pleasure from everyday events. The victims were significantly less happy than the controls or the winners, though one might have expected them to be even unhappier: after all, these were people who had suffered a life-changing, paralyzing injury less than 12 months earlier, and were still engaged in extensive rehabilitation. The victims also reported slightly more enjoyment of everyday pleasures than the lottery winners. All ratings, of past, present and future happiness, were made at a single point in time, and the victims clearly idealize the past, rating it as significantly happier than the controls or lottery winners rate the present. All groups reported similar expected future happiness levels.
The second study showed that the results were not due to preexisting differences between people who buy lottery tickets and those who don’t.
The results are surprising, but they aren’t particularly strong. The sample size is small, and the results have not been replicated since. It’s a long way from results such as these to Oprah self-help slogans like “major events, happy or not, lose their impact on happiness levels in less than three months.”
Does money increase happiness? Some real-world studies have attempted to look at this. One study interviewed average Joes and Forbes 500 multi-millionaires. The wealthy were happier than the average Joes, but only modestly so. A larger study of poorer people was undertaken as part of the Seattle and Denver Income Maintenance Experiments. Part of a large-scale experiment involving 4,800 families and negative income tax—a way of ensuring a minimum income regardless of work status—it provided fertile ground for investigating the question: does having a stable income increase happiness?
A three-to-five year study tackled the question. Household heads who received extra monetary support and controls who didn’t were queried for symptoms of psychological distress. The results were surprising: for most groups, a stable income did not have any impact on psychological distress. In some groups, psychological distress was increased.
In 2006, Jonathan Gardner and Andrew J. Oswald decided to take another look at the lottery winner findings, this time with the benefit of longitudinal data. Instead of asking subjects to rate their past, present and future happiness at a single sitting, they used data from the British Household Panel Survey, which already asks participants to rate their happiness every year. Gardner and Oswald looked at participants who had won medium-sized lottery prizes, from £1,000 to £120,000. The 137 winners—a small sample, but much larger than the 1978 sample of 22 winners—went on to improve by a small but significant amount on a scale of general happiness.
Does increasing everyone’s income increase happiness? Gross Domestic Product per capita has been steadily increasing for the last three decades. But are we happier? No, we’re exactly as happy or unhappy as we’ve always been. Below are some data:
What does this mean for happiness? Clearly, money isn’t everything. Equally clearly, money is something. It’s easy to come up with a folk psychology explanation for Brickman et al.’s findings. Let’s give it a go: in the moment, a major accident is a huge negative factor in your life, and winning a million dollars is a huge positive one. But as time passes, the effect fades. Lottery winners grow accustomed to their new wealth and no longer derive much happiness from it; on the contrary, compared to the euphoric moment of winning, everyday pleasures become duller. Quadriplegics, on the other hand, grow accustomed to their injury, and in contrast to the injury, the joy of everyday pleasures becomes greater. In time (say, 3 months; that sounds good in a soundbite), major life events don’t really affect your happiness level.
But as the data above shows, that’s an oversimplification. When it comes to our understanding of happiness, we may not have come much further than Socrates or Seneca. We lack historical data from antiquity, but it’s easy to imagine Socrates being a happier man than the man who won a million dollars in 1978.
The New Yorker has a long article about the development of a new insomnia drug. It starts with the discovery of a novel neurotransmitter—discovered independently by two different groups of researchers in the 1990s, named hypocretin by one group (because it is produced in the hypothalamus) and orexin by the other (meaning appetite-stimulating, because of its observed effects). Orexin/hypocretin was first thought to regulate appetite, but attention soon turned to sleep. Orexin is produced by only a few thousand nerve cells in the hypothalamus, a tiny number compared to the billions of neurons across the brain. But these neurons have connections all over the brain, and they appear to act as an “awake switch.”
Orexin comes along and tells the brain, “hey, be awake, don’t fall asleep.” Soon after the discovery of orexin’s effects on rats’ appetite, it was found that rats lacking orexin receptors (the “keyholes”) acted like human narcoleptics: they had disturbed sleep patterns, and tended to fall asleep suddenly, or collapse in a heap with their muscles inert, at intervals during the day. Human narcoleptics, in turn, were found to lack orexin itself (the “keys”). This discovery opened the door to new medications. Orexin agonists (which activate the receptors) could become new treatments for narcolepsy or daytime sleepiness, and orexin antagonists (which block the receptors) could become new sleep aids. The search for the next blockbuster drug was on.
The article gives a fascinating look into the evolution of pharmaceutical research. Here is a description of how scientists came up with the zolpidem molecule, the active component in the popular sleep aid Ambien:
[Jean-Pierre Kaplan] and Pascal George—a younger colleague whom Kaplan described as “sympathetic and brilliant”—started by building wooden models, including ones for Valium, Halcion, and zopiclone. Colored one-inch spheres, representing atoms, were connected by thin rods, creating models the size of a shoebox. This was a more empirical, architectural approach than is typical in a lot of pharmaceutical chemistry. Kaplan and George tried to identify what these molecules had in common, structurally, that allowed them to affect the brain in the same way. Kaplan told me that their thinking wasn’t wildly creative, but it was agile: “You know, at that time it was maybe clever, because you have no computer. Now it’s routine work.”
Then a couple of decades later, pharmaceutical giant Merck is trying to find a drug to block orexin in order to help patients sleep:
Merck has a library of three million compounds—a collection of plausible chemical starting points, many of them the by-products of past drug developments. I saw a copy of this library, kept in a room with a heavy door. Rectangular plastic plates, five inches long and three inches wide, were indented with hundreds of miniature test tubes, or wells, in a grid. Each well contained a splash of chemical, and each plate had fifteen hundred and thirty-six wells. There were twenty-four hundred plates; stacked on shelves, they occupied no more space than a filing cabinet.
In 2003, Merck conducted a computerized, robotized examination of almost every compound in the library. At this stage, the scientists were working not with Renger’s animals but with a cellular soup derived from human cells and modified to act as a surrogate of the brain. Plate by plate, each of the three million chemicals in the library was introduced into this soup, along with an agent that would cause the mixture to glow a little if orexin receptors were activated. Finally, orexin was added, and a camera recorded the result. Renger and his colleagues, hoping to find a chemical that sabotaged the orexin system, were looking for the absence of a glow.
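The logic of the screen itself is simple, even if the robotics aren’t. Here’s a toy sketch of that logic in Python, with invented names and numbers: a glowing well means the orexin receptors still fired; a dark well means a candidate blocker.

```python
import random

random.seed(0)

# A stand-in for the assay: almost every compound's well glows after orexin
# is added; a rare compound blocks the receptor and its well stays dark.
def assay_glow(compound_id):
    blocks_receptor = compound_id % 250_000 == 0  # invented hit rate
    return 0.02 if blocks_receptor else random.uniform(0.5, 1.0)

library = range(3_000_000)  # three million compounds, as in the article
hits = [c for c in library if assay_glow(c) < 0.1]
print(len(hits))  # a handful of candidate orexin blockers to follow up on
```

The interesting design choice, as the article describes it, is that the screen looks for the absence of a signal rather than its presence.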
But drug development isn’t just science; politics and marketing also enter into it. Everything is up for negotiation, from the color of the pills (“reds are culturally not acceptable in some places”) to the packaging (“the U.S. prefers everything in a thirty-count bottle”) to the dose. The final hurdle is approval by the Food and Drug Administration—America’s final arbiter of which drugs may be marketed and sold, and for which diseases—and by other regulatory bodies in other parts of the world. It’s good that such hurdles exist, because otherwise dangerous drugs—such as thalidomide, which caused severe birth defects—would enter the market much more frequently. But there is a question of balance: at what point do the potential downsides outweigh the benefits? The FDA has taken a more conservative line: dosages should be as small as possible.
This poses a problem for Merck. Their orexin antagonist suvorexant, their potential superstar new sleep medication, is effective by objective measures at a dose of 10 milligrams. However, at this dose, patients report no subjective improvement in sleep quality. At higher doses, objective and subjective measures agree that the drug works. But the FDA argues that higher doses carry a higher risk of side effects, and recommends the lowest dose, the dose that doesn’t make patients feel any better even if objective measures say they’re sleeping better. This leads to the absurd situation of the FDA arguing for the drug’s effectiveness at the lowest dose while the drug company argues for its ineffectiveness at that dose. If the FDA will only approve the lowest dose, this poses a problem for marketers:
How successfully can a pharmaceutical giant—through advertising and sales visits to doctors’ offices—sell a drug at a dose that has been repeatedly described as ineffective by the scientists who developed it?
Regardless of marketing, backroom tactics and FDA meetings, the research into orexin continues. And that’s the really interesting part from a scientific perspective. Just over a decade ago we discovered a completely new piece of the brain puzzle. We still don’t understand sleep; that’s the big thing. More basic research—the kind that just tries to figure out how things work, without regard for practical applications such as drug development—is needed. We don’t know why we need to sleep, and we don’t know the exact significance of the different sleep phases. We do know that sleep is vitally important, and that a specific cycle of brain states throughout the night is needed to perform well the next day. But why must we sleep at all? Why is resting awake not good enough? We have some ideas—memory consolidation, ridding the brain of certain toxins that build up during wakefulness—but we’re not sure.
Sleep remains a mystery. Orexin is likely to play some part in the solution. And that’s exciting, whether you have sleep troubles or not.
It isn’t always the case, but this time you should actually go to Wikipedia for a good explanation. Read about the development of the barometer and the theory of gauging pressure. It’s rather fascinating that people were able to figure all this out 400 years ago. And how did they do it? By questioning a long-held assumption: that air is weightless. That’s a fine example of the scientific method.
Fundamentals of Computer Science (Some Math Ahead!)
I hate it when people choose nicknames that don’t work in conversation. A Photo of Dorian Grey, alternatively “50% physics, 50% mountains,” writes:
Actually the first general-purpose digital computer was built by Tommy Flowers in 1943.
That computer was the Colossus, and the Colossus was not Turing complete. Therefore, it wasn’t the first general-purpose digital computer.
During the 1930s, the decade before the first modern computers were built, many mathematicians were working on the fundamentals of computation. Just what was computable? What problems could and could not be solved algorithmically, and how could this property be formalized? Several men solved this problem independently, in different ways which turned out to be equivalent.
Alan Turing was one of those men. He imagined a theoretical machine which would later be known by his name—the Turing Machine—and which could compute anything that is theoretically computable. A Turing Machine consists of an infinite tape of cells, one after the other, filled with symbols. A reading head moves along the tape, scanning the content of one cell at a time and deciding, based on a small table of instructions, whether to change the symbol, move forward or rewind. A specific Turing Machine cannot be programmed; it can only solve the single problem for which its instruction table was designed. But Turing further imagined a Universal Turing Machine (UTM), which could simulate any other Turing Machine by reading not only the input but also the instruction table from the tape. In effect, this is how modern computers with programs stored in memory work.
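To make this concrete, here’s a minimal sketch of a single-purpose Turing Machine in Python (my own illustrative machine, not any historical design). The instruction table is the whole machine: it maps the current state and the symbol under the head to a symbol to write, a head movement, and a new state.

```python
from collections import defaultdict

# Instruction table for a machine that inverts a string of 0s and 1s:
# (state, symbol) -> (symbol to write, head movement, next state)
table = {
    ('run', '0'): ('1', +1, 'run'),
    ('run', '1'): ('0', +1, 'run'),
    ('run', ' '): (' ', 0, 'halt'),  # blank cell: we're done
}

def run(table, tape_input):
    tape = defaultdict(lambda: ' ', enumerate(tape_input))  # "infinite" tape
    head, state = 0, 'run'
    while state != 'halt':
        symbol, move, state = table[(state, tape[head])]
        tape[head] = symbol
        head += move
    return ''.join(tape[i] for i in sorted(tape)).strip()

print(run(table, '010011'))  # prints 101100
```

A UTM is, in effect, a `run` function whose instruction table is itself read off the tape along with the input.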
The UTM is one way to formalize the concept of what can and cannot be computed. It’s a very simple, elegant theoretical construct, and very clever people have managed to devise ingenious UTMs that use a tiny set of symbols and rules to compute anything that can be computed. A computer is said to be Turing complete if and only if it can compute exactly the same things a UTM can compute (ignoring the fact that non-theoretical computers have finite memory). If it cannot, it is not a general-purpose computer in the modern sense, since there are algorithmic problems it cannot solve. The Colossus was very useful for its purpose, but it was not Turing complete.
Around the same time, Alonzo Church attacked the problem of defining computability from a slightly different angle. He devised a formal calculus called the lambda calculus, which is also extraordinarily elegant and simple. It is built upon anonymous functions and substitution. Here is how Wikipedia defines it:
A variable, x, is itself a valid lambda term
If t is a lambda term, and x is a variable, then (λx. t) is a lambda term (called a lambda abstraction)
If s and t are lambda terms, then (t s) is a lambda term (called a function application)
In effect, the lambda calculus is defined by anonymous functions that each take one named variable and can later be applied to one lambda term. Only the syntax above and a couple of rules about how to perform function application are needed. From these humble beginnings, some clever bootstrapping gives us the Y combinator, which lets us use recursion in a calculus that has no native support for it. Numbers can be encoded using Church encoding. If you’re willing to do the theoretical groundwork, you can compute anything that is computable using the lambda calculus. As it turns out, Church’s lambda calculus and Turing’s UTM are equivalent; they define the exact same class of functions. And the Church-Turing thesis posits that anything that is computable is computable by both these formal systems.
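Python’s lambda syntax makes a handy, if informal, playground for these ideas. Here’s a sketch of Church numerals; the encoding is the standard one, the Python rendering is my own:

```python
# A Church numeral encodes the number n as "apply a function f, n times".
zero = lambda f: lambda x: x                          # apply f zero times
succ = lambda n: lambda f: lambda x: f(n(f)(x))       # one more application
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

# Decode a Church numeral by counting how many times it applies "+1" to 0:
to_int = lambda n: n(lambda k: k + 1)(0)

one, two = succ(zero), succ(succ(zero))
print(to_int(add(two)(two)))  # prints 4
```

Notice that everything here is built from single-argument anonymous functions, just as in the formal calculus.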
Now, these formal systems aren’t meant to be practical. When they were invented, they were intended only to define computability, not to provide a practical means of achieving it in a machine. Yet the Turing Machine formed the basis of the von Neumann architecture, which underlies almost all modern computers. And many modern programming languages—called functional programming languages, because they are based around functions in the mathematical sense—are based on the lambda calculus. The most basic form of lambda calculus sketched above is called the untyped lambda calculus, because the variables have no type; any function can accept any argument. But typed variants of the lambda calculus are very important in type theory, a field at the intersection of computer science and pure mathematics that investigates ways to reduce errors in computer programs by rigorously defining which operations can be performed on which values. For instance, it makes no sense to perform a “search for the letter ‘t’” operation on an integer, but it makes perfect sense to do so on a text string like “this is a string!” These are the kinds of errors typed programming languages can catch before the program runs, during development.
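Here’s a tiny sketch of that last point, using Python’s optional type hints. A static checker such as mypy rejects the bad call below before the program ever runs; plain Python would only fail at runtime.

```python
def find_t(text: str) -> int:
    """Return the index of the first 't' in the text, or -1."""
    return text.find('t')

print(find_t("this is a string!"))  # fine: prints 0

# The next line is deliberately wrong. Plain Python only discovers the
# problem while running; a static type checker flags it during development:
# find_t(42)  # error: argument is an int, but the function expects a str
```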
One of the programming languages strongly based on a typed lambda calculus is Haskell, named for Haskell Curry, another computer science pioneer. He is known, among other things, for the so-called Curry-Howard isomorphism, which states a formal equivalence between mathematical proofs and computer programs. This equivalence is useful in several ways. If one has a particularly gnarly mathematical problem, such as the four-color theorem, one could write a computer program that is equivalent to a proof of the theorem. Or if one has a particularly gnarly computing problem, one can write a mathematical proof that some algorithm is correct, but which also doubles as an executable program that can perform that algorithm.
If you’d like to learn more about the fascinating world of fundamental computer science, I highly recommend the book Structure and Interpretation of Computer Programs. It is a programming manual, but also the best introduction to computer science ever written. It’s practical, it’s theoretical, it will tax your brain, and if you read it and complete the exercises, you will become a zen master of computing. (I cheated: I read the book but skipped some exercises. Don’t be like me. Be a real man/woman, do the work, learn the good stuff.)
What are most scientists’ view on paranormal phenomena?
The simple answer is that most of them don’t believe in it. Paranormal research rarely intersects with mainstream science. Most so-called researchers into the paranormal have none of the rigor needed to perform real science, and their “experiments” usually have methodological flaws that can easily be spotted by a bright middle schooler. On occasion, actual scientists attempt to explore alleged paranormal phenomena, and sometimes there’s even a semblance of rigor to their investigations. Such is the case I’ll tell you about today.
Back in 2011, the psychology professor Daryl J. Bem turned a lot of heads when he published rigorous experimental data that appeared to demonstrate a form of extrasensory perception (ESP): precognition and premonition, the ability of future events to influence an individual’s thoughts and feelings in the present. Clearly, such informational time travel would go against everything we know about physics. But Bem is no run-of-the-mill crackpot: he is a widely cited and influential psychologist, best known for his self-perception theory of attitude formation, which holds that we form our attitudes by observing our own behavior, rather than the other way around. Counterintuitive as it is, the idea has found support in many studies. For instance, while we know that happy people smile and angry people frown, it has also been shown that people get happier by smiling and angrier by frowning. Bem’s smart move was to conduct his ESP research according to the established standards of psychological science. He also encouraged others to attempt to replicate his findings, correctly reasoning that replication is at the heart of science.
Bem’s experiments were rather clever. He simply took established psychological effects and time-reversed them. For instance, it is known that mere exposure to a word or concept can “prime” a person to more readily think of, or even like, the concept at a later time. If you read a list of words that includes the word table, and later do a word completion task in which you are asked to make a word beginning with the letters tab, you are much more likely to go for table than if you had not been primed. This effect persists even after you have consciously forgotten the priming, and even if you were never aware of it in the first place; the same concept is often invoked to explain subliminal advertising. Two of Bem’s experiments applied priming after the fact, and appeared to show a “retroactive priming” effect. The setup resembled a typical priming experiment: subjects were asked to judge whether each in a series of pictures was pleasant or unpleasant. Usually, people respond faster when an emotionally congruent word is flashed before an emotionally charged picture than when a word of the opposite emotional charge is flashed (e.g., a positive picture and a negative word). Bem observed that this effect persisted even when the word was flashed after the picture.
In total, Bem ran nine different ESP experiments, each with 100 or more participants, all time-reversed variants of known psychological phenomena. Eight of the nine appeared to show statistically significant evidence for ESP. Bem also appeared to find a link between stimulus seeking (a personality characteristic associated with extraversion) and ESP abilities: more stimulus-seeking individuals (as indicated by their answers to one or two questions) seemed to exhibit a stronger ESP effect.
Enter the scientific process. As Bem acknowledges in his paper, extraordinary claims require extraordinary evidence, and one batch of experiments isn’t enough to overturn a very well-established picture of how the world works. If ESP exists, what we think we know about physics goes out the window. Because Bem, unlike most researchers into the paranormal, did not believe that the paranormal was above the normal process of science, and because he has a good scientific track record, other researchers took his claims seriously and set out to replicate or discredit them by repeating his experimental procedure.
Most attention was given to Bem’s eighth and ninth experiments: they were among the easiest to replicate according to Bem, they had some of the largest effect sizes of all the experiments, and they left the least wiggle room. Either they should show definite effects, or not. If performed correctly, there is little room for observer bias, and there are also few points of contention (unlike some of the other experiments, which rely on participants’ subjective responses, where null results could conceivably be blamed on idiosyncrasies of the participants).
The eighth and ninth experiments investigated “retroactive facilitation of recall.” Participants were briefly shown 24 test words and 24 control words, unaware of which category each word fell into. They were then given as much time as they wanted to freely recall as many of the words as possible. Finally, they were given the 24 test words to practice. The results appeared to show that practicing words after the fact affected recall in the present: the words the participants would later practice were more readily recalled than the control words, despite the fact that the participants didn’t know which words they would later practice. The usual effect of practice, naturally, is that words you have previously attempted to memorize are more readily recalled than unpracticed words.
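To see how little wiggle room this design leaves, here’s a toy simulation of the experiment under the null hypothesis, i.e., assuming no ESP. The scoring scheme is my own invention for illustration, not Bem’s actual analysis:

```python
import random

random.seed(1)

def one_subject():
    """Recall happens first; which 24 words get practiced is decided after.
    Under the null, practice can't reach back in time, so recalled words
    should be split evenly between practiced and control words."""
    words = range(48)                          # 24 test + 24 control words
    recalled = set(random.sample(words, 18))   # subject recalls ~18 words
    practiced = set(random.sample(words, 24))  # chosen after recall
    return len(recalled & practiced) - len(recalled - practiced)

# Average (practiced minus control) recall difference over many subjects:
diffs = [one_subject() for _ in range(10_000)]
print(sum(diffs) / len(diffs))  # hovers near 0, exactly as the null predicts
```

Bem’s data showed the practiced words winning out; the replications described below found differences indistinguishable from the zero this null sketch produces.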
Bem’s original paper, Feeling the Future: Experimental Evidence for Anomalous Retroactive Influences on Cognition and Affect, was published in the Journal of Personality and Social Psychology. In 2012, the same journal published a large study attempting to replicate Bem’s retroactive recall experiments, called Correcting the Past: Failures to Replicate Psi. As the title suggests, the replication of the two retroactive recall experiments found strong evidence for the null hypothesis (i.e., no ESP). It’s important to note that this is stronger than merely failing to find evidence for ESP. Bem’s hypothesis predicted certain effects, which Bem’s results appeared to support; a larger-scale attempt to replicate the findings did not yield those results. The failure to produce the effects the hypothesis predicts is not simply a lack of evidence for Bem’s hypothesis, it is evidence against it. Insofar as Bem’s hypothesis is falsifiable, it has been falsified: retroactive facilitation of recall doesn’t exist. The authors ran seven different experiments with a total of 3,289 participants, and also analyzed Bem’s own data and other independent attempts to replicate the eighth and ninth experiments. In total, more than 6,000 subjects participated in the analyzed experiments, and the effect was not replicated. Only Bem’s own data show evidence for ESP.
Now, that was a good-faith attempt to replicate Bem’s results. Others responded more negatively, analyzing Bem’s procedures and data and suggesting that he deliberately failed to report negative results, or stopped experiments at the point where his desired results appeared. One red flag was that effect sizes were inversely proportional to the number of participants in the studies. Others went so far as to suggest that to the extent Bem’s experiments followed established procedure in psychological research, there’s something wrong with established procedure.
Now, you may think: perhaps the eighth and ninth experiments didn’t pan out, but what about the others? But it doesn’t bode well for Bem’s hypothesis when his most rigorous experiments, the ones that showed the clearest results, cannot be replicated.
Nevertheless, Bem should be applauded for attempting to bring rigor and proper methodology to a field of study that is usually mocked—rightfully so—as pseudoscientific at best. Although some fellow scientists were immediately dismissive of Bem’s results, he was rigorous and forthcoming enough to warrant attempts at replication by other scientists. If their experiments prove him wrong, well, that’s science for you. At least it isn’t, as the theoretical physicist Wolfgang Pauli supposedly was fond of saying, so bad it’s not even wrong.
Answer: A new paper in the journal Astrobiology estimates how long various exoplanets—and Earth—will remain in the “habitable zone,” the band that is just the right distance from their star. In the case of Earth, they find a lower bound of 1.75 billion years into the future and an upper bound of 3.25 billion—quite a while, that is. At that point, Earth will have become unlivably hot, and the oceans will have boiled off. Note that this is the end for all life on Earth; complex life like humans will have died off long before. But then again, anatomically modern humans have only existed for 200,000 years. 1.75 billion years is almost 9,000 times longer than humanity has existed so far.
Earth might become uninhabitable before that due to a runaway greenhouse effect. The current greenhouse effect is what keeps the planet habitable right now—without greenhouse gases, almost all the heat the planet receives from the Sun would be radiated straight back into space, and there would be no life on Earth. The current anthropogenic climate changes are increasing the greenhouse effect, but not to runaway levels.
Venus, it is believed, once had oceans just like Earth. But early in its history it became a runaway greenhouse. Water vapor is actually one of the most efficient greenhouse gases, but on Earth only a small portion of the upper atmosphere consists of water. On Venus, a large portion of the atmosphere at all altitudes was water: as more water evaporated, the greenhouse effect increased, trapping more heat, which caused more water to boil off, in a positive feedback loop. And because the protective layer of other gases above the water vapor was much thinner on Venus, more water could be destroyed by photodissociation—broken down by light into hydrogen and oxygen. The hydrogen likely escaped into space while the oxygen reacted with materials on the surface, oxidizing them. Thus the oceans of Venus disappeared.
On Earth, this is unlikely to happen until the Sun becomes much hotter, billions of years from now. There are two possible scenarios for a rampaging greenhouse effect: the runaway greenhouse (in which all the water disappears) and the moist greenhouse (which is stable, i.e., not self-reinforcing, but settles at a much hotter level than today). Could burning fossil fuels release enough carbon dioxide to create a runaway or moist greenhouse on Earth far sooner than the projected billion-year timeline?
A recent paper by Colin Goldblatt and Andrew J. Watson considers this question. They calculate that a runaway greenhouse is very unlikely. As pressure increases, the boiling point of water also increases. (This also goes the other way: at the top of Mount Everest, water boils at around 70 C, or 158 F.) As more water evaporates, atmospheric pressure increases, raising the boiling point further. This means the critical temperature at which all the water boils off the surface is not the familiar boiling point of water, 373 K (100 degrees Celsius), but rather 647 K, which is 374 C or 705 F. (Needless to say, things would get rather difficult for humans long before surface temperatures reached 647 K; neither the pressure nor the temperature would be livable.) On Venus, this was less of an issue because the atmospheric pressure was lower to begin with.
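The pressure dependence of the boiling point can be estimated with the Clausius-Clapeyron relation. Here’s a rough sketch, assuming a constant latent heat of vaporization (a simplification that works reasonably well near everyday pressures; this is my illustration, not the paper’s calculation):

```python
import math

R = 8.314    # gas constant, J/(mol K)
L = 40660    # latent heat of vaporization of water, J/mol (approximate)
T0 = 373.15  # boiling point of water at 1 atm, in kelvin

def boiling_point_K(pressure_atm):
    """Estimated boiling point (K) of water at the given pressure (atm)."""
    return 1 / (1 / T0 - (R / L) * math.log(pressure_atm))

# Atop Mount Everest the pressure is roughly a third of an atmosphere:
print(boiling_point_K(0.34) - 273.15)  # ~71 degrees C, as quoted above
```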
A moist greenhouse might be more likely, but Goldblatt and Watson estimate that it would take about 10,000 parts per million (PPM) of CO2 (assuming carbon dioxide was the only noncondensable greenhouse gas) to induce a moist greenhouse effect. Currently, we’re at about 395 PPM. In other words, it would take 25 times the current levels of carbon dioxide in the atmosphere to induce a moist greenhouse. Such levels are unlikely to be reached even if we burned off all the fossil fuel in the world, although there is some small amount of uncertainty built into the calculation.
Interestingly, there is an area of Earth that displays some of the characteristics of a runaway greenhouse: the Pacific warm pool, northeast of Australia. Seasonally, these waters get very hot—upwards of 29 degrees Celsius at the surface. This leads to lots of evaporation, and to large volumes of water vapor in the upper reaches of the atmosphere. However, this is only a relatively small region of the planet, and through mechanisms not yet fully understood, the excess heat is distributed throughout the rest of the atmosphere, leaving the water in a state of equilibrium: it rarely exceeds 30.5 degrees Celsius, and thus never starts a runaway greenhouse.
Despite the unlikelihood of a runaway or moist greenhouse, the more modest projected increase in average temperatures of 1.5 to 4 degrees Celsius, which we will likely reach within the next hundred years at current emission rates, will still be hugely destructive to life on Earth. It won’t wipe us out, but it will destroy fragile ecosystems and make life unbearable for humans in many corners of the globe. Nature doesn’t let us off the hook. We shouldn’t aim merely to avoid the worst doomsday scenarios; that goal is far too modest, unworthy of a race as technologically and morally advanced as humanity. We should want something more for our children, grandchildren and future generations.
Extraversion and Introversion: What You Believe Is Probably Wrong
For some reason, during the month of August introversion exploded on the internet, reaching meme status. I don’t know why, but I figured I’d chime in with some scientifically backed data on this interesting personality category.
But first. How do you spell it? In psychological research, extravert is preferred, while in popular writing extrovert is commonly used. Either is correct, although perhaps appropriate for different target audiences. Luckily, both researchers and laymen call the opposite end introverts (intraverts are not a thing).
So is extraversion a thing? Yes, very much so. The extraversion-introversion axis is one of the most robust dimensions in personality research. Scientists in the field more or less unanimously agree that whatever categories best describe personality, the major component of one of them is the degree of extraversion-introversion (extraversion for short, although this refers to the whole spectrum, not just the one end of it).
Is it a Western invention? Nope, it has been found in a sample spanning 40 different countries across the world, ranging from very individualistic to very collectivist cultures.
Exactly what is it? This one’s trickier. The simplest folk idea is that extraversion is simply the degree to which a person is sociable: extraverted people are outgoing and engage in a lot of social interaction. According to the science, that is a broad oversimplification. A slightly more sophisticated folk psychology idea is that extraversion is a question of whether a person draws “mental energy” from social interaction or from alone time. On this view, introverts may enjoy social interaction but need to be alone to recharge, while extraverts may enjoy alone time but need to spend time with others in order to recharge. Surprisingly, there is little scientific evidence for this idea either.
Scientists don’t agree exactly on what constitutes the core of extraversion. Here is a list of facets that have been included in models of extraversion (different authors will include different subsets of these):
These six facets are venturesome (feelings of excitement seeking and desire for change), affiliation (feelings of warmth and gregariousness), positive affectivity (feelings of joy and enthusiasm), energy (feeling lively and active), ascendance (feeling dominant or being an exhibitionist), and ambition (valuing achievement and endurance).
However, there seems to be broad agreement that there is some underlying principle that explains all or most of these and ties them together. Extremely robust evidence links positive affect and extraversion, both between subjects and within subjects (more on this later). Extraverts are happier than introverts.
Wait, what? Yes. This is undeniable. All the research agrees: positive emotions and extraversion are very strongly correlated. Extraverts are happier than introverts. This has led some to propose that the unifying theme underlying all or most of the extraversion facets is positive affect. In the aforementioned study spanning more than 6,000 subjects in 40 countries, the authors contrasted two hypotheses for explaining the link between extraversion and positive feelings. One says that sociability is the core trait of extraversion, and that the correlation between happiness and extraversion is indirect: either social interaction is highly pleasurable, and extraverts spend more time interacting, thus becoming happier; or extraverts and introverts spend equal time alone and together with others, but people on the extraverted end of the spectrum enjoy the together time more.
The other hypothesis is their novel reward-sensitivity model. According to this model, extraverted people are simply more sensitive to and prone to seek out rewarding stimuli. It just so happens that social interaction is especially rewarding for humans, and empirical studies show that both introverts and extraverts tend to report more positive affect in social situations; extraverts are also apparently happier than introverts even when alone. According to this hypothesis, extraverts should be more likely to seek out rewarding stimuli both in social and nonsocial situations, and their sociable behavior is simply an instance of a more general pattern of reward seeking. According to their statistical analysis, the other facets of extraversion correlated much more strongly with positive affect than with sociability. The statistical evidence suggests that the unifying phenomenon that underlies the complex set of behaviors and traits that make up extraversion is reward sensitivity.
How would this be expressed in the brain? Recent research has found connections between differences in the dopaminergic system and differences in extraversion. Dopamine is a neurotransmitter implicated in reward, motivation and the reinforcement of behavior. For example, one study published this year measured brain activity in people who scored very low or very high on a measure of extraversion, corresponding to extreme introverts and extreme extraverts. Subjects were given placebo or different doses of sulpiride, a drug that interacts with the dopamine system in a dose-dependent manner: at low doses its net effect is to slightly increase dopamine activity, while at higher doses it acts as an antagonist, decreasing activity. At the higher doses, introverts’ brain scans shifted in the direction of extraverts’ baselines and beyond, while extraverts shifted in the opposite direction. The authors suggest that a difference in the density of dopamine receptors might best explain their results. Basically, the hypothesis goes, introverts have fewer of certain kinds of dopamine receptors and more of a different kind, resulting in overall lower dopamine activity. Since dopamine is implicated in reward and motivation, this would lead them to be less reward seeking and to experience fewer rewarding, positive feelings in general.
Another 2013 study looked at this behaviorally. It administered methylphenidate (Ritalin), a dopamine reuptake inhibitor, to subjects in what could best be described as a recreational manner. The subjects were conditioned to associate the lab environment with reward by taking feel-good drugs while in the lab environment. But tests showed that this association between reward and the specific lab environment was only acquired by extraverts, not by introverts, nor by introverts/extraverts who had been conditioned to associate a different lab environment with the reward stimulus. This is further evidence that extraverts are more reward sensitive than introverts.
But wait, does this mean if I’m introverted I’m genetically doomed to unhappiness, or that I will never learn to associate social contexts with pleasure? Here’s the twist. All of the above rather robust evidence could be turned on its head by a different line of research. Because all the above data were gathered by comparing groups of individuals. On average, what are the differences between introverts and extraverts? But a separate line of investigation seeks to understand the relationship between extraversion and other traits within the same person.
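The between/within distinction is easy to state in code. Here’s a toy sketch with invented numbers: instead of correlating trait extraversion with happiness across many people, we correlate one person’s moment-to-moment extraverted behavior with their moment-to-moment mood.

```python
import statistics

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented repeated measurements for a single subject, rated 1-7 per period:
state_extraversion = [2, 5, 3, 6, 4, 7, 1, 5]  # how extraverted they acted
state_happiness    = [3, 5, 4, 6, 4, 7, 2, 6]  # how happy they felt

# Within-subject question: is this person happier when acting extraverted?
print(pearson(state_extraversion, state_happiness))  # ~0.98 here
```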
A 2002 paper details experiments that probe exactly this relationship between positive affect and extraversion within the same subject. In one experiment, subjects were asked to report their state extraversion (how extraverted they had acted) and their state happiness (how happy they had felt) over the previous hour. This was done five times per day, using PDAs (this was before smartphones). In the second experiment, a single report was made each week, covering the previous week, for ten weeks. In the third, the findings from the first two experiments were tested in the lab. Subjects were randomly assigned to group discussions in groups of three. Each participated in two group discussions, randomly assigned to act in an introverted manner in one and in an extraverted manner in the other. Then they evaluated each other and themselves using the same metrics as in the other experiments.
Now, one might think that merely acting extraverted wouldn’t be as effective as being extraverted. The experiments proved this wrong. Trait introverts (people who tend to be introverted) showed just as strong a correlation between positive emotions and extraverted behavior as the extraverts did. In fact, in the first study, the one where emotional states were assessed five times a day, the introverts showed a stronger correlation between acting extraverted and happiness than the extraverts! In the third study, the people who were told to act introverted or extraverted were consistently rated by others as introverted or extraverted, respectively. And yet again, the more extraverted they acted, the more they enjoyed the discussion.
Faking it till you make it, scientifically validated!
This suggests two things: first, that simply acting extraverted appears to make you happier; and second, that the neurological differences between extraverts and introverts may not actually explain much. After all, within subjects, most people had periods of very outgoing and very introverted behavior, and their mental state closely tracked their behavior. Your brain is malleable, but it doesn’t completely rewire itself several times a day. These results throw a wrench into the reward sensitivity / dopamine framework detailed above. If we can replicate and expand on these within-subject findings and reconcile them with the between-subject findings, we’ll have come a lot closer to understanding introversion and extraversion.
OK, I get it. There’s still much work to be done. But however that shakes out, how did these two different temperaments evolve? Especially if one sort is much happier than the other, wouldn’t one die out under selective pressure?
Good question. First of all, extraversion is a complex phenomenon, controlled by many genes, and it takes a long, long time for evolution to eliminate variation in all of them. Second, there is good reason to believe that introverts and extraverts evolved to fill different niches in human society. It has been shown empirically that extraverts have more mates but also die younger than introverts. Perhaps extraverts’ social activity and prowess make them more attractive as mates, or give them more opportunities to meet mates, or some combination, raising their evolutionary fitness; but they are also more prone to impulsivity and reckless behavior in their search for excitement and reward, resulting in more premature deaths.
Whatever happened to introvert/extravert pride? A common trend in the latest popular culture writings on introverts and “extroverts” is that whichever end of the spectrum the author identifies with is presented in a broadly positive light. The author takes pride in their orientation, whichever way it goes, often extolling its virtues in opposition to some perceived or real prevailing cultural current.
But science doesn’t function this way. It does not say that one kind of person is worth more than any other kind of person. That is well outside science’s scope.
However, the fact remains that the scientific consensus is that extraverted people are happier, and that acting in an extraverted manner makes introverts and extraverts alike happier. And being happy, aside from being the higher-order goal of every human being, will make you more successful in almost every area of life—from income and marriage to mental health and longevity. It seems nature has dealt introverts a worse hand. But hey, at least you’re less likely to die in a parachuting accident, or hunting sharks, or from some other dangerous, thrill-seeking behavior. Besides, chances are you can be instantly happier just by acting more outgoing.
But I like being introverted!
That’s perfectly fine. There is still a lot of work to be done on this personality dimension, and the last word has not been said. But the data we do have are clear: the link between happiness and high extraversion is robust.
Let us clear up a few confusing things. When a Chinese philosopher talks about “chi,” which is often translated as “energy,” he is talking about something quite different from what a modern physicist is talking about when he talks about energy, as in, “energy can’t be created or destroyed.”
I just finished reading your answer to the “mind-body question”, and I have a question regarding your answer. You say there’s no way to scientifically prove consciousness surpasses us after death, but what about our energy? According to physics, energy can’t be created or destroyed. Could it be possible our consciousness is linked in with this energy (which some may call our soul)? This is actually an on-going debate I’ve had for quite some time and I would love to hear your thoughts :)
In physics, energy is roughly the capacity to do work. Not the mental energy to show up for another grueling day of 9 to 5 at the office, but the capacity to apply force over a distance: to push a particle in a certain direction, say. The international standard unit of energy, the joule, is defined as J = kg·m²/s², where kg is the kilogram, m is the meter and s is the second.
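To make the unit concrete, here’s a quick sketch with arbitrary numbers: a joule is the work done by a force of one newton acting over one meter.

```python
# Work (joules) = force (newtons) x distance (meters),
# and force (newtons) = mass (kg) x acceleration (m/s^2).
mass = 2.0          # kg
acceleration = 3.0  # m/s^2
distance = 5.0      # m

force = mass * acceleration  # 6.0 newtons (kg*m/s^2)
work = force * distance      # 30.0 joules (kg*m^2/s^2)
print(work)
```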
Energy persists. The matter and energy that make up you once made up stars. To call it “your” energy is a gross injustice to the universe. You are simply borrowing the energy. In fact, even that is misleading. You are the universe. Not all of it, of course, but you and I are as intrinsic and inextricable parts of the universe as stars and planetary systems. There’s nothing special about us, which is either beautiful or terrifying, depending on how you look at it. We are the same matter and energy that was once in stars and when we die, this matter and energy will simply go over to other forms, become other things.
But this continuity can hardly be appealed to for eternal life. When the atomic nuclei in your body were part of a star billions of years ago, were they conscious? Was the star conscious? When your body is dirt, will the dirt have thoughts? Will the dirt cry inwardly when its relatives, living humans, stand over its grave? If energy and consciousness are equivalent, consciousness must be omnipresent. Some have argued for this position, but we shouldn’t take that very seriously, because philosophers are a crazy bunch who will argue for anything, sometimes just for sport, sometimes because they’re not right in the head. If you don’t believe soil is conscious, there is no reason to think that the conservation of energy means the conservation of the soul.
This is known as the fallacy of equivocation: using the same word with different meanings, carelessly substituting one meaning for the other. It’s an honest mistake, but a mistake nonetheless. Ancient philosophers had concepts, known by names like “chi” or the “life force,” which they believed animated living things, and modern spiritualists often use the word energy for such nebulous, spiritual concepts (for which there is little to no scientific support). The first law of thermodynamics is the one about conservation of energy. You know how it goes, at least in the layman’s version: “energy can be neither created nor destroyed, only converted from one form to another.” But the “energy” physicists talk about is the kind that is measured in joules. It has nothing to do with consciousness.
If, as most scientists now believe, consciousness arises out of a particular arrangement of matter and energy in the brain, then obviously there is a connection between consciousness and energy: if the mind is physical, it must be matter and energy. But the catch is that this matter and energy must be arranged in a particular way. A rock, for instance, is not conscious; the matter and energy that make it up aren’t arranged in the appropriate pattern for consciousness to arise. And when humans die, the same happens to “our” energy. It gets converted into other forms and takes part in other configurations of matter. Perhaps in a billion years, the same molecules that once made up you or me will, through an endlessly complicated series of events, again take part in a configuration of matter and energy sufficient for consciousness to arise. Does this mean your “soul” has reincarnated? I think not. Not in the sense that religious people mean when they talk about the soul surviving or reincarnating. But in a poetic sense, perhaps it has. That is a matter of interpretation, for you to decide.
What’s your opinion on the mind-body problem? Do you think the mind can survive after death and is a separate element?
It’s tempting to suggest that if science can’t explain the way consciousness arises from matter, then there might be something to the old myths about the afterlife and the survival of the soul or reincarnation anyway. But it’s very important to distinguish between these two questions: what is consciousness, and does consciousness continue after death?
There is no reason to suppose that the mind doesn’t die when the body does. Near-death experiences are just that: near death. They are not experiences of death, otherwise the person who had the experience couldn’t have lived to tell the tale. Furthermore, it isn’t implausible that we might sometimes enter an altered state of consciousness in extreme conditions that bring us to the brink of death—after all, everyone experiences the altered state of consciousness we call dreaming almost every night. That these experiences might resemble myths or narratives prevalent in your society is hardly surprising. If your subconscious is creating a story, it must take the raw material from somewhere, and if you live in a traditionally monotheistic society where life after death is a notion known to everyone, believers or not, then tunnels of white light or talks with dead relatives during a hallucinatory near-death experience are almost to be expected. After all, people who ingest a lot of drugs or become psychotic have similar experiences.
What other reasons might one have for believing in the continuation of consciousness after the brain is dead? Excluding theoretical future inventions that facilitate “brain uploading,” and excluding religious scripture, what more is there to say? How would we ever know? If the mind is intangible, unobservable, at best we could say that the mind might survive death, but without any good reason to believe so, why would you? The only mind we have direct access to is our own.
The famous philosophical problem of other minds asks: how do you know your neighbor is conscious? How do you know your mailman isn’t an automaton, an assemblage of matter that behaves like a human but has no consciousness, for whom there is nothing it is like to be them? The answer, of course, is that we don’t, but they behave roughly like we do, they respond as if they are conscious, and besides, it seems hard to imagine how a robot could accomplish what other people do without having sentience. Whether such automatons—identical molecule by molecule with humans, but lacking conscious experience—could exist is known as the philosophical zombie problem. Most of us who have common sense accept that other minds exist, that other people are conscious, because they appear to be. And we also accept that rocks and bacteria aren’t conscious, because they display no signs of being conscious. Of course, until we can prove definitively that the mind is purely physical, we can’t prove that rocks aren’t conscious, but not even the most hardcore treehugger argues that we shouldn’t tread on rocks for fear of hurting their feelings. The same argument applies to life after death: dead people don’t display any signs of being conscious. The most parsimonious explanation is that when the body dies, so does the mind. Even if dead people’s minds somehow separated from the body and remained in existence after physical death, we would never know.
There are a lot of things we don’t understand about the brain. But of one thing we’re pretty sure: it isn’t breaking any of the laws of physics. Clearly mind and brain are intrinsically linked. If the mind were nonphysical but somehow capable of interacting with the physical brain in the way that a dualist account of consciousness requires, we would expect to see energy spontaneously come into existence as the non-physical self pulls the strings. But we don’t. The first law of thermodynamics holds as well for the brain as it does for everything else in the universe.
I believe the mind is physical. Exactly what that entails is a matter of both philosophy and science. The philosophical zombie thought experiment is troubling; not so troubling that I believe it proves the mind can’t be physical, for the reasons explained above, but troubling in a more fundamental sense. The fundamental question is: why does a certain configuration of physical, objective matter translate into subjective consciousness? It seems entirely conceivable that it wouldn’t. Biology reduces to chemistry, which reduces to physics, and it would be logically contradictory for biological mechanisms to violate the laws of chemistry, or for chemistry to violate the laws of physics. It also works the other way around: physics necessarily gives rise to chemistry, which necessarily gives rise to biology. But with psychology, the same seems not to be true. We have yet to find the piece of the puzzle that guarantees that physics gives rise to consciousness; it seems entirely conceivable that it wouldn’t, in a way it’s inconceivable that physics wouldn’t give rise to chemistry.
I believe the mind-body problem is the hardest one in science, in part because the brain is so complicated, and in part because the problem is philosophical, linguistic and scientific all at once, and it is hard to determine which aspect belongs to which discipline. Combining quantum mechanics and general relativity? I think we’ll do that long before we solve the mind-body problem. Perhaps, in the end, we must simply accept that there is no explanation: for no good reason, certain configurations of matter give rise to subjective experience, just as there is no reason for the fundamental constants of physics to be what they are. They just are. But this is hard to accept, because the fundamental forces of nature are so elementary and the mind so complex, and it seems strange that elementary facts can be true for no good reason yet fail to explain more complicated facts that are entirely supervenient on them. (That, I’m afraid, is philosopher-speak. I have been damaged by reading too many philosophy papers. This is a very complicated subject, and it’s hard to write about without technical language, though on this blog I try to explain any technical terms the average reader might not know.)
Because it is so complicated (perhaps the hardest nut to crack in science) and because it is so essential to our lives (by definition, all our experience is, well, conscious experience), the study of mind and brain is probably my favorite part of science. You may have noticed there’s a lot of pharmacology and psychology on here. But it’s very hard to give a straightforward answer to a question like, “What’s your opinion on the mind-body problem?” Life after death, though? Nah, old fairy tale.
Here’s a fun idea: what if we could improve on nature and create more efficient blood? Or at least artificial blood that is as efficient as the real thing? The medical applications are too numerous and obvious to mention. There are two main approaches to artificial respiratory gas carriers—artificial ways of transporting oxygen and carbon dioxide around the body. One involves hemoglobin, the protein inside red blood cells that binds oxygen and CO2; but hemoglobin becomes much less stable outside red blood cells, so various modifications or micro-encapsulations are required. The other involves emulsions of fluorocarbons. Both approaches, however, have many inherent flaws.
In 1998, Robert A. Freitas proposed a radical idea: what if nanorobots could carry oxygen around, replacing red blood cells? Freitas’ design involves tiny nanoscale artificial cells, fueled by blood sugar, that store more than 200 times as much oxygen per unit volume as red blood cells. A nanoengine drives a rotor that sorts molecules and stores the right kinds inside, while a tiny nanocomputer calculates when to release them. Augmenting your blood with a solution of Freitas’ “Respirocytes” could allow you to hold your breath for hours—useful for divers, patients who stop breathing away from hospitals, endurance runners, geriatric patients, and a whole lot more. Being nonbiological, they could sit on the shelf indefinitely and still be ready to go, and once in the body could possibly last a lifetime.
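To get a feel for what “200 times as much oxygen per volume” buys you, here’s a rough back-of-envelope calculation. The numbers are generic physiology-textbook values plus my own assumptions (the one-liter infusion in particular), not figures from Freitas’ paper, so treat the result as an order-of-magnitude sketch:

```python
# Back-of-envelope estimate: how long could respirocyte-augmented blood
# sustain a resting human without breathing? All values are rough,
# textbook-style assumptions for illustration, not figures from Freitas.

BLOOD_O2_PER_L = 0.2    # litres of O2 carried per litre of normal blood
TOTAL_BLOOD_L = 5.0     # typical adult blood volume, litres
RESTING_O2_USE = 0.25   # litres of O2 consumed per minute at rest

INFUSION_L = 1.0        # hypothetical volume of respirocyte suspension
STORAGE_FACTOR = 200    # claimed O2 storage per volume vs. red blood cells

natural_reserve = BLOOD_O2_PER_L * TOTAL_BLOOD_L   # ~1 L of O2
augmented_reserve = natural_reserve + BLOOD_O2_PER_L * STORAGE_FACTOR * INFUSION_L

print(f"Natural reserve:   ~{natural_reserve / RESTING_O2_USE:.0f} minutes")
print(f"Augmented reserve: ~{augmented_reserve / RESTING_O2_USE / 60:.1f} hours")
```

On these assumptions, a natural reserve of about four minutes becomes one of nearly three hours, which is the same ballpark as the “hold your breath for hours” claim.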
Nanotechnology is far from the kind of precision and efficiency required to actually make respirocytes. And even if we could make them, no amount of theoretical calculations can tell us for sure how they’d behave in living humans. I’m not signing up to have my blood replaced by tiny robots anytime soon. But it’s certainly an interesting idea, if you can look at it with the appropriate amount of skepticism.
Gone are the days when we could comfortably assume that a single gene or hormone is responsible for a complex disease or behavior. As new data roll in, previously clear-cut cases turn out to be more complex. One such case is oxytocin, a neurohormone that decades of research have closely tied to prosocial, bonding behavior. This strong association has earned oxytocin the nickname “the love hormone,” and oxytocin nasal spray—currently used primarily for its effects on lactation in nursing women—has been proposed as a novel treatment for autism and social anxiety. But new data suggest that oxytocin also plays a role in human ethnocentrism, and in strengthening negative memories in mice.
Researchers at the University of Amsterdam investigated the effects of oxytocin on in-group versus out-group bias in a series of trials. Ethnically Dutch males self-administered either oxytocin or placebo, double-blind, and then completed a series of computer tasks designed to measure implicit bias towards a perceived in-group or against two perceived out-groups, represented by ethnically Dutch people and by Germans or people of Middle Eastern descent, respectively. The tasks included trolley problems, empathy exercises, and tasks where participants had to group positive words together with names from either the in-group or an out-group and negative words with the other, or vice versa (the speed with which this grouping takes place being a measure of implicit bias). The researchers found a significant increase in bias towards the in-group, as compared to either out-group, with oxytocin as compared to placebo. They also found limited out-group derogation; the good news is that oxytocin appears to increase favoritism towards the in-group more strongly than hate towards the out-group. Still, this research suggests the “love hormone” is implicated in ethnocentrism and xenophobia.
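For the curious, here is a toy sketch of how “speed of grouping” can be turned into a bias score, loosely modelled on the D-score used in implicit association tests. The reaction times and the simple scoring rule are fabricated for illustration; this is not the Amsterdam group’s actual analysis pipeline:

```python
import statistics

# Toy IAT-style scoring. Participants sort words faster when the pairing
# matches their implicit associations, so the reaction-time gap between
# "congruent" and "incongruent" blocks can serve as a bias measure.
# All reaction times (in ms) below are fabricated for illustration.

congruent_rts = [612, 598, 640, 575, 630, 601]    # in-group names + positive words
incongruent_rts = [734, 760, 698, 745, 722, 710]  # in-group names + negative words

def d_score(congruent, incongruent):
    """Mean RT difference divided by the pooled standard deviation,
    analogous to the IAT's D measure."""
    pooled_sd = statistics.stdev(congruent + incongruent)
    return (statistics.mean(incongruent) - statistics.mean(congruent)) / pooled_sd

# A positive score means faster responses when in-group names were paired
# with positive words, i.e. an implicit preference for the in-group.
print(f"Implicit bias score: {d_score(congruent_rts, incongruent_rts):.2f}")
```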
This contrasts with previous research, which has shown that oxytocin leads to prosocial behavior, decreases our aversion towards the unknown, and increases trust in economic games. Previous studies, however, may not have controlled properly for in-group versus out-group bias. For example, one study measured the brain activity of fathers as they looked at pictures of their own child, a familiar child, or an entirely unknown child, and found that oxytocin led to decreased activity in areas of the brain implicated in critical social evaluation. That study, however, did not control for the ethnicity of the children and fathers, or for other factors that may create an in-group versus out-group dynamic; it controlled only for gender and for whether the child was known to the father.
A recent study in mice found that oxytocin strengthens negatively charged memories: rather than reducing future anxiety, it strengthened it. In one experiment, three groups of mice (without oxytocin receptors, with normal densities of receptors, and with increased densities) were exposed to aggressive mice in a cage, a stressful experience. When later reintroduced to the cage, the mice without oxytocin receptors didn’t appear to remember the aggressive mice and did not exhibit fear responses. The mice with normal and high densities of receptors, however, exhibited typical and above-average fear responses, respectively. These effects were not limited to socially salient negative memories. In another experiment, the aggressive mice were replaced by startling, but not painful, electric shocks. When reintroduced to the box where they had received the shocks, the oxytocin-deficient mice did not show any particular fear, while the oxytocin-boosted mice showed enhanced fear responses.
These new data certainly complicate the picture of oxytocin as a cuddly hormone involved in all that is good in human nature. Despite this, research into the possible therapeutic benefits of oxytocin remains promising, though perhaps not as unanimously cheery as it may once have looked.
At the time of writing, we have discovered 916 planets outside our own solar system, in 706 planetary systems.
After we got over the silly idea that we are Special, that the universe revolves around us, we have been smitten by the opposite idea—that we’re typical, and that our home tract of the universe is probably broadly representative of the rest of existence (although we do know that life is somewhat rare, since we have yet to discover it elsewhere). But as it turns out, the more planetary systems we discover, the more our own looks like an outlier. Not just because it has a habitable planet, but in more general terms that have nothing to do with the conditions for life.
But consider this, also: we know of fewer than a thousand exoplanets, out of the likely billions in the unimaginably vast universe. The data point in one direction at the moment, but they are also incredibly limited. We don’t even know what we don’t know.
It’s an interesting time to be alive. It’s only been 25 years since we discovered the first exoplanet. Who knows what we will discover in the next twenty-five?
if you're not using this url anymore could i have it?
Since 2008, I have made approximately 300 posts on this blog, and almost every one of them has taken extensive research. It has been one month since the last post, which, if you look at the archives, is not uncommon. I do not post simply to fill up space. I have 160,000 followers and have never made a dime off this blog. I will update it when I have something interesting to write about.
I am not an expert in anything. I hold no scientific degrees, and there are certain topics I don’t write about simply because they are beyond my comprehension, and I detest lazy science journalism. Whenever possible, I read the original research papers if I’m going to report on a story. This is not a job but a hobby, and unless some wealthy patron is willing to pay me to do it, a hobby it will remain. The fact that one month has gone by without a post means nothing except that you have been spared one month of meaningless drivel. Rest assured, I will continue updating this blog for the indefinite future, but at my own pace.
If you have an interesting question, tidbit or fact related to science, you are welcome to submit it. If it is interesting, if it is in an area I am qualified to comment on, and if I can find reliable sources about it, I will make a post and credit you with the suggestion. For example, someone recently asked what, exactly, wormholes are, which is an interesting question, but not one I feel qualified to answer.
In this attention-deficient internet era, I want this to be a place to slow down, think, and marvel at the magnificence of nature, the ingenious mechanisms of life, the intricate network that connects the smallest quark with the largest supercluster. It is not a place for pretty pictures of cats, which would undoubtedly make my life easier. I’ve deliberately kept the meta-posts to an absolute minimum, because this blog is not about me or about my personal opinions, but this needs to be said. This blog’s ethos is that science is good, science should be shared with the people, and science should be reported accurately and in ways that laymen can understand without resorting to highly inaccurate simplifications. If you want more frequent content, you have two options: hire me for actual $$$, or be patient.
No. What is and is not science is a question which has vexed scientists and philosophers for hundreds of years. Many influential definitions have been put forth, but none are without weaknesses. I think it’s naive to suggest that such a heterogeneous body of practices and knowledge as 21st-century science can be defined in a few sentences. There is Popper’s falsifiability criterion, the “scientific” method as taught to high school students—hypotheses, predictions, experiments, confirmed theories—Kuhn’s “paradigm shifts,” and more besides. None of these general, relatively simple-to-state ideas manages to encompass all that is reasonably regarded by scientists as science while also excluding everything that is reasonably regarded as non-science.
The general principles of science are curiosity, methodological rigor, repeatability, falsifiability, peer review, and full disclosure of both failures and successes—but these alone are not enough to define what science is. I could waste more words, but at the end of it I would have to admit that neither I nor anyone else has successfully defined science in blog-post length. That is not to say that anything goes, or that what is and isn’t science is completely subjective—only that, despite all our efforts, we have yet to find a clear line of demarcation that is simple to state, includes everything that is science, and excludes everything that is not.
Why are barns painted red? Because red paint used to be cheapest. But why is red paint cheaper than other colors? Ultimately, nuclear physics. Yonatan Zunger explains. For more on stellar nucleosynthesis, see: we are all made of star stuff.