science tumbled

Why did we think we dreamed in black and white?

In the mid-twentieth century, the heyday of black-and-white media, researchers and laymen seemed to agree that dreams were wholly or predominantly black and white. Yet to Aristotle, Descartes or Kant, colored dreams seemed too obvious to question, and today, with popular media again appearing mostly in color, we report a much higher incidence of colored dreams. So what’s going on?

Did dreams, for a short span of a few decades, radically alter under the influence of popular media? Did Martin Luther dream in color, but Martin Luther King, Jr. in black and white? This hypothesis seems odd for several reasons. Firstly, because sleep has such a long evolutionary history, shared not just between humans from ancient times to today, but also between ourselves and other species. Strong evidence would have to be presented for the ability of popular media to so radically alter human experience in such a short period of time, and then let it so easily revert again. Secondly, because even at the height of black-and-white media, most of our time was spent looking at the real world, which is colored. And thirdly, because monochromatic imagery existed long before the twentieth century.

Perhaps, then, it is the reporting that is flawed. Dreaming is an amnesic experience: most of us can’t remember our dreams more than a few minutes after waking, unless the dream was particularly vivid or we take steps to record it. Maybe respondents simply compared their dreams to the popular media of the time; and maybe, in times before the mass distribution of media, people simply compared dreams to waking reality. This would suggest that dreams are mostly or wholly in color, yet our limited memory of them means we have to reconstruct them by analogy, whether to media or to conscious experience.

A third possibility, also explored in the linked essay, is that, like the features of a scene in a novel, colors are indeterminate or underdetermined in dreams. When you read the sentence, “she fled quickly from the scene,” can you not imagine a great variety of different scenes to be fled, a great variety of people fleeing, and a great variety of colors on everything and everyone involved? Or better yet, can you not read the sentence and comprehend it without constructing an elaborate mental image at all? Perhaps dreams are more like that, and dream tableaux consist of elements which cannot properly be understood as either having or lacking color—just as the word “red” is not red, and we can print it in many differently colored fonts, yet the word itself belongs to none and all of them—interspersed with color at times. Perhaps, the same way we can see a person in a dream and just know they’re our friend, our lover, or our mother even though they look nothing alike, we can see a rose and know it’s red without perceiving the color. Or perhaps we attribute the color afterwards, not to a black-and-white picture, but to a picture that is not a picture at all so much as a collection of mental symbols.

A fourth possibility is that we just don’t know, that our introspection is completely unreliable when it comes to the sensory modalities of dreams. One intriguing question is raised by the essay: today, it may seem natural to report that we can walk in dreams yet do not feel our feet touching the pavement. In the future, if media emerge that incorporate more senses—touch, smell, taste in addition to audiovisual media—will we report these senses as being more prominent in dreams, too?

One thing seems clear from historical writings, though: the idea that, before color films and color television, we all dreamed in black and white is almost certainly false.

A brave new world for psychology?

It’s no secret that I and this blog are excited about the return of psychedelic drugs to academic study. After four decades of being left out in the cold, labelled criminal and shunned, drugs such as LSD, magic mushrooms and DMT are finally being investigated by scientists again. Not only do these drugs hold great potential for helping us understand more about how the brain functions in both normal and altered states of consciousness, they also hold great promise as therapeutic aids in clinical psychiatry.

The Psychologist magazine currently has a free special issue on psychedelics in psychology. There’s lots of interesting stuff there, including an article on how psychedelics exert their effects in the brain. The classic psychedelics, such as LSD and magic mushrooms, activate the serotonin 2A receptor in particular, leading to a cascade of changes on many levels of the brain:

Much of brain activity is rhythmic or oscillatory in nature and electroencephalography (EEG), magnetoencephalography (MEG) and local field potential (LFP) recordings are techniques that measure the collective, synchronously oscillating activity of large populations of neurons. Studies in animals and humans have found decreases in oscillatory activity in the cortex after the administration of hallucinogens, and in one of our most recent and informative studies with psilocybin we observed a profound desynchronising influence on cortical activity (Muthukumaraswamy et al., 2013). (…)

To help illustrate this principle by analogy, the strength of cortical rhythms can be thought of as analogous to the rhythmic sound generated by a population of individuals clapping their hands in synchrony. The presence of an individual clapper among a population of clappers means that his/her rate of clapping becomes quickly entrained by the collective sound generated by the population as a whole. Now imagine that a number of mischievous ‘ticklers’ are introduced to the scene, inducing sporadic clapping by tickling individual clappers. Although the individuals targeted may be excited into clapping more often, there will be a disruptive effect on the regularity and volume of the sound generated by the population as a whole. The basic principle is that although hallucinogens excite certain excitatory neurons in the cortex to fire more readily, this has a disorganising influence on cortical activity as a whole.
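The disorganising effect in this analogy can be sketched with a toy model of coupled oscillators (a simplified Kuramoto model; the model and all parameters below are my own illustration, not taken from the cited studies): the “clappers” entrain to a shared rhythm, then sporadic “tickles” knock random clappers off-beat, and the collective synchrony drops even though every clapper keeps clapping.

```python
# Toy "clappers and ticklers" sketch using a simplified Kuramoto model of
# coupled oscillators. Illustrative only; not the model from the quoted studies.
import numpy as np

rng = np.random.default_rng(0)
n, dt, steps = 200, 0.01, 4000
omega = rng.normal(10.0, 0.5, n)      # each clapper's natural rate (rad/s)
coupling = 2.0                        # how strongly clappers entrain to the crowd
theta = rng.uniform(0, 2 * np.pi, n)  # initial clapping phases

def synchrony(phases):
    """Kuramoto order parameter: 1 = perfect unison, 0 = no collective rhythm."""
    return float(np.abs(np.exp(1j * phases).mean()))

for t in range(steps):
    # Each clapper drifts at its own rate but is pulled toward the crowd's mean phase.
    mean_phase = np.angle(np.exp(1j * theta).mean())
    theta = (theta + (omega + coupling * np.sin(mean_phase - theta)) * dt) % (2 * np.pi)
    if t >= steps // 2:                # halfway through, the "ticklers" arrive:
        kicked = rng.random(n) < 0.05  # each step, a few random clappers are excited
        theta[kicked] = rng.uniform(0, 2 * np.pi, kicked.sum())  # and knocked off-beat
    if t in (steps // 2 - 1, steps - 1):
        print(f"synchrony at t = {(t + 1) * dt:.0f} s: {synchrony(theta):.2f}")
```

Before the ticklers arrive, the order parameter sits near 1; afterwards it falls sharply, mirroring the quoted principle that exciting individual units can disorganise the population rhythm as a whole.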

And further, psychedelics have the potential to dissolve the ego, our perception of a continuous self. The mechanism for this seems to be screwing with the so-called “default mode network,” a network of brain regions that is active in the background pretty much all the time, helping to maintain our regular sense of ourselves as unitary selves flowing through time:

Evidence has accumulated in recent years highlighting a relationship between a particular brain system and so-called ‘ego functions’ such as self-reflection (Carhart-Harris & Friston, 2010). This network is referred to as the ‘default mode network’ because it has a high level of ongoing activity that is only suspended or interrupted when one’s attention is taken up by something specific in the immediate environment, such as a cognitive task (Raichle et al., 2001).

It was a matter of great intrigue to us therefore that we observed a marked decrease in brain activity in the default mode network under psilocybin (Carhart-Harris et al., 2012) whilst participants described experiences such as: ‘Real ego-death stuff! I only existed as an idea or concept… I felt as though I was kneeling before God!’

The default-mode network is also called the “task-negative” network, because it is anticorrelated with the so-called task-positive network, a brain network that is highly engaged when our attention is on goal-oriented activity. The anticorrelation means that when one system is highly active, the other is not, and vice versa. Thus we can, to put it in terms perhaps a little too much like pop psychology, literally “lose ourselves” in a task or activity. This comes about because the default-mode or task-negative network is—as far as we understand the brain at this time—largely responsible for introspection and maintaining our sense of self, while the task-positive network, activated during goal-oriented activity, intrinsically suppresses this introspective network.
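As a toy illustration of what “anticorrelated” means here (made-up signals, not real imaging data), two activity traces that ramp up and down in opposition have a correlation coefficient near −1:

```python
# Toy illustration of anticorrelated signals; synthetic data, not fMRI.
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0, 10, 1000)

# When the (hypothetical) task-positive trace is high, the default-mode
# trace is low, and vice versa, plus a little noise.
task_positive = np.sin(t) + 0.1 * rng.normal(size=t.size)
default_mode = -np.sin(t) + 0.1 * rng.normal(size=t.size)

r = np.corrcoef(task_positive, default_mode)[0, 1]
print(f"correlation: {r:.2f}")  # close to -1: one active while the other is suppressed
```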

Thus there are similarities between the psychedelic state and flow states, when we are so engaged in an activity that everything else, including our sense of self, seems to fade away into the background. That, of course, doesn’t mean that flow states and being on LSD are exactly alike—there are many differences too obvious to point out. However, it does indicate that we do in fact enter altered states of consciousness all the time: when we’re deeply engaged in an activity, when we’re asleep or half asleep, and so on, not just when we’re taking mind-altering drugs or engaging in ritualistic religious rites.

There’s more interesting stuff in the issue, such as third- and first-person accounts of psychedelic treatment, and a look at what famous writers have had to say about being on hallucinogens. There’s also an article by Vaughan Bell on cultural views on chemically induced hallucinations:

The typical Western account of why ayahuasca is consumed usually focuses on ‘getting in contact with the spirit world’, but this fails to capture either the cultural worldviews in which ayahuasca consumption is situated or the motivations behind the ceremonies. The first thing to note is that Amazonian people can differ greatly in how they understand reality in relation to themselves. For example, the Cashinahua, Siona, and Schuar peoples all use ayahuasca as a tool for revelation but differ markedly in how they understand the experiences it produces. The Cashinahua understand ayahuasca as causing hallucinations that provide guidance (Kensinger, 1973), the Siona believe that it allows access to an alternative reality (Langdon, 1979), while the Schuar take all normal human experience to be a hallucination and take ayahuasca as a way of accessing true reality (Obiols-Llandrich, 2009).

Perhaps, then, it’s fitting to end with a quote from one of the most famous writers on psychedelics, Dr. Gonzo himself, Hunter S. Thompson. Thompson regarded himself and his reckless drug use as embodying something of a national archetype:

I am the prototype, the perfect American. Half out of control, violent, drunk, high on drugs, carrying a .44 Magnum. Rather than being strange, I may be the embodiment of the national character…all the twisted notions that have made this country the beast it is.

The solar eclipse of August 21, 1914, seen from 66 degrees north, in the town of Sandnessjøen, in Northern Norway. Solar eclipses are always cool, and this is especially interesting to me because the center of this eclipse, the point at which the Moon most completely obscured the Sun, passed over my hometown one hundred years ago. The German scientist Adolf Miethe took a huge risk traveling to Norway to build an observatory specifically for this astronomical event. If the day had been overcast, all would have been for nought.

Many astronomers were interested in observing this event, but the outbreak of war prevented many of them from doing so. Luckily, Miethe and his team got to observe the event even as their country went to war, though three of his fellow expedition members had to return home for military duty.

Miethe is an interesting character, having co-invented both an early photographic flash and a process of color photography.

Observations of solar eclipses later helped confirm Einstein’s general theory of relativity, when one of its predictions, the bending of starlight by the Sun’s gravity (gravitational lensing), was observed during the 1919 eclipse.

The locals, however, were reportedly unimpressed by the eclipse, having expected it to be darker. Oh, well.

starbuqzz said: Why is yawning contagious?

Mirror neurons. These are neurons with a curious property: they fire when you do something, but also when you observe someone else performing the same action. Much speculation surrounds the functional role of mirror neurons, in particular how they might factor into developing empathy, and whether defects in the mirror neuron system could contribute to autistic spectrum disorders, which are characterized by poor cognitive empathy.

In this instance, we’re seeing a primitive kind of “motor empathy,” which might underlie cognitive empathy, our ability to understand others’ thoughts, feelings, motivations and so on from their outward behavior. Brodmann area 9, a part of the mirror neuron system in the brain, lit up when test subjects engaged in contagious yawning. This area of the brain has also been implicated in mentalizing, that is, precisely in understanding other people’s mental states. Interestingly, in people with major depressive disorder, neurons in this area have been found to be smaller, and glia—the support cells which are more numerous than neurons, and are increasingly understood to play more than just a passive role in thought—to be fewer and further between.

It remains to be seen exactly what role mirror neurons play in human empathy, but they’re certainly interesting. It’s fascinating that not only can we automatically do something because we saw someone else do it, but that this automatic act is caused by parts of our perfectly healthy brain being unable to distinguish between ourselves and our fellow human beings.

llapacas said: Is it true we don't use all of our brain??? If so, why can't we. I mean, we have a brain, why not use it all to its superlative capability?

Why not indeed. The idea that we only use a small portion of the brain, usually quantified by a very specific number, is completely false. I don’t even have a guess as to where it originated, but it has since spread and infected the public consciousness. We do, in fact, use all of our brain.

Of course, this implies that we can’t just “switch on” the rest of the dormant brain and magically become smarter and more handsome, like Bradley Cooper’s character in Limitless. If it sounds too good to be true, it probably is.

However, that doesn’t mean that the way in which our brain operates is at all times completely optimal for our goals. Increasing or decreasing activity in certain parts of the brain, or certain neurotransmitter pathways, could plausibly make at least some of us happier or more productive. Which, of course, is nothing new, since we have been using psychoactive drugs for such purposes since the dawn of medicine. As we learn more about the brain, we will come closer to the level of understanding required to really mess with it in ways that can, possibly, make us smarter or happier without risking dangerous side effects. But we aren’t really there yet. Most current drugs, and non-drug methods of altering the brain, come with a long sheet of possible adverse reactions.

Obviously it would be easier if there really were large swaths of the brain going unused all the time, just sort of hitchhiking on the evolutionary trail, a sort of parasitic neural network gobbling up nutrients and energy—the brain is the part of our body that uses the most energy as compared to its volume—that we could activate to become superhuman. But that really isn’t the case.

And if you think about it, the myth really makes no sense at all on two levels. First, why would we have a huge organ that consumes huge amounts of precious (at least in prehistoric times) energy if we only used a small portion of it? If we could make do with the brain of a baboon, we would never have retained, or evolved, such a big brain in the first place. And secondly: consider the extremely implausible-even-for-a-hypothetical scenario that we were all actually carrying around a huge brain but only using a small portion of it. That would constitute normal experience. What would happen if we suddenly activated the rest? In the movies, the obvious answer is that we’d be superhuman. But maybe we’d actually become emotional wrecks, or maybe we’d become intellectually impaired because the mind could not integrate all the new activity into a coherent picture.

Luckily for us, no such dilemma faces us. The 10% or whatever number is making the rounds is completely fabricated.

However, while it’s not the case that ordinary healthy people go around not using a large chunk of their brain, it is possible to survive, and even thrive with minimal loss of cognitive function, with only half your brain. A procedure known as hemispherectomy involves removing or disconnecting one hemisphere of the brain. Due to the obvious risks of cutting out or off one half of someone’s brain, this surgery is only performed in extreme cases of epilepsy where the source of seizures has been found to be localized to one hemisphere. Remarkably, the brain, especially if the surgery is performed at a young age, is able to adapt, with the remaining hemisphere taking over essentially all of the functions of the one that was removed.

Deep Water

In March, a study reported an interesting finding: inside a diamond brought up from the depths of the Earth by a volcano in Brazil, a small piece of the mineral ringwoodite was found, and about one percent of its mass was accounted for by water bound in solid form inside the crystalline structure. Now, a study bringing together evidence from an array of seismic sensors across the United States and laboratory work simulating the conditions of the transition zone between the Earth’s upper and lower mantle, roughly 410 to 660 kilometers down, suggests that this was no anomaly. The lab work suggests that, under the extreme pressures of the transition zone, ringwoodite can soak up more than one percent of its mass in water. When some of this ringwoodite is pushed further down into the lower mantle, it gets crushed into a different kind of mineral that can’t hold water. As a result, the rock “sweats” water, which is trapped in pockets deep beneath the surface.

The observations of seismic waves found changes in wave velocity consistent with such subterranean water. If 1% of the rock in the transition zone is water, that would be the equivalent of three times the mass of water in all of the oceans on the surface.
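A rough back-of-envelope check of that “three oceans” figure (using round textbook values I’m assuming here, not the paper’s own numbers):

```python
# Back-of-envelope: mass of water if ~1% of the transition zone's mass is
# water, compared with the surface oceans. Round assumed values throughout.
import math

R_EARTH = 6_371e3            # Earth's mean radius, m
TOP, BOTTOM = 410e3, 660e3   # transition-zone depths, m
RHO_ROCK = 3.8e3             # typical transition-zone rock density, kg/m^3
OCEAN_MASS = 1.4e21          # total mass of the surface oceans, kg

r_outer = R_EARTH - TOP
r_inner = R_EARTH - BOTTOM
shell_volume = 4 / 3 * math.pi * (r_outer**3 - r_inner**3)  # spherical shell, m^3
water_mass = 0.01 * RHO_ROCK * shell_volume                 # 1% of the rock's mass

print(f"water in transition zone: {water_mass:.1e} kg")
print(f"ratio to surface oceans:  {water_mass / OCEAN_MASS:.1f}x")  # roughly 3x
```

With these round numbers the ratio comes out close to three, in line with the estimate above.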

Astronomers find a new type of planet: The 'mega-Earth'

Typically, planets much larger than Earth would be gas giants. That’s what we thought, anyway. But now astronomers have discovered an exoplanet seventeen times heavier than Earth, made up of rock and solids, some 560 light-years away. Not only is the planet exceptionally large for its composition, it’s also surprisingly old: its parent solar system is 11 billion years old, and since the universe itself is only about 13.8 billion years old, the system formed less than three billion years after the Big Bang. To make the heavier elements needed to build a rocky planet, you need stellar nucleosynthesis—stars fusing atomic nuclei into successively heavier elements until they explode, dispersing the material, which can then form planets. There weren’t a whole lot of heavy elements present in the universe that early on, but apparently, there was enough to create Kepler-10c. Fascinating.

Think of the implications for life elsewhere in the universe. Although we have yet to confirm the existence of such life, the conditions conducive to it could have appeared much earlier than one would have thought.

oneidaiscrazyforyou said: Why is carbon Dioxide Really hot?

Carbon dioxide isn’t really hot. Like any other gas, its temperature depends on how much you heat it; at ordinary atmospheric pressure, CO2 doesn’t even have a liquid phase, going straight from solid to gas at about −78 °C, far below freezing. But carbon dioxide and other greenhouse gases in the atmosphere absorb infrared radiation emitted by the Earth’s surface that would otherwise escape into space, thus increasing the average temperature on our planet’s surface. In absolute terms, global warming doesn’t amount to much warming at all—if you saw it on the weather forecast, you might shrug it off—but an increase in average temperature of only a few degrees can have dramatic and devastating consequences.
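To see how much those greenhouse gases matter, here’s a minimal sketch using the Stefan–Boltzmann law (with round assumed values): without infrared-absorbing gases, Earth’s equilibrium temperature would be well below freezing.

```python
# Equilibrium (effective) temperature of Earth without a greenhouse effect,
# from the Stefan-Boltzmann law. Round assumed values.
SOLAR_CONSTANT = 1361.0  # sunlight at Earth's distance, W/m^2
ALBEDO = 0.30            # fraction of sunlight reflected straight back to space
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/(m^2 K^4)

# Absorbed sunlight, spread over the whole sphere, balances emitted infrared:
#   (1 - albedo) * S / 4 = sigma * T^4
t_eff = ((1 - ALBEDO) * SOLAR_CONSTANT / (4 * SIGMA)) ** 0.25
print(f"effective temperature: {t_eff:.0f} K ({t_eff - 273.15:.0f} degrees C)")
# Prints roughly 255 K (about -18 C). The observed mean surface temperature
# is around 288 K; that ~33 K difference is the greenhouse effect at work.
```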

In-ear headphones were patented all the way back in 1891, when French engineer Ernest Mercadier invented his “bi-telephone”:

After extensive testing and optimization of telephone receivers, Mercadier was able to produce miniature receivers that weighed less than 1 3/4 ounces and were “adapted for insertion into the ear.” His design is an incredible feat of miniaturization and is remarkably similar to contemporary earbud headphones, down to the use of a rubber cover “to lessen the friction against the orifice of the ear… effectually close the ear to external sounds.”

Surely there’s a hip kickstarter waiting to happen in there somewhere.

Who Can Name the Bigger Number?

Ah! Sometimes I need to be reminded why I love science in the first place. The answer is simple curiosity, and the extraordinary sensation of satisfying it. A child-like wonder at the world is a great thing. It can lead in two directions: either to the mystic, who so clings to that wonderful feeling that any attempt to dissolve it by explanation is seen as a threat; or to the scientist, who enjoys the wonder for what it is, but sees it rather as a motivation to explore, invent, discover, and seek the equally extraordinary sensation of satisfying curiosity. That is the phenomenology of science in a nutshell, the study of what it feels like to do science, or to learn science; or at least my idealized version of it.

Scott Aaronson is the kind of rigorous modern scientist who hasn’t lost touch with his child-like curiosity and wonder at the world. He asks an innocent question—who can name the bigger number in fifteen short seconds—and goes on to explore how this question connects to a series of incredible discoveries in the history of mathematics and science. And he’s funny, too. Read it. I’m surprised I haven’t linked this essay before.