selpac asked: How do you measure air pressure?
It isn’t always the case, but this time you should actually go to Wikipedia for a good explanation. Read about the development of the barometer and the theory of gauging pressure. It’s rather fascinating that people were able to figure all this out 400 years ago. And how did they do it? By questioning a long-held assumption: that air is weightless. That’s a fine example of the scientific method.
I hate it when people choose nicknames that don’t work in conversation. A Photo of Dorian Grey, alternatively “50% physics, 50% mountains,” writes:
Actually the first general-purpose digital computer was built by Tommy Flowers in 1943.
That computer was the Colossus, and the Colossus computer was not Turing complete. Therefore, it wasn’t the first general-purpose digital computer.
During the 1930s, the decade before the first modern computers were built, many mathematicians were working on the fundamentals of computation. Just what was computable? What problems could and could not be solved algorithmically, and how could this property be formalized? Several men solved this problem independently, in different ways which turned out to be equivalent.
Alan Turing was one of those men. He imagined a theoretical machine which would later be known simply by his name—the Turing Machine—and which could compute anything that is theoretically computable. A Turing Machine consists of an infinite tape of cells one after the other, filled with symbols. A reading head moves along the tape, scanning the content of one cell at a time and deciding, based on a small table of instructions, whether to change the symbol and whether to move left or right. A specific Turing Machine cannot be programmed; it can only solve the single problem for which its instruction table was designed. But Turing further imagined a Universal Turing Machine (UTM) which could simulate any other Turing Machine by reading off not only the input but also the instruction table from the tape. In effect, this is how modern computers with programs stored in memory work.
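The machine described above is simple enough to sketch in a few lines of code. Here is a minimal Python sketch (the helper names and the example instruction table are my own, for illustration; this shows the general idea, not any particular historical machine):

```python
# A minimal Turing machine: an instruction table mapping
# (state, symbol) -> (symbol to write, head move, next state).
def run_turing_machine(table, tape, state="start", blank="_", max_steps=10_000):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = table[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    # Read the tape back, left to right, trimming blanks
    return "".join(
        cells.get(i, blank) for i in range(min(cells), max(cells) + 1)
    ).strip(blank)

# Example instruction table: increment a binary number. The head walks
# right to the end of the number, then carries 1s leftward.
increment = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "done"),
    ("carry", "_"): ("1", "L", "done"),
    ("done", "0"): ("0", "L", "done"),
    ("done", "1"): ("1", "L", "done"),
    ("done", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(increment, "1011"))  # prints 1100 (11 + 1 = 12)
```

A Universal Turing Machine is then just such a machine whose tape also contains an encoding of a table like `increment`, which it interprets step by step.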
The UTM is one way to formalize the concept of what can and cannot be computed. It’s a very simple, elegant theoretical construct, and very clever people have managed to devise ingenious UTMs that use a tiny set of symbols and rules to effectively compute anything that can be computed. A computer is said to be Turing complete if and only if it can compute exactly the same things a UTM can compute. (Ignoring the fact that non-theoretical computers have finite memory.) If it cannot, it is not a general-purpose computer in the modern sense, since there are algorithmic problems it cannot solve. The Colossus was very useful for its purpose, but it was not Turing complete.
Around the same time, Alonzo Church attacked the problem of defining computability from a slightly different angle. He devised a formal system called the lambda calculus, which is also extraordinarily elegant and simple. It is built upon the theory of anonymous functions and substitution. Wikipedia defines it roughly as follows: a lambda term is either a variable x; an abstraction (λx.M), a function that takes a parameter x and has the lambda term M as its body; or an application (M N), which applies the function M to the argument N.
In effect, lambda calculus is defined by anonymous functions that each take one named variable and can later be applied to one lambda term. Only the syntax above and a couple of rules about how to perform function application are needed. From these humble beginnings, some clever bootstrapping allows us to create something called the Y combinator, which lets us use recursion in a calculus that has no native support for it. Numbers can be encoded using Church encoding. If you’re willing to do the theoretical groundwork, you can compute anything that is computable using the lambda calculus. As it turns out, Church’s lambda calculus and Turing’s UTM are equivalent; they define the exact same class of functions. And the Church-Turing thesis posits that anything that is computable is computable by both these formal systems.
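Church encoding is easy to play with in any language that has first-class anonymous functions. Here is a sketch in Python (the encodings themselves are standard lambda-calculus material; the decoding helper `to_int` is mine, for illustration):

```python
# Church numerals: the number n is encoded as "apply a function f n times".
zero = lambda f: lambda x: x                       # f applied zero times
succ = lambda n: lambda f: lambda x: f(n(f)(x))    # one more application of f
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mul = lambda m: lambda n: lambda f: m(n(f))        # f applied n times, m times over

def to_int(n):
    """Decode a Church numeral by counting how many times f gets applied."""
    return n(lambda k: k + 1)(0)

one = succ(zero)
two = succ(one)
three = add(one)(two)

print(to_int(three), to_int(mul(two)(three)))  # prints 3 6
```

Everything above is built from nothing but single-argument anonymous functions and application, which is exactly the point of the calculus.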
Now, these formal systems aren’t meant to be practical. When they were invented, they were intended only to define computability, not to provide a practical means of achieving it in a machine. Yet the Turing Machine formed the basis of the Von Neumann architecture, which is the basis of almost all modern computers. And many modern programming languages—called functional programming languages, because they are based around functions in the mathematical sense—are based on lambda calculus. The most basic form of lambda calculus sketched above is called the untyped lambda calculus, because the variables have no type; any function can accept any variable. But typed variants of the lambda calculus are very important in type theory, a field at the intersection of computer science and pure mathematics that investigates ways to reduce errors in computer programs by more rigorously defining what kinds of operations can be performed on which variables. For instance, it makes no sense to perform a “search for the letter ‘t’” operation on an integer, but it makes perfect sense to do so in a text string like “this is a string!” These are some of the errors typed programming languages can help catch before the program runs, during development.
One of the programming languages strongly based on a typed lambda calculus is Haskell, named for Haskell Curry, another computer science pioneer. He is known, among other things, for the so-called Curry-Howard isomorphism, which states a formal equivalence between mathematical proofs and computer programs. This equivalence is useful in several ways. If one has a particularly gnarly mathematical problem, such as the four-color theorem, one could write a computer program that is equivalent to a proof of the theorem. Or if one has a particularly gnarly computing problem, one can write a mathematical proof that some algorithm is correct, but which also doubles as an executable program that can perform that algorithm.
If you’d like to learn more about the fascinating world of fundamental computer science, I highly recommend the book Structure and Interpretation of Computer Programs. It is a programming manual, but also the best introduction to computer science ever written. It’s practical, it’s theoretical, it will tax your brain, and if you read it and complete the exercises, you will become a zen master of computing. (I cheated; I read the book but skipped some exercises. Don’t be like me. Be a real man/woman, do the work, learn the good stuff.)
ENIAC, the first general-purpose, programmable digital computer, was developed at the University of Pennsylvania for the US Army and completed in late 1945. The machine was revolutionary for its time, but it would soon be obsolete in two ways: (1) it used decimal rather than binary arithmetic, and (2) it was built on vacuum tubes. In 1947, the invention of the transistor, which replaced the costly and unreliable vacuum tubes, paved the way for the digital revolution.
The history of computing goes much further back, of course. The first programmable, mass-produced machine was the Jacquard loom, which was programmed by punch cards and occasioned the development of the Luddite movement, as skilled textile workers now became replaceable by machines. In those days, a computer was a person who performed calculations, generally rote work like calculating logarithmic tables. Interestingly, although mathematics has traditionally been an area largely closed off to women, many women did important work in the early history of computing.
The most famous name is Ada Lovelace, daughter of Lord Byron. She could by rights be called the first computer programmer, as she wrote programs for Charles Babbage’s analytical engine, which would have been the first general-purpose computer had it ever been completed. Sadly for Babbage and for Lovelace, they were a hundred years ahead of their time. But countless others worked behind the scenes. Many women were hired as manual computers, partially because of their skill and partially because, by the gender norms of the time, they could be paid less than men. One such group was “Pickering’s Harem,” women who processed astronomical data for Edward Charles Pickering, head of the Harvard Observatory from 1877 to 1919.
Six women were the primary programmers on ENIAC. In 1997, they finally received their due credit.
See ya later world asks:
What are most scientists’ views on paranormal phenomena?
The simple answer is that most of them don’t believe in it. Paranormal research rarely intersects with mainstream science. Most so-called researchers into the paranormal have none of the rigor needed to perform real science, and their “experiments” usually have methodological flaws that can easily be spotted by a bright middle schooler. On occasion, actual scientists attempt to explore alleged paranormal phenomena, and sometimes there’s even a semblance of rigor to their investigations. Such is the case I’ll tell you about today.
Back in 2011, professor of psychology Daryl J. Bem turned a lot of heads when he published rigorous experimental data that appeared to prove a form of extrasensory perception (ESP), in the form of precognition and premonition: the ability of future events to determine an individual’s thoughts and feelings in the present. Clearly, such informational time travel would go against everything we know about physics. But Bem is no run-of-the-mill crackpot: he is a widely cited and influential psychologist best known for his self-perception theory of attitude formation, which states that we form our attitudes by observing our behaviors, rather than the other way around. Although counterintuitive, the idea has found support in many studies. For instance, while we know that happy people smile and angry people frown, it has also been shown that people get happier by smiling and angrier by frowning. Bem’s smart move in his ESP research was to conduct it according to the established standards of psychological science. He also encouraged others to attempt to replicate his findings, correctly reasoning that replication is at the heart of science.
Bem’s experiments were rather clever. He simply took established psychological effects and time-reversed them. For instance, it is known that mere exposure to a word or concept can “prime” a person to more readily think of, or even like, the concept at a later time. If you read a list of words that includes the word table, and later do a word-completion task in which you are asked to make a word beginning with the letters tab, you are much more likely to go for table than if you had not been primed. This effect persists even after you have consciously forgotten the priming, or even if you were never aware of it in the first place. The same concept underlies subliminal advertising. Two of Bem’s experiments applied priming after the fact, and appeared to show a “retroactive priming” effect. The setup resembled a typical priming experiment: subjects were asked to judge whether each in a series of pictures was pleasant or unpleasant. Usually, people respond faster when an emotionally congruent word is flashed before an emotionally charged picture than when a word of the opposite emotional charge is flashed (e.g., a negative word before a positive picture). Bem observed that this effect persisted even when the word was flashed after the picture.
In total, Bem ran nine different ESP experiments, each with 100 or more participants, all time-reversed variants of known psychological phenomena. Eight of the nine appeared to show statistically significant evidence for ESP phenomena. Bem also appeared to show evidence for a link between stimulus seeking (a personality characteristic associated with extraversion) and ESP abilities, as more stimulus-seeking individuals (as indicated by their answers to one or two questions) seemed to exhibit a stronger ESP effect.
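To get a feel for why this turned heads: if there were no effect at all and each experiment had an independent 5% chance of a false positive (the conventional significance threshold), eight or more significant results out of nine would be astronomically unlikely. (A back-of-the-envelope sketch of my own; real experiments are never perfectly independent.)

```python
from math import comb

def p_at_least(k, n, alpha=0.05):
    """Probability of k or more false positives out of n independent tests."""
    return sum(comb(n, i) * alpha**i * (1 - alpha) ** (n - i)
               for i in range(k, n + 1))

print(f"{p_at_least(8, 9):.1e}")  # prints 3.4e-10
```

Which is precisely why the results demanded either replication or an explanation of what went wrong.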
Enter the scientific process. As Bem agrees in his paper, extraordinary claims require extraordinary evidence, and one batch of experiments isn’t enough to disprove a very stable theory about how the world works. If ESP exists, what we think we know about physics goes out the window. Because Bem, unlike most researchers into the paranormal, did not believe that the paranormal was above the normal process of science, and also because he has a good scientific track record, other researchers took his claims seriously and set out to replicate or discredit them by repeating Bem’s experimental procedure.
Most attention was given to Bem’s eighth and ninth experiments: they were among the easiest to replicate according to Bem, they had some of the largest effect sizes of all the experiments, and they provide the least amount of wiggle room. Either they should show definite effects, or not. If performed correctly, there is little room for observer bias, and there are also few points of contention (unlike some of the other experiments, which rely on participants’ subjective responses, where null results could conceivably be due to idiosyncrasies in the participants).
The eighth and ninth experiments investigated retroactive facilitation of recall. Participants were briefly shown 24 test words and 24 control words, unaware of which category each word fell into. They were then given as much time as they wanted to freely recall as many of the words as possible. Finally, they were given the 24 test words to practice. The results appeared to show that memorizing words after the fact could affect recall in the present: the words the participants would later practice were more readily recalled than the control words, despite the fact that the participants didn’t know which words they would later practice. The usual effect of practice, naturally, is that words you have previously attempted to memorize are more readily recalled than unknown words.
Bem’s original paper was called Feeling the Future: Experimental Evidence for Anomalous Retroactive Influences on Cognition and Affect. It was published in the Journal of Personality and Social Psychology. In 2012, the same journal published a large study attempting to replicate Bem’s retroactive recall experiments, called Correcting the Past: Failures to Replicate Psi. As the title suggests, the study attempted to replicate the two retroactive recall experiments and found strong evidence for the null hypothesis (i.e., no ESP). It’s important to note that this is not the same as merely not finding evidence for ESP. Bem’s hypothesis predicted certain effects, which Bem’s results appeared to support. However, a larger-scale attempt to replicate the findings did not yield those results. The failure to produce the expected results is not simply a lack of evidence for Bem’s hypothesis; it is strong evidence for the opposite. Insofar as Bem’s hypothesis is falsifiable, it has been falsified. Retroactive facilitation of recall doesn’t exist. The authors did seven different experiments with a sum total of 3,289 participants, and also analyzed Bem’s own data and other independent attempts to replicate the eighth and ninth experiments. In total, more than 6,000 subjects participated in the analyzed experiments, and the effect was not replicated. Only Bem’s own data show evidence for ESP.
Now, that was a good-faith attempt to replicate Bem’s results. Others responded more negatively, analyzing Bem’s procedures and data and suggesting that he deliberately withheld negative results, or stopped experiments at the point where his desired results were found. One example was the observation that effect sizes were inversely proportional to the number of participants in the studies, certainly a red flag. Others went so far as to suggest that to the extent that Bem’s experiments followed established procedure in psychological research, there’s something wrong with established procedure.
Now, you may think: perhaps the eighth and ninth experiments didn’t pan out, but what about the other experiments? But it doesn’t bode well for Bem’s hypothesis when his most rigorous experiments, the ones that most unambiguously showed results, cannot be replicated.
Nevertheless, Bem should be applauded for attempting to bring rigor and proper methodology to a field of study that is usually mocked—rightfully so—as pseudoscientific at best. Although some fellow scientists were immediately dismissive of Bem’s results, he was rigorous and forthcoming enough to warrant attempts at replication by other scientists. If their experiments prove him wrong, well, that’s science for you. At least it isn’t, as the theoretical physicist Wolfgang Pauli was supposedly fond of saying, so bad that it’s “not even wrong.”
First off, I love reading the stuff you guys post. I find it all very interesting. Currently, I’m looking into the evolution of whales, as evolution is among the most fascinating topics for me. But, the reason I’m sending this message is because I’m a bit lost and I’m hoping you may be able to help out. So far, I have a basic understanding of whale evolution and know what the key adaptations were that lead to our modern whales. My confusion, however, comes when I look at the phylogeny of whales. Perhaps I’m not looking at it right, but I see many “gaps”, if you will, that would connect one ancestor to another. For example, they show that Pakicetus and Ambulocetus share a common ancestry, but don’t show what the common ancestor would be. I’m curious if you could elaborate on this for me. Perhaps the phylogenetic trees I’m looking at simply don’t go that in depth or scientists in that field haven’t found a common ancestor that links them, or maybe I just don’t understand how to read a phylogenetic tree properly. Any help or information you could give me would be greatly appreciated.
Whales evolved within the even-toed ungulates. Modern examples of this order include cows, pigs, giraffes, hippos, deer, camels and chevrotains. The earliest known whales are indeed Pakiceti, small creatures that waded in shallow waters in Pakistan some 50 million years ago. From these evolved the amphibious, crocodile-like Ambulocetids, and from them, the protocetids or proto-whales, aquatic creatures that died out around 42 million years ago. It is from them the Basilosaurids, ancestors of modern whales and the most whale-like of the whale ancestors, evolved.
The closest living relative of whales, dolphins and porpoises has been determined by genetic analysis to be the hippopotamus. This doesn’t help paleontologists all that much, however, since the earliest known proto-hippos are only 15 million years old. More recent findings have unearthed Indohyus, or “Indian pig,” as the closest known relative of the whale ancestors. Indohyus resembled small deer, the size of modern-day raccoons, or perhaps most closely (in terms of modern-day creatures) the mouse deer or chevrotain. They would have hidden in the water during times of danger and otherwise waded in shallow waters, and it is not known whether they had a mostly terrestrial or aquatic diet. Indohyus is not a direct ancestor of whales, but rather the closest cousin to an as yet unknown whale ancestor, more closely related to whales than either is to the hippo.
So to answer your question: the Pakiceti are the ancestors of the Ambulocetids, and the ancestor of the Pakistani proto-whales is currently unknown. Based on what we know at the moment, it seems likely that the earliest branch of the whale family were creatures resembling modern-day deer, but much smaller, and that they would have hidden in shallow water during times of danger, a behavior resembling that of modern-day water chevrotains. Gradually, these creatures spent more time in the water and adapted to this environment, moving from shallow freshwater out to sea, eventually becoming the whales and dolphins of today.
In general, the search for “missing links” or transitional fossils is somewhat misguided. The fossil record is both necessarily incomplete and incompletely known. Although it’s always good to find fossils in which we can observe the theorized transition between known species, it is far from expected that we would always find such fossils. Many, many transitional creatures simply weren’t preserved until the present—which is to be expected given what we know about the climate and geology of Earth—and even when they were, there is still tons of ground left to cover, and many, many unknown specimens yet to uncover around the world. It’s only in the last twenty years or so that we have discovered pretty much all the fossils we have from the early evolution of whales, most of them on the Indian subcontinent.
Answer: A new paper in the journal Astrobiology estimates how long various exoplanets—and Earth—will remain in the “habitable zone,” the band at just the right distance from their star. In the case of Earth, they find a lower bound of 1.75 billion years into the future and an upper bound of 3.25 billion—quite a while, that is. At that point, Earth will have become unlivably hot, and the oceans will have boiled off. It’s important to note that this is the end for all life on Earth—complex life like humans will have died off long before. But then again, anatomically modern humans have only existed for about 200,000 years; 1.75 billion years is almost 9,000 times longer than humanity has existed.
Earth might become uninhabitable before that due to a runaway greenhouse effect. The current greenhouse effect is what keeps the planet habitable right now—without greenhouse gases to trap heat, most of the warmth Earth receives from the Sun would escape back into space, and there would be no life on Earth. The current anthropogenic climate changes are increasing the greenhouse effect, but not to runaway levels.
Venus, it is believed, once had oceans just like Earth. But early in its history it became a runaway greenhouse. Water vapor is actually one of the most efficient greenhouse gases, but on Earth water vapor makes up only a small portion of the upper atmosphere. On Venus, a large portion of the atmosphere at all heights was water. As more water evaporated, the greenhouse effect increased, trapping more heat, which caused more water to boil off, in a positive feedback loop that continued until all the water was gone. Moreover, on Venus the protective layer of other gases above the water vapor was much thinner than on Earth. As a result, more water could be destroyed by photodissociation—broken down by light into hydrogen and oxygen. The hydrogen likely escaped into space while the oxygen reacted with materials on the surface, oxidizing them. Thus the oceans of Venus disappeared.
On Earth, this is unlikely to happen until the Sun becomes much hotter, billions of years from now. There are two possible scenarios for an extreme greenhouse effect: the runaway greenhouse (in which all the water disappears) and the moist greenhouse (which is not self-reinforcing, but settles at a much hotter level than today). Could burning fossil fuels release enough carbon dioxide to create a runaway or moist greenhouse on Earth far sooner than the projected billion-year timeline?
A recent paper by Colin Goldblatt and Andrew J. Watson considers this question. They calculate that the runaway greenhouse is very unlikely. As pressure increases, the boiling point of water also increases. (This also goes the other way: at the top of Mount Everest, water boils at around 70 C or 158 F.) As more water evaporates, pressure increases, thus increasing the boiling point of water. This means that the critical temperature at which all the water boils off the surface is not the usual boiling point of water, 373 K or 100 degrees Celsius, but rather 647 K, which is 374 C or 705 F. (Needless to say, things would get rather difficult for humans long before surface temperatures get to 647 K. Neither the pressure nor the temperature would be livable.) On Venus, this was less of an issue because the atmospheric pressure was lower to begin with.
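The pressure dependence of the boiling point mentioned in the parenthetical can be sketched with the Clausius-Clapeyron relation (a textbook approximation of my own, not a calculation from the paper; the Everest pressure figure is a standard reference value):

```python
from math import log

R = 8.314                  # gas constant, J/(mol K)
H_VAP = 40_660             # enthalpy of vaporization of water, J/mol
T0, P0 = 373.15, 101_325   # sea-level boiling point (K) and pressure (Pa)

def boiling_point_c(pressure_pa):
    """Clausius-Clapeyron estimate of water's boiling point in Celsius."""
    inv_t = 1 / T0 - (R / H_VAP) * log(pressure_pa / P0)
    return 1 / inv_t - 273.15

# Atmospheric pressure near the summit of Everest is roughly a third of sea level.
print(round(boiling_point_c(33_700)))  # prints 71, close to the ~70 C quoted above
```

The same relation, run in reverse, is why piling more water vapor into an atmosphere raises the temperature needed to boil the rest of it away.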
A moist greenhouse might be more likely, but Goldblatt and Watson estimate that it would take about 10,000 parts per million (PPM) of CO2 (assuming carbon dioxide was the only noncondensable greenhouse gas) to induce a moist greenhouse effect. Currently, we’re at about 395 PPM. In other words, it would take 25 times the current levels of carbon dioxide in the atmosphere to induce a moist greenhouse. Such levels are unlikely to be reached even if we burned off all the fossil fuel in the world, although there is some small amount of uncertainty built into the calculation.
Interestingly, there is an area of Earth that displays the characteristics of a runaway greenhouse effect. This area is the Pacific warm pool, northeast of Australia. Seasonally, these waters get very hot—upwards of 29 degrees Celsius on the surface. This leads to lots of evaporation, and large volumes of water vapor in the upper reaches of the atmosphere. However, this is only a relatively small region of the planet, and through mechanisms not yet fully understood, the excess heat is distributed throughout the rest of the atmosphere, leaving the water in a state of equilibrium: it rarely exceeds 30.5 degrees Celsius, and thus doesn’t start a runaway greenhouse effect.
Despite the unlikelihood of a runaway or moist greenhouse, even the more modest projected increase in average temperatures of 1.5 to 4 degrees Celsius, which we will likely reach within the next hundred years at current greenhouse emission rates, will still be hugely destructive to life on Earth. It won’t wipe us out, but it will destroy fragile ecosystems and make life unbearable for humans in many corners of the globe. Nature doesn’t let us off the hook. We shouldn’t aim just to avoid the worst doomsday scenarios. That goal is far too modest, unworthy of a race as technologically and morally advanced as humanity. We should want something more for our children, grandchildren and future generations.
For some reason, during the month of August introversion exploded on the internet, reaching meme status. I don’t know why, but I figured I’d chime in with some scientifically backed data on this interesting personality category.
But first. How do you spell it? In psychological research, extravert is preferred, while in popular writing extrovert is commonly used. Either is correct, although perhaps appropriate for different target audiences. Luckily, both researchers and laymen call the opposite end introverts (intraverts are not a thing).
So is extraversion a thing? Yes, very much so. The extraversion-introversion axis is one of the most robust dimensions in personality research. Scientists in the field more or less unanimously agree that whatever categories best describe personality, the major component of one of them is the degree of extraversion-introversion (extraversion for short, although this refers to the whole spectrum, not just the one end of it).
Is it a Western invention? Nope, it has been found in a sample spanning 40 different countries across the world, ranging from very individualistic to very collectivist cultures.
Exactly what is it? This one’s trickier. The simplest folk idea is that extraversion is simply the degree to which a person is sociable: extraverted people are outgoing and engage in a lot of social interaction. According to the research, this is a broad oversimplification. A slightly more complex folk-psychology idea is that extraversion is a question of whether a person draws “mental energy” from social interaction or from alone time. On this view, introverts may enjoy social interaction but need to be alone to recharge, while extraverts may enjoy alone time but need to spend time with others in order to recharge. Surprisingly, there is little scientific evidence for this idea either.
Scientists don’t agree exactly on what constitutes the core of extraversion. Here is a list of facets that have been included in models of extraversion (different authors will include different subsets of these):
These six facets are venturesome (feelings of excitement seeking and desire for change), affiliation (feelings of warmth and gregariousness), positive affectivity (feelings of joy and enthusiasm), energy (feeling lively and active), ascendance (feeling dominant or being an exhibitionist), and ambition (valuing achievement and endurance).
However, there seems to be broad agreement that there is some underlying principle that explains all or most of these and ties them together. Extremely robust evidence links positive affect and extraversion, both between subjects and within subjects (more on this later). Extraverts are happier than introverts.
Wait, what? Yes. This is undeniable. All the research agrees: positive emotions and extraversion are very strongly correlated. Extraverts are happier than introverts. This has led some to propose that the unifying theme underlying all or most of the extraversion facets is positive affect. In the aforementioned study spanning more than 6,000 subjects in 40 countries, the authors contrasted two hypotheses for explaining the link between extraversion and positive feelings. One says that sociability is the core trait of extraversion, and that the correlation between happiness and extraversion is indirect: either social interaction is highly pleasurable, and extraverts spend more time interacting, thus becoming happier; or extraverts and introverts spend equal time alone and together with others, but people on the extraverted end of the spectrum enjoy the together time more.
The other hypothesis is their novel reward-sensitivity model. According to this model, extraverted people are simply more sensitive to and prone to seek out rewarding stimuli. It just so happens that social interaction is especially rewarding for humans, and empirical studies show that both introverts and extraverts tend to report more positive affect in social situations; extraverts are also apparently happier than introverts even when alone. According to this hypothesis, extraverts should be more likely to seek out rewarding stimuli both in social and nonsocial situations, and their sociable behavior is simply an instance of a more general pattern of reward seeking. According to their statistical analysis, the other facets of extraversion correlated much more strongly with positive affect than with sociability. The statistical evidence suggests that the unifying phenomenon that underlies the complex set of behaviors and traits that make up extraversion is reward sensitivity.
How would this be expressed in the brain? Recent research has found connections between differences in the dopaminergic system and differences in extraversion. Dopamine is a neurotransmitter implicated in reward, motivation and reinforcing behavior. For example, one study published this year measured brain activity in people who scored very low or very high on a measure of extraversion, corresponding to extreme introverts and extreme extraverts. Subjects were given placebo or different doses of sulpiride, a drug that interacts in a dose-dependent manner with dopamine. At low doses, it is a partial agonist, meaning it slightly increases dopamine activity; at higher doses, it is an antagonist, meaning it decreases activity. At higher dosages, introverts’ brain scans shifted in the direction of extraverts’ baselines and beyond, while extraverts shifted in the opposite direction. The authors suggest that a difference in the density of dopamine receptors might best explain their results. Basically, the hypothesis goes, introverts have fewer of certain kinds of dopamine receptors and more of a different kind, resulting in overall lower dopamine activity. Since dopamine is implicated in reward and motivation, this would lead them to be less reward seeking and to experience fewer rewarding, positive feelings in general.
Another 2013 study looked at this behaviorally. It administered methylphenidate (Ritalin), a dopamine reuptake inhibitor, to subjects in what could best be described as a recreational manner: the subjects were conditioned to associate the lab environment with reward by taking the feel-good drug while there. But tests showed that this association between reward and the specific lab environment was acquired only by extraverts, not by introverts, nor by subjects of either type who had been conditioned in a different lab environment. This is further evidence that extraverts are more reward sensitive than introverts.
But wait, does this mean if I’m introverted I’m genetically doomed to unhappiness, or that I will never learn to associate social contexts with pleasure? Here’s the twist: all of the rather robust evidence above could be turned on its head by a different line of research, because all of it was gathered by comparing groups of individuals, asking what, on average, distinguishes introverts from extraverts. A separate line of investigation seeks to understand the relationship between extraversion and other traits within the same person.
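To make the between-subject versus within-subject distinction concrete, here is a small simulation. It is purely illustrative: the numbers (trait baselines, slopes, noise) are made up, not taken from any of the studies discussed here. It shows how extraverts can report more happiness on average (a between-group difference) while the moment-to-moment link between acting extraverted and feeling happy is just as strong for introverts (a within-person correlation).

```python
import random
import statistics

random.seed(0)

def simulate_person(trait_extraversion, n_samples=50):
    """Generate (state extraversion, happiness) samples for one person.

    Trait extraversion shifts the person's behavioral baseline
    (a between-subject effect), but within each person happiness
    tracks momentary behavior with the same slope for everyone.
    """
    records = []
    for _ in range(n_samples):
        state = random.gauss(trait_extraversion, 1.0)   # behavior fluctuates around trait
        happiness = 0.8 * state + random.gauss(0, 0.5)  # identical within-person slope
        records.append((state, happiness))
    return records

def pearson(pairs):
    """Pearson correlation of a list of (x, y) pairs."""
    xs, ys = zip(*pairs)
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

introverts = [simulate_person(-1.0) for _ in range(30)]
extraverts = [simulate_person(+1.0) for _ in range(30)]

def mean_happiness(people):
    return statistics.mean(h for p in people for _, h in p)

# Between-subject view: extraverts are happier on average.
print(mean_happiness(extraverts) - mean_happiness(introverts))

# Within-subject view: the behavior-happiness correlation is
# essentially identical for both groups.
print(statistics.mean(pearson(p) for p in introverts))
print(statistics.mean(pearson(p) for p in extraverts))
```

Both views of the same data are true at once, which is why group comparisons alone can miss what is happening inside individual people.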
A 2002 paper details experiments that probe the relationship between positive affect and extraversion within the same subject. In one experiment, subjects were asked to answer questions designed to assess their state extraversion (how extraverted they had acted) and their state happiness (how happy they had felt) during the previous hour. This was done five times per day, using PDAs (this was before smartphones). In the second experiment, a single report was made for the previous week, for ten weeks. In the third, the findings from the first two experiments were tested in the lab. Subjects were randomly assigned to group discussions in groups of three. They participated in two group discussions, randomly assigned to act in an introverted manner in one discussion and in an extraverted manner in the other. Then they evaluated each other and themselves using the same metrics as in the other experiments.
Now, one might think that simply acting extraverted wouldn’t be as effective as being extraverted. But the experiments showed otherwise. The individuals who were trait introverts (tended to be introverted) had just as strong a correlation between positive emotions and extraverted behavior as the extraverts. In fact, in the first trial, the one where emotional states were assessed five times a day, the introverts had a stronger correlation between acting extraverted and happiness than the extraverts! In the third trial, the people who were told to act introverted or extraverted were consistently rated by others as introverted or extraverted, respectively. And yet again, the more extraverted they acted, the more they enjoyed the discussion.
Faking it till you make it, scientifically validated!
This suggests two things: first, that simply acting extraverted appears to make you happier. And second, that all the neurological differences between extraverts and introverts may not actually explain much. After all, within subjects, most people had periods of very outgoing and very introverted behavior, and their mental state for the most part tracked the behavior closely. Your brain is malleable, but it doesn’t completely rewire itself multiple times every day. These results throw a wrench into the reward sensitivity / dopamine framework detailed above. If we can replicate and expand on these within-subject findings and reconcile them with the between-subject findings, we’ll have come a lot closer to understanding introversion and extraversion.
OK, I get it. There’s still much work to be done. But still, how did these two different temperaments evolve in the first place? Especially if one sort is much happier than the other, wouldn’t one die out under selective pressure?
Good question. First of all, extraversion is a complex phenomenon, controlled by many genes. It takes a long, long time for evolution to completely eliminate variation in all of these genes. Second of all, there is good reason to believe that introverts and extraverts evolved to fill different niches in human society. It has been empirically shown that extraverts have more mates, but also die younger than introverts. Perhaps extraverts’ social activity and prowess make them more attractive mates, or give them more opportunities to meet mates, or some combination, raising their evolutionary fitness; but extraverts are also more prone to impulsivity and reckless behavior in their search for excitement and reward, resulting in more premature deaths.
Whatever happened to introvert/extravert pride? A common trend in recent popular writing on introverts and “extroverts” is that whichever end of the spectrum the author identifies with is presented in a broadly positive light. The author takes pride in their orientation, whichever way it goes, often extolling its virtues in opposition to some perceived or real prevailing cultural current.
But science doesn’t function this way. It does not say that one kind of person is worth more than any other kind of person. That is well outside science’s scope.
However, the fact remains that the scientific consensus is that extraverted people are happier. And acting in an extraverted manner makes introverts and extraverts alike happier. And being happy, aside from being the higher-order goal of every human being, will make you more successful in almost every area of life, from income and marriage to mental health and longevity. It seems that nature has cursed introverts. But hey, at least you’re less likely to die in a parachuting accident or hunting sharks or from some other dangerous, thrill-seeking behavior. Besides, chances are you can be instantly happier by acting more outgoing.
But I like being introverted!
That’s perfectly fine. There is still a lot of work to be done on this personality dimension; the last word has not been said. But the data we have are clear: happiness and high extraversion are strongly linked.