science tumbled

(selections: pretty pics / longer stories)

Mice with human-mouse chimeric brains learn faster and better

We’re living in the future, when scientists can create mouse brains with human brain cells and measure the impact on learning. Earlier this year, a paper was published in Cell describing experiments in which scientists implanted human glial progenitor cells into the brains of newborn mice. Glial cells, which are roughly as numerous as neurons in the brain, support and nourish neurons. Until recently it was thought that they played an exclusively supporting role in the brain, but we now know that they can also affect neurotransmission. The mice that had human glial cells implanted as neonates grew chimeric brains: on reaching adulthood, the human glial cells had spread to large parts of the mouse brain, retaining their human form and integrating themselves into the mouse neural network much as human glial cells do in a human brain or mouse glial cells do in a normal mouse. The partially human mice learned significantly faster than control mice and mice implanted with mouse glial cells on all behavioral tests, and the same could be seen on measures of long-term potentiation (LTP)—a molecular process of strengthening the connections between neurons that is known to underlie some forms of memory and learning.

The Very Large Telescope in Chile fires off a laser strike at the galactic center. Fluctuations in the Earth’s atmosphere distort the telescope image. The telescope can compensate using adaptive optics, but it needs some way to measure the fluctuations. The twinkling of a bright star would do nicely, but often there is no bright star in the vicinity the astronomers want to study. The solution? Use a powerful laser to create an artificial star in the night sky.

Science Behind the Factoid: Lottery Winners Are No Happier than Quadriplegics

Here’s a frequently repeated, counterintuitive factoid: people who win large sums in the lottery are no happier, over time, than people who become paralyzed in traumatic accidents. This “fact” comes from Brickman et al.’s 1978 paper “Lottery Winners and Accident Victims: Is Happiness Relative?” The researchers interviewed 22 major lottery winners, 22 randomly selected controls from the same area, and 29 paraplegics and quadriplegics who had suffered their injuries in the recent past. The lottery winners had won sums ranging from $300,000 (more than a million in 2013 dollars) to $1,000,000. Here are some of the results:

The respondents rated their happiness and their enjoyment of everyday pleasures, such as hearing a good joke or receiving a compliment, on a scale from 1 to 5, where 5 was the happiest. Lottery winners were not significantly happier than controls, and they derived significantly less pleasure from everyday events. The victims were significantly less happy than the controls or the winners, though one might have expected them to be even unhappier: after all, these were people who had suffered a life-changing, paralyzing injury less than 12 months earlier and were still engaged in extensive rehabilitation. The victims also reported slightly more enjoyment of everyday pleasures than the lottery winners. All ratings, of past, present, and future happiness, were made at a single moment, which lets us see that the victims idealized the past: they rated it significantly happier than the controls or the lottery winners rated the present. All groups reported similar expected future happiness levels.

The second study showed that the results were not due to preexisting differences between people who buy lottery tickets and those who don’t.

The results are surprising, but they aren’t particularly strong. The sample size is small, and the results have not been replicated since. It’s a long way from results such as these to Oprah self-help slogans like “major events, happy or not, lose their impact on happiness levels in less than three months.”

Does money increase happiness? Some real-world studies have attempted to find out. One study interviewed average Joes and Forbes 500 multi-millionaires; the wealthy were happier than the average Joes, but only modestly so. A larger study of poorer people was undertaken as part of the Seattle and Denver Income Maintenance Experiments, a large-scale trial of a negative income tax—a way of ensuring a minimum income regardless of work status—involving 4,800 families. It provided fertile ground for investigating the question: does having a stable income increase happiness?

A three-to-five-year study tackled the question. Household heads who received extra monetary support, and controls who didn’t, were queried for symptoms of psychological distress. The results were surprising: for most groups, a stable income did not have any impact on psychological distress, and in some groups, psychological distress actually increased.

In 2006, Jonathan Gardner and Andrew J. Oswald decided to take another look at the lottery winner findings, this time with the benefit of longitudinal data. Instead of asking subjects to rate their past, present and future happiness at a single sitting, they used data from the British Household Panel Survey, which already asks participants to rate their happiness every year. Gardner and Oswald looked at participants who had won medium-sized lottery prizes, from £1,000 to £120,000. The 137 winners—a small sample, but much larger than the 1978 sample of 22 winners—went on to show a small but significant improvement on a scale of general happiness.

Does increasing everyone’s income increase happiness? Gross Domestic Product per capita has been steadily increasing for the last three decades. But are we happier? No: happiness surveys over the same period are essentially flat. We’re about as happy or unhappy as we’ve always been.

What does this mean for happiness? Clearly, money isn’t everything. Equally clearly, money is something. It’s easy to come up with a folk psychology explanation for Brickman et al.’s findings. Let’s give it a go: in the moment, a major accident is a huge negative factor in your life, and winning a million dollars is a huge positive one. But as time passes, the feeling fades. Lottery winners grow accustomed to their new wealth and no longer derive significant happiness from it; on the contrary, compared to the euphoric winning moment, everyday pleasures become duller. Quadriplegics, on the other hand, grow accustomed to their injury, and in contrast to the injury, the joy of everyday pleasures becomes greater. In time (say, 3 months, that sounds good in a soundbite), major life events don’t really affect your happiness level.

But as the data above shows, that’s an oversimplification. When it comes to our understanding of happiness, we may not have come much further than Socrates or Seneca. We lack historical data from antiquity, but it’s easy to imagine Socrates being a happier man than the man who won a million dollars in 1978.

The Big Sleep

The New Yorker has a long article about the development of a new insomnia drug. It starts with the discovery of a novel neurotransmitter, found independently by two different groups of researchers in the 1990s and named hypocretin by one group (because it is produced in the hypothalamus) and orexin (meaning appetite-stimulating, after its observed effects) by the other. Orexin/hypocretin was first thought to regulate appetite, but attention soon turned to sleep. Orexin is produced by only a few thousand nerve cells in the hypothalamus, a tiny number compared to the billions of neurons across the brain. But these neurons have connections all over the brain, and they appear to act as an “awake switch.”

Orexin comes along and tells the brain, “hey, be awake, don’t fall asleep.” Soon after orexin’s effects on rats’ appetite were described, it was discovered that rats lacking orexin receptors (the “keyholes”) acted similarly to human narcoleptics: they had disturbed sleep patterns and tended to fall asleep suddenly, or collapse in a heap with their muscles inert, at intervals during the day. Human narcoleptics, in turn, were found to lack orexin itself (the “keys”). This discovery opened the possibility of new medications. Orexin agonists (which activate the receptors) could become new treatments for narcolepsy or daytime sleepiness, and orexin antagonists (which block the receptors) could become new sleep aids. The search for the next blockbuster drug was on.

The article gives a fascinating look into the evolution of pharmaceutical research. Here is a description of how scientists came up with the zolpidem molecule, the active component in the popular sleep aid Ambien:

[Jean-Pierre Kaplan] and Pascal George—a younger colleague whom Kaplan described as “sympathetic and brilliant”—started by building wooden models, including ones for Valium, Halcion, and zopiclone. Colored one-inch spheres, representing atoms, were connected by thin rods, creating models the size of a shoebox. This was a more empirical, architectural approach than is typical in a lot of pharmaceutical chemistry. Kaplan and George tried to identify what these molecules had in common, structurally, that allowed them to affect the brain in the same way. Kaplan told me that their thinking wasn’t wildly creative, but it was agile: “You know, at that time it was maybe clever, because you have no computer. Now it’s routine work.”

Then a couple of decades later, pharmaceutical giant Merck is trying to find a drug to block orexin in order to help patients sleep:

Merck has a library of three million compounds—a collection of plausible chemical starting points, many of them the by-products of past drug developments. I saw a copy of this library, kept in a room with a heavy door. Rectangular plastic plates, five inches long and three inches wide, were indented with hundreds of miniature test tubes, or wells, in a grid. Each well contained a splash of chemical, and each plate had fifteen hundred and thirty-six wells. There were twenty-four hundred plates; stacked on shelves, they occupied no more space than a filing cabinet.

In 2003, Merck conducted a computerized, robotized examination of almost every compound in the library. At this stage, the scientists were working not with Renger’s animals but with a cellular soup derived from human cells and modified to act as a surrogate of the brain. Plate by plate, each of the three million chemicals in the library was introduced into this soup, along with an agent that would cause the mixture to glow a little if orexin receptors were activated. Finally, orexin was added, and a camera recorded the result. Renger and his colleagues, hoping to find a chemical that sabotaged the orexin system, were looking for the absence of a glow.

But drug development isn’t just science. Politics and marketing also enter into it, shaping everything from the color of the pills (“reds are culturally not acceptable in some places”) to the packaging (“the U.S. prefers everything in a thirty-count bottle”) to the dose. The final hurdle is approval by the Food and Drug Administration—America’s final arbiter of which drugs may be marketed and sold, and for which diseases—along with other regulatory bodies elsewhere in the world. It’s good that such hurdles exist: without them, dangerous drugs—such as thalidomide, which caused severe birth defects—would enter the market much more frequently. However, there is a question of balance: at what point do potential downsides outweigh the benefits? The FDA has taken a more conservative line: dosages should be as small as possible.

This poses a problem for Merck. Their orexin antagonist, their potential superstar new sleep medication, suvorexant, is effective by objective measures at a dose of 10 milligrams. However, at this dose, patients don’t experience any subjective improvement in sleep quality. At higher doses, objective and subjective measures agree that the drug is effective. But the FDA argues that higher doses carry a higher risk of side effects, and recommends the lowest dose, the one that doesn’t make patients feel any better even though they’re getting better sleep by objective measures. This leads to an absurd situation in which the FDA argues for the drug’s effectiveness (at the lowest dose) while the drug company argues for its ineffectiveness. If the FDA will only approve the lowest dose, this poses a problem for marketers:

How successfully can a pharmaceutical giant—through advertising and sales visits to doctors’ offices—sell a drug at a dose that has been repeatedly described as ineffective by the scientists who developed it?

Regardless of marketing and backroom tactics and FDA meetings, the research into orexin continues. And that’s the really interesting part from a scientific perspective. Just a little more than a decade ago, we discovered a completely new piece of the brain puzzle. We still don’t understand sleep. That’s the big thing. More basic research—the kind that just tries to figure out how things work, without regard for practical applications such as drug development—is needed. We don’t know why we need to sleep, and we don’t know the exact significance of the different sleep phases. We do know that sleep is vitally important, and that a specific cycle of brain states throughout the night is needed to perform well the next day. But why must we sleep at all? Why is resting while awake not good enough? We have some ideas—memory consolidation, ridding the brain of certain toxins that build up during wakefulness—but we’re not sure.

Sleep remains a mystery. Orexin is likely to play some part in the solution. And that’s exciting, whether you have sleep troubles or not.

That's ok, because descriptivism

Bonus: can [adjective]-ass occur predicatively? Yes, there are indeed people who take such questions seriously.

asimplecorn asked: How do you measure air pressure?

It isn’t always the case, but this time you should actually go to Wikipedia for a good explanation. Read about the development of the barometer and the theory of gauging pressure. It’s rather fascinating that people were able to figure all this out 400 years ago. And how did they do it? By questioning a long-held assumption: that air is weightless. That’s a fine example of the scientific method.
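For a rough sense of the numbers (my own back-of-the-envelope arithmetic, not a quote from the article): a mercury barometer balances the weight of the entire column of air above it against a short column of mercury, so the air pressure is just the hydrostatic pressure of that column, P = ρgh ≈ 13,595 kg/m³ × 9.81 m/s² × 0.76 m ≈ 101,000 Pa, which is one atmosphere. That’s why standard air pressure is still quoted as 760 mm of mercury.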

Fundamentals of Computer Science (Some Math Ahead!)

I hate it when people choose nicknames that don’t work in conversation. A Photo of Dorian Grey, alternatively “50% physics, 50% mountains,” writes:

Actually the first general-purpose digital computer was built by Tommy Flowers in 1943.

That computer was the Colossus, and the Colossus was not Turing complete. Therefore, it wasn’t the first general-purpose digital computer.

During the 1930s, the decade before the first modern computers were built, many mathematicians were working on the fundamentals of computation. Just what was computable? What problems could and could not be solved algorithmically, and how could this property be formalized? Several men solved this problem independently, in different ways which turned out to be equivalent.

Alan Turing was one of those men. He imagined a theoretical machine which would later be referred to by his name—the Turing Machine—and which could compute anything that is theoretically computable. A Turing Machine consists of an infinite tape of cells, one after the other, filled with symbols. A reading head moves along the tape, scanning the content of one cell at a time and deciding, based on a small table of instructions, whether to change the symbol and whether to move forward or backward along the tape. A specific Turing Machine cannot be programmed; it can only solve the single problem for which its instruction table was designed. But Turing further imagined a Universal Turing Machine (UTM), which could simulate any other Turing Machine by reading off not only the input but also the instruction table from the tape. In effect, this is how modern computers with programs stored in memory work.
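To make the machinery concrete, here is a minimal sketch of a (non-universal) Turing machine simulator in Python. The table format, the state names, and the bit-flipping example machine are my own illustrative choices, not anything from a particular formal treatment:

```python
# A tiny Turing machine simulator. The instruction table maps
# (state, scanned symbol) -> (symbol to write, head move, next state).
from collections import defaultdict

def run(table, tape, state="start", head=0, max_steps=1000):
    cells = defaultdict(lambda: "_", enumerate(tape))  # "_" is the blank symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        write, move, state = table[(state, cells[head])]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# A machine that flips every bit and halts at the first blank cell.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run(flip, "1011"))  # prints 0100_
```

A universal machine is the same idea one level up: its tape holds both an encoded instruction table and an input, and its own fixed table interprets the encoded one.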

The UTM is one way to formalize the concept of what can and cannot be computed. It’s a very simple, elegant theoretical construct, and very clever people have managed to devise ingenious UTMs that use a tiny set of symbols and rules to compute anything that can be computed. A computer is said to be Turing complete if and only if it can compute exactly the same things a UTM can compute. (Ignoring the fact that non-theoretical computers have finite memory.) And if it cannot, it is not a general-purpose computer in the modern sense, since there are algorithmic problems it cannot solve. The Colossus was very useful for its purpose, but it was not Turing complete.

Around the same time, Alonzo Church attacked the problem of defining computability from a slightly different angle. He devised a formal calculus called the lambda calculus. The lambda calculus is also extraordinarily elegant and simple. It is built upon the theory of anonymous functions and substitution. In the standard presentation (the one you’ll find on Wikipedia), lambda terms are built from just three forms: variables (x); abstractions (λx.M), anonymous functions with a parameter x and a body M; and applications (M N), which apply the function M to the argument N.

In effect, the lambda calculus consists entirely of anonymous functions that take one named variable and can later be applied to other lambda terms. Only the syntax above and a couple of rules about how to perform function application are needed. From these humble beginnings, some clever bootstrapping allows us to create something called the Y combinator, which gives us recursion in a calculus that has no native support for it. Numbers can be encoded using Church encoding. If you’re willing to do the theoretical groundwork, you can compute anything that is computable using the lambda calculus. As it turns out, Church’s lambda calculus and Turing’s UTM are equivalent; they define the exact same class of computable functions. And the Church-Turing thesis posits that anything that is computable at all is computable by both of these formal systems.
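As a taste of that bootstrapping, here is a sketch in Python, whose lambda expressions are close enough to the real calculus to show the trick. The names (zero, succ, Z, fact) are my own, and because Python evaluates eagerly, the sketch uses the Z combinator, a strict variant of the Y combinator:

```python
# Church numerals: the number n is the function that applies f to x n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

def to_int(n):
    """Decode a Church numeral into a Python int, for inspection."""
    return n(lambda k: k + 1)(0)

# The Z combinator: recursion without any function referring to itself by name.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Factorial, with no recursive reference in its own definition.
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))

print(to_int(succ(succ(zero))))  # 2
print(fact(5))                   # 120
```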

Now, these formal systems aren’t meant to be practical. When they were invented, they were intended only to define computability, not to provide a practical means of achieving it in a machine. Yet the Turing Machine formed the basis of the von Neumann architecture, which is the basis of almost all modern computers. And many modern programming languages—called functional programming languages, because they are based around functions in the mathematical sense—are based on the lambda calculus. The most basic form of lambda calculus sketched above is called the untyped lambda calculus, because the variables have no type; any function can accept any argument. But typed variants of the lambda calculus are very important in type theory, a field at the intersection of computer science and pure mathematics that investigates ways to reduce errors in computer programs by rigorously defining what kinds of operations can be performed on which kinds of values. For instance, it makes no sense to perform a “search for the letter ‘t’” operation on an integer, but it makes perfect sense to perform it on a text string like “this is a string!” These are some of the errors typed programming languages can help catch before the program runs, during development.
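Here is the string-versus-integer example as a few lines of Python, using the language’s optional type annotations as checked by an external tool such as mypy (find_t is a made-up helper, purely for illustration):

```python
# The annotations below are verified before the program runs by a
# static type checker such as mypy; plain Python ignores them at runtime.
def find_t(text: str) -> int:
    """Return the index of the first letter 't' in text, or -1 if absent."""
    return text.find("t")

print(find_t("this is a string!"))  # 0: 't' is the very first character
# find_t(42)  # a type checker rejects this line: int is not str
```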

One of the programming languages strongly based on a typed lambda calculus is Haskell, named for Haskell Curry, another computer science pioneer. He is known, among other things, for the so-called Curry-Howard isomorphism, which states a formal equivalence between mathematical proofs and computer programs. This equivalence is useful in several ways. If one has a particularly gnarly mathematical problem, such as the four-color theorem, one could write a computer program that is equivalent to a proof of the theorem. Or if one has a particularly gnarly computing problem, one can write a mathematical proof that some algorithm is correct, but which also doubles as an executable program that can perform that algorithm.
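To see the correspondence in miniature, here is a sketch in Lean (my choice of proof assistant; the blog doesn’t name one). The proposition “A implies (B implies A)” is proved simply by writing a program of the corresponding function type:

```lean
-- Curry-Howard in miniature: this proof is literally a program.
-- A value of type A → (B → A) is a function that takes a proof of A,
-- ignores a proof of B, and hands the proof of A back.
theorem k_combinator (A B : Prop) : A → (B → A) :=
  fun a _ => a
```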

If you’d like to learn more about the fascinating world of fundamental computer science, I highly recommend the book Structure and Interpretation of Computer Programs. It is a programming manual, but also the best introduction to computer science ever written. It’s practical, it’s theoretical, it will tax your brain, and if you read it and complete the exercises, you will become a zen master of computing. (I cheated: I read the book but skipped some exercises. Don’t be like me. Be a real man/woman, do the work, learn the good stuff.)

ENIAC, the first general-purpose, programmable digital computer, developed at the University of Pennsylvania for the US Army and completed in late 1945. The machine was revolutionary for its time, but would soon be obsolete in two ways: (1) it used decimal, rather than binary, arithmetic; and (2) it was built on vacuum tubes. In 1947, the invention of the transistor, which replaced the costly and unreliable vacuum tubes, paved the way for the digital revolution.

The history of computing goes much further back, of course. The first programmable, mass-produced machine was the Jacquard loom, which was programmed by punch cards and occasioned the development of the Luddite movement, as skilled textile workers now became replaceable by machines. In those days, a computer was a person who performed calculations, generally rote work like calculating logarithmic tables. Interestingly, although mathematics has traditionally been an area largely closed off to women, many women did important work in the early history of computing.

The most famous name is Ada Lovelace, daughter of Lord Byron. She could by rights be called the first computer programmer, as she wrote programs for Charles Babbage’s analytical engine, which would have been the first general-purpose computer had it ever been completed. Sadly for Babbage and for Lovelace, they were a hundred years ahead of their time. But countless others worked behind the scenes. Many women were hired as manual computers, partly because of their skill and partly because, by the gender norms of the time, they could be paid less than men. One such group was “Pickering’s Harem,” women who processed astronomical data for Edward Charles Pickering, head of the Harvard Observatory from 1877 to 1919.

Six women were the primary programmers on ENIAC. In 1997, they finally received their due credit.

Adventures in Precognition

See ya later world asks:

What are most scientists’ view on paranormal phenomena?

The simple answer is that most of them don’t believe in it. Paranormal research rarely intersects with mainstream science. Most so-called researchers into the paranormal have none of the rigor needed to perform real science, and their “experiments” usually have methodological flaws that can easily be spotted by a bright middle schooler. On occasion, actual scientists attempt to explore alleged paranormal phenomena, and sometimes there’s even a semblance of rigor to their investigations. Such is the case I’ll tell you about today.

Back in 2011, professor of psychology Daryl J. Bem turned a lot of heads when he published rigorous experimental data that appeared to demonstrate a form of extrasensory perception (ESP): precognition and premonition, the ability of future events to influence an individual’s thoughts and feelings in the present. Clearly, such informational time-travel would go against everything we know about physics. But Bem is no run-of-the-mill crackpot: he is a widely cited and influential psychologist best known for his self-perception theory of attitude formation, which states that we form our attitudes by observing our behaviors, rather than the other way around. Counterintuitive as that is, many studies have found support for the idea. For instance, while we know that happy people smile and angry people frown, it has also been shown that people get happier by smiling and angrier by frowning. Bem’s smart move was to conduct his ESP research according to the established standards of psychological science. He also encouraged others to attempt to replicate his findings, correctly reasoning that replication is at the heart of science.

Bem’s experiments were rather clever. He simply took established psychological effects and time-reversed them. For instance, it is known that mere exposure to a word or concept can “prime” a person to more readily think of, or even like, the concept at a later time. If you read a list of words that includes the word table, and later do a word completion task in which you are asked to make a word beginning with the letters tab, you are much more likely to go for table than if you had not been primed. This effect persists even after you have consciously forgotten the priming, or if you were never aware of it in the first place; the same mechanism is invoked to explain subliminal advertising. Two of Bem’s experiments applied priming after the fact, and appeared to show a “retroactive priming” effect. The setup resembled a typical priming experiment: subjects were asked to judge whether each in a series of pictures was pleasant or unpleasant. Usually, people respond faster when an emotionally congruent word is flashed before an emotionally charged picture than when a word of the opposite emotional charge is flashed (e.g., a positive picture and a negative word). Bem observed that this effect persisted even when the word was flashed after the picture.

In total, Bem ran nine different ESP experiments, each with 100 or more participants, all time-reversed variants of known psychological phenomena. Eight of the nine appeared to show statistically significant evidence for ESP. Bem also appeared to show evidence for a link between stimulus seeking (a personality characteristic associated with extraversion) and ESP abilities, as more stimulus-seeking individuals (as indicated by their answers to one or two questions) seemed to exhibit a stronger ESP effect.

Enter the scientific process. As Bem agrees in his paper, extraordinary claims require extraordinary evidence, and one batch of experiments isn’t enough to disprove a very stable theory about how the world works. If ESP exists, what we think we know about physics goes out the window. Because Bem, unlike most researchers into the paranormal, did not believe that the paranormal was above the normal process of science, and also because he has a good scientific track record, other researchers took his claims seriously and set out to replicate or discredit them by repeating Bem’s experimental procedure.

Most attention was given to Bem’s eighth and ninth experiments: they were, according to Bem, among the easiest to replicate; they had some of the largest effect sizes of all the experiments; and they provided the least amount of wiggle room. Either they show definite effects, or they don’t. If performed correctly, there is little room for observer bias, and there are also few points of contention (unlike some of the other experiments, which rely on participants’ subjective responses, where null results could conceivably be blamed on idiosyncrasies of the participants).

The eighth and ninth experiments investigated retroactive facilitation of recall. Participants were briefly shown 24 test words and 24 control words, unaware of which category each word fell into. They were then given as much time as they wanted to freely recall as many of the words as possible. Finally, they were given the 24 test words to practice. The results appeared to show that memorizing words after the fact could affect recall in the present: the words the participants would later practice were more readily recalled than the control words, despite the fact that the participants didn’t know which words they would later practice. The usual effect of practice, naturally, is that words you have previously attempted to memorize are more readily recalled than unknown words.

Bem’s original paper was called Feeling the Future: Experimental Evidence for Anomalous Retroactive Influences on Cognition and Affect. It was published in the Journal of Personality and Social Psychology. In 2012, the same journal published a large study attempting to replicate Bem’s retroactive recall experiments, called Correcting the Past: Failures to Replicate Psi. As the title suggests, the study attempted to replicate the two retroactive recall experiments and found strong evidence for the null hypothesis (i.e., no ESP). It’s important to note that this is not the same as merely not finding evidence for ESP. Bem’s hypothesis predicted certain effects, which Bem’s results appeared to support. A larger-scale attempt to replicate the findings did not yield those results, and the failure to find the effects the hypothesis predicts is not simply a lack of evidence for Bem’s hypothesis; it is strong evidence for the opposite. Insofar as Bem’s hypothesis is falsifiable, it has been falsified: retroactive facilitation of recall doesn’t exist. The authors ran seven different experiments with a sum total of 3,289 participants, and also analyzed Bem’s own data and other independent attempts to replicate the eighth and ninth experiments. In total, more than 6,000 subjects participated in the analyzed experiments, and the effect was not replicated. Only Bem’s own data show evidence for ESP.
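To make the distinction between “no evidence for an effect” and “evidence against the effect” concrete, here is a toy calculation in Python with entirely invented numbers (this is not the replication paper’s actual analysis): if guesses come out exactly at chance, the data are actively more probable under the null hypothesis than under a small hypothesized ESP effect.

```python
# Toy likelihood-ratio illustration with made-up numbers, not the study's analysis.
from scipy.stats import binom

n, hits = 1000, 500           # 500 correct out of 1000 binary guesses: pure chance
p_null, p_esp = 0.50, 0.53    # chance vs. a small hypothesized ESP advantage

# How much more probable the observed data are under ESP than under chance.
bayes_factor = binom.pmf(hits, n, p_esp) / binom.pmf(hits, n, p_null)
print(round(bayes_factor, 2))  # ~0.17: the data favor the null roughly 6-to-1
```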

Now, that was a good-faith attempt to replicate Bem’s results. Others responded more negatively, analyzing Bem’s procedures and data and suggesting that he had deliberately left negative results unreported, or stopped experiments at the point where his desired results appeared. One red flag: effect sizes in Bem’s studies were inversely proportional to the number of participants, which is what you would expect from such practices rather than from a real effect. Others went so far as to suggest that, to the extent that Bem’s experiments followed established procedure in psychological research, there’s something wrong with established procedure.

Now, you may think: perhaps the eighth and ninth experiments didn’t pan out, but what about the other experiments? But it doesn’t bode well for Bem’s hypothesis when his most rigorous experiments, the ones that most unambiguously showed an effect, cannot be replicated.

Nevertheless, Bem should be applauded for attempting to bring rigor and proper methodology to a field of study that is usually mocked—rightfully so—as pseudoscientific at best. Although some fellow scientists were immediately dismissive of Bem’s results, he was rigorous and forthcoming enough to warrant attempts at replication by other scientists. If their experiments prove him wrong, well, that’s science for you. At least his work isn’t, as the theoretical physicist Wolfgang Pauli was supposedly fond of saying, so bad that it’s “not even wrong.”

Ryan writes:

First off, I love reading the stuff you guys post. I find it all very interesting. Currently, I’m looking into the evolution of whales, as evolution is among the most fascinating topics for me. But, the reason I’m sending this message is because I’m a bit lost and I’m hoping you may be able to help out. So far, I have a basic understanding of whale evolution and know what the key adaptations were that lead to our modern whales. My confusion, however, comes when I look at the phylogeny of whales. Perhaps I’m not looking at it right, but I see many “gaps”, if you will, that would connect one ancestor to another. For example, they show that Pakicetus and Ambulocetus share a common ancestry, but don’t show what the common ancestor would be. I’m curious if you could elaborate on this for me. Perhaps the phylogenetic trees I’m looking at simply don’t go that in depth or scientists in that field haven’t found a common ancestor that links them, or maybe I just don’t understand how to read a phylogenetic tree properly. Any help or information you could give me would be greatly appreciated.

Whales evolved within the even-toed ungulates, the order that today includes cows, pigs, giraffes, hippos, deer, camels and chevrotains. The earliest known whales are indeed the Pakiceti, small creatures that waded in shallow waters in Pakistan some 50 million years ago. From these evolved the amphibious, crocodile-like Ambulocetids, and from them the protocetids, or proto-whales, aquatic creatures that died out around 42 million years ago. It is from the protocetids that the Basilosaurids, ancestors of modern whales and the most whale-like of the whale ancestors, evolved.

The closest living relative of whales, dolphins and porpoises has been determined by genetic analysis to be the hippopotamus. This doesn’t help paleontologists all that much, however, since the earliest known proto-hippos are only 15 million years old. More recent finds have identified Indohyus, or “Indian pig” (a reconstruction of which is shown above), as the closest known relative of the whale ancestors. Indohyus resembled small deer, the size of modern-day raccoons, or perhaps most closely (in terms of modern-day creatures) the mouse deer or chevrotain. They would have hidden in the water during times of danger and otherwise waded in shallow waters, and it is not known whether their diet was mostly terrestrial or aquatic. Indohyus is not a direct ancestor of whales, but rather the closest cousin to an as yet unknown whale ancestor, more closely related to whales than either is to the hippo.

So to answer your question: the Pakiceti are the ancestors of the Ambulocetids, and the ancestor of the Pakistani proto-whales is currently unknown. Based on what we know at the moment, it seems likely that the earliest branch of the whale family were creatures resembling modern-day deer, but much smaller, which would have hidden in shallow water during times of danger, a behavior resembling that of modern-day water chevrotains. Gradually, these creatures spent more time in the water and adapted to this environment, moving from shallow freshwater out to sea, eventually becoming the whales and dolphins of today.

In general, the search for “missing links” or transitional fossils is somewhat misguided. The fossil record is both necessarily incomplete and incompletely known. Although it’s always good to find fossils in which we can observe the theorized transition between known species, it is far from expected that we would always find such fossils. Many, many transitional creatures simply weren’t preserved until the present—which is to be expected given what we know about the climate and geology of Earth—and even when they were, there is still a lot of ground left to cover and many unknown specimens yet to uncover around the world. It’s only in the last twenty years or so that pretty much all the fossils we have from the early evolution of whales were discovered, most of them on the Indian subcontinent.