divulgatum is devoted to the dissemination of scientific knowledge in Spanish and English, through articles that go into the details of all sorts of fascinating but not widely known subjects.

Sunday, July 20, 2025

Evolution’s struggle for existence


This article is a revised version of Evolution in evolution (2019), written for the Magdalene College Magazine (2024–25).

The conception of Evolution as proceeding through the gradual transformation of masses of individuals by the accumulation of impalpable changes is one that the study of genetics shows immediately to be false. Once for all, that burden so gratuitously undertaken in ignorance of genetic physiology by the evolutionists of the last century may be cast into oblivion.

William Bateson (1909), p. 289


The first edition of Charles Darwin’s On the Origin of Species by Means of Natural Selection (1859).
(Credit: Scott Thomas Images.)


THERE IS A WIDESPREAD popular construction of scientific revolutions as singular events which unfold, bolt-like and final, in the blink of an eye. Names like Galileo, Newton or Einstein are typically invoked as those of mythical figures with a miraculous capacity single-handedly to transform the way we see the world. It appears, however, that the kind of sweeping, dramatic breaks of paradigm which we have come to associate with scientific revolutions are rather hard to come across today. One may speculate — and be forgiven for it — that this may be the result of certain changes in the nature of academic work, by which the progress of research has been throttled to make space for an ever-swelling volume of inescapable paperwork. But the truth is that, rather than stalling, scientific progress is now considerably faster than it has ever been. The actual reason why sharp and sudden scientific revolutions of the kind encountered in popular science books are nowhere to be found today, is that such events are not revolutions in the usual sense of the word. Rather than cataclysmic changes, these are painfully protracted processes which require decades of cumulative scientific work to mature and develop. While both science popularisers and scientists themselves — not to mention the film industry — are very often guilty of misrepresenting scientific discoveries by filtering them through an almost Wagnerian dramatic lens, the reality is that, academics being sceptical and proud creatures by nature, every great conceptual shift must be slowly percolated, rather than poured, into the pool of accepted knowledge. To name but one example, the double-helix structure of the DNA molecule, now hailed as the central biological breakthrough of the second half of the twentieth century, was regarded by many as little more than a theoretical possibility years after it was first proposed. Even Sir Isaac Newton, that weary archetype of supernatural scientific genius, had to endure a decades-long intellectual war of attrition with his Continental competitors before his law of universal gravitation became widely accepted outside Britain.

Among the documented cases of gradually unfolding scientific revolutions, one stands out for being both particularly interesting and surprisingly obscure; this is the story of how Charles Darwin’s theory of evolution by natural selection came to be the main unifying idea of biology. Contrary to popular belief, this was no swift revolution, but rather a drawn-out process of fierce scholarly debate which began with the publication of Darwin’s ideas in 1859, and which would not relent until the late 1940s. During this period, the differentiation of biology into several new disciplines created the conditions for a chasm to grow between classically trained naturalists and a new breed of experimental biologists. As a result, evolutionary thought split into two mutually opposed currents which would only be reconciled with the eventual development of a unified theory of evolution.

During his lifetime, Darwin witnessed his theory of natural selection gain acceptance and esteem among a small circle of naturalists and evolutionary biologists. This cadre of early Darwinians included Alfred Newton, the first Professor of Zoology at Cambridge, who wrote: ‘I never doubted for one moment, then nor since, that we had one of the grandest discoveries of the age — a discovery all the more grand because it was so simple’ (Newton, 1888, p. 244). This limited success notwithstanding, Darwin never experienced the ultimate development of his theory into the undisputed cornerstone of biology which it is today — a status best encapsulated by Theodosius Dobzhansky’s famous aphorism that ‘Nothing in biology makes sense except in the light of evolution’. In fact, it might be difficult for present-day biologists even to conceive the extremes of opposition which so-called ‘Darwinism’ faced throughout the late nineteenth century, and until as late as the 1930s.

At the time, Darwinism was only one among a number of discordant theories attempting to explain the processes whereby biological species develop. Some of these, now referred to as ‘essentialist’ theories, were built on a notion of species as uniform ‘lines’ of virtually identical individuals, each made in the image of an unchanging ‘essence’ (a concept plainly borrowed from Platonism). Essentialist thinking therefore rejected the existence of significant natural variation within a species. On the other hand, ‘populationist’ theories viewed species as populations composed of distinct, unique individuals, and thus inevitably carrying a substantial degree of natural biological variation; examples of such variation could be differences in adult size, coat colour or leaf shape. Furthermore, some theories presumed the existence of ‘soft inheritance’, characterised by the notion that the hereditary material (what we now call ‘genes’) can be altered to some extent through the interaction of the organism with its environment. Lamarck’s theory of evolution by inheritance of acquired characters stands out as a notorious example of this current, positing that any physiological changes acquired by an individual during its life will be inherited by its descendants. Other theories, in contrast, admitted only ‘hard inheritance’, by which the hereditary material cannot be modified through interaction with the environment, meaning that the characters acquired by an individual during its own life are not passed to its offspring. Modern biology has supplied overwhelming evidence against the notion of soft inheritance; we know that, at least in animals, the germ cells which transmit an individual’s genes to the next generation are sequestered away from other tissues, such that environmental modification of the genetic material in these cells is prevented. (This does not include systemic exposure to certain aggressive agents not normally found in nature, such as X-rays and chemical carcinogens, which are known to induce changes to the germ cells’ DNA; furthermore, while recent discoveries of heritable epigenetic changes in some species have been argued to challenge the notion of strict hard inheritance, the validity of such arguments is still under debate.) It might therefore come as a surprise that nearly all the early theories of evolution, including Darwin’s, allowed some degree of soft inheritance. In particular, Darwinism originally assumed a certain plasticity of the genetic material, such that it could be modified to an extent through the use or disuse of certain organs during life; Darwin believed that such a process would assist natural selection in allowing species to adapt effectively to their environment. Some of Darwin’s supporters, notably the biologists August Weismann and Alfred Russel Wallace, would later develop an elaboration of Darwin’s theory known as ‘neo-Darwinism’, which definitively rejected the possibility of any kind of soft inheritance. Through his own extensive studies of natural populations in Southeast Asia, Wallace had independently arrived at a theory of evolution which was fundamentally similar to, though less developed than, Darwin’s; it was knowledge of this fact which finally spurred Darwin to publish the theory on which he had been quietly working for two decades. Before the publication of Darwin’s book, Darwin and Wallace (1858) decided to present a summary of their conclusions in a joint communication to the Linnean Society.

Based on principles such as soft and hard inheritance, essentialism and populationism, a diverse array of evolutionary theories was put forward between the 1860s and the 1940s, among which Darwinism was seldom a favourite. The chief factor compelling authors to support one theory over another was their particular field of expertise, and the number and variety of such fields within biology were expanding as never before, with emergent disciplines including embryology, cytology and ecology. Yet, from the standpoint of evolutionary thought, one of these new sciences was undoubtedly more impactful than any other: the science of genetics, born out of the unexpected rediscovery of Gregor Mendel’s laws of biological inheritance in the year 1900. The early geneticists built on the knowledge recovered from Mendel’s writings and began developing a detailed understanding of the principles of genetic mutation and inheritance. The spark of this new understanding, however, far from kindling any concerted progress in evolutionary biology, would serve to ignite a long and vicious conflict among the different biological disciplines.


Illustration of the inheritance of seed characters in pea (from Fig. 3 in Bateson 1909). A plant from a variety with green round seeds, when fertilised with pollen from a variety with yellow wrinkled seeds, produces yellow round (YR) seeds (F1). In genetic terms, this indicates that the characters of ‘yellowness’ and ‘roundness’ are both dominant. When the plants grown from these seeds are crossed among themselves, however, the seeds they bear (F2) present a distribution of characters which is closely predicted by Mendel’s laws of inheritance.


From the outset, the founding fathers of genetics stood in opposition to Darwin’s idea of natural selection as the main driving force in evolution. Both the first geneticists and the earlier palaeontologists interpreted their own observations as being plainly in accordance with the hypothesis that new biological forms emerge by means of discontinuous change, or ‘mutation’. A mutation was defined as a discrete modification of the genetic material causing a visible and often disruptive physiological change in the organism. Such events, the geneticists argued, would sometimes result in the instantaneous transformation of an existing species into a new one, without the production of intermediate forms. This theory, which drew implicitly on essentialist principles, was known as ‘saltationism’ because of its belief in speciation by ‘saltation’ — a large evolutionary leap leading from one form to another. It provided a counterpoint to Darwin’s theory, which relied on a ‘gradualist’ conception of evolution derived from populationist thinking, whereby species gave rise to new species in a gradual manner, through a continuous succession of intermediate forms. Outlandish as it may sound today, saltationism fitted the experimental observations of geneticists, as well as prior palaeontological evidence, outstandingly well. The extreme sparsity of the fossil record meant that palaeontologists could never witness a continuous progression of forms linking two related species, whereas geneticists were accustomed to working with uniform stocks of nearly identical individuals — typically plants or mice — as a means of minimising experimental interference. The mutants produced in these genetic experiments presented dramatic physical modifications which were inherited by their offspring in accordance with Mendel’s laws. It seemed logical, then, to suppose that mutations such as these, infrequent but highly disruptive events, were the force behind the origin of new species. In the geneticists’ defence, it must be pointed out that we now know of cases where new species have indeed emerged through a singular genetic alteration, such as the duplication of the entire genome in some plants. The idea of speciation by saltation is therefore not impossible, but saltationism as a theory lacks the generality required to explain the evolution of most known species.

Furthermore, there was an additional problem plaguing Darwinism. The physiological basis of inheritance was entirely unknown in the nineteenth century, and Darwin had implicitly relied on a theory known as ‘blending inheritance’, according to which an organism’s constitution is a smooth average of its parents’ constitutions. The rediscovery and confirmation of Mendel’s work quickly proved that inheritance does not operate in this way, but rather through the segregation of discrete, individual genes from parent to offspring. Indeed, it could be shown mathematically that blending inheritance would lead to a situation where every individual in a species would display the exact same form of every trait, rendering evolution impossible. Geneticists thus argued that Darwin’s entire notion of gradual evolution, based on continuous variation, blending inheritance and natural selection, was simply untenable in the light of their experimental results. Some of the most distinguished early geneticists, including T. H. Morgan and William Bateson — the latter of whom translated Mendel’s work into English and coined the very term ‘genetics’ — went so far as to declare that genetics had finally put an end to Darwinism (see Bateson’s words at the beginning of this article). It should be borne in mind, however, that genetics was itself a controversial discipline at the time, composed of multiple competing strands; and the early geneticists, or ‘Mendelians’, were just as anxious to establish the validity of their own views on heredity as were the Darwinians to see their evolutionary ideas vindicated. Moreover, in spite of their opposition to Darwinism, the contributions of this first generation of geneticists — most notably the elucidation of the laws of heredity, the discovery of genes and chromosomes, and the refutation of the notion of soft inheritance — would ultimately prove essential to the refinement of evolutionary theory.
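The mathematical case against blending inheritance can be sketched in a few lines; the following is a minimal version of the standard argument (essentially the one later made explicit by Fisher), under the assumptions of random mating and no fresh source of variation. If an offspring’s value for a trait is the average of its two parents’ values, each drawn independently from a population whose trait variance is $V_t$, then

$$V_{t+1} = \operatorname{Var}\!\left(\frac{X+Y}{2}\right) = \frac{V_t + V_t}{4} = \frac{V_t}{2}, \qquad \text{hence} \qquad V_t = \frac{V_0}{2^t} \to 0.$$

Half of the population’s variation would thus disappear every generation, leaving natural selection with nothing to act upon after a few dozen generations; particulate Mendelian inheritance escapes this fate because genes are transmitted intact rather than averaged away.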

In contrast to the geneticists, those biologists who had been trained as naturalists, including zoologists and botanists, were used to deriving their conclusions from the direct study of natural populations, and they insisted that their observations of natural diversity were in perfect agreement with Darwin’s theory of gradual evolution through natural selection. The true root of the disagreement probably lay in the utter lack of communication between the two camps: naturalists and geneticists not only held competing theories, but also followed very distinct approaches to scientific enquiry, pursued divergent biological questions, attended different meetings, read and published in different journals, and even employed distinct vocabulary, including incompatible definitions for such fundamental terms as ‘species’ and ‘mutation’. In addition, geneticists appeared to view naturalists as speculation-lovers who were incapable of subjecting their ideas to proper testing, while naturalists had a tendency to dismiss geneticists as narrow-minded experimentalists who lacked experience of real natural populations. Misunderstanding and resentment compounded easily in such an atmosphere, gradually carving an ever deeper chasm between the two disciplines. Astonishing proof of this circumstance comes from the fact that, when a younger generation of theoretical and experimental geneticists — including Sir Ronald Fisher, J. B. S. Haldane, Sewall Wright and H. J. Muller — began to obtain, from the late 1910s, fresh results demonstrating how the accumulation of effects from many discretely inherited genes can give rise to the continuous diversity described by naturalists (see the figure and the short simulation below), and therefore how Mendelism and neo-Darwinism were in fact compatible, this did little to bridge the huge divide between geneticists and naturalists. Instead, because of the alienation brought about by constant hostility, scholarly communication was impaired to such an extent that the naturalists would spend decades persevering in their efforts to refute the already obsolete ideas of the earlier geneticists.


Illustration of Fisher’s (1918) ‘infinitesimal model’, explaining the emergence of continuous biological variation from the combined contribution of a large number of discrete Mendelian genes, or loci. Each row in the diagram presents a simulated distribution of population values for a trait determined by an increasing number of individual genes. The bars on the left-hand side indicate the individual effect of each gene contributing to the trait (ranging from only two genes in the top case to 500 in the bottom case). The right-hand side provides the corresponding distributions of trait values in the simulated population, showing how the values for a trait become more normally distributed as the number of genes increases. This explains why many physiological characters in humans and other species follow a normal (or Gaussian) distribution.
(Credit: Chamaemelum/Wikimedia Commons.)
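The effect illustrated in the figure can also be reproduced numerically in a few lines of code. The sketch below is purely illustrative (the population size and the numbers of loci are arbitrary choices, not values taken from Fisher’s paper): it sums the contributions of an increasing number of discretely inherited loci and prints a crude histogram of the resulting trait, which changes from a handful of isolated spikes into a smooth, bell-shaped distribution as the number of contributing genes grows.

```python
import numpy as np

rng = np.random.default_rng(42)
population = 20_000  # number of simulated individuals (arbitrary)

for n_loci in (2, 10, 500):  # illustrative numbers of contributing genes
    # Each individual carries two alleles per locus; every allele either adds
    # one unit to the trait or adds nothing (a discrete, Mendelian contribution).
    alleles = rng.integers(0, 2, size=(population, 2 * n_loci), dtype=np.int8)
    trait = alleles.sum(axis=1).astype(float)

    # Standardise the trait and print a crude histogram: with 2 loci the
    # distribution is a few isolated spikes; with 500, a smooth bell curve.
    z = (trait - trait.mean()) / trait.std()
    counts, _ = np.histogram(z, bins=13, range=(-3.25, 3.25))
    print(f"{n_loci:>4} loci:", " ".join(f"{c:6d}" for c in counts))
```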


In this way, naturalists and geneticists would go on treading separate paths for the first three decades of the twentieth century, each dragging their own conceptual burdens: the naturalists held obsolete views about the nature of genetic mutation and inheritance; the geneticists were hampered by saltationist views and by the belief that the evolution of species could be understood by extrapolation from the evolution of single mutations in experimental settings. Even as late as the 1930s, when crucial experiments in artificial selection, together with the work of the first mathematical geneticists, were demonstrating beyond any doubt the reality of evolution by natural selection, specialist textbooks still listed up to six potentially correct theories of evolution.

This stagnant atmosphere would finally be cleared in the 1940s, mainly through the insight of one palaeontologist, George Simpson, and two zoologists, Julian Huxley and Bernhard Rensch. Perhaps the only scientists of their generation who had amassed a profound knowledge of the latest advances in each of the relevant disciplines, they published three independent books (Huxley, 1942; Simpson, 1944; Rensch, 1947) describing how the findings of zoologists, botanists, geneticists, palaeontologists and others could be integrated into one self-consistent theoretical framework which could explain the entire evolutionary process. In his book (which happened to be published first due to circumstances arising from the Second World War), Huxley christened this new theory with the name by which it is known today: the ‘modern evolutionary synthesis’. The modern synthesis states that the gradual evolution of species can be explained in terms of the accumulation of myriad genetic mutations with generally small effects, in conjunction with recombination (the shuffling of genetic material as it is passed from parent to offspring), and the action of both natural selection and stochastic processes on the genetic diversity produced by mutation and recombination. One key feature of the theory is that it explains how these low-level genetic and selective mechanisms give rise to high-level evolutionary processes, including the origin of species, genera and higher taxonomic levels.

The forging of the modern evolutionary synthesis was not in itself a scientific revolution, but rather the conclusion of a protracted paradigm shift initiated by Darwin and Wallace nearly a century earlier. Such a conclusion did not entail the victory of one scientific tradition over another, but the fusion of two radically different conceptual frameworks — naturalism and experimentalism — into one whole. For such a milestone to arrive, a number of obstacles, grown through the persistent isolation between the opposing camps, first had to be cleared away. In the end, this was achieved by those who, rather than focusing on their own narrow specialism, were sufficiently curious to learn about the advances made in other fields, and sufficiently open-minded to notice the commonalities latent underneath the conflict. The legacy of the modern synthesis was the unification of evolutionary biology into a single field; after its arrival, the discord and hostility which had reigned for half a century gave way to widespread agreement. And the bridges erected then would remain solidly in place to the present day: although there is still debate regarding particular aspects of the theory — such as the conceptual implications of epigenetic memory and horizontal exchange of genes between organisms — the basic framework of the synthesis has remained essentially intact since the 1940s.

The history of the modern evolutionary synthesis, our current framework for understanding evolution, is of value to scientists and historians alike. The long series of discoveries and conceptual advances linking Darwin’s original theory to our unified interpretation of the evolutionary process provides an illuminating example of the consequences of such phenomena as have manifested themselves time and again in the history of science: resistance to new ideas, deficient communication compounded by semantic differences, and excessive specialisation leading to tribalistic sentiments of superiority towards foreign disciplines. Hopefully, this story also offers a lesson in how exploring the history of scientific ideas allows much deeper understanding than the mere study of their definitions; for while definitions carry a pretension to simplicity and finality, the history of science conveys the truth that science is a living process, the progress of which is fundamentally arduous, incremental, and positively fraught with quarrel.



References
Bateson, W. (1909). Mendel’s Principles of Heredity (Cambridge University Press).
Darwin, C., Wallace, A.R. (1858). On the Tendency of Species to form Varieties; and on the Perpetuation of Varieties and Species by Natural Means of Selection. Journal of the Proceedings of the Linnean Society of London. Zoology, 3 (9): 45–62.
Darwin, C. (1859). On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life (John Murray).
Fisher, R.A. (1918). The Correlation between Relatives on the Supposition of Mendelian Inheritance. Transactions of the Royal Society of Edinburgh, 52 (2): 399–433.
Huxley, J. (1942). Evolution: The Modern Synthesis (Allen and Unwin).
Mayr, E. (1980). ‘Some Thoughts on the History of the Evolutionary Synthesis’, in The Evolutionary Synthesis: Perspectives on the Unification of Biology (Harvard University Press).
Newton, A. (1888). Early days of Darwinism. Macmillan’s Magazine, 57: 241–249.
Rensch, B. (1947). Neuere Probleme der Abstammungslehre (Enke).
Simpson, G.G. (1944). Tempo and Mode in Evolution (Columbia University Press).

Monday, November 14, 2022

The causes of ageing

To get back my youth I would do anything in the world, except take exercise, get up early, or be respectable.

Oscar Wilde, The Picture of Dorian Gray


Detail from Old woman and boy with candles (c. 1616–1617) by Peter Paul Rubens.


EVERLASTING YOUTH is one of humanity’s perpetual aspirations. None of us are impervious to the effects of old age, either in ourselves or in those we love. Yet, more than an inescapable element of the human condition, ageing is in fact a universal biological feature of complex animals, and possibly of all life. Biologically speaking, ageing is a gradual decline in the capacity of the cells and tissues in a body to preserve their integrity and carry out their central physiological functions. The ultimate consequence of this process is the body’s inability to sustain its own existence, leading to an inevitable death from ‘old age’. Regardless of how much effort is devoted to prolonging life, humans and other animals seem to carry an intrinsic ‘expiry date’. But why should this be so? How did such an implacable force of decay come to exist, and why do we humans seem unable to vanquish it?

The question of what causes ageing, which can be traced as far back as Aristotle, is in fact composed of two very distinct questions. The first is the question of why we age: what is the ultimate biological reason for the fact that animals have never evolved the capacity to live forever? The second question is that of how we age: what are the immediate physiological processes which cause bodies gradually to decay over time? The degree to which we understand ageing may be expected to vary between these two levels of analysis — but it may come as a surprise that it should be our understanding of how we age, rather than why we age, which remains very much undeveloped. The following presents our current scientific perspective on these two dimensions of the ageing process.

Why we age: Evolutionary causes of ageing

The universality of ageing among animals was a troublesome fact to early evolutionary biologists. In the mid-nineteenth century, Charles Darwin had proposed that the biological traits of organisms were the outcome of evolution by natural selection, and therefore had probably been useful for the survival and reproduction of previous generations. How is it, then, that evolution has not crafted organisms with the clearly beneficial capacity to maintain their youth indefinitely?

The first evolutionary explanation of ageing was proposed by the nineteenth-century biologist August Weismann. An early supporter of Darwin’s ideas, Weismann was a key figure in the development of early theories of biological heredity. To him, the evolutionary paradox of ageing could be resolved if one assumed that an animal’s longevity is indeed the product of natural selection — not because of any benefit to the animal itself, but rather because of its benefit to the species as a whole. He proposed that the duration of life — the lifespan — has evolved to an optimal value which spares the population from being smothered by a preponderance of old individuals. In Weismann’s account, ageing is therefore a death mechanism explicitly evolved for the purging of older, less competitive generations, enabling the success of younger individuals. Remarkably, this theory was in fact a Darwinian makeover of the views of the ancient Roman poet and philosopher Lucretius.

Weismann’s explanation of ageing, although intuitively cogent, was found by later evolutionary biologists to be flawed. For one thing, the argument that older individuals should be purged because they are less fit than younger ones immediately invokes an assumption that individuals experience physiological ageing. But to infer the evolutionary origins of ageing, we must begin with a population whose individuals do not age, and thus can only die through extrinsic forces such as predation, infection, starvation or accident. In such a population, there is no reason to assume that older individuals should be at a disadvantage — if anything, the fact that they have survived for longer implies that they are, on average, better survivors. Moreover, older individuals should have amassed precious expertise in the manoeuvres and tactics of living, such that they should offer formidable competition to youngsters. Therefore, without the assumption of an ageing process, the death of older individuals cannot easily be defended as of benefit to the species.

Another powerful argument against Weismann’s theory is the now-established fact that traits which benefit the collective at the expense of the individual are evolutionarily unstable. In most situations, natural selection operates overwhelmingly at the level of the individual: if one deer is, for instance, able to outrun the others, it will be less likely to be preyed upon, and hence more likely to leave offspring, which will inherit its superior speed. In the same manner, if a species were to evolve an ageing process that were beneficial to the species but disadvantageous to the individual, then any individual happening to age more slowly than the rest would be at a considerable advantage, just like the faster-running deer, and so this trait would be favoured by natural selection. Ageing therefore cannot have evolved for the sole benefit of the species; if Weismann appears here to have misjudged the implications of Darwin’s theory, it may be said in his defence that Darwin himself would have fared no better. It is only after one and a half centuries of thought that we have come to understand ageing not as a consequence of the direct action of natural selection — but rather of its failure.

One of the earliest hints at the concept which underlies modern evolutionary theories of ageing was advanced by the influential mathematical geneticist JBS Haldane. During an inspired series of lectures in 1940, Haldane noted in passing that natural selection should have little power to suppress a deleterious trait if such a trait only manifests itself late in life. To see why this is the case, let us consider Haldane’s case of interest — Huntington’s disease. Despite its devastating and fatal effects, this degenerative condition typically has its onset after the age of thirty, and hence has little impact on a person’s ability to have children. By the time the disease is diagnosed, the patient’s children may already have inherited the responsible gene. Haldane correctly saw this as the reason why such a pernicious gene has not been purged by evolution. The impact of Huntington’s disease is confined to adulthood, a period of life in which the strength of natural selection declines dramatically, since reproduction has already taken place. This period is now termed the ‘selection shadow’, because biological effects within it are effectively ‘out of sight’ for evolution.

Diagram illustrating the concept of the ‘selection shadow’, referring to the progressive decline in the strength of natural selection after the age of reproductive maturity (Credit: A Baez-Ortega).

The concept of the selection shadow was first developed into a complete theory of ageing by the Nobel laureate Sir Peter Medawar, who in the 1950s attempted to explain ageing as the combined effect of a collection of ‘mutant genes’ — altered versions of ‘normal’ genes — whose effects only arise late in life. Just as in the case of Huntington’s disease, age-related conditions such as cataracts, arthritis and osteoporosis have a late onset and no impact on reproduction, which precludes natural selection from weeding out the implicated ‘mutant genes’. A large number of these problematic genes will therefore accumulate in the ‘shadow’ of selection, their effects amalgamating into what we call ageing. Medawar also grasped the significance of extrinsic mortality, that is, the rate of death from environmental forces such as predation: the later in life the effects of a gene are realised, the fewer individuals will remain alive to experience them. Thus, a gene which contributes to prolonging the health of heart muscle for many decades may be very beneficial to an elephant, but it is of no use to a mouse that will almost certainly be preyed upon before the age of two.
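The logic of the selection shadow can be made concrete with a toy calculation. The sketch below is purely illustrative (the mortality and fertility figures are invented and do not describe any real species): it assumes a population whose members die only from a constant extrinsic hazard, and asks what fraction of an individual’s expected lifetime reproduction could be touched by a harmful gene that switches on at a given age. That fraction is a rough proxy for the strength of selection against the gene, and it shrinks rapidly as the age of onset moves later in life.

```python
import numpy as np

# Illustrative parameters only (not taken from any real species).
extrinsic_mortality = 0.10   # yearly probability of death from external causes
maturity = 3                 # age (in years) at which reproduction begins
max_age = 60                 # horizon for the calculation
ages = np.arange(max_age)

# Probability of still being alive at each age, given only extrinsic mortality.
survival = (1 - extrinsic_mortality) ** ages

# Constant fertility after maturity; expected reproduction contributed at each age.
fertility = np.where(ages >= maturity, 1.0, 0.0)
reproduction_by_age = survival * fertility
total_reproduction = reproduction_by_age.sum()

# Fraction of expected lifetime reproduction that a deleterious gene acting from
# a given onset age onwards could affect: a rough proxy for the strength of
# selection against that gene.
for onset in (5, 20, 40):
    exposed = reproduction_by_age[ages >= onset].sum() / total_reproduction
    print(f"onset at age {onset:>2}: affects {exposed:.1%} of expected reproduction")
```

With these made-up numbers, a gene switching on at age 5 still affects most of the expected reproduction, whereas one switching on at age 40 affects only a few per cent of it, and is therefore nearly invisible to natural selection.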

Building on Medawar’s work, a later theory proposed that ageing may arise from genes which not only have negative effects in old age, but also have beneficial effects in youth, when natural selection is at its strongest. In this theory, ageing would be a detrimental late by-product of processes which have evolved because they are beneficial earlier in life. The current scientific consensus is that each of these two theories is probably correct in some cases, such that certain components of ageing have arisen through accumulation of purely detrimental mutant genes, while others are late side-effects of advantageous genes.

An important aspect of these two evolutionary theories is that they define ageing as the result of the inability of natural selection to maintain physiological integrity for longer than is actually useful ‘in the wild’. The key insight is that it is not evolutionarily advantageous to live longer than we do, because our species has evolved so that we are able to develop and reproduce long before our bodies succumb to age. Furthermore, because the wild environment of early humans made it very unlikely for them to survive as long as we do, there has been no evolutionary need for greater longevity. Notably, our evolutionary explanation of ageing, which is theoretically and empirically well supported, does not depend on which specific physiological mechanisms are responsible for ageing. In other words, we certainly understand why the process of ageing has evolved in the first place; the scene is rather different, however, when it comes to the question of how this process unfolds in organisms.

How we age: Mechanistic causes of ageing

Luckily for junior scientists, our mechanistic theories of ageing are much more abundant and less clearly supported than our evolutionary theories. Perhaps the most immediate question regarding the actual process of ageing is whether it results from a single physiological mechanism, or from multiple mechanisms whose effects are roughly synchronised. Given the conclusion that ageing is a consequence of the ineffectiveness of natural selection, it follows that it must come about through multiple, possibly many, unrelated mechanisms.

As a crude analogy, let us imagine owning a car in a very unsafe city, where vehicles are constantly being stolen or damaged. In such circumstances, we should be wise to buy a cheap car which might last a few years, and to spend as little as possible on maintenance, as otherwise the return on our investment may never materialise. Nevertheless, if, by a stroke of fortune, we find ourselves owning the same car after a good number of years, we should expect it to come apart by virtue of its being cheap and poorly maintained. This analogy unflatteringly exposes the ultimate reason for ageing — insufficient quality and care — yet it sheds no light on which of the car’s components is expected to fail first. Given that the car’s decay is caused by deficient maintenance, we might expect several of its components to misbehave with increasing frequency, up to the point where the machine as a whole cannot function. Moreover, different processes might be responsible for each component’s failure: the transmission may expire out of sheer friction, while the pistons might succumb to soot. Hence, even though the ultimate cause of ageing may be universal, the processes immediately involved are manifold.

As suggested by this analogy, current research on ageing focuses on the challenging task of establishing which physiological processes contribute to ageing, and how significant each is. A large number of distinct processes have indeed been proposed as mechanistic causes of ageing. Among the most interesting of these are ‘nutrient signalling pathways’, which are functional networks of molecules responsible for transmitting the physiological signals produced when we acquire nutrients. The most popular molecule in this network is insulin, essential for the regulation of blood glucose levels. Yet in addition to the well-known relationship between deficient insulin signalling and diabetes, it has been found that interventions which interfere with nutrient signalling can considerably prolong the lifespan of many species, both vertebrate and invertebrate. For instance, a treatment known as ‘dietary restriction’, whereby the supply of food (or of certain nutrients) is permanently reduced, is considered the most reliable way of extending animal lifespan. Furthermore, the deactivation of certain nutrient signalling genes, by either mutation or pharmacological treatment, produces similar effects to those of dietary restriction. In the 1990s, Cynthia Kenyon and her colleagues discovered that mutations in such a gene led to a doubling of lifespan in nematode worms, a finding followed by similar reports in fruit flies by the groups of Dame Linda Partridge and Marc Tatar. On the other hand, nutrient signalling also regulates body growth and development, and animals subjected to these life-prolonging interventions tend to be stunted and ill-developed. Interestingly, although the network of effects whereby nutrient signalling modulates development and longevity is not yet fully characterised, it is believed to be the reason why smaller dog breeds live longer than larger ones.

A second leading candidate among possible mechanisms of ageing is molecular damage. Cells are constantly exposed to many kinds of chemical damage, which can alter their constituent molecules and impair the efficiency of cellular processes. The types of molecules subject to such damage include proteins (which are both the cell’s building blocks and its working tools) and DNA (which carries the organism’s genetic information, including the instructions for protein synthesis). One extensively studied type of DNA modification with potential roles in ageing is the shortening of telomeres — long stretches of DNA which are placed at the ends of chromosomes to protect them from fraying, like the aglet in a shoelace. Telomeres are slightly shortened every time a cell divides into two new cells, and eventually become too short to allow further cell division, which is thought to be an important barrier against the emergence of cancer — but might also be a cause of ageing. Recently, the biologist María Blasco and her team reported the striking finding that the rate of telomere shortening in a species is related to its lifespan, such that telomeres erode faster in shorter-lived species. Nevertheless, this relationship is obscured by the fact that shorter-lived species also tend to be smaller, and body size itself is thought to influence many aspects of animal physiology.

Fluorescence microscopy image showing the location of telomeres (white) at the ends of human chromosomes (grey). Telomeres preserve the integrity of DNA inside chromosomes, and their shortening over time has been proposed as a cause of ageing (Credit: NASA/Wikimedia Commons, public domain).

Working with Alex Cagan, Iñigo Martincorena and other researchers at the Wellcome Sanger Institute, I recently explored the relationship between animal lifespan and another common form of DNA modification — somatic mutations. This term refers to the changes that accrue in our DNA over time; such mutations are not present initially in any of our cells, but are acquired by individual cells as our bodies grow and age. Somatic mutations were first hypothesised to contribute to ageing in the 1960s, but their exact role remains elusive. Cagan and I characterised the rate of mutation across sixteen species of mammals, from mice to giraffes, and found a very similar relationship to that described for telomeres: shorter-lived species mutate faster than longer-lived ones, such that a mouse cell acquires as many mutations in two years as a human cell might do in eighty. Crucially, we determined this result to be unaffected by the relationship between longevity and body size: at least in mammals, the mutation rate can be used to predict a species’ lifespan, regardless of its size. The fact that the rates of different forms of molecular damage present similar relationships with lifespan suggests — but does not prove — that these forms of damage may be involved in ageing.

Diagram showing the inverse relationship between lifespan and the rate of somatic mutation in 16 species of mammals. The mutation rate of each species is inversely proportional to its lifespan, such that all species carry a similar number of mutations in their cells’ DNA at the end of their respective lifespans. This relationship is indicated by the blue line, with the shaded area marking a two-fold deviation from this line (Source: Cagan, Baez-Ortega et al., 2022).
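The scaling shown in the figure amounts to a simple statement: if the somatic mutation rate per year is roughly proportional to the inverse of lifespan, then the product of the two, that is, the mutation burden reached by the end of life, should be roughly constant across species. The toy numbers below are invented for illustration and are not the values reported in the study; only the two-year mouse versus eighty-year human contrast echoes the text.

```python
# Hypothetical per-year somatic mutation rates and lifespans, chosen only to
# mimic the inverse scaling described in the text (not real data from the study).
species = {
    # name     (mutations per cell per year, lifespan in years)
    "mouse":   (800, 2),
    "dog":     (130, 12),
    "human":   (20, 80),
}

for name, (rate_per_year, lifespan) in species.items():
    end_of_life_burden = rate_per_year * lifespan
    print(f"{name:>5}: ~{rate_per_year} mutations/year over {lifespan} years "
          f"-> ~{end_of_life_burden} mutations per cell at the end of life")
```

Under this assumption, all three hypothetical species end their lives with a similar mutation burden per cell, which is precisely the pattern indicated by the blue line in the figure above.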

It might seem inconsistent that processes as unrelated as nutrient signalling and molecular damage might all contribute to ageing. But these processes are not so distant when viewed in the light of a theory known as the ‘disposable soma’ theory of ageing. According to this, the physiology of complex organisms includes a central energy trade-off, such that the energy acquired from food is distributed between the processes of somatic maintenance (the preservation of the body via repair of molecular damage) and reproduction (the preservation of genes via their transmission to offspring). Rather than grappling with the evolutionary origin of ageing, this theory provides a framework for its physiological regulation. Because our body (the ‘soma’) is ultimately perishable, the energy trade-off between maintenance and reproduction has presumably been optimised by evolution to favour the expensive process of reproduction in times of nutrient abundance, and to promote maintenance instead when nutrients are limited. It is thus possible that nutrient signalling disruption modifies the speed of ageing by interfering with the ‘gauge’ of this energy allocation system, whereas molecular damage may simply be the force which opposes somatic maintenance processes. Despite the remarkable coherence of the disposable soma theory, the evidence for the existence of a universal energy trade-off in animals is currently inconclusive. It is possible that, like so much else in biology, energy trade-offs are crucial but not universal: they might be relevant only for some species, or in certain organs, or at particular periods in life. Even in this time of unparalleled scientific progress, an immensity of knowledge remains to be discovered regarding the physiological processes involved in ageing.

The battle against ageing

Since the days of Darwin and Weismann, we have come to understand ageing not as a death force evolved for the benefit of the species, but rather as an inextricable consequence of the manner in which evolution works. Animal bodies have not evolved to live forever, but to succeed in surviving and reproducing amidst a ruthless environment. The biology of our bodies is such as it is precisely because our ancestors managed to succeed in these tasks, not because they managed to live forever.

Whatever the causes of ageing, the essential question for humanity is whether we shall ever be able to throttle them — perhaps not with a view to living forever, but at least to enjoying longer-lasting health and a happier old age. It seems clear that this target will remain out of reach so long as we fail to understand what exactly ‘ageing’ means at the molecular level. Someday we might gain the power to manipulate the processes by which our bodies fend off the effects of time, or even to combat such effects directly; we may finally be able to subdue and domesticate the process of ageing. But such miracles lie still beyond the horizon, and for years to come we must keep drawing on the power of conventional medicine to manage individual age-related conditions.

When it comes to growing older, the personal theory of the essayist, poet and former Master of Magdalene College, AC Benson, may be more helpful than those discussed here: ‘I have a theory that one ought to grow older in a tranquil and appropriate way, that one ought to be perfectly contented with one’s time of life, that amusements and pursuits ought to alter naturally and easily, and not be regretfully abandoned’. Too modest a theory, perhaps; he goes on to concede that ‘It is easier said than done’. Yet, even as we feel the gentle, impassive slipping away of youth between our fingers, we should be wise to keep in mind the words of Longfellow:

For age is opportunity no less
Than youth itself, though in another dress,
And as the evening twilight fades away
The sky is filled with stars, invisible by day.



References
Weismann, A. ‘The duration of life’ (1881). In Essays Upon Heredity and Kindred Biological Problems (tr. Poulton, EB, Schönland, S, Shipley, AE). Clarendon, 1889.
Haldane, JBS. New Paths in Genetics. Allen & Unwin, 1941.
Kenyon, C, Chang, J et al. A C. elegans mutant that lives twice as long as wild type. Nature, 1993.
Hughes, KA, Reynolds, RM. Evolutionary and mechanistic theories of aging. Annual Review of Entomology, 2005.
Kirkwood, TBL. Understanding the odd science of aging. Cell, 2005.
Flatt, T, Partridge, L. Horizons in the evolution of aging. BMC Biology, 2018.
Whittemore, K, Vera, E et al. Telomere shortening rate predicts species life span. Proceedings of the National Academy of Sciences, 2019.
Cagan, A, Baez-Ortega, A et al. Somatic mutation rates scale with lifespan across mammals. Nature, 2022.

This article was originally published in the 2021–22 Magdalene College Magazine.
The author is grateful to James Raven and Aude Fitzsimons for their comments on the original manuscript.

Friday, July 23, 2021

On eternal life


(Image credit: Francis C. Franklin/Wikimedia.)


I WAS WALKING along a canal in Cambridge, when I stopped to watch a moorhen and its young chick. They were swimming among the branches of a small fallen tree. The moorhen dived briefly to pick one of the tree’s seedpods; the chick dashed like lightning to devour it. I thought of how both of them were oblivious to anything that had ever been, and anything that was yet to be. Their attention was devoted entirely to this thin slice of life. The adult moorhen was oblivious to the fact that it must die in a few years, if not sooner; the chick did not know that it would very soon be feeding its own progeny. Yet I could see this all too clearly. The chick was but a young version of its parent, and was to be an old version of its own offspring, and would then cease to be. I could see this process unfolding backwards in time through millions of repetitions, as more and more remote ancestors of the moorhen slowly shifted in shape to resemble early birds, then dinosaurs, then amphibians, then fish, then simpler organisms, down to the one cell whose genetic material now inhabits every life form. It was the evidence of this endless natural cycle of existence which doubtless inspired the idea of spiritual reincarnation, on which the weight of so many human religions rests.

We humans lie in a strange place in the path of life. An immensely long process of evolution connects us to the first self-replicating cells that came into existence over three billion years ago. As life became able to build biochemical systems of higher complexity, a certain kind of self-awareness gradually emerged, from the rudimentary proprioceptor systems of microorganisms, to the evident understanding of their own existence which animals (and even plants) display, and then to the capacity of higher animals for conscious decision-making. Then came Homo sapiens, an overbrained primate that is not only aware of its own existence, but is also able to ponder deeply about it, pose questions about the world it inhabits, and speculate on the causes and meaning of its own life. Humanity marks the point where life stood up, looked back upon its own wake, and was left speechless by its own inconceivable magnificence. We are the only form of life capable, to any extent, of understanding what life is. And yet we are subject to the same cycle of birth, reproduction and death. We reincarnate ourselves in our children, imperfect copies of us who, like the moorhen’s chick, are bound to retrace the arc of our own lives, make imperfect copies of themselves, and finally perish. Our children will admire the descendants of the birds we admire today, and history will keep passively unfolding, while we convince ourselves that life revolves around the infinitely short slice of time which we happen to inhabit. By understanding this, we have perhaps come as close as it is possible to grasping the endless miracle and the infinite calamity of our existence.

It is the ceaseless replication of every organism which makes life immortal. Because every living being must inevitably die, be it from predation, disease, or mere accident, the genetic information that allows life to function cannot rely on a single vessel. Life’s vessels must be such that they are able to construct brand-new vessels before succumbing to any of the natural forces which conspire against their existence. Like infinitesimally short segments of an unfathomably long pipe, we transmit the precious information of life to its new recipients, unintentionally ensuring that life itself carries on after we have been pushed off the stage. Vessels make new vessels; proteins and lipids are created, degraded, created again; only the information of life, written in DNA, survives forever. After our own death, every product and every memory of our life will inevitably dissolve in the vastness of time, whether it takes one generation or one hundred. But our genes, and the genes of our parents, our ancestors, our children, may live as long as humanity does.

Our species also holds a special place for a different reason. Even if Homo sapiens were to become extinct, as have the vast majority of species which have ever existed, life itself would certainly persist, in one form or another, until the Sun’s death throes transform our planet into a ball of molten rock, billions of years into the future. The end of the Earth will mark the end of terrestrial life, and no trace whatsoever will be left of its long and grand history. For even though life possesses the instruments to withstand the unrelenting destruction of its vessels over eons, it is entirely unprepared for such an extreme prospect. Among the many millions of living species, only ours has become aware of it. And so, as life which is conscious of itself and of its destiny, only we can avert the ultimate ending of life, by carrying it to a new planetary vessel. Although the idea of colonising planets outside our own solar system is still an utterly unrealistic one, the fact remains that, unless another species of comparable or superior intelligence emerges in the far future, this will be the only opportunity for life to survive the death of our planet. Whether humanity will ever be in a position even to consider embarking on this ultimate task of self-replication, only time knows. By then, you will have finished feeding your children, and you will no longer exist; perhaps your children and grandchildren will have ceased to be; but the information which you all carry will still inhabit self-conscious vessels of flesh and blood, driven by new thoughts, new hopes, new feelings. This is eternal life.

Friday, January 1, 2021

A boundless mind

The life of the polymath Thomas Young reminds us of the staggering potential of the human intellect.


Portrait of Thomas Young by Henry Briggs, after an original painted by Sir Thomas Lawrence c. 1822 (Wikimedia Commons).

THOMAS YOUNG (1773–1829) is mainly remembered today as the scientist who, in the early nineteenth century, demonstrated that light behaves as a wave, using his celebrated ‘double slit’ experiment. Significant as this discovery was, however, remembering Young for it alone is but an extremely poor recognition of his achievements. By the time he died at nearly fifty-six years of age, Young had not only proved that light is a wave, but, among other things, he had also demonstrated how the eye focuses on objects, discovering at the same time the phenomenon of astigmatism; he had advanced the three-colour theory of human vision, which was confirmed experimentally in the mid-twentieth century; he had introduced ‘Young’s modulus’, an important measure of the elasticity of materials; and he had made foundational contributions to the decipherment of Egyptian hieroglyphs, in addition to deciphering another ancient Egyptian writing system, the demotic script. Besides these major accomplishments in physics, physiology, engineering and Egyptology, Young was also an experienced physician, a distinguished linguist and antiquarian, and a scholarly authority on an astonishingly wide variety of subjects, from astronomy and calculus to carpentry and life insurance. Rather than approaching all these subjects as a mere diversion, Young mastered and made original contributions to each of them. The extraordinary breadth of his knowledge was arguably on a par with that of Leonardo da Vinci; and it is fair to say, indeed, that Thomas Young might have been the world’s last ‘Renaissance man’.

As the writer Andrew Robinson explains in his superb biography of Young, The Last Man Who Knew Everything, Young not only enjoyed a magnificent intellect, but also possessed the attributes which we now associate with the notion of a ‘good scientist’. In doing his scientific and scholarly work, he did not aspire mainly to fame, wealth or social recognition, but rather to the pure satisfaction that accompanies the pursuit of knowledge. In fact, having been trained as a physician, Young published many of his non-medical works anonymously, for fear that his extraordinarily broad interests might dissuade patients from attending his medical practice, opting to consult more ‘centred’ doctors instead. Moreover, his knowledge of science and his awareness of the flaws of nineteenth-century medicine precluded him from adopting the air of overconfident authority which was expected of physicians, ironically giving the impression that he lacked expertise. Young was fanatically committed to truthfulness and transparency in his research, and was swift to acknowledge and praise the work of his colleagues and predecessors in every field he studied. Notably, he was also, in both his professional and his personal life, a distinctly modest and self-deprecating man, attaching plenty of significance to the role of chance in his career. In a letter to his lifelong friend, the antiquary and politician Hudson Gurney, he wrote: “It is well for me that I have not to live over again; I doubt if I should make so good a use of my time as mere accident has compelled me to do”. In Robinson’s words, “Young was keen on the idea that what one man had done, another man could also do; he had only a small belief in individual genius”.

Born in 1773 into a large Quaker family in Somerset, England, Young gave early evidence of his intellectual voracity: he could read fluently by the age of two, and before he was four he had already read the Bible twice. As a schoolboy, he learned Greek, Latin, French and Italian, and he independently went on to tackle Hebrew, Arabic, Persian, Chaldee, Syriac and Samaritan, developing a familiarity with languages that would prove invaluable in his adult research. With the assistance of some neighbours and family acquaintances who were appreciative of his precociousness, he also built telescopes and microscopes and conducted chemical experiments. Even at this early age, Young had a clear ambition of mastering as many areas of knowledge as he could reach; and, even more remarkably, such curiosity and determination would not abandon him until his dying day. Like the majority of child prodigies, he acquired most of his knowledge directly from books: in a letter to his brother, he remarked that “Masters and mistresses are very necessary to compensate for want of inclination and exertion: but whoever would arrive at excellence must be self-taught”. Perhaps one of his most impressive feats as a boy, besides his study of dozens of languages, is the fact that, by the age of seventeen, he had studied Newton’s great scientific treatises, the Principia and the Opticks, in full depth — and there is evidence that he was able to follow their advanced mathematics. This showcases the extreme versatility of mind that would characterise the adult Young; as the writer Isaac Asimov noted, “He was the best kind of infant prodigy, the kind that matures into an adult prodigy”.

At the age of nineteen, Young moved to London in order to begin his medical training at one of the city’s private anatomy schools. There, after a dissection of an ox’s eye, he became interested in the process by which the eye focuses on objects located at different distances, known as accommodation. He read all the previous literature on the subject, including the theories of Johannes Kepler and René Descartes. The former had proposed that accommodation is effected by the movement of the crystalline lens (the lens found inside the eye) back and forth along the horizontal axis of the eye, just like the lens of a camera. Descartes, in contrast, had argued that the crystalline lens is fixed, and that accommodation occurs not through its movement, but through a change in its shape. Young’s examination of the ox’s eye led him to conclude that Descartes’s theory was correct, and that the crystalline lens was able to alter its own curvature because it was muscular. This work soon led to Young’s first scientific publication, a paper titled ‘Observations on Vision’, which was read to the Royal Society of London by his great-uncle, the physician Richard Brocklesby, and published in the society’s Philosophical Transactions when Young was still nineteen years old. Today, we know that Young was right in concluding that accommodation occurs through a change in the curvature of the crystalline lens; but the latter is not, in fact, muscular, as he then claimed, being instead surrounded by a set of radial muscles which effect the deformation.



Diagram of the parts of an ox’s eye from Young’s first article (Young, 1793).

The following year, Young was elected a fellow of the Royal Society, one of the highest scientific honours in Britain. Although his work on vision was certainly extraordinary for someone of his age, it should be borne in mind that the standards for admission into the society were less strict at the time. As Robinson notes, “It is inconceivable today that even a young man as gifted as Young could be elected a fellow of the Royal Society on the evidence of one scientific publication”. Despite his appreciation of this honour, Young’s lifelong shunning of official titles is patent in the letter in which he informed his mother of his election: “I hope I am not thoughtless enough to be dazzled with empty titles which are often conferred on weak heads and on corrupted hearts”.


At the turn of the nineteenth century, university degrees were increasingly important for trained physicians to distinguish themselves from quacks and charlatans, who were not in short supply in London. Hence, in spite of having no special interest in attending university, Young went on to study medicine at the universities of Edinburgh, Göttingen and Cambridge. Driven by his multifarious interests, however, he also took the opportunity to improve his knowledge and skills in plenty of domains other than medicine; writing from Edinburgh to his mother, he made clear that he “by no means wish to confine the cultivation of my mind to what is absolutely necessary for a trading physician”. While in Edinburgh and Göttingen, Young made the acquaintance of classical scholars and took lessons in music, drawing, dancing, flute playing and horsemanship. In 1796, after a total of four years of training, he defended his thesis in Göttingen and became a doctor of medicine. Nevertheless, upon his return to England he discovered that he did not yet qualify to be a licentiate of the Royal College of Physicians, which now required candidates to have studied for at least two years at the same university. Since Young had not spent enough time in either Edinburgh or Göttingen, he was forced to return to university for another two years. He chose to pursue the degree of bachelor of medicine at Emmanuel College, Cambridge. As he considered the ancient university to offer him little in terms of medical training that he had not already acquired, he spent most of his time reading, writing and carrying out experiments in his college rooms, as well as making the acquaintance of a variety of scholars from across the university. He certainly did not go unnoticed in Emmanuel College, although few fellows were pleased to meet a student who was able to challenge their knowledge of their own disciplines.

Young returned to London in 1800; now finally able to practice medicine, he opened a private practice and started to look for a consultant position at a hospital. Crucially, he had received a considerable inheritance after the death of his great-uncle in 1797, which alleviated his financial dependence on patients, thus allowing him to extend his earlier research on vision. In a lengthy paper titled ‘On the Mechanism of the Eye’, read to the Royal Society in 1800, Young conclusively established how the eye focuses, and also diagnosed and measured astigmatism for the first time — in his own eyes. To achieve this, he first improved an existing instrument for measuring the focal distance of an eye, known as an optometer. He then performed an extremely ingenious — and sometimes disturbing — series of experiments to ascertain whether the eye alters its length or its curvature during accommodation. To discover if his eye’s length changed, he inserted the ring of a metal key into his eye socket, and fixed it against the back of his eye: “The key was forced in as far as the sensibility of the integuments would admit, and was wedged, by a moderate pressure, between the eye and the bone”. In this position, the pressure of the key against his retina caused him to see a bright spot, or ‘phantom’; even a slight change in the eye’s length, he argued, would modify the pressure against the key, and hence the size of this phantom. In this way, he showed that the eye does not alter its length when focusing on objects at different distances. To see whether the eye changed its curvature, he closely examined the shape of a candle’s reflection on another person’s cornea, concluding that the eye’s curvature was also unaltered during accommodation. Finally, to verify that it was the shape of the crystalline lens itself that mattered, Young used his optometer to test the power of accommodation of five people whose crystalline lens had been removed as a treatment for cataracts. This showed that “in an eye deprived of the crystalline lens, the actual focal distance is totally unchangeable”: people without a crystalline lens could not focus their eyes on objects, and needed to use a series of spectacles for looking at objects at different distances. Nevertheless, Young was careful not to reiterate his earlier hypothesis that the lens itself was muscular, of which he was no longer convinced. In fact, the ciliary muscles that cause the crystalline lens to change its curvature would not be discovered for several decades.
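
In modern optical terms (a restatement in today's notation, not Young's own), the logic of these experiments can be condensed into the thin-lens relation

\[ \frac{1}{f} = \frac{1}{d_{\text{object}}} + \frac{1}{d_{\text{image}}}, \]

where f is the focal length of the eye's optics, and the two distances are those from the lens to the object being viewed and from the lens to the retina. Young's measurements showed that the second distance (the length of the eye) and the curvature of the cornea are both fixed; since the object distance changes whenever we look at nearer or farther things, f must change, and the only remaining candidate for producing that change is the crystalline lens.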



Illustration from Young’s second paper on vision, presenting different images as perceived by the author himself during his experiments (Young, 1800).

In addition to his experiments on the eye, Young immersed himself in an investigation of the nature of light, which would lead to his defence of the wave theory of light in two papers read to the Royal Society in 1801 and 1803. In the early nineteenth century, the leading theory of light was still Newton’s ‘corpuscular’ theory, which proposed light to be a stream of particles that move in straight lines through empty space. Competing against this was the ‘undulatory’ or wave theory of the astronomer Christiaan Huygens, according to which light was a wave that spread through an invisible medium known as the ether. Both theories were equally capable of explaining the reflection of light on surfaces; the corpuscular theory, however, was more successful at explaining the rectilinear propagation of light, while the wave theory was better suited to explain refraction (the bending of light rays when passing from one medium to another).

Young’s means for conclusively demonstrating that light behaves as a wave was a phenomenon known as interference. This is easiest to picture using the example of waves in water: if two stones are dropped simultaneously on a quiet pond, they produce two sets of waves on the pond’s surface, which cross each other as they spread. At the points where the crests (or the troughs) of two waves coincide, their effects reinforce each other to produce a higher crest (or a lower trough); while at the points where the crest of one wave coincides with the trough of another, their effects cancel each other and the surface remains level. These two types of interaction are called constructive and destructive interference. Young realised that, if light were a wave, the interference between two light rays would produce an alternating pattern of light and darkness. Such a phenomenon, where light added to light can result in shadow, would be impossible for the corpuscular theory to explain. In a bold leap of intuition, Young went on to propose that the colours of light correspond to waves of different frequency (or wavelength); this immediately allowed his principle of interference to explain the puzzling iridescent colours emitted by certain objects, such as soap films and some insects’ wings. In his 1803 paper, Young presented an experiment where he directed a beam of light through a small aperture, and then split it into two beams using the edge of a card. Although this was not yet his celebrated double-slit experiment, it showed that the interference between the light rays passing through each side of the card gave rise to parallel fringes of light and shadow on a screen. Due to the enormous weight of Newton’s authority, however, few people accepted Young’s conclusions in 1803. Despite this, he was confident of his work; in a letter to a friend, he wrote: “The theory of light and colours, though it did not occupy a large portion of time, I conceive to be of more importance than all that I have ever done, or ever shall do besides”. Indeed, his demonstration that light behaves as a wave is considered to be his most significant contribution to science.
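
In today's notation (which is, of course, not Young's own), the principle of interference can be stated through the path difference Δ, the difference in the distances travelled by two rays from a common source: the rays reinforce one another where

\[ \Delta = m\lambda, \]

and cancel one another where

\[ \Delta = \left(m + \tfrac{1}{2}\right)\lambda, \]

with λ the wavelength and m any whole number. Whether light added to light yields brightness or darkness thus depends only on how far each ray has travelled, which is precisely what a stream of independent particles could not accommodate.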


Diagram illustrating the interference between two sets of waves in water, produced using a device of Young’s invention known as a ripple tank (Young, 1807).

Young’s adherence to the wave theory of light, in turn, led to his second major contribution to the understanding of vision: his theory of three-colour vision, advanced in his 1801 paper. In this case, his proposition was closer to a powerful intuition than to a formal theory. It had by then become established that the palette of colours in light was derived from a small number of so-called primary colours, possibly three or five. Young’s breakthrough, derived from his association of colour with wavelength, was to imagine that the brain could perceive light using three distinct types of ‘receptors’ in the retina: one receptor for red light, corresponding to a long wavelength, another for yellow light, with a middle wavelength, and a third for blue light, with a short wavelength. Intermediate colours (with intermediate wavelengths), such as green, would stimulate two types of receptors to a similar degree, resulting in a composite signal which the brain would interpret as green. In this way, Young implicitly advanced the first theory of vision which suggested that the brain not only receives information, but also processes it in order to generate the sensations that we perceive. This idea is one of the cornerstones of modern neurology, proving just how far ahead of his time Young’s intellect was. In fact, Young’s three-colour theory remained entirely forgotten until the 1850s, when it was rediscovered by the physiologist and physicist Hermann Helmholtz, who developed it into a full-fledged theory that would later be extended by the physicist James Clerk Maxwell. It was only in 1959 that two groups of scientists in the United States experimentally demonstrated that colour is perceived through three kinds of receptors which cover the retina. Notably, Young even went as far as to suggest, correctly, that colour blindness is caused by the dysfunction of one of the three types of receptor.
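
As a rough illustration of the idea (one that uses invented, bell-shaped sensitivity curves rather than measured cone data, and is in no way a reconstruction of Young's own reasoning), the short sketch below shows how a single wavelength could yield three receptor signals whose combination encodes a colour.

```python
import math

# Illustrative peak sensitivities (in nanometres), loosely echoing Young's
# red, yellow and blue receptors; these values are invented, not measured.
RECEPTOR_PEAKS = {"red": 650.0, "yellow": 570.0, "blue": 460.0}
CURVE_WIDTH = 60.0  # arbitrary width of each bell-shaped sensitivity curve

def receptor_responses(wavelength_nm):
    """Relative response of each hypothetical receptor to monochromatic light."""
    return {
        name: math.exp(-((wavelength_nm - peak) / CURVE_WIDTH) ** 2)
        for name, peak in RECEPTOR_PEAKS.items()
    }

# A wavelength near 530 nm stimulates both the yellow and the blue receptors
# appreciably: the kind of composite signal that, in Young's scheme, the brain
# would interpret as green.
print(receptor_responses(530.0))
```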

In the period between 1801 and 1803, Young not only worked as a physician and investigated light and vision, but he was also a public lecturer at the Royal Institution of London, where he was appointed professor of natural philosophy in 1801. In fact, this period was possibly the most strenuous in Young’s life: in 1802, he wrote to a friend that “an immediate repetition of the labour and anxiety that I have undergone for the past twelve months would at least make me an invalid for life”. The Royal Institution, founded in 1799 to promote the application of science to society, already had a tradition of holding public lectures on scientific subjects, which included live demonstrations of phenomena like chemical reactions, electricity and magnetism. Young agreed to deliver a course of lectures which would cover virtually all of the physical sciences, and on whose preparation he toiled feverishly for the best part of a year. Over 1802–03, he delivered more than a hundred lectures at the Royal Institution; one of his particular ambitions in doing so was to reach interested people who had no access to formal education, including women. As he later observed in the introduction to the written version of his lectures, “the Royal Institution may in some degree supply the place of a subordinate university, to those whose sex or situation in life has denied them the advantage of an academical education in the national seminaries of learning”. According to contemporary accounts, however, Young’s facility as a writer did not translate into an engaging style of lecturing, and he did not distinguish himself in this role, especially when compared to eminent Royal Institution lecturers like Michael Faraday and Sir Humphry Davy.

Young’s lectures were published in 1807, as an imposing two-volume book titled A Course of Lectures on Natural Philosophy and the Mechanical Arts. In terms of its scope, depth and degree of original insight, this work remains unsurpassed by any other general lecture course written by a single author. Remarkably, the Lectures included not only Young’s lectures from 1802–03, but also a magnificent historical catalogue listing some twenty thousand scientific works in a wide variety of languages, and spanning everything from ancient Greece to his own time. As Robinson rightly states in his biography, “Only Young, among the scientists of his day, would have had the command of foreign languages, combined with the range, judgement and industry to compile such a monumental bibliography”. Ironically, although Young was more than satisfied with the book, his publisher went bankrupt shortly after its publication, leaving him no reward for such a colossal amount of work.


The Lectures offer abundant examples of their author’s tremendous intuition and foresight. First of all, the book contains a description of the experiment for which Young is best remembered today, the double-slit experiment that confirmed the wave theory of light. Here, instead of using the edge of a card (as in his 1803 paper), he cut two narrow slits on a piece of card, which he used to split a beam of light into two beams and observe the fringes of light and darkness produced by their interference. In addition to this, the book includes the first recorded use of the word ‘energy’ in its modern scientific sense (a measure of a system’s capacity to perform work), the first experimental estimate of a molecule’s diameter (whose prescience is underscored by the fact that the existence of atoms and molecules was rejected by most physicists at the time), and an early proposal of the modern notion that different forms of radiation belong to a single spectrum of wavelength, ranging from ultraviolet light at one end, through the colours of visible light, to infrared light (which, moreover, he correctly linked to heat) at the other. The Lectures, which constitute Young’s greatest written work, thus show that the claim that he was well ahead of his time is no exaggeration.

A selection of figures from Young’s Lectures, including illustrations of the double-slit experiment (top left) and a colour palette (top right) (Young, 1807).
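
In its modern textbook form (the notation, again, is today's rather than Young's), the geometry of the double-slit arrangement gives the spacing between neighbouring bright fringes as

\[ \Delta y = \frac{\lambda L}{d}, \]

where λ is the wavelength of the light, L the distance from the slits to the screen and d the separation between the slits. For instance, light of wavelength 600 nanometres passing through slits half a millimetre apart produces fringes only about 1.2 millimetres apart on a screen one metre away, which gives a sense of how fine the slits and how careful the observation had to be for the pattern to be seen at all.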

Notwithstanding his trailblazing work in physics and physiology, and the monumental achievement of his Lectures, Young, who was barely thirty years old, was well aware that he still needed to secure a reputation as a practicing physician in order to procure a stable income for himself and his wife Eliza, whom he had married in 1804. He tried to attain this by further feats of scholarship: in 1813 and 1815 he published two exhaustive medical volumes, An Introduction to Medical Literature and A Practical and Historical Treatise on Consumptive Diseases. Just as he had done for science, he not only condensed contemporary medical knowledge, but also catalogued the literature of the previous two thousand years. Nevertheless, instead of granting him a reputation as a respectable physician, these two books promoted an undesirable image of Young as a ‘cold man of science’, and antagonised his colleagues by offering too clear a view of the abundant flaws and failures of nineteenth-century medicine. The disappointment caused by the reception of his books was probably the main factor which gradually pushed him away from his ambition to become a leading physician, leaving increasing room for his vast array of more absorbing interests.

One such interest was the quest to decipher the writings of ancient Egypt, in which Young would be involved from 1814 onwards. The main driver of the decipherment effort was the legendary Rosetta Stone, discovered by Napoleon’s army in Egypt in 1799. The crucial feature of the Rosetta Stone is that it carries an inscription in three different scripts: Egyptian hieroglyphs, a second Egyptian script known as demotic, and ancient Greek. The Greek inscription was soon translated, revealing that the other two inscriptions contained the same text; this meant that it might be possible to identify equivalent words in Greek and Egyptian, and employ them to crack the hieroglyphic and demotic scripts. Given his vast experience with languages modern and ancient, Young was excellently equipped for this task. By studying the inscriptions on the Rosetta Stone, besides tirelessly copying and comparing hieroglyphic and demotic inscriptions from a myriad of other sources, he was able to notice subtle similarities and patterns which had been overlooked by other scholars. In particular, Young was the first to notice parallels between some hieroglyphic signs and their equivalent demotic characters, and he went on to show that the two scripts were not unrelated, with demotic actually being derived from hieroglyphic. From this insight, he realised that the demotic script comprised “imitations of the hieroglyphics … mixed with letters of the alphabet”; it was, in other words, a mixture of symbolic characters representing concepts, and phonetic characters representing sounds.

In 1819, Young published a historic article titled ‘Egypt’ in the Encyclopaedia Britannica, which contained the first systematic attempt at deciphering ancient Egyptian writings. In over thirty thousand words, the article presented Young’s results since he began studying the scripts in 1814, including a dictionary with proposed translations for more than four hundred hieroglyphic and demotic words, as well as a tentative ‘alphabet’ for the demotic script. These unprecedented advances were made possible by an earlier suggestion that non-Egyptian names in the inscriptions might be spelled phonetically, in both the demotic and hieroglyphic scripts. Young proved that this was the case by translating the hieroglyphic inscriptions for the names of King Ptolemy and Queen Berenice (though not all his phonetic guesses were correct). Most notably, this article was published anonymously, as Young had by then started to conceal his non-medical research to avoid damaging his reputation as a physician. And although he had been the indisputable leader of the decipherment effort until then, his endeavour to remain anonymous would prove more harmful than beneficial once the French Egyptologist Jean-François Champollion came onto the scene in 1821.

A letter written by Young in 1818, where he advances meanings for certain groups of hieroglyphs (including the names of Ptolemy and Berenice), most of which were correct (The British Museum).

Champollion and Young were bound to become rivals. For a start, they had opposite personalities: Champollion, who is now considered the father of Egyptology, was passionately devoted to the civilisation of ancient Egypt, and had long wished to visit the Mediterranean country and explore its monumental ruins. His temper, moreover, matched his zeal: he was prone to displays of extreme emotions, and harboured a burning desire for the glory of deciphering the hieroglyphs. Young could hardly have been more different: an incorrigible polymath, his interest in the scripts of ancient Egypt never extended beyond the itch to crack a philological puzzle; he had a calm and candid disposition and, according to his friend Gurney, he “could not bear, in the most common conversation, the slightest degree of exaggeration”. Significantly, it was Young’s own self-deprecation and his anonymity as a researcher which enabled Champollion to claim the sole credit for the decipherment of the hieroglyphs, despite the plain fact that his technique was built upon Young’s earlier findings and his tentative Egyptian dictionary. In fact, a former teacher of Champollion, Silvestre de Sacy, warned Young as early as 1815 that he should be careful in sharing his discoveries with the French scholar, for “he may hereafter make pretension to the priority”.

Just how much Champollion benefitted from Young’s work can be appreciated by examining his major publications. The first of these appeared in 1821, while he was still oblivious of Young’s 1819 article. Two facts about this publication are very notable: first, Champollion put forward the seriously mistaken notion that the demotic script was composed entirely of conceptual symbols (while Young had already shown that it included phonetic symbols as well); second, once he had come across Young’s article in Paris, it seems that Champollion made a herculean effort to withdraw every single copy of his own article, and was careful not to refer to it in his subsequent publications of 1822 and 1824. Most tellingly, he also avoided any mention of Young’s previous identification of the meanings of many hieroglyphs, including his partly correct deciphering of the names of King Ptolemy and Queen Berenice, as well as other crucial findings, such as the use of certain symbols to indicate female names. When making use of these previous discoveries in his research, Champollion simply referred to them as part of his deductive process, thus implying that they were either well-known facts or his own findings. In reality, the insights gathered by other scholars served him as an essential stepping stone that allowed him to finally decipher the entire hieroglyphic script; what is most disturbing is not the fact that he built on these earlier results — which is a natural part of research — but rather that he adamantly refused to concede any recognition to their original authors. An understandably irritated Young was swift to point out that Champollion had attained his goal “not by any means as superseding my system, but as fully confirming and extending it”. Their irksome dispute notwithstanding, Young never failed to laud Champollion’s crucial contributions to the decipherment; he simply wanted his own contributions recognised. With the benefit of hindsight, it is clear that Champollion was doing himself no favour by insisting on claiming all the credit for the decipherment of the hieroglyphs: the breakthroughs that he achieved in 1822–24, his pioneering explorations of Egyptian ruins and monuments, and his publication of the definitive statement of the decipherment, would undoubtedly have sufficed to secure his legacy as the founder of Egyptology. Instead, Champollion’s egotism became an indelible stain on his reputation; brilliant and industrious as he was, he is also remembered as an arrogant and somewhat dishonest scholar.

Despite the manner in which Champollion had overtaken him and seized the hieroglyphic laurels, Young did not cease to work on the writings of ancient Egypt; after all, the demotic script remained undeciphered, and he now seemed to be in a position to crack it. This was largely due to a providentially helpful papyrus which he encountered in 1822, containing a Greek translation of a demotic text that Young had already spent much time trying to decipher. Thus he expressed his exhilaration at the sheer improbability of this event: “a most extraordinary chance had brought into my possession a document which was not very likely, in the first place, ever to have existed, still less to have been preserved uninjured, for my information, through a period of near two thousand years: but that this very extraordinary translation should have been brought safely to Europe, to England, and to me, at the very moment when it was most of all desirable to me to possess it…”. Notably, Champollion himself, possibly more relaxed after having become a prestigious curator at the Louvre Museum in 1826, offered Young the use of his private notes on the demotic script. With these new resources in hand, Young finally completed the decipherment, becoming the first person to read a demotic text in more than a thousand years. From that moment until his death, he continued to work on what would be his final opus, Rudiments of an Egyptian Dictionary in the Ancient Enchorial Character, published posthumously in 1831.

Three pages from Young’s Rudiments of an Egyptian Dictionary, presenting the meanings of groups of demotic characters (Young, 1831).

It would be easy to believe that the study of Egyptian writing systems, combined with his medical obligations, absorbed all of Young’s time after 1814; but nothing could be farther from the truth. In fact, his polymathic tendencies became even more evident during this period. To begin with, between 1816 and 1825 Young contributed a total of 63 articles to the Encyclopaedia Britannica, writing on an astonishing variety of topics including languages, ocean tides, hydraulics, bridges, Egypt, carpentry, road-making, steam engines and integrals. Some of these articles went beyond mere reviews of existing knowledge, presenting notable original insights. In addition to the pioneering work on the hieroglyphs in his article on Egypt, Young’s article on languages is particularly noteworthy. In its thirty-three thousand words, he applied his philological knowledge to examine and compare some four hundred ancient and modern languages from across the globe, and classified them into families on the basis of their degree of similarity. In this analysis, he coined the now-popular term ‘Indo-European’ for the family of languages comprising most of the Indian, West Asiatic and European tongues. Young, however, made anonymity a condition of his contributions to the Encyclopaedia; he would not agree to attach his name to his writings until 1823, by which time he had abandoned his ambition of becoming a leading physician.

One factor, besides the underwhelming reception of his books, which prompted Young to gradually steer away from his medical aspirations, was the increasing financial security brought by the multiple government-funded positions that he held from 1811 onwards. The bodies in which he was asked to serve included a Royal Navy committee to evaluate the adoption of an improved method for the construction of ships; a Royal Society committee requested by the government to assess the safety of introducing coal gas in London; a government commission for comparing the French and English unit systems, and considering the adoption of a more consistent system throughout the British Empire; and the government’s Board of Longitude, which was in charge of a scheme of prizes for solutions to the problem of determining longitude at sea. Notably, in 1820 Young used the influence of his position at the Board to convince the government to establish a major astronomical observatory at the Cape of Good Hope in South Africa. It was because of this array of services to his country that he felt confident enough to write, with characteristic wit: “But I do not owe the public much, and I suppose I shall never be paid much of what the public owes me”. And even all this does not capture the entirety of Young’s activities during the 1820s: he also published technical papers on such disparate subjects as the shape and density of the Earth and the theory of life insurance; and he was hired as ‘inspector of calculations’ and physician to a newly founded life insurance company — a position so well paid that he asked for his salary to be reduced. More remarkably, Young was also considered a candidate for the presidency of the Royal Society (which he had served as foreign secretary since 1804), and had he been interested in the position — or “if I were foolish enough to wish for the office” — he would certainly have been elected.

After an adult life of notable good health, in 1828 Young felt an unaccountable fatigue while visiting Geneva. Early in the following year, he started suffering apparent attacks of asthma, and developed progressive weakness and difficulty in breathing. Even when confined to bed, he nonetheless continued to work on the final proofs of his Rudiments of an Egyptian Dictionary, to the point where he had to resort to a pencil, being too weak to hold a pen. According to George Peacock, a contemporary biographer of Young, when a friend advised the dying man not to fatigue himself with this work, “he replied that it was no fatigue, but a great amusement to him”. He had almost finished correcting the proofs of his book when he passed away on 10 May 1829, just a month short of his fifty-sixth birthday. An autopsy of his body revealed ‘ossification of the aorta’, today known as advanced atherosclerosis: his aorta had become calcified, hard and narrow, which in the end probably caused progressive kidney failure and pulmonary edema. Why Young suffered from such an advanced form of this disease in his middle age remains a mystery.

Young’s death attracted very little public response. Eulogies were read at the Royal Society and the National Institute of France (which in 1827 had elected Young as foreign associate, an extremely prestigious honour), and a terse note reporting his death was published in the medical journal The Lancet. It was only thanks to the campaigning of Young’s widow Eliza, and his lifelong friend Hudson Gurney, that a memorial plaque was eventually installed in London’s Westminster Abbey, granting Young an immortal place among some of the greatest scientists and artists in British history. Eliza Young is also to be thanked for convincing Peacock to tackle the daunting task of writing a biography of her late husband.

With an unparalleled range of serious academic interests and original contributions to science and scholarship, there can be no doubt that Young was the greatest polymath of his time, even by admission of many of his own contemporaries. It is truly difficult even to grasp how much knowledge he acquired over his five decades of life. Had Nobel prizes existed in the nineteenth century, Young would probably have been awarded one in physics for his demonstration of the wave theory of light, and possibly a second one in physiology for his work on human vision. History, however, is notoriously unsympathetic to polymaths, and Young is often summarised simply as a ‘physician and physicist’ — or even just one of the two. His lifelong attitude toward science is perhaps best expressed in a letter to his friend Gurney: “Scientific investigations are a sort of warfare, carried on in the closet or on the couch against all one’s contemporaries and predecessors; I have often gained a signal victory when I have been half asleep, but more frequently found, on being thoroughly awake, that the enemy had still the advantage of me when I thought I had him fast in a corner — and all this, you see, keeps one alive”.

Just as extraordinary as his intellectual motivation is the fact that Young, unlike some of the greatest scientists of the last three centuries, was a sociable and sensitive individual, with a genuine interest in the arts and a distinct fondness for human company. Robinson sums him up as “a lively, occasionally caustic, letter writer, a fair conversationalist, a knowledgeable musician, a respectable dancer, a tolerable versifier, an accomplished horseman and gymnast, and, throughout his life, a participant in the leading society of London”. At the same time, Young was deeply private about his personal life; almost nothing is known about his wife Eliza, for instance, although their marriage is reported to have been a happy one. Eliza was probably a major reason why Young did not become embittered by the many disappointments, offences, disputes and rejections which marked his professional life.

Given the gradual professionalisation and specialisation of every branch of science over the last two hundred years, it is unlikely that we shall see the like of Thomas Young again. His life, however, remains an awe-inspiring testament to the unbounded potential of the human mind, and a prime example of the original meaning of the word ‘philosopher’. For it was his sheer love of knowledge, his unremitting longing to understand the world, which above all defined him, and ‘kept him alive’.



References
Robinson, A. The Last Man Who Knew Everything. Pi Press/Oneworld Publications, 2006.
Peacock, G. Life of Thomas Young: M.D., F.R.S., &c. John Murray, 1855.
Young, T. Observations on Vision. Philosophical Transactions of the Royal Society of London, 1793.
Young, T. On the Mechanism of the Eye. Philosophical Transactions of the Royal Society of London, 1800.
Young, T. A Course of Lectures on Natural Philosophy and the Mechanical Arts. Joseph Johnson, 1807.
Young, T. Rudiments of an Egyptian Dictionary in the Ancient Enchorial Character. J. and A. Arch, 1831.