Cuneiform script was one of the earliest writing systems, developed by the Sumerians of Mesopotamia (modern-day Iraq) in the late fourth millennium BC. Some of its signs represent syllables, while others are logograms, representing entire words. We still use certain logograms today, such as the symbols $, % and &. (Museum of Anatolian Civilizations, Ankara.)
AS CITIZENS OF A WORLD dominated by technology, we are used to manipulating information on a daily basis; yet few of us ever wonder about information itself. In fact, information has been a very hard notion for humans to grasp, and it has been only very recently that we have begun to figure out its true nature and the extraordinary role it plays in our universe.
Human history readily reveals that our proficiency in manipulating and structuring data is mainly the product of a tremendous revolution which has unfolded over the last two centuries. The first steps of humanity’s conscious use of abstract forms of information, however, date back to a much earlier epoch. The starting point lies some five thousand years in the past, in what is arguably the greatest invention in history, yet one of the simplest — writing. Thanks to symbols representing either whole words (logograms, such as Chinese characters) or individual sounds (phonograms, like the letters of an alphabet), humanity’s ideas, emotions and experiences could suddenly be extracted from the human mind and stored physically on durable materials like clay, stone or papyrus. Such a disruptive technology sparked a profound cultural revolution, bringing prehistory to an end and setting human history in motion. This was but the very beginning of a long journey towards mastering information.
For the following five thousand years, writing remained pretty much the only way of manipulating and recording information known to humans. The next great milestone in our relationship with information would not arrive until the nineteenth century’s Industrial Revolution. It came from a brilliant French industrialist and inventor named Joseph Marie Jacquard, who in 1804 patented what was then the most complex apparatus ever designed. The Jacquard loom, with its apparently rudimentary wooden structure, was in fact the first commercial programmable device, having the stunning ability to weave any desired pattern in silk without human intervention. This was achieved by means of thousands of punched cards (stiff paper cards with patterns of holes and blank spaces in them), which together carried the information needed to create a particular fabric design with precision. By transforming the information in the drawings into abstract patterns of holes and blanks, and letting the machine translate those seemingly unintelligible patterns into tangible, incredibly detailed silk designs, Jacquard sped up the strenuous process of weaving to an extent that had been simply unimaginable before, cementing France’s position as the world capital of silk production. Given a large enough number of cards, symbols as simple as holes and blanks could capture the information of even the most complicated pattern — or, in fact, of anything we can think of. For the first time, information abstraction and programmable machines had shown their potential for superhuman performance in repetitive, labour-intensive tasks. Today, nearly every repetitive manufacturing task is mindlessly performed by machines.
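To make the idea of holes and blanks as information a little more concrete, here is a minimal sketch in Python. It is only a toy: the card layout and the text rendering are invented for illustration and do not reproduce the real loom’s card format or mechanism.

```python
# Toy illustration of the punched-card idea: each 'card' is a row of holes (1) and
# blanks (0), and the 'loom' simply renders whatever pattern the cards encode.
# The card format and rendering are invented for illustration only.

pattern_cards = [
    [1, 0, 1, 0, 1, 0, 1, 0],   # one card per row of fabric; one position per warp thread
    [0, 1, 0, 1, 0, 1, 0, 1],
    [1, 1, 0, 0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0, 0, 1, 1],
]

def weave(cards):
    """Render the encoded pattern: a hole (1) raises the thread ('#'), a blank (0) does not ('.')."""
    for card in cards:
        print(''.join('#' if hole else '.' for hole in card))

weave(pattern_cards)
```

Change the rows of ones and zeros and the ‘fabric’ changes with them: the pattern lives entirely in the cards, not in the machine.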
Equally transformative was the arrival of two landmark technological inventions of the nineteenth century: Samuel Morse’s electric telegraph — preceded by other, overly complicated telegraph designs — and his eponymous coding system, Morse code. Thanks to these, messages could easily be encoded into electrical pulses and rapidly sent along wires. The world was soon covered by a dense, global telecommunications network — something we take for granted today. It was the very dawn of the ‘information age’, which would be characterised by spectacularly fast, reliable and virtually limitless electronic communication systems.
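The core idea, letters turned into patterns of short and long pulses, can be sketched in a few lines of Python; only a handful of letters of the real Morse alphabet are included here for brevity.

```python
# A minimal sketch of Morse-style encoding: each letter becomes a pattern of short ('.')
# and long ('-') pulses. Only a few letters are included; spaces between words are
# simply dropped in this toy version.

MORSE = {
    'S': '...', 'O': '---', 'E': '.', 'H': '....',
    'L': '.-..', 'D': '-..', 'W': '.--', 'R': '.-.',
}

def encode(message):
    """Translate a message into Morse pulses, separating letters with spaces."""
    return ' '.join(MORSE[ch] for ch in message.upper() if ch in MORSE)

print(encode('SOS'))          # ... --- ...
print(encode('HELLO WORLD'))  # .... . .-.. .-.. --- .-- --- .-. .-.. -..
```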
However, the invention that we would perhaps recognise as most influential on modern human life (indeed, many of us have been staring at one for most of the day) was not to arrive until the mid-twentieth century, by the hand of one of the most brilliant minds in history. In 1936, the then 24-year-old British mathematician Alan Turing published an article addressing a highly abstract mathematical problem; it was in this work that the concept of a universal computing machine, a theoretical forerunner of the modern computer, first saw the light. An unforeseen consequence of Turing’s theoretical work, the computer was to become arguably the most fundamental practical invention of the last century, lionising its inventor as the father of informatics. Notably, other kinds of computing machines had already been devised before the twentieth century; the ‘difference engine’ and the ‘analytical engine’, two mind-boggling mechanical calculators designed by the British mathematician Charles Babbage, are widely regarded as the first computer prototypes, although neither was completely built in its inventor’s lifetime. It was Turing’s theoretical and practical work, however, that paved the way for the first electronic computers, built during World War II. Turing’s now-famous suicide at the age of 41, widely attributed to the depression he was thrown into after the British authorities forced him to undergo hormonal therapy as a ‘treatment’ for his homosexuality, was a tragic and incalculable loss for mankind. The breakthroughs he could have ushered into the field of computing are left to the imagination.
With the combination of symbolic abstractions to represent information, instant communications and breathtaking computing machines, one might think that humans had finally unleashed information’s power to its full extent. This remains far from true today; in fact, we did not even know what information really is until some decades ago, when information theory, a new scientific discipline devoted to exploring the most abstract facets of information, was founded. In 1948, the father of information theory, Claude Shannon, formally introduced a name for the elementary unit of information: the bit (a contraction of ‘binary digit’). Bits can be thought of as the atoms of information, since they represent the smallest possible quantity of it. A bit can hold only one of two possible values: zero (usually meaning ‘off’, ‘false’, ‘no’) and one (‘on’, ‘true’, ‘yes’). Combining multiple bits gives rise to increasingly powerful, and familiar, units of information, such as bytes, megabytes and terabytes.
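A quick numerical sketch makes the scaling concrete: n bits can distinguish 2^n different values, and the larger units are simply enormous collections of bits. The figures below assume the decimal (SI) definitions of megabyte and terabyte.

```python
# How bits scale: n bits can distinguish 2**n different values.
for n in (1, 2, 8, 16, 32):
    print(f"{n} bit(s) can represent {2**n:,} distinct values")

# Familiar units are just large collections of bits (decimal/SI convention assumed here).
bits_per_byte = 8
print(f"1 byte     = {bits_per_byte} bits")
print(f"1 megabyte = {10**6 * bits_per_byte:,} bits")
print(f"1 terabyte = {10**12 * bits_per_byte:,} bits")
```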
Even more bewildering was the realisation that information is far from a purely abstract human invention. Nature itself was already a master at handling and exploiting information billions of years ago. By the end of the last century, it had become clear to science that the living cell is continuously reading, processing and responding to information from its inner and outer environment. Like a computer, the cell employs sets of rules to react to information; but instead of electronic circuits, it relies on intricate networks of chemical reactions between specialised signalling molecules to transfer information from its receptors — which recognise certain chemical, mechanical or electrical signals — to the molecular ‘machines’ that perform the required action. Crucially, these signalling networks are not only able to transmit information, like a wire; they can also process it, like a computer’s processor. This is possible thanks to the existence of different possible states (for example, ‘active’ and ‘inactive’) between which some molecules and chemical reactions in the cell can switch, just as the minute transistors in a computer do. Such computational abilities enable the cell to make vital decisions such as self-replicating, transforming into a more specialised type of cell, or even committing suicide. It is thanks to signalling networks that, for example, neurons in the brain fire in response to special molecules termed neurotransmitters, or that epithelial cells in the skin sense the presence of an open wound and keep dividing until the gash is fully closed.
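The switch-like flavour of these networks can be caricatured as simple Boolean logic. The sketch below is a deliberately crude toy model, not a real pathway: the component names and the wiring are invented for illustration, and real signalling is continuous, noisy chemistry rather than clean logic gates.

```python
# Toy Boolean caricature of a signalling cascade: signals sensed by a 'receptor' are
# combined by simple logic to reach a cell-level decision. Names and wiring are invented.

def cell_decision(growth_signal: bool, damage_signal: bool) -> str:
    receptor_active = growth_signal                          # receptor switches on when its signal is present
    cascade_active = receptor_active and not damage_signal   # damage shuts the growth cascade down
    if damage_signal:
        return "pause and repair (or self-destruct)"
    if cascade_active:
        return "divide"
    return "stay quiescent"

for growth, damage in [(True, False), (False, False), (True, True)]:
    print(f"growth={growth}, damage={damage} -> {cell_decision(growth, damage)}")
```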
Diagrammatic representation of a cellular signalling network known as the mTOR network. Each green rectangle in the diagram corresponds to a different protein. (Credit: Mol. Syst. Biol. 6:453.)
Information is not merely something created and wielded by the natural world and by human ingenuity; it is much more than that. It is a real, fundamental property of the physical universe we inhabit. What we usually call information is just our simplified representation of the actual information concealed in the world around us. Think, for example, of a photograph of some object. The photograph is a precise graphical representation of the object in two dimensions and, as such, contains some of its information, such as shape, colour and texture. And yet it is missing almost all the information present in the actual object. A simple example of this is that it is often impossible to figure out the real size of an object from a picture, unless the scene includes another object whose size we already know. We then combine the information coming from our previous experience of the world with the information in the picture to make an inference about the actual object.
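That kind of inference is easy to make concrete. The sketch below estimates an object’s real size from a photograph using a reference object of known size at roughly the same distance; all the numbers are made up for illustration.

```python
# Estimating real size from a photograph using a known reference at the same distance.
# All values are made up for illustration.

reference_height_m = 1.75      # a person we know to be about 1.75 m tall
reference_height_px = 350      # the person's height in the photograph, in pixels
unknown_height_px = 120        # the unknown object's height in the same photograph

metres_per_pixel = reference_height_m / reference_height_px
estimated_height_m = unknown_height_px * metres_per_pixel
print(f"Estimated height: {estimated_height_m:.2f} m")   # ~0.60 m
```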
If we go down to the microscopic level, the amount of information in the physical world becomes simply fathomless. Consider again the case of a living cell — perhaps in your own body. As in the example above, we can measure different types of information about the cell, such as its shape, its size, or the amount of DNA in it, and represent such information in various ways. However, the physical cell still harbours much, much more information: the spatial arrangement of the organelles that compose it; the location and structure of every one of its enzymes, lipids, proteins, sugars and nucleic acids; the chemical dynamics of the signalling networks that allow it to react to its environment; the sum of all the hereditary information encoded in its genetic material; the energy level of every electron of every atom of every single molecule in it; and the forces binding them all together as a single system, the living cell. In other words, something as small as a microscopic cell holds an amount of real, physical information that surpasses all the symbolic information produced by humanity over the whole course of history. We can look around and try to imagine how much information our brains constantly and subconsciously extract from our surroundings; and how much information we would be able to measure from every object and living thing around us, if only we had access to the right tools — and sufficient storage space.
Humanity’s effort to exploit information for its own benefit has seen the invention of ever more ingenious ways of representing, transmitting and storing it. We have discovered, mastered and taken advantage of the unique properties granted by each type of symbolic representation (written symbols, holes, electrical signals, bits…) and physical medium (stone, paper, wires, radio waves, hard disk drives…); yet our understanding of information as an inherent property of the cosmos is still incomplete. Furthermore, novel means of storing and transmitting information, from quantum particles to DNA, are continuously being explored. The journey towards achieving true control of information is far from finished; in fact, we may be just on the brink of our real information revolution.
Special thanks are due to Máire Ní Leathlobhair for her invaluable help with composition.
References:
Order and Disorder: The Story of Information. BBC documentary (2012).
Tyson, J.J., Novak, B. Control of cell growth, division and death: information processing in living cells. Interface Focus (2014).
Azeloglu, E.U., Iyengar, R. Signaling Networks: Information Flow, Computation, and Decision Making. Cold Spring Harbor Perspectives in Biology (2015).