TIME’S ABUNDANCE
As we find for space, so it is also for time: There’s plenty of it, both outside and inside. Though the immensities of cosmic time dwarf us, yet we contain immensities of time within.
In his visionary cosmic history Star Maker, Olaf Stapledon, a pioneering genius of science fiction, writes, “Thus the whole duration of humanity, with its many sequent species and its incessant downpour of generations, is but a flash in the lifetime of the cosmos.” The Roman philosopher Seneca expressed the opposite of this thought in “On the Shortness of Life.” “Why do we complain about nature?” he writes. “It has acted generously; life, if you know how to use it, is long.”
As we’ll see, both Stapledon and Seneca got it right.
WHAT IS TIME?
Lest we drown in vagueness and nonsense, let us pause to take a deep breath and address a very basic question: What is time?
Time seems, as a matter of psychology, less tangible than space. We can’t move around freely in time, or even revisit a chosen moment. Once a moment is passed, it is past. It is not, then it is, then it is not again.
Saint Augustine, a very powerful thinker, articulated a common feeling of puzzlement: “What then is time? If no one asks me, I know what it is. If I wish to explain it to him who asks, I do not know.”
One witty but unserious answer has often been misattributed to Einstein, though it originated with the science-fiction writer Ray Cummings: “Time is what prevents everything from happening at once.”
Another pithy response, which at first may sound no more serious, is that “time is what clocks measure.” Yet that, I believe, is the germ of the correct answer. It is the thought we’ll build on.
There are many phenomena in nature that repeat regularly. The cycles of day and night, of the Moon’s waxing and waning, of the seasons, and of the beating of human and animal hearts are obvious features of common experience. If we compare the rate of one person’s resting heartbeat with another’s, we find that the ratio of the two rates stays roughly the same over many beats. We find, too, that each cycle of lunar phases—each lunar month—contains very nearly the same number of days.
The cycle of seasons seems hazier, on the face of it, due to the vagaries of weather. To refine their predictions of seasons, people in several civilizations developed a technology of astronomical timekeeping. They hit on the idea of monitoring changes in the path of the Sun’s march across the sky—where it rises, where it sets, and how high it rises, day by day. The changes in those positions are much more regular than seasonal changes in weather patterns, which fluctuate unpredictably. By monitoring the Sun, people achieved a much more precise and useful definition of seasons and years. (Seasons are officially defined as intervals between solstices, which mark the most extreme solar excursions, and equinoxes, which mark the most rapid daily changes. Solstices also mark the extreme divisions of day and night, while equinoxes mark their equality. Years are the intervals that pass between complete cycles of change.) Having made those precise definitions, people observed that each season contains the same number of days, or of lunar months, year after year. They constructed calendars, which helped them in many aspects of life, such as deciding when to plant crops, anticipating when they’d need to harvest, and, for hunters, when to expect animal migrations.
In short, we find that many different cyclical processes, physiological and astronomical, are synchronized. They march to the same drummer. We can use any of them to measure the others.* The observation that there is a shared, universal pace is a deep fact about the way the physical world works. To express the pace itself, we say that there is something that all the world’s cycles tap into, which tells them when to repeat. That something, by definition, is time. Time is the drummer to which change marches.
Two other manifestations of time are central to human experience. One is its role in music. In playing music together, or in dancing or singing, we rely on our expectation that everyone involved will stay in sync. While that experience is so familiar that we tend to take it for granted, it provides convincing evidence that we share, with high accuracy, a common notion of the passage of time.
Another manifestation of time, perhaps the most important of all for humans, relates to life history. Almost all babies develop on roughly the same schedule, beginning to walk, talk, and achieve other milestones after a certain number of months (or days or weeks). People grow in height, reach puberty, thrive, and decline according to predictable patterns, closely connected to the number of years they’ve lived. Each of us is a clock, albeit one that’s hard to read accurately.
As the arc of human life history illustrates, time controls the progress of noncyclical events, as well as cyclical ones. As people became scientifically sophisticated and studied motion and other kinds of change in the physical world systematically, they found again and again—in every case, so far—that all change proceeds according to a common rhythm. Changes in the positions of astronomical bodies, changes in the positions of bodies in response to forces, the unfolding of chemical reactions, the progress of light beams through space—those changes, and many more, all evolve to the tempo of a single time.
Putting it another way: There is a quantity, usually written as t, which appears in our fundamental description of how change takes place in the physical world. It is also what people are talking about when they ask, “What time is it?” That is what time is. Time is what clocks measure, and everything that changes is a clock.
HISTORICAL TIME: WHAT WE KNOW AND HOW WE KNOW IT
We took the measure of cosmic time already, in the preceding chapter, when we looked back to the big bang. Since then, 13.8 billion years have passed. On the scale of human longevity, that is a very long time, indeed. It encompasses hundreds of millions of human lifetimes.
It is a mind-boggling figure, 13.8 billion years, but the big bang is remote from our experience. To appreciate the abundance of time, we should also consider deep history closer to home. There are two approaches to measuring very long times: radioactive dating and stellar astrophysics. Let’s discuss them in turn.
Radioactive dating is based on the existence of nuclear isotopes. These are atomic nuclei that contain the same number of protons but different numbers of neutrons. Such nuclei give rise to atoms that have nearly identical chemical properties. But many kinds of atomic nuclei are unstable, and decay, each with a characteristic lifetime. Often isotopes of the same chemical element have radically different lifetimes. Those two features—same chemistry, different lifetimes—are what we exploit to do radioactive dating.
To keep things concrete, let’s focus on one important example of radioactive dating, which uses carbon. The most common isotope of carbon is 12C (“carbon-12”), which contains six protons and six neutrons. 12C nuclei are highly stable. But there is also another significant isotope of carbon, 14C (“carbon-14”), which is unstable, or “radioactive.”
14C has a half-life of about 5,730 years, meaning that if you have a sample of material containing 14C atoms, in 5,730 years half of them will be gone. What happens is that the 14C nuclei convert into nitrogen nuclei (14N) while emitting electrons and antineutrinos. We’ll be discussing processes of this sort—radioactivity and the weak force—more deeply later. For present purposes, the details aren’t crucial.
Of course, we don’t have to wait 5,730 years to check that picture out. Because even small samples of organic matter contain many carbon atoms, we can detect many decays within small intervals of time. What we observe, when we monitor the outflow of electrons, is that in equal intervals of time an equal proportion of the surviving 14C nuclei decay.
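To make that rule concrete, here is a minimal sketch of the decay law in Python, assuming the approximate half-life quoted above; the names and numbers are purely illustrative, not drawn from any particular library or dataset.

```python
# Rough sketch of radioactive decay: in each equal interval of time,
# an equal fraction of the surviving 14C nuclei decays.

HALF_LIFE_C14 = 5730.0  # years, approximately

def surviving_fraction(years):
    """Fraction of the original 14C nuclei still present after `years`."""
    return 0.5 ** (years / HALF_LIFE_C14)

for n in range(5):
    t = n * HALF_LIFE_C14
    print(f"after {t:8.0f} years, {surviving_fraction(t):.4f} of the 14C remains")
# Each line shows half the previous fraction: 1.0000, 0.5000, 0.2500, 0.1250, ...
```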
Since the universe is much older than 5,730 years, the question arises: Why is any 14C left? The key fact here is that new 14C nuclei are being created in Earth’s atmosphere, through the action of cosmic rays. That creation compensates for the decays and maintains a roughly constant ratio of 14C to 12C in the atmosphere.
Living things take in carbon either directly from the atmosphere or shortly after it dissolves from the atmosphere into water. The carbon they ingest reflects the current atmospheric 14C/12C balance. But once it is incorporated into their bodies, the decaying 14C is no longer replenished. After that its fraction decreases with time, in a predictable way. Thus, by measuring the ratio of 14C to 12C in a sample of biological origin, one can determine when the source of the sample was last alive and capturing carbon.
There are two practical ways to measure the ratio. Since there are always far more 12C nuclei than 14C, we can get a good estimate of 12C abundance simply by weighing the total carbon. To get the 14C abundance, we can measure the radioactivity— that is, the rate of electron emission. Since we know what proportion of 14C decays in an interval of time, we can leverage that measurement to infer the 14C content.
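As a hedged illustration of that bookkeeping, here is a sketch that turns a measured decay rate into a 14C count (the decay rate is proportional to the number of surviving nuclei) and a measured 14C/12C ratio into an age; the sample values are invented for the example.

```python
import math

HALF_LIFE_C14 = 5730.0                        # years, approximately
DECAY_CONSTANT = math.log(2) / HALF_LIFE_C14  # fraction of nuclei decaying per year

def c14_count_from_activity(decays_per_year):
    """Number of 14C nuclei implied by a measured decay rate (activity = constant x count)."""
    return decays_per_year / DECAY_CONSTANT

def age_from_ratio(sample_ratio, atmospheric_ratio):
    """Years since the sample stopped exchanging carbon with the atmosphere."""
    return HALF_LIFE_C14 * math.log2(atmospheric_ratio / sample_ratio)

# Invented example: a sample whose 14C/12C ratio is one quarter of the
# atmospheric value stopped taking in carbon about two half-lives ago.
print(age_from_ratio(sample_ratio=0.25, atmospheric_ratio=1.0))  # ~11,460 years
```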
A more modern method, known as accelerator mass spectrometry, is to take the sample to an accelerator, where you can physically separate the 14C and 12C by exploiting their different motions in strong electric and magnetic fields. The two methods yield consistent results.
Carbon dating is widely used in archaeology and paleobiology. It has been used to date ancient Egyptian artifacts, including mummies, as well as Neanderthal remains. We can check some of those Egyptian artifacts against historical records, and we find agreement. The Neanderthals didn’t keep records, but dating of their remains tells us that they flourished in Europe for several hundred thousand years, and carbon dating of the youngest remains shows that they persisted until as recently as forty thousand years ago.
We can also date bones and artifacts of early modern humans (Homo sapiens). From those remains, we infer that our species has been around for about three hundred thousand years. The early record is sparse, indicating that populations were small: Homo sapiens was not a particularly successful species early on.
It is important to emphasize that there are many ways to validate the ages obtained by carbon dating. We can construct a time ladder, similar in spirit to the distance ladders we discussed earlier. A simple, classic, and particularly beautiful example involves old trees. Trees add a ring of wood each year, and because the wood deposited during different seasons looks different, the rings stand out clearly. We can check that carbon dating reproduces the correct relative ages of the different rings, as well as yielding the overall age of the tree.
There are many other isotope pairs besides carbon’s 14C and 12C, with a wide range of half-lives. Using essentially the same techniques, we can measure much longer times than carbon dating reaches. For example, isotopes of uranium and lead have been used to obtain the age of mineral samples (gneiss) from western Greenland. They give concordant ages in the neighborhood of 3.6 billion years. Thus, we infer that those rocks formed 3.6 billion years ago, and have undergone little chemical processing since. In this way, we learn that Earth has existed as a solid planet for a significant fraction—more than a quarter—of the lifetime of the visible universe.
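The same simple formula extends to longer-lived isotopes. Here is a hedged sketch using an approximate uranium-238 half-life; real uranium-lead dating also tracks the accumulating lead isotopes, which this toy calculation ignores.

```python
import math

def age_from_surviving_fraction(surviving_fraction, half_life_years):
    """Age implied by the fraction of a parent isotope that has not yet decayed."""
    return half_life_years * math.log2(1.0 / surviving_fraction)

U238_HALF_LIFE = 4.47e9  # years, approximately

# Invented example: if about 57 percent of the original uranium-238 survives,
# the implied age is roughly 3.6 billion years, comparable to the Greenland gneiss.
age = age_from_surviving_fraction(0.57, U238_HALF_LIFE)
print(f"about {age / 1e9:.1f} billion years")
```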
The astrophysical theory of stars suggests a method to determine their ages. Stars generate energy by burning nuclear fuel. As the fuel is consumed, they change their size, shape, and color. Our Sun, for example, is predicted to become a red giant in about five billion years. Then its body will consume Mercury and Venus, and things will get quite nasty on Earth. Roughly a billion years later, according to theory, the Sun will blow off its extended atmosphere and settle down into a hot, Earth-sized white dwarf. Then it will slowly cool and eventually, over several billion years, fade to black.
There are many ways to test the theory of stellar evolution. For example, we can look at groups of stars that gather closely together in a cluster. It is reasonable to think that many of those stars will have formed at roughly the same time (on cosmic scales). If so, then they should all have the same age. As stars age, they evolve in predictable ways, changing their color and brightness. Using the theory of stellar evolution, we can compute the age of each star separately. Astronomers have found in many cases that the computed ages within a cluster do agree with one another, thus both vindicating the theory and dating the cluster.
We find in this way that some of the oldest stars are almost as old as the visible universe. In other words, star formation commenced within one or two billion years after the big bang. On the other hand, some stars are quite young, and we also observe regions where stars are still forming.
Summarizing, we can say that:
The universe commenced forming stars and planets quite early in its history, about thirteen billion years ago. New stars continue to form, though at a diminishing rate.
The Sun and Earth have been around in something close to their present form for roughly four and a half billion years.
Humans have been around in something close to their present form for a much briefer time, about three hundred thousand years. This amounts to about ten thousand generations, or five thousand human lifetimes.
INNER TIME: WHAT WE KNOW AND HOW WE KNOW IT
The inner abundance of time appears when we compare the span of a human lifetime with the speed of the basic electrical and chemical processes that enable thought. That comparison reveals that a lifetime can support immensities of individual experiences and insights.
The Speed of Thought
Wolfgang Amadeus Mozart died at thirty-five years of age; Franz Schubert at thirty-one; Évariste Galois, the great mathematician, at twenty; James Clerk Maxwell, the great physicist, at forty-eight. Evidently, it is possible to squeeze a lot of creative thoughts into a human lifetime. How many?
No single measure of speed applies to the bewildering variety of brain processes, so there is some vagueness in the question. Still, I think it is possible to give a rough but meaningful answer.
One fundamental limitation to human signal processing is the recovery time (the refractory period) between the pulses of electrical activity (action potentials) that neurons use to communicate with one another. This recovery period limits the number of pulses to a few tens or hundreds per second, depending on the neuron type. It is probably no accident that the “frame rate” below which we begin to notice that movies are actually sequences of stills is about forty frames per second, just adequate to accommodate a modest number of pulses. That frame rate is an objective measure of how fast we can process visual signals into forms that our brains can make use of. It means that we process, and “understand,” about a hundred billion distinct scenes in a lifetime.
The number of conscious thoughts we can entertain is probably significantly less than that, yet still enormous. Average speech rates, for example, are about two words per second. If we estimate that five words represent a significant thought, then a lifetime has room for about a billion thoughts.
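Here is a back-of-the-envelope check of those two figures, assuming a lifespan of roughly eighty years; all the inputs are the rough values quoted above.

```python
SECONDS_PER_YEAR = 60 * 60 * 24 * 365
LIFETIME_SECONDS = 80 * SECONDS_PER_YEAR   # roughly 2.5 billion seconds

SCENES_PER_SECOND = 40      # rough visual "frame rate"
WORDS_PER_SECOND = 2        # average speech rate
WORDS_PER_THOUGHT = 5       # a significant thought, by the estimate above

scenes = SCENES_PER_SECOND * LIFETIME_SECONDS
thoughts = WORDS_PER_SECOND / WORDS_PER_THOUGHT * LIFETIME_SECONDS

print(f"distinct scenes per lifetime: about {scenes:.0e}")   # ~1e+11, a hundred billion
print(f"thoughts per lifetime:        about {thoughts:.0e}") # ~1e+09, a billion
```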
Those estimates testify that we’re gifted with over a billion opportunities to experience the world. In that important sense, there’s plenty of inner time. That estimate might even be too conservative, since our brains support parallel processing, whereby several different thoughts can be running—mostly subconsciously—at once.
T. S. Eliot, in “The Love Song of J. Alfred Prufrock,” had a more ironic take on the same conclusion: “In a minute there is time / For decisions and revisions which a minute will reverse.”
Helped by our ancestors and our machines, we can augment our thought resources greatly. We need not rediscover from scratch how to fulfill basic needs like staying warm or obtaining food and drink. On a more elevated plane, we need not rediscover calculus, or the foundations of modern science and technology. Nor, thanks to modern computers and the internet, need we spend precious thought cycles on laborious calculations, or on memorizing masses of information. By bringing in those helpers, we can outsource immensities of thinking and free up more of our internal time for other uses.
Nature is not limited by the speed of human thought. Events can happen much faster than our processing rate of forty per second, even though our vision can’t resolve them. Notably, the “clock rate” for modern information processors, such as the CPU of a high-end laptop, is several gigahertz, corresponding to billions of elementary operations per second. Computers can work much faster than brains, because transistors use the electrically driven motion of electrons, instead of the much slower processes of diffusion and chemical change that neurons rely on. By this natural measure, the limiting speed of thought for artificial intelligence is roughly a hundred million times faster than the speed of thought for natural intelligence.
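Putting rough numbers to that ratio (an illustrative estimate, using the approximate clock rate and processing rate mentioned above):

```python
CPU_OPERATIONS_PER_SECOND = 5e9    # a few billion per second, roughly
BRAIN_FRAMES_PER_SECOND = 40       # rough human visual processing rate

ratio = CPU_OPERATIONS_PER_SECOND / BRAIN_FRAMES_PER_SECOND
print(f"speed advantage: about {ratio:.0e}")  # ~1e+08, a hundred million
```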
MEASURING TIME
The history of clocks and the measurement of time takes in much of the history of physics. Early clocks included sundials, which track the position of the Sun; hourglasses, which track the flow of sand; and related devices based on the flow of water or the burning of candles. Legendary figures such as Galileo and Christiaan Huygens developed mechanical pendulum clocks, which were improved over many decades and set the standard for accuracy until well into the twentieth century.
The twentieth century brought in more reliable clocks, based on entirely different physical principles. At the frontier of clock making, swinging pendulums and unwinding springs got replaced by vibrating crystals, and then by vibrating atoms. Those smaller oscillators are less exposed to buffeting from the external world, and they operate with very little friction. As a result, today’s most accurate atomic clocks are extraordinarily stable—to within one part in 10¹⁸, to be precise. Thus, two such clocks, operating over the span of the lifetime of the universe, would continue to agree within about one second. Today, relatively cheap, compact (chip-size) atomic clocks can keep time with an accuracy of about one part in 10¹³. They gain or lose a few seconds every million years.
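The arithmetic behind those drift figures is straightforward; here is a sketch with round numbers.

```python
SECONDS_PER_YEAR = 60 * 60 * 24 * 365
AGE_OF_UNIVERSE_SECONDS = 13.8e9 * SECONDS_PER_YEAR   # about 4.4e17 seconds

# Best laboratory atomic clocks: stable to about one part in 10**18.
print(f"drift over the universe's age: about {1e-18 * AGE_OF_UNIVERSE_SECONDS:.1f} s")  # ~0.4 s

# Chip-scale atomic clocks: roughly one part in 10**13.
print(f"drift over a million years:    about {1e-13 * 1e6 * SECONDS_PER_YEAR:.0f} s")   # ~3 s
```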
Those extraordinary accuracies might seem extravagant, but actually they are extremely useful. For one thing, they translate, in the Global Positioning System, to precise distance measurements. (Such measurements make it possible, for example, to align large machines precisely.) Note that even tiny errors in time, when multiplied by the speed of light, can translate into noticeable errors in distance.
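To see the scale: one nanosecond of clock error, multiplied by the speed of light, corresponds to about thirty centimeters of distance. A rough illustration:

```python
SPEED_OF_LIGHT = 3.0e8  # meters per second, approximately

for error_ns in (1, 10, 100):
    error_meters = SPEED_OF_LIGHT * error_ns * 1e-9
    print(f"{error_ns:3d} ns of clock error -> about {error_meters:5.1f} m of distance error")
```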