
Figure: Physicists with the NIST-F2 cesium fountain clock in Boulder, which has served as a U.S. civilian time and frequency standard since 2014. [NIST]

In November 2018, representatives of 54 countries met in Versailles, France, to approve a major revision to the International System of Units (SI). Delegates at the General Conference on Weights and Measures (CGPM) agreed to redefine four of the seven base units—meaning that, for the first time in history, all units refer to constants of nature. Notably, the kilogram had until that point been defined by the mass of a platinum-iridium cylinder held in a basement on the outskirts of Paris.

On this occasion, the second was not in the spotlight, even though it remains the cornerstone of the SI system. It was redefined in 1967 as “the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the cesium 133 atom.” Bar a little rewording, that definition remains in place today. But behind the scenes, metrologists are busy planning a makeover. The idea is to replace the microwave definition with one based on a far higher optical frequency.

The overhaul could lead to new and improved applications of precise timekeeping, such as better navigation of spacecraft and more sensitive searches for any variations in the fundamental physical constants. The push comes from rapid advances in technology, with the best optical clocks now significantly outperforming the cesium devices that serve as the standards for international time.

In February 2018, a committee of metrologists from the International Committee for Weights and Measures (CIPM) published a “roadmap” laying out a series of milestones that, the committee says, need to be passed before the second can be redefined. Since then, two groups claim to have taken the important first step along that road—demonstrating optical clocks that would be at least 100 times as accurate as the world’s best cesium timepieces, were the second to be defined in terms of visible light.

Fritz Riehle of Germany’s PTB national metrology institute in Braunschweig, and one of the roadmap’s authors, says that industry and most other users “will sleep well enough” with the microwave definition in place for the time being. But he reckons that it makes sense to redefine within the next decade, in part to save money that would otherwise be spent making cesium clocks more accurate. “We would spend a lot of time making inferior clock technology better,” he says.

Indeed, according to Tetsuya Ido of Japan’s National Institute of Information and Communications Technology, scientists’ enthusiasm for optical clocks is hampering the maintenance of the most accurate microwave devices, known as cesium fountains. In fact, he believes that within a couple of years the PTB and its French equivalent SYRTE in Paris might be the only labs in the world operating stable fountain clocks, saying, “We may have to rush to make a new definition.”

Figure: [Illustration by Phil Saunders]

The limits of cesium

All clocks rely on some kind of regular, periodic “tick” to keep track of passing time. A clock’s accuracy—how close its measured second is to the “real” value—usually depends on the kind of tick employed and how well the device is built. Ultimately, however, it is limited by how the second itself is defined. In the past, that quantity was stipulated to be 1/86,400 of the length of a day. But because the Earth’s rotation on its axis varies very slightly, so did the length of a second—by around one part in 10⁷. The definition was then reformulated in terms of Earth’s orbit around the Sun, which made it more accurate but quite unwieldy.

Then came atomic time, and the accuracy of the second subsequently shot up by many orders of magnitude. This fact is exploited today in many technologies that rely on precision timekeeping, including cellphone networks, electricity grids, satellite navigation and more. But, even here, the second’s definition imposes a cap on accuracy.

Cesium fountains use lasers to cool a gas of cesium atoms and then launch them upward through a tube—like droplets of water in a fountain. As the atoms rise and then fall back to Earth, they pass through a field of microwaves, often generated by a crystal oscillator, which transfers some of them to the other hyperfine state; a probe laser downstream then makes those atoms fluoresce. By varying the microwave frequency slightly from one launch to the next and monitoring the resulting change in fluorescence, the device can lock the oscillator frequency to the hyperfine transition—the point at which the fluorescence peaks.
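The lock scheme described above can be illustrated with a toy model. In the sketch below, a Lorentzian fluorescence curve and a simple dither-and-step servo stand in for the real apparatus (the linewidth, dither size, and loop gain are all invented for the example; actual fountains use Ramsey interrogation and far more careful servo design):

```python
def fluorescence(f_probe, f0=9_192_631_770.0, linewidth=1.0):
    """Toy Lorentzian response: fluorescence peaks when the probe
    frequency matches the atomic resonance f0 (in Hz)."""
    return 1.0 / (1.0 + ((f_probe - f0) / (linewidth / 2.0)) ** 2)

# Dither lock (illustrative only): probe slightly above and below the
# current oscillator frequency, then step toward whichever side
# fluoresces more strongly. At resonance the two sides balance.
f = 9_192_631_768.0          # oscillator starts ~2 Hz off resonance
for _ in range(200):
    error = fluorescence(f + 0.25) - fluorescence(f - 0.25)
    f += 0.5 * error         # modest gain keeps the loop stable

print(f"locked at {f:.3f} Hz")   # converges to the resonance
```

The feedback sign is the whole trick: an oscillator below resonance sees more fluorescence on its high side and is nudged upward, and vice versa, so the loop settles where the response is symmetric about the peak.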

Figure: A strontium lattice clock at the PTB metrology lab in Braunschweig, Germany. [Physikalisch-Technische Bundesanstalt]

In search of stability

Any atomic clock probes its timekeeping atoms by scanning the radiation from an oscillator across a fairly narrow range of frequencies and measuring the atoms’ response. Accurately locating the peak of that response involves detuning the oscillator to either side of resonance until the output drops to 50 percent of its maximum; the frequency spread between those two points is known as the linewidth.

Maximizing a clock’s precision means reducing the linewidth as far as possible, but there is a fundamental limit imposed by quantum noise: when probed, each atom is found either excited or not, with some probability, so any single measurement carries random noise. That noise can be averaged away by interrogating many atoms at once and by repeating the measurement many times. In microwave clocks, however, this averaging takes far longer because of their much lower transition frequency.

Because each measurement lasts about a second in both cesium fountains and optical clocks—set by the duration of the atoms’ free fall and by the coherence time of the probing laser, respectively—the linewidth is about 1 Hz. Since noise decreases as the square root of both the number of atoms and the number of measurements, a cesium clock with a transition frequency of roughly 10 gigahertz that contains a million atoms will yield a peak with a precision of one part in 10¹³ after a single measurement. Reaching a precision of 10⁻¹⁶ means nonstop measurement for a million seconds—nearly two weeks.

In contrast, probing just 10,000 atoms using radiation at 10¹⁵ Hz yields a quantum-limited precision of 10⁻¹⁷ for a single measurement. This would average down to 10⁻¹⁸ in a couple of minutes. However, according to Jérôme Lodewyck, a metrologist at SYRTE, part of the Paris Observatory, other sources of noise—particularly fluctuations in the laser frequency—mean that the precision of a single measurement is actually about 10⁻¹⁶. Still, he says, it is possible to achieve 10⁻¹⁸ after just “a few hours of measurement.”
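The scaling in the passages above can be checked with a few lines of arithmetic. A minimal sketch, taking the cesium transition as 10¹⁰ Hz and ignoring every noise source other than quantum projection noise:

```python
import math

def fractional_precision(linewidth_hz, frequency_hz, n_atoms, n_measurements):
    """Quantum-projection-noise-limited fractional precision:
    (linewidth / frequency) / sqrt(n_atoms * n_measurements)."""
    return (linewidth_hz / frequency_hz) / math.sqrt(n_atoms * n_measurements)

# Cesium fountain: ~10 GHz transition, ~1 Hz linewidth, a million atoms.
cesium_single = fractional_precision(1.0, 1e10, 1e6, 1)
print(f"cesium, single 1-s measurement: {cesium_single:.0e}")   # ~1e-13

# Averaging down to 1e-16 needs (1e-13 / 1e-16)^2 = 1e6 measurements
# of ~1 s each, i.e. nearly two weeks of nonstop operation.
n_needed = (cesium_single / 1e-16) ** 2
print(f"measurements for 1e-16: {n_needed:.0e} (~{n_needed / 86400:.0f} days)")

# Optical lattice clock: ~1e15 Hz transition, 10,000 atoms.
optical_single = fractional_precision(1.0, 1e15, 1e4, 1)
print(f"optical, single measurement: {optical_single:.0e}")     # ~1e-17
```

The square-root law is why the factor-of-100,000 gap in transition frequency matters so much more than the factor-of-100 gap in atom number.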

The best cesium fountains today have an accuracy—in other words, how close their frequency is to the true, unperturbed value of the hyperfine transition—of 1.6 parts in 10¹⁶. This means that, were longevity not an issue, they would neither lose nor gain more than a second in about 200 million years. That uncertainty can, in principle, be reduced further by eliminating the systematic effects that very slightly shift the frequency, such as stray electromagnetic fields. However, the relatively low frequency of microwaves makes any real improvement an uphill battle.
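As a check on that figure: a clock with fractional frequency uncertainty u can run for roughly 1/u seconds before it is off by a full second.

```python
# A fractional frequency uncertainty u accumulates one full second of
# timing error after roughly 1/u seconds of operation.
u = 1.6e-16                                # best cesium fountains
years = (1 / u) / (365.25 * 24 * 3600)
print(f"one second lost or gained in ~{years:.0e} years")   # ~2e+08
```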

That’s because frequency determines statistical precision. All else being equal, the time needed to reduce instability—the internal variation in ticking rate—below a certain level grows as the transition frequency falls. (See “In search of stability.”) To reach 10⁻¹⁶, cesium clocks must run for about a week, so tweaking a clock to make it more accurate can involve months or years of work—making uncertainties below 10⁻¹⁶ an impractical goal, according to Lodewyck.

Forging ahead

Optical clocks, with their much higher frequencies, in principle can be far more accurate. But the first such devices only started operating in 2001, once physicists had overcome a number of technical hurdles, such as developing lasers with very stable frequencies. Also key was the invention of the frequency comb, which compares the frequency of two electromagnetic waves—yielding an absolute value (in hertz) for an optical clock’s output when referenced to a cesium timekeeper and a simple ratio when measured against another optical device.

Despite the late start, optical clocks quickly outperformed their microwave counterparts. While the second remains defined in terms of cesium, no optical clock can, by definition, be more accurate than the best cesium clock—so scientists instead talk in terms of “systematic uncertainty.” By 2008, optical clocks had achieved lower systematic uncertainties than the best microwave devices—around 10⁻¹⁵.

This rapid progress led the CIPM to set up a list of “secondary representations of the second”—a number of mainly optical transitions whose frequencies and uncertainties are monitored against those of cesium so that optical clocks can also be used to keep time. When one of these (or perhaps a transition still to be exploited) then becomes the frequency used to define the second, cesium-133 will be relegated to secondary status. It was to ensure that this transition is as smooth as possible that the CIPM published its roadmap in February 2018.

The first milestone on the roadmap now appears close to completion. It states that at least three different optical clocks, either in different labs or of different types, must improve on the systematic uncertainty of the best cesium clocks by “about two orders of magnitude.” Two groups claim to have achieved that, through quite different approaches.

Andrew Ludlow and colleagues at the National Institute of Standards and Technology (NIST) in Boulder, Colo., USA, reported in July 2018 that they had reached an uncertainty of 1.4×10⁻¹⁸ using what is known as an optical lattice clock. First demonstrated by Hidetoshi Katori and colleagues at the University of Tokyo, Japan, this involves using tightly focused lasers to cool and trap thousands of neutral atoms in a lattice of standing waves before exciting the atoms with a “clock laser” stabilized to an optical cavity. Ludlow’s team has done so using ytterbium atoms.

Figure: The electrode system of the trap used to store an ytterbium ion in an optical clock at the PTB metrology lab in Braunschweig, Germany. [Physikalisch-Technische Bundesanstalt]

That work has now been ever so slightly bettered by a rival group, also at NIST, which uses the other main type of optical technology—trapped ions. David Leibrandt and colleagues trap a single ion of aluminum in an oscillating electric field and cool it via Coulomb interaction with a magnesium ion. Over the past five years, the group has reduced the already tiny Doppler shift of the aluminum ion’s frequency by better controlling the trapping field. In a paper targeted for submission in January 2019, the team reports a systematic uncertainty of 9×10⁻¹⁹.

Meanwhile, Jun Ye and colleagues at the JILA research institute, just a couple of kilometers down the road in Boulder, have achieved 2×10⁻¹⁸ with a strontium lattice clock, while Ekkehard Peik and co-workers at PTB in Germany have reached 3×10⁻¹⁸ using an ytterbium ion.

Faith in uncertainties

However, while individual groups have reached and even surpassed the CIPM’s uncertainty threshold, more will have to be done before the second can be redefined. Both Ludlow’s group at NIST and Peik’s at PTB have built two versions of their respective clocks and compared their outputs to make sure they keep time to within their combined uncertainties. But, as Lodewyck points out, stray fields or other systematic shifts could conceivably throw off two clocks within a given lab by the same amount. “Only when you compare clocks built by different teams,” he says, “can you prove that clocks are correct.”


One way of doing this is to measure the ratio of frequencies from two different types of optical clock and then compare that to the same ratio measured in another lab. Alternatively, two clocks of the same type can be compared across labs, either by using portable clocks as a go-between or by hooking the clocks up to one another via fiber optic cable.

In Europe, scientists have set up new fiber links between the PTB and SYRTE (and, more recently, the U.K.’s National Physical Laboratory, or NPL). To avoid the excessive attenuation of visible light, researchers at both ends of a link split the beam from a 1.5-micron (infrared) laser. A frequency comb compares one half of that beam with the output of the local optical clock, while the other half travels down the fiber. The two 1.5-micron beams then meet in the middle—in Strasbourg, France—where their frequencies are compared.

The latest measurements were carried out last June. Lodewyck says that although most of the six clocks being tested agreed with one another to better than “a few parts in 10¹⁷,” there were typically a couple that disagreed by more than the uncertainties attributed to them. For instance, he says, the frequency of a strontium lattice clock at SYRTE shifted because residual air molecules interacted with atoms in the system. “We had underestimated this systematic effect before,” he says.

It is for exactly this reason that metrologists tread carefully when it comes to published uncertainties. A CIPM working group on frequency standards meets periodically to generate best estimates for the values and systematic uncertainties of the various secondary representations of the second and, to be on the safe side, often boosts the combined uncertainties by a factor of two or three.

The nightmare scenario is a step change in the value of the second. This would occur if, after a redefinition, it turned out that the claimed uncertainty for the optical transition in question was too low—so much so that the measured frequency of the cesium-133 transition then becomes very slightly more or less than its (previously defined) whole number of hertz, when considering up to six decimal places. “You can find yourself in trouble if you are not careful,” says Patrick Gill of the National Physical Laboratory in the U.K., who until last year was working group co-chair (with Riehle).

Figure: [Adapted from Meynadier/Le Targat/Pottie/LNE-Syrte; inset schematics by Phil Saunders]

The gift of time

To ensure that there is, in fact, a smooth transition between microwave and optical definitions, the CIPM roadmap also stipulates that the frequency of the chosen optical technology has to be compared against that of the best cesium fountains. If the optical clocks work as claimed, then the accuracy of such measurements would be entirely limited by the fountain clocks.

But there is one more thing that metrologists want to see before they unleash a revamped second: optical clocks contributing regularly to global timekeeping. National labs currently generate the real-time signals for (approximate) Coordinated Universal Time (UTC) using hydrogen masers and commercial cesium clocks that are, in some cases, corrected periodically by cesium fountains and are compared via satellite links. Official UTC is in fact a “paper” timescale that the International Bureau of Weights and Measures (BIPM) works out once a month by taking a weighted average from hundreds of clocks at participating labs. Scientists hope that over the next few years several optical clocks will start to contribute to UTC. (Until recently, only a strontium lattice device at SYRTE had done so.)
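That weighted-average step can be sketched in a few lines. The labs, offsets, and weights below are invented for illustration; the real computation behind international atomic time is far more elaborate, with weights updated monthly according to each clock's demonstrated stability:

```python
# Toy "paper" timescale: each participating clock reports its offset
# (in ns) from a common reference, and the ensemble time is a weighted
# average, with heavier weights for more stable clocks.
# All numbers below are hypothetical.
offsets_ns = {"lab_A": 12.4, "lab_B": 11.9, "lab_C": 13.1}
weights = {"lab_A": 0.5, "lab_B": 0.3, "lab_C": 0.2}

ensemble_ns = sum(weights[k] * offsets_ns[k] for k in offsets_ns)
ensemble_ns /= sum(weights.values())
print(f"ensemble offset: {ensemble_ns:.2f} ns")   # 12.39 ns
```

Averaging over many independent clocks makes the ensemble more stable than any single contributor, which is why a "paper" timescale can outperform the best individual device.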

In Japan, Ido and colleagues have shown they can deliver an ongoing timescale by using a strontium lattice clock to stabilize and calibrate a continuously running hydrogen maser (masers being very reliable and stable for a few hours but tending to drift over longer periods). By running the optical clock for just three hours a week, they were able to generate a signal that deviated from UTC by just 0.8 nanoseconds over the course of six months—a fractional difference of only 5×10⁻¹⁷.
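The quoted fractional difference follows directly from the time deviation divided by the interval:

```python
# Fractional frequency offset implied by a 0.8-ns accumulated time
# deviation over six months (figures from the text).
deviation_s = 0.8e-9
interval_s = 0.5 * 365.25 * 24 * 3600   # six months, in seconds
fractional = deviation_s / interval_s
print(f"fractional difference: {fractional:.1e}")   # ~5.1e-17
```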

Ido hopes that within a few years one of the three masers used by his institute to generate Japan Standard Time can be “steered” by an optical clock instead of an ensemble of cesium clocks. But the longer-term goal, he says, is all-optical timekeeping, in which strontium lattice clocks, for example, run continuously—replacing both masers and cesium devices.

However, Lodewyck says that optical clocks’ complexity means that they are likely to run intermittently for some time yet, and that masers may remain the workhorses of UTC even after the second is redefined. He points out that the roadmap does not require ditching masers altogether, and that future timekeeping is likely to be limited by the satellite links between national metrology labs, which are good enough to compare microwave signals but not optical ones. Extending fiber links across oceans, on the other hand, will be technically demanding and expensive, he says.


Another, more fundamental, problem comes from general relativity: clocks tick at different rates across the Earth’s surface because of variations in height and hence gravitational potential. In fact, a difference in elevation of just 1 cm shifts a clock’s rate by one part in 10¹⁸. A possible solution is to put one or more master clocks in space, but that would require major leaps in satellite technology and so likely remains a long-term ambition, says Lodewyck.
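The 1-cm figure follows from the gravitational redshift formula for small height differences near Earth's surface, Δf/f ≈ gΔh/c²:

```python
# Gravitational redshift near Earth's surface: Δf/f ≈ g·Δh / c²
g = 9.81            # m/s², surface gravity
c = 2.998e8         # m/s, speed of light
delta_h = 0.01      # 1 cm height difference, in meters
shift = g * delta_h / c**2
print(f"fractional shift for 1 cm: {shift:.1e}")   # ~1.1e-18
```

At the 10⁻¹⁸ level, in other words, a clock's altitude must be known to about a centimeter before two labs can meaningfully compare their ticking rates.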

Picking a winner

In truth, scientists disagree about how urgent the need for a redefinition is. Riehle argues that an early change would give scientists and engineers a chance to develop new applications. Conversely, his PTB colleague Peik reckons that in the absence of demanding new applications, the existing cesium infrastructure is perfectly adequate. Nevertheless, all agree that the next CGPM, due in 2022, will be too early, making the one after that—in 2026—the first realistic opportunity to approve the overhaul.

One fundamental issue that needs to be resolved is choosing the type of atom—and hence type of clock—that will come to define the second. The strong suit of lattice clocks is stability, given the large number of atoms—each in effect a timekeeper—involved. According to Gill, lattice clocks are typically about 10 times as stable (after a given running period) as ion clocks.

Figure: Laser beams are used to create an atomic clock involving a tiny three-dimensional cube of strontium atoms. [G.E. Marti/JILA]

Indeed, Ye and colleagues at JILA have made a clock from a quantum gas of strontium atoms trapped in a 3-D optical lattice that can reach a precision of 3×10⁻¹⁹ after only about two hours of operation—thanks to a high density of atoms within the trap. That, in turn, makes him optimistic that they can lower systematic uncertainties as well. But he admits that there are a couple of frequency shifts that could prove difficult to control—those due to the trapping light and to blackbody radiation.

Leibrandt reckons that the cleaner environment of a single ion—largely free from interactions with other particles—could prove decisive in the push for accuracy. He says it might be possible to further reduce the systematic uncertainty of aluminum ions by a factor of 10 over the next five years. Meanwhile, Tanja Mehlstäubler and others at PTB are developing symmetrical traps that they say could allow several tens, if not hundreds, of ions to be probed with systematic uncertainties as low as 10⁻¹⁹, raising the prospect of very stable ion clocks.

Gill maintains that it might still take several years for a winner to emerge between lattice and ion clocks (although he says that their superior stability might eventually give lattice clocks the edge). He points out that as well as being accurate and stable, optical clocks must also be easy to use if they are to become widespread. Nevertheless, he reckons, if the uncertainties of rival clocks differ by no more than a factor of 10, then choosing between them will be tough. “The community has to come to a consensus,” he says. “There will be a lot of to-ing and fro-ing.”


Edwin Cartlidge is a freelance science journalist based in Rome.

References and Resources

  • F. Riehle et al. “The CIPM list of recommended frequency standard values: Guidelines and procedures,” Metrologia 55, 188 (2018).

  • W.F. McGrew et al. “Atomic clock performance beyond the geodetic limit,” https://arxiv.org/abs/1807.11282 (2018).

  • C. Lisdat et al. “A clock network for geodesy and fundamental science,” Nat. Commun. 7, 12443 (2016).

  • H. Hachisu et al. “Months-long real-time generation of a time scale based on an optical clock,” Sci. Rep. 8, 4243 (2018).

  • S.L. Campbell et al. “A Fermi-degenerate three-dimensional optical lattice clock,” Science 358, 90 (2017).

  • J. Keller et al. “Probing time dilation in Coulomb crystals in a high-precision ion trap,” https://arxiv.org/abs/1712.02335 (2018).