It is an underappreciated irony that talking about complex systems is difficult. “There is no clear definition of complex systems,” says Kunihiko Kaneko, a physicist at the University of Tokyo. “But roughly speaking, there are many interacting elements, and they often show chaotic or dynamic behavior.”
This year, for the first time, the Nobel Prize in Physics was explicitly awarded for research in complex systems—including climate change. Half of the prize went to Syukuro “Suki” Manabe of Princeton University and Klaus Hasselmann of the Max Planck Institute for Meteorology in Hamburg, Germany, “for the physical modelling of Earth’s climate, quantifying variability and reliably predicting global warming.” The other half went to Giorgio Parisi of Sapienza University of Rome “for the discovery of the interplay of disorder and fluctuations in physical systems from atomic to planetary scales.”
For those who work in overlooked fields, the recognition was deeply meaningful. “I was very touched—I nearly cried actually—because I think it’s really a big moment,” says physicist Marc Mézard, head of the École Normale Supérieure in France and a colleague of Parisi’s. Climate scientists had similar thoughts about the award. “If anything, it’s long overdue,” says Shang-Ping Xie, an oceanographer at the Scripps Institution of Oceanography in San Diego.
Unfortunately, by grouping seemingly unrelated research under the vague umbrella of complex systems, the Nobel Committee for Physics puzzled many observers and led to hazy headlines such as “Physics Nobel Rewards Work on Climate Change, Other Forces.” What links these very different discoveries is, at first, far from clear. But a close examination reveals some connections—about the aims of science and how scientists can tackle seemingly intractable problems.
Throughout the 19th and much of the 20th century, physicists gained a greater appreciation for complicated microscopic and macroscopic systems full of random motion and disorder. The tools they developed (many of which Manabe, Hasselmann, Parisi and their peers still rely on) had broad applications, from calculating how much of the sun’s energy Earth absorbed to describing the movement of tiny grains of pollen on water to exploring the magnetic properties of theoretical materials.
The connection also has a philosophical element. Toward the end of its paper on the scientific background for the 2021 physics prize, the Nobel Committee for Physics concluded:
Without soberly probing the origins of variability we cannot understand the behavior of any system. Therefore, only after having considered these origins do we understand that global warming is real and attributable to our own activities, that a vast array of the phenomena we observe in nature emerge from an underlying disorder, and that embracing the noise and uncertainty is an essential step on the road towards predictability.
If that remains unsatisfying, it is worth considering that the committee itself is a complex system, full of uncertainty and disorder.
During the early days of quantum mechanics in the 1920s, physicists developed a simple model to describe magnets like the ones we stick to fridges today. In this “Ising model,” magnets are composed of a lattice of atoms, and every atom acts like a tiny magnet with a direction that is either up or down. If all the atomic magnets are aligned in one direction, they form a ferromagnet. If they alternate direction, they form an antiferromagnet.
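The distinction can be made concrete in a few lines of code. The sketch below is purely illustrative (a tiny six-atom chain with an arbitrary coupling strength, not any research model): it enumerates every configuration and finds the lowest-energy state for each sign of the coupling between neighbors.

```python
from itertools import product

# Toy 1-D Ising chain with open ends: E = -J * sum(s_i * s_{i+1}).
# J > 0 rewards aligned neighbors (a ferromagnet);
# J < 0 rewards alternating neighbors (an antiferromagnet).
def energy(spins, J):
    return -J * sum(a * b for a, b in zip(spins, spins[1:]))

states = list(product([-1, 1], repeat=6))  # all 64 configurations
ferro = min(states, key=lambda s: energy(s, J=+1.0))
anti = min(states, key=lambda s: energy(s, J=-1.0))

print(ferro)  # (-1, -1, -1, -1, -1, -1): all aligned
print(anti)   # (-1, 1, -1, 1, -1, 1): alternating
```

The all-up chain is an equally good ferromagnetic ground state; `min` simply returns the first minimal configuration it encounters.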
But nature had more in store than just two kinds of magnetism. In 1975, after several metal alloys were discovered to have strange magnetic behavior, the late theorists Philip Anderson and Sam Edwards proposed a new kind of magnet in which some pairs of atomic magnets were aligned while others were randomly antialigned. They called the new class of magnet a “spin glass” because its disordered magnetic orientations were thought to be analogous to the disordered arrangement of atoms in glass, which, unlike a crystal, has no regular structure.
Consider a group of three atomic magnets in a triangle: if adjacent magnets must be antialigned, two can fulfill the condition, but the third is left in limbo. This impossible situation meant spin glass was “frustrated,” with no single arrangement that satisfies every pair. Because the inherent disorder could manifest as a vast number of nearly unpredictable states, physicists instead calculated spin glass’s properties by averaging over many identical copies, or “replicas,” of the system: the so-called replica trick.
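Frustration is easy to see by brute force. This minimal sketch (an illustration with an arbitrary coupling strength, not Anderson and Edwards’s actual model) checks all eight configurations of three spins on a triangle: no arrangement antialigns every pair, and the best the system can do leaves exactly one bond unsatisfied, in six equally good ways.

```python
from itertools import product

# Three Ising spins on a triangle with antiferromagnetic coupling J > 0.
# Energy E = J * sum(s_i * s_j) over the three bonds; a bond is
# "satisfied" when its two spins are antialigned (s_i * s_j = -1).
J = 1.0
bonds = [(0, 1), (1, 2), (2, 0)]

def energy(spins):
    return J * sum(spins[i] * spins[j] for i, j in bonds)

states = list(product([-1, 1], repeat=3))
ground = min(energy(s) for s in states)
ground_states = [s for s in states if energy(s) == ground]

print(ground)               # -1.0: one bond is always left aligned
print(len(ground_states))   # 6 degenerate ground states
```

That sixfold degeneracy on a single triangle hints at what happens in a macroscopic sample: the number of equally good configurations explodes.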
“The result turned out to violate various thermodynamic principles,” says New York University physicist Daniel Stein. “So clearly, that was not the correct solution.” The problem was that the replicas were not equivalent. Their symmetry, or sameness, was broken.
In 1979 Parisi made a breakthrough with “replica symmetry breaking.” The math is heady, so Stein uses a physical example: Imagine you have a strand of protein in a solution. As you lower the temperature, that same protein can freeze out and crystallize into a vast number of “ground states,” or configurations, each subtly different from the others. Strangely enough, accounting for the infinite number of ground states of spin glass worked, and Parisi’s calculations made sense.
“Then people got very excited,” Stein says. “Has this cracked the problem of disordered systems?” Researchers in a variety of other disciplines—computer science, neuroscience and even evolutionary biology—found Parisi’s solution compelling because it proposed a rigorous, novel way to think about the many configurations of disordered systems. For example, it gave a new look to optimization questions, such as the traveling salesman problem, and the science of neural connections.
The solution is an example of order from disorder. “[Spin glass] is as random as you can get. And yet from that comes a kind of order that I think nobody would have guessed,” Stein says. The ground states are all different, but they are connected to one another in an orderly way because they all descend from one higher energy state.
Parisi did not close the book on spin glass, and many questions about its properties remain, including about how well replica symmetry breaking works when there are a finite number of ground states.
Our world is not just disordered but chaotic. Small changes to the initial conditions of systems such as the weather can have profound effects. In the famous adage about the butterfly effect, the flap of wings from a butterfly in, say, Africa can aerodynamically affect the formation of a hurricane in, say, the Atlantic.
“[Manabe] was trained as a meteorologist,” says Tony Broccoli, an atmospheric scientist at Rutgers University. “He was thinking about these complex systems.” When Manabe began working on climate modeling in the 1960s, he had to grapple with simplifying many such systems into something the computers of the day could handle.
In 1967 Manabe and Richard Wetherald published the first computer model of climate sensitivity to fluctuating atmospheric levels of carbon dioxide, the main culprit in human-made global warming. To approximate the climate, they simulated a single column of air and calculated how radiation and convection together determine the temperature at each altitude.
“You can get a lot of misleading results by just thinking about the energy balance of the surface of Earth,” Broccoli says. “So taking into account the entire air column was really crucial for getting the right answer.” With their simple model, Manabe and Wetherald predicted that doubling the atmospheric concentration of carbon dioxide would result in a 2.4-degree-Celsius increase in global temperatures. Even though it was a limited model without complex feedback mechanisms, such as those for clouds, their answer was remarkably similar to modern predictions made via far more sophisticated methods.
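The headline number can be illustrated with a back-of-the-envelope relation (not Manabe and Wetherald’s radiative-convective calculation): in simple energy-balance reasoning, warming grows with the logarithm of the CO2 concentration, so every doubling adds roughly the same increment. The sketch below plugs in the 2.4-degree-C-per-doubling sensitivity from their model and an assumed preindustrial baseline of 280 parts per million.

```python
import math

# Toy energy-balance illustration: warming scales with the logarithm of
# the CO2 concentration, so each doubling adds a fixed increment.
# The 2.4 C-per-doubling figure is the sensitivity quoted in the text;
# the 280 ppm baseline is an assumed preindustrial value.
SENSITIVITY = 2.4  # degrees C of warming per doubling of CO2

def warming(c, c0=280.0):
    """Equilibrium warming for CO2 concentration c (ppm) vs. baseline c0."""
    return SENSITIVITY * math.log2(c / c0)

print(round(warming(560.0), 2))  # one doubling: 2.4
print(round(warming(420.0), 2))  # roughly today's level: 1.4
```

The logarithm is why scientists quote sensitivity “per doubling”: going from 280 to 560 ppm warms this toy model exactly as much as going from 560 to 1,120 ppm.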
A few years later Manabe introduced the first computerized global model of Earth’s climate, which has applications far beyond sensitivity to carbon dioxide and has been used to predict phenomena such as El Niño.
Whereas Manabe worked to minimize the effects of noisy weather in climate models, Hasselmann instead brought that noise to the fore. He was inspired, in part, by the work of 19th-century Scottish botanist Robert Brown, who in 1827 reported the bizarre dance of pollen grains in a quiescent water droplet viewed through a microscope. Eight decades later Albert Einstein supplied a mechanism for this “Brownian motion”: despite the water’s apparent stillness, the grains moved because they were constantly jostled to and fro by innumerable tiny, random collisions with atoms and molecules.
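Einstein’s picture is simple enough to simulate directly. In this sketch (illustrative parameters, arbitrary unit step size), each “pollen grain” takes random kicks left or right; averaging over many grains shows the mean squared displacement growing in proportion to time, the diffusive signature Einstein derived.

```python
import random

random.seed(0)  # reproducible runs

def msd(steps, walkers=2000):
    """Mean squared displacement of many 1-D random walkers."""
    total = 0.0
    for _ in range(walkers):
        x = 0.0
        for _ in range(steps):
            x += random.choice([-1.0, 1.0])  # one random molecular kick
        total += x * x
    return total / walkers

m100 = msd(100)
m400 = msd(400)
print(m100, m400)  # roughly 100 and 400: MSD grows linearly with time
```

No single grain moves predictably, yet the average over many grains obeys a clean law, another instance of order emerging from disorder.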
Hasselmann wondered if the climate was a bit like those pollen grains and if weather was like the ceaselessly restless atoms. If this was true, then the climate had an internal variability because of random weather, independent of any external force such as the warming rays from the sun. In 1976 Hasselmann showed mathematically that the slow components of the climate, such as the oceans, integrate the fast, random kicks of weather, generating long-term variability of their own. Critically, by accounting for natural climate variability, his work helped climate scientists characterize how much warming was truly anthropogenic.
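The mechanism can be sketched in a few lines (a cartoon with arbitrary damping and noise values, not Hasselmann’s 1976 model itself). A slow variable, standing in for ocean temperature, is nudged every step by white-noise “weather” and weakly damped back toward equilibrium. Integrating the noise makes the slow variable wander far beyond the size of any single kick, producing long-term swings with no external forcing at all.

```python
import random

random.seed(1)  # reproducible runs

def simulate(steps=10_000, damping=0.01, noise=1.0):
    """Slow climate variable driven by white-noise weather (an AR(1) process)."""
    t, path = 0.0, []
    for _ in range(steps):
        # weak pull back toward equilibrium, plus one random weather kick
        t = (1.0 - damping) * t + noise * random.gauss(0.0, 1.0)
        path.append(t)
    return path

path = simulate()
peak = max(abs(x) for x in path)
print(peak > 5.0)  # True: excursions far exceed any single unit-sized kick
```

The resulting record is dominated by slow, large-amplitude swings, the “red” spectrum of internal variability that Hasselmann predicted the climate should show even without any external push.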
“If you don’t understand internal variability, it’s really hard to say that you understand how the climate changes,” says Jin-Song von Storch, a climate scientist at the University of Hamburg in Germany.
The effect of this internal variability can be large: Xie estimates that in some cases, without accounting for the sorts of variability Hasselmann’s work helped constrain, calculations of warming could be off by as much as 25 percent.
Alfred Nobel’s will states that his prizes should go to those who “have conferred the greatest benefit to humankind.” In addition to their focus on white, European and American men, most of this past century’s physics Nobels have rewarded advances—such as the discovery of dark energy or the Higgs boson—that deeply inform our sense of place in the universe while offering little if any apparent practical value.
This year’s announcement suggests another possibility. “Physics applied to the greatest benefits of humanity, to me, is fundamental,” Xie says.