
BREAKING: Scientists have just found the 'missing' 70% of the universe

It makes up most of the universe, yet hardly anything is known about the so-called Dark Energy that envelops us.


The unusual 'something' – one of the great mysteries of cosmology – is believed to be an unknown force that is pushing things apart more strongly than gravity and causing the universe's expansion to accelerate.


Now, scientists from Imperial College London believe they may finally have an explanation for the source of unknown energy — black holes.


Dr Chris Pearson, study co-author, said: 'If the theory holds, then this is going to revolutionize the whole of cosmology.


'At last we've got a solution for the origin of dark energy that's been perplexing cosmologists and theoretical physicists for more than 20 years.'

 


Breakthrough: Scientists have found the first evidence that black holes are the source of dark energy. They studied galaxies and the supermassive black holes at the heart of them. Pictured is NGC 1316, a lenticular galaxy about 60 million light-years away in the constellation Fornax


Astronomers have always believed that Dark Energy could revolutionize physics as we know it, because its discovery 20 years ago threatened to blow Albert Einstein's research out of the water.


The legendary physicist can rest easy for now, however, because the team of international researchers have found the first evidence that Dark Energy really does fit with his Theory of Relativity after all.


The main stumbling block to Einstein's science – black holes and how their extremely strong gravity could be opposed by a secret force expanding the universe – has been dismissed by experts at the University of Hawaii and Imperial.


They say that the Theory of General Relativity is correct because black holes actually contain Dark Energy, or the energy from space that Einstein originally predicted.


It also means that nothing 'new' or undiscovered has to be added to our picture of the universe to account for the 'missing' 68 per cent that Dark Energy equates to.


Not only that, but the idea of black holes having a 'singularity' at their centre where nothing – not even light – can escape, has been brought into question, too.


The new theory provides a way to 'circumvent' this mathematical problem, the researchers say, by making the idea of a singularity 'go away'.


Confused? 


Essentially, the Big Bang theory of the creation of our universe originally predicted that its expansion would slow down – or even begin to contract – because of the pull of gravity.


But in 1998, astronomers were surprised to find that not only was the universe still expanding, this expansion was also accelerating. 


To account for this discovery, it was proposed that a 'Dark Energy' was responsible for pushing things apart more strongly than gravity. 


This was linked to a concept Einstein had proposed but later discarded — a 'cosmological constant' that opposed gravity and kept the universe from collapsing. 


Black holes posed a problem though — their extremely strong gravity is hard to oppose, especially at their centres, where everything seems to break down in a phenomenon called a 'singularity'.


To dig deeper into the problem, a team of 17 researchers from nine countries studied nine billion years of black hole evolution. 


They observed ancient and dormant galaxies and found that black holes gain mass in a way that is consistent with them containing vacuum energy, or Dark Energy.


In fact, the size of the universe at different points in time fitted closely with the mass of supermassive black holes at the heart of galaxies.


In other words, the amount of Dark Energy in the universe can be accounted for by black hole vacuum energy — meaning black holes are the source of dark energy.



Mystery: The idea of black holes having a 'singularity' at their centre where nothing – not even light – can escape, has been brought into question by the research, too. Pictured is the supermassive black hole at the heart of galaxy M87


It means that nothing 'new' or undiscovered has to be added to our picture of the universe to account for the 'missing' 68 per cent that Dark Energy equates to. Pictured is NGC 524, a lenticular galaxy in the constellation Pisces about 90 million light-years away from Earth



The scientists observed ancient and dormant galaxies and found that black holes gain mass in a way that is consistent with them containing vacuum energy, or Dark Energy. Pictured is NGC 4150, an elliptical galaxy 45 million light years away in the constellation Coma Berenices


Such is the excitement and significance of the discovery, the scientists responsible for the new research said they 'may have found the answer to one of the biggest problems in cosmology'.

Study co-author Dr Dave Clements, from the Department of Physics at Imperial, said: 'This is a really surprising result. We started off looking at how black holes grow over time, and may have found the answer to one of the biggest problems in cosmology.'


The research is the first observational evidence that black holes actually contain vacuum energy and are 'coupled' to the expansion of the universe, increasing in mass as the universe expands — a phenomenon called 'cosmological coupling'. 


Black holes are formed when massive stars come to the end of their life and are known as supermassive ones when found at the centres of galaxies. 


These contain millions to billions of times the mass of our sun inside them in a comparatively small space, creating extremely strong gravity.
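
For a sense of just how 'comparatively small' that space is, the standard Schwarzschild formula r_s = 2GM/c^2 gives the radius of the event horizon. A minimal sketch, assuming a four-million-solar-mass black hole (roughly the mass usually quoted for the one at the centre of our own galaxy):

```python
# Minimal sketch: Schwarzschild radius r_s = 2GM/c^2 for a supermassive black hole.
# The 4-million-solar-mass figure is an illustrative assumption (roughly Sagittarius A*).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

M = 4e6 * M_sun                      # assumed black hole mass
r_s = 2 * G * M / c**2               # Schwarzschild radius in metres

print(f"Schwarzschild radius: {r_s/1e9:.1f} million km "
      f"({r_s/1.496e11:.2f} astronomical units)")
# Millions of solar masses fit inside a region far smaller than Mercury's orbit.
```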


Black holes can increase in size by accreting matter, such as by swallowing stars that get too close, or by merging with other black holes.


However, the researchers found that this increase in mass was 'significantly bigger' than what could be explained purely by astrophysical processes such as merging, which led them to the theory that black holes contain Dark Energy and are linked to the expansion of the universe.
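
The growth law behind that claim can be written down compactly: the papers parameterize a cosmologically coupled black hole's mass as scaling with the universe's scale factor, M proportional to a^k, where k of about 3 corresponds to an interior made of vacuum energy and is close to the best-fit value reported. The sketch below simply evaluates that relation; the seed mass and the redshift of the 'ancient' galaxies are illustrative assumptions, not numbers taken from the studies.

```python
# Minimal sketch of the cosmological-coupling growth law M(a) = M_i * (a / a_i)**k.
# k = 3 is the value expected if black holes contain vacuum energy; the z = 0.7 redshift
# and the seed mass are assumed illustrative numbers, not values from the papers.
def coupled_mass(M_initial, z_initial, z_final=0.0, k=3.0):
    """Mass of a cosmologically coupled black hole, growing with the scale factor a = 1/(1+z)."""
    a_initial = 1.0 / (1.0 + z_initial)
    a_final = 1.0 / (1.0 + z_final)
    return M_initial * (a_final / a_initial) ** k

M_then = 1e8                                  # assumed seed mass, in solar masses
M_now = coupled_mass(M_then, z_initial=0.7)
print(f"Growth factor since z = 0.7: {M_now / M_then:.1f}x")   # (1.7)**3 is about 4.9
```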


If further observations confirm it, cosmological coupling will redefine our understanding of what a black hole is, the astronomers added.


Study author Duncan Farrah, from the University of Hawaii, said: 'We're really saying two things at once: that there's evidence the typical black hole solutions don't work for you on a long, long timescale, and we have the first proposed astrophysical source for dark energy.

'What that means, though, is not that other people haven't proposed sources for dark energy, but this is the first observational paper where we're not adding anything new to the universe as a source for dark energy: black holes in Einstein's theory of gravity are the dark energy.'


The work is published in two papers in the journals The Astrophysical Journal and The Astrophysical Journal Letters.

NASA Admits To Discovering 'Something Weird' Happening To Our Universe


The Hubble Space Telescope has reportedly reached a new milestone in its quest to measure the speed at which the universe is expanding, and it strongly suggests that something strange is going on in our cosmos.


Astronomers have recently utilised telescopes like Hubble to measure how quickly the cosmos is expanding.


However, as the data were refined, a peculiar finding emerged: there is a considerable gap between the expansion rate predicted from observations of the universe's earliest moments and the rate actually measured in the universe today.


Scientists have been unable to explain the discrepancy. However, it shows that "something weird" is going on in our cosmos, which might be the product of undiscovered, new physics, according to NASA.


Hubble has spent the last 30 years collecting data on a set of "milepost markers" in space and time that can be used to trace how quickly the cosmos is expanding away from us.


According to NASA, it has now calibrated more than 40 of the markers, allowing for even more precision than previously.


In a statement, Nobel Laureate Adam Riess of the Space Telescope Science Institute (STScI) and Johns Hopkins University in Baltimore, Maryland, stated, "You are getting the most precise measure of the expansion rate for the universe from the gold standard of telescopes and cosmic mile markers."


He is the leader of a group of scientists who have released a new research paper detailing the largest and most likely last significant update from the Hubble Space Telescope, tripling the previous set of mile markers and reanalysing existing data.


The hunt for a precise estimate of how quickly space was expanding began when American astronomer Edwin Hubble saw that galaxies beyond our own appeared to be moving away from us – and moving faster the further away they are. Since then, scientists have been working to gain a deeper understanding of that growth.


In honour of the astronomer's work, both the rate of expansion (the Hubble constant) and the space telescope that has been studying it are named after him.


When the space telescope began gathering data on the universe's expansion, it was discovered to be faster than models had expected. Astronomers anticipate that it should be approximately 67.5 kilometres per second per megaparsec, plus or minus 0.5, while measurements suggest that it is closer to 73.
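
To see what that gap means in practice, Hubble's law, v = H0 x d, turns the two candidate expansion rates into two different recession velocities for the same galaxy. A minimal sketch, with an arbitrarily chosen 100-megaparsec test distance:

```python
# Minimal sketch: recession velocity under Hubble's law, v = H0 * d, for the two
# expansion rates quoted above. The 100 Mpc test distance is an arbitrary choice.
H0_predicted = 67.5   # km/s per megaparsec, expected from early-universe physics
H0_measured = 73.0    # km/s per megaparsec, from Hubble's distance-ladder measurements
distance_mpc = 100.0  # assumed distance to a test galaxy, in megaparsecs

v_predicted = H0_predicted * distance_mpc   # 6750 km/s
v_measured = H0_measured * distance_mpc     # 7300 km/s
print(f"Predicted: {v_predicted:.0f} km/s, measured: {v_measured:.0f} km/s, "
      f"difference: {v_measured - v_predicted:.0f} km/s "
      f"(about {100 * (v_measured / v_predicted - 1):.0f}%)")
```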


The chance that this mismatch is merely a statistical fluke is about one in a million, astronomers say. Instead, it implies that the universe's growth and expansion are more intricate than we previously thought, and that there is still much to discover about how the cosmos is changing.


The newly launched James Webb Space Telescope, which will soon send back its first observations, will be used by scientists to dig deeper into this mystery. It should reveal cosmic milepost markers that are farther away, and show them in sharper detail.


Reference(s): NASA

This Star is older than the Universe


The oldest known star appears to be older than the cosmos itself, but a new study is helping to solve this apparent mystery.


An earlier study estimated that the so-called "Methuselah star" in the Milky Way galaxy is up to 16 billion years old. This is a concern because most scientists think that the Big Bang that created the universe occurred approximately 13.8 billion years ago. A team of astronomers has now calculated a new, less absurd age for the Methuselah star by combining data on its distance, brightness, composition, and structure.


"Put all of those constituents together, and you get an age of 14.5 billion years, with a remaining doubt that makes the star's age compatible with the age of the cosmos," said study lead author Howard Bond of Pennsylvania State University and the Space Telescope Science Institute in Baltimore in a statement.


Bond refers to an uncertainty of 800 million years, which indicates the star may be 13.7 billion years old – younger than the cosmos as it is currently understood, but only just.


A mysterious and swiftly moving star:


Bond and his colleagues studied the Methuselah star, also known as HD 140283, using NASA's Hubble Space Telescope. HD 140283 has been known to scientists for more than a century because it travels across the sky at a relatively rapid pace. According to astronomers, the star moves at roughly 800,000 mph (1.3 million km/h) and covers the width of the full moon in the sky every 1,500 years or so.



The star is simply passing through Earth's galactic neck of the woods and will eventually rocket back out to the Milky Way's halo, a population of early stars that surrounds the galaxy's well-known spiral disc. According to astronomers, the Methuselah star, which is only now evolving into a red giant, was likely born in a dwarf galaxy that the young Milky Way devoured more than 12 billion years ago. The star's extended, looping orbit could be a leftover from that act of galactic cannibalism.


The difference is determined by distance:


The astrophysicists were able to refine the distance to HD 140283 by employing the principle of parallax, in which a change in an observer's position — in this case, Hubble's changing location in Earth orbit — results in a shift in the apparent position of an object.


Methuselah is 190.1 light-years away, they discovered. With the star's distance more precisely determined, the team was able to calculate Methuselah's intrinsic brightness, which was required for measuring its age.
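
The parallax arithmetic itself is simple: a star's distance in parsecs is the reciprocal of its parallax angle in arcseconds, d = 1/p. Running that backwards from the distance quoted above gives a feel for how tiny an angular wobble Hubble had to pin down (a minimal sketch; only the unit conversion is assumed):

```python
# Minimal sketch: the parallax-distance relation d [parsec] = 1 / p [arcsec],
# run backwards from the ~190 light-year distance quoted above.
LY_PER_PARSEC = 3.2616          # light-years in one parsec

distance_ly = 190.1
distance_pc = distance_ly / LY_PER_PARSEC
parallax_arcsec = 1.0 / distance_pc
print(f"Distance: {distance_pc:.1f} pc -> parallax of about "
      f"{parallax_arcsec * 1000:.1f} milliarcseconds")
# Roughly 17 thousandths of an arcsecond: a tiny shift, which is why Hubble's precision mattered.
```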


The researchers also used current theory to learn more about the Methuselah star's burn rate, composition, and internal structure, all of which shed light on its likely age. For example, HD 140283 has a relatively high oxygen-to-iron ratio, which brings the star's age down from some earlier estimates, according to astronomers.


Finally, astrophysicists calculated that HD 140283 was born 14.5 billion years ago, plus or minus 800 million years. Additional research could help reduce the age of the Methuselah star even further, making it demonstrably younger than the cosmos, according to scientists.

BREAKING: The Physics Nobel Prize Winner of 2022 just Proved that the "Universe is actually not real"


The fact that the universe is not locally real is one of the more disquieting discoveries of the last half-century.

"Real," which indicates that objects have defined attributes independent of observation—for example, an apple can be red even when no one is looking; "local," which means that objects can only be impacted by their surroundings, and that any effect cannot travel faster than light.

Quantum physics researchers have discovered that these concepts cannot both be true. Instead, the evidence suggests that objects are not only influenced by their surroundings, and that they may lack distinct properties prior to measurement. "Do you honestly believe the moon isn't there when you're not gazing at it?" Albert Einstein famously asked a friend.


This is, of course, deeply contrary to our everyday experiences. To paraphrase Douglas Adams, the demise of local realism has made a lot of people very angry and been widely regarded as a bad move.


The accomplishment has now been attributed to three physicists: John Clauser, Alain Aspect, and Anton Zeilinger. They were awarded the Nobel Prize in Physics in 2022 in equal parts "for experiments with entangled photons, establishing the violation of Bell inequalities, and pioneering quantum information science." ("Bell inequalities" alludes to the early 1960s pioneering work of Northern Irish physicist John Stewart Bell, who established the groundwork for this year's Physics Nobel.) Colleagues felt that the trio deserved this recognition for upending reality as we know it. "This is wonderful news. It had been a long time coming," Sandu Popescu, a quantum physicist at the University of Bristol, says. "There is no doubt that the award is well-deserved."

“The experiments beginning with the earliest one of Clauser and continuing along, show that this stuff isn’t just philosophical, it’s real—and like other real things, potentially useful,” says Charles Bennett, an eminent quantum researcher at IBM. 

“Each year I thought, ‘oh, maybe this is the year,’” says David Kaiser, a physicist and historian at the Massachusetts Institute of Technology. “This year, it really was. It was very emotional—and very thrilling.”


Quantum foundations’ journey from fringe to favor was a long one. From about 1940 until as late as 1990, the topic was often treated as philosophy at best and crackpottery at worst. Many scientific journals refused to publish papers in quantum foundations, and academic positions indulging such investigations were nearly impossible to come by. In 1985, Popescu’s advisor warned him against a Ph.D. in the subject. 


“He said ‘look, if you do that, you will have fun for five years, and then you will be jobless,’” Popescu says.


Today, quantum information science is among the most vibrant and impactful subfields in all of physics. It links Einstein’s general theory of relativity with quantum mechanics via the still-mysterious behavior of black holes. It dictates the design and function of quantum sensors, which are increasingly being used to study everything from earthquakes to dark matter. And it clarifies the often-confusing nature of quantum entanglement, a phenomenon that is pivotal to modern materials science and that lies at the heart of quantum computing.


“What even makes a quantum computer ‘quantum’?” Nicole Yunger Halpern, a National Institute of Standards and Technology physicist, asks rhetorically. “One of the most popular answers is entanglement, and the main reason why we understand entanglement is the grand work participated in by Bell and these Nobel Prize–winners. Without that understanding of entanglement, we probably wouldn’t be able to realize quantum computers.”


WHOM DOES THE BELL RING?


The trouble with quantum mechanics was never that it made the wrong predictions—in fact, the theory described the microscopic world splendidly well right from the start when physicists devised it in the opening decades of the 20th century.


What Einstein, Boris Podolsky and Nathan Rosen took issue with, laid out in their iconic 1935 paper, was the theory’s uncomfortable implications for reality. Their analysis, known by their initials EPR, centered on a thought experiment meant to illustrate the absurdity of quantum mechanics; to show how under certain conditions the theory can break—or at least deliver nonsensical results that conflict with everything else we know about reality.

A simplified and modernized version of EPR goes something like this: Pairs of particles are sent off in different directions from a common source, targeted for two observers, Alice and Bob, each stationed at opposite ends of the solar system. Quantum mechanics dictates that it is impossible to know the spin, a quantum property of individual particles, prior to measurement. When Alice measures one of her particles, she finds its spin to be either up or down. Her results are random, and yet, when she measures up, she instantly knows Bob’s corresponding particle must be down. At first glance, this is not so odd; perhaps the particles are like a pair of socks—if Alice gets the right sock, Bob must have the left.


But under quantum mechanics, particles are not like socks, and only when measured do they settle on a spin of up or down. This is EPR’s key conundrum: If Alice’s particles lack a spin until measurement, how then when they whiz past Neptune do they know what Bob’s particles will do as they fly out of the solar system in the other direction?

Each time Alice measures, she effectively quizzes her particle on what Bob will get if he flips a coin: up, or down? The odds of correctly predicting this even 200 times in a row are 1 in 10^60—a number greater than all the atoms in the solar system. Yet despite the billions of kilometers that separate the particle pairs, quantum mechanics says Alice’s particles can keep correctly predicting, as though they were telepathically connected to Bob’s particles.
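
That 1-in-10^60 figure is just coin-flip arithmetic: guessing a fair coin correctly 200 times in a row has probability (1/2)^200. A quick check:

```python
# Quick check of the odds quoted above: guessing 200 fair coin flips in a row
# has probability (1/2)**200, i.e. about 1 in 10**60.
odds = 2 ** 200
print(f"2^200 = {odds:.3e}")                    # about 1.607e+60
print(f"roughly 1 in 10^{len(str(odds)) - 1}")  # roughly 1 in 10^60
```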


Although intended to reveal the imperfections of quantum mechanics, when real-world versions of the EPR thought experiment are conducted the results instead reinforce the theory’s most mind-boggling tenets. Under quantum mechanics, nature is not locally real—particles lack properties such as spin up or spin down prior to measurement, and seemingly talk to one another no matter the distance.


Physicists skeptical of quantum mechanics proposed that there were “hidden variables,” factors that existed in some imperceptible level of reality beneath the subatomic realm and that contained information about a particle’s future state. They hoped that, in hidden-variable theories, nature could recover the local realism denied to it by quantum mechanics.


“One would have thought that the arguments of Einstein, Podolsky and Rosen would produce a revolution at that moment, and everybody would have started working on hidden variables,” Popescu says.


Einstein’s “attack” on quantum mechanics, however, did not catch on among physicists, who by and large accepted quantum mechanics as is. This was often less a thoughtful embrace of nonlocal reality, and more a desire to not think too hard while doing physics—a head-in-the-sand sentiment later summarized by the physicist David Mermin as a demand to “shut up and calculate.”


The lack of interest was driven in part because John von Neumann, a highly regarded scientist, had in 1932 published a mathematical proof ruling out hidden-variable theories. (Von Neumann’s proof, it must be said, was refuted just three years later by a young female mathematician, Grete Hermann, but at the time no one seemed to notice.)


Quantum mechanics’ problem of nonlocal realism would languish in a complacent stupor for another three decades until being decisively shattered by Bell. From the start of his career, Bell was bothered by the quantum orthodoxy and sympathetic toward hidden variable theories. Inspiration struck him in 1952, when he learned of a viable nonlocal hidden-variable interpretation of quantum mechanics devised by fellow physicist David Bohm—something von Neumann had claimed was impossible. Bell mulled the ideas over for years, as a side project to his main job working as a particle physicist at CERN.


In 1964, Bell rediscovered the same flaws in von Neumann’s argument that Hermann had. And then, in a triumph of rigorous thinking, Bell concocted a theorem that dragged the question of hidden variables from its metaphysical quagmire onto the concrete ground of experiment.


Normally, hidden-variable theories and quantum mechanics predict indistinguishable experimental outcomes. What Bell realized is that under precise circumstances, an empirical discrepancy between the two can emerge. In the eponymous Bell test (an evolution of the EPR thought experiment), Alice and Bob receive the same paired particles, but now they each have two different detector settings—A and a, B and b. These detector settings allow Alice and Bob to ask the particles different questions; an additional trick to throw off their apparent telepathy. In local hidden-variable theories, where their state is preordained and nothing links them, particles cannot outsmart this extra step, and they cannot always achieve the perfect correlation where Alice measures spin down when Bob measures spin up (and vice versa). But in quantum mechanics, particles remain connected and far more correlated than they could ever be in local hidden-variable theories. They are, in a word, entangled.


Measuring the correlation multiple times for many particle pairs, therefore, could prove which theory was correct. If the correlation remained below a limit derived from Bell’s theorem, this would suggest hidden variables were real; if it exceeded Bell’s limit, then the mind-boggling tenets of quantum mechanics would reign supreme. And yet, in spite of its potential to help determine the very nature of reality, after being published in a relatively obscure journal Bell’s theorem languished unnoticed for years.
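
To make the numbers concrete, the sketch below plays out a CHSH-style version of the test just described: a crude local hidden-variable model tops out at a correlation score of 2 (the limit derived from Bell's theorem), while the quantum prediction for entangled particles reaches 2*sqrt(2), about 2.83. The detector angles and the toy hidden-variable rule are illustrative choices, not a reconstruction of any historical experiment.

```python
import numpy as np

# CHSH score S = E(a,b) - E(a,b') + E(a',b) + E(a',b').
# Local hidden-variable models obey |S| <= 2; quantum mechanics reaches 2*sqrt(2).
a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4   # assumed detector angles

def E_quantum(x, y):
    """Quantum correlation for a spin singlet measured at angles x and y."""
    return -np.cos(x - y)

rng = np.random.default_rng(0)
lam = rng.uniform(0, 2 * np.pi, 500_000)                  # shared hidden variable

def E_hidden(x, y):
    """Toy local hidden-variable model: each outcome depends only on its own angle and lam."""
    A = np.sign(np.cos(x - lam))
    B = -np.sign(np.cos(y - lam))
    return np.mean(A * B)

def chsh(E):
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

print(f"hidden variables: |S| = {abs(chsh(E_hidden)):.3f} (classical bound is 2)")
print(f"quantum:          |S| = {abs(chsh(E_quantum)):.3f} (2*sqrt(2) is about 2.828)")
```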


THE BELL IS RINGING FOR THEE


In 1967, John Clauser, then a graduate student at Columbia University, accidentally stumbled across a library copy of Bell’s paper and became enthralled by the possibility of proving hidden-variable theories correct. Clauser wrote to Bell two years later, asking if anyone had actually performed the test. Clauser’s letter was among the first feedback Bell had received.


With Bell’s encouragement, five years later Clauser and his graduate student Stuart Freedman performed the first Bell test. Clauser had secured permission from his supervisors, but little in the way of funds, so he became, as he said in a later interview, adept at “dumpster diving” to secure equipment—some of which he and Freedman then duct-taped together. In Clauser’s setup—a kayak-sized apparatus requiring careful tuning by hand—pairs of photons were sent in opposite directions toward detectors that could measure their state, or polarization.


Unfortunately for Clauser and his infatuation with hidden variables, once he and Freedman completed their analysis, they could not help but conclude that they had found strong evidence against them. Still, the result was hardly conclusive, because of various “loopholes” in the experiment that conceivably could allow the influence of hidden variables to slip through undetected. The most concerning of these was the locality loophole: if either the photon source or the detectors could have somehow shared information (a plausible feat within the confines of a kayak-sized object), the resulting measured correlations could still emerge from hidden variables. As Kaiser puts it pithily, if Alice tweets at Bob which detector setting she’s in, that interference makes ruling out hidden variables impossible.


Closing the locality loophole is easier said than done. The detector setting must be quickly changed while photons are on the fly—“quickly” meaning in a matter of mere nanoseconds. In 1976, a young French expert in optics, Alain Aspect, proposed a way for doing this ultra-speedy switch. His group’s experimental results, published in 1982, only bolstered Clauser’s results: local hidden variables looked extremely unlikely. 


“Perhaps Nature is not so queer as quantum mechanics,” Bell wrote in response to Aspect’s initial results. “But the experimental situation is not very encouraging from this point of view.”


Other loopholes, however, still remained—and, alas, Bell died in 1990 without witnessing their closure. Even Aspect’s experiment had not fully ruled out local effects because it took place over too small a distance. Similarly, as Clauser and others had realized, if Alice and Bob were not ensured to detect an unbiased representative sample of particles, they could reach the wrong conclusions.


No one pounced to close these loopholes with more gusto than Anton Zeilinger, an ambitious, gregarious Austrian physicist. In 1998, he and his team improved on Aspect’s earlier work by conducting a Bell test over a then-unprecedented distance of nearly half a kilometer. The era of divining reality’s nonlocality from kayak-sized experiments had drawn to a close. Finally, in 2013, Zeilinger’s group took the next logical step, tackling multiple loopholes at the same time.


“Before quantum mechanics, I actually was interested in engineering. I like building things with my hands,” says Marissa Giustina, a quantum researcher at Google who worked with Zeilinger.  “In retrospect, a loophole-free Bell experiment is a giant systems-engineering project.” 


One requirement for creating an experiment closing multiple loopholes was finding a perfectly straight, unoccupied 60-meter tunnel with access to fiber optic cables. As it turned out, the dungeon of Vienna’s Hofburg palace was an almost ideal setting—aside from being caked with a century’s worth of dust. Their results, published in 2015, coincided with similar tests from two other groups that also found quantum mechanics as flawless as ever.


BELL'S TEST GOES TO THE STARS


One great final loophole remained to be closed, or at least narrowed. Any prior physical connection between components, no matter how distant in the past, has the possibility of interfering with the validity of a Bell test’s results. If Alice shakes Bob’s hand prior to departing on a spaceship, they share a past. It is seemingly implausible that a local hidden-variable theory would exploit these loopholes, but still possible.


In 2017, a team including Kaiser and Zeilinger performed a cosmic Bell test. Using telescopes in the Canary Islands, the team sourced its random decisions for detector settings from stars sufficiently far apart in the sky that light from one would not reach the other for hundreds of years, ensuring a centuries-spanning gap in their shared cosmic past. Yet even then, quantum mechanics again proved triumphant.


One of the principal difficulties in explaining the importance of Bell tests to the public—as well as to skeptical physicists—is the perception that the veracity of quantum mechanics was a foregone conclusion. After all, researchers have measured many key aspects of quantum mechanics to a precision of better than 10 parts in a billion. 


“I actually didn’t want to work on it. I thought, like, ‘Come on; this is old physics. We all know what’s going to happen,’” Giustina says. 


But the accuracy of quantum mechanics could not rule out the possibility of local hidden variables; only Bell tests could do that.


“What drew each of these Nobel recipients to the topic, and what drew John Bell himself, to the topic was indeed [the question], ‘Can the world work that way?’” Kaiser says. “And how do we really know with confidence?” What Bell tests allow physicists to do is remove the bias of anthropocentric aesthetic judgments from the equation; purging from their work the parts of human cognition that recoil at the possibility of eerily inexplicable entanglement, or that scoff at hidden-variable theories as just more debates over how many angels may dance on the head of a pin. The award honors Clauser, Aspect and Zeilinger, but it is testament to all the researchers who were unsatisfied with superficial explanations about quantum mechanics, and who asked their questions even when doing so was unpopular.

“Bell tests,” Giustina concludes, “are a very useful way of looking at reality.”


Image Description: John Stewart Bell (1928-1990), the Northern Irish physicist whose work sparked a quiet revolution in quantum physics. Credit: Peter Menzel/Science Source

Study claims an ‘anti-universe’ where time is backwards may exist next to ours

According to a new study, an “anti-universe” where time goes backwards could exist alongside ours.

The idea is based on the fact that nature includes fundamental symmetries, which experts believe could apply to the entire cosmos. The hypothesis was put forward in the journal Annals of Physics.

It goes over a common physics concept known as Charge, Parity, and Time (CPT) symmetry. CPT symmetry is recognized as one of the laws of physics: it says the laws of nature stay the same if charge, spatial orientation (parity), and the direction of time are all inverted together.
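
Stated compactly, the three operations act on a system's basic attributes as follows (a schematic summary of the standard definition, not anything specific to the new study):

```latex
% Schematic: the three discrete operations that make up CPT symmetry.
\begin{aligned}
C \ (\text{charge conjugation}) &: \quad q \to -q
   && \text{(particles} \leftrightarrow \text{antiparticles)} \\
P \ (\text{parity})             &: \quad \vec{x} \to -\vec{x}
   && \text{(mirror-flip space)} \\
T \ (\text{time reversal})      &: \quad t \to -t
   && \text{(run time backwards)}
\end{aligned}
```

The CPT theorem says that applying all three operations together leaves the laws of physics unchanged, and it is that combined symmetry the study leans on.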

Scientists think the Big Bang, which created the universe, was definitely transformational, and so the CPT theory could apply. The researchers believe that before the Big Bang, there was a ‘backwards universe,’ which was later inverted to form the cosmos we live in today.

They explained: “We investigate the idea that the universe before the Big Bang is the CPT reflection of the universe after the bang, both classically and quantum mechanically.”

And, added: “The universe before the bang and the universe after the bang may be viewed as a universe/anti-universe pair, emerging directly into the hot, radiation-dominated era we observe in our past.

“This, in turn, leads to a remarkably economical explanation of the cosmological dark matter.”

According to the wild theory, the presence of dark matter in our world isn’t so mysterious after all. The scientists propose that dark matter is made up of right-handed neutrinos, a type of ghostly particle. Neutrinos exist in our universe, but only left-handed ones, which spin one way relative to their direction of travel, have ever been observed.


Scientists find it strange that they haven’t discovered a version that spins to the right, as the principles of physics suggest they should.


The researchers believe that the right-handed particles may be invisible in our universe, showing up only as dark matter.


Little is known about dark matter in the cosmos, except that it is invisible but has a physical influence on other objects.


Researchers think investigating neutrino particles and dark matter further could potentially be a step in proving the ‘anti-universe’ theory.


Scientists do not believe humans will be able to visit the suggested ‘anti-universe’ because it existed before the Big Bang.

The Real Matrix: Physicist Says Our Universe Is Likely a Neural Network

We don't come across papers that aim to alter reality every day.

However, in a preprint sent to arXiv this summer, Vitaly Vanchurin, a physics professor at the University of Minnesota Duluth, attempts to reframe reality in a particularly eye-opening way, implying that we live inside a huge neural network that governs everything around us. In other words, he wrote in the paper, it’s a “possibility that the entire universe on its most fundamental level is a neural network.”

For years, physicists have attempted to reconcile quantum mechanics and general relativity. The former posits that time is universal and absolute, while the latter argues that time is relative, linked to the fabric of space-time.

In his paper, Vanchurin argues that artificial neural networks can “exhibit approximate behaviours” of both universal theories. Since quantum mechanics “is a remarkably successful paradigm for modeling physical phenomena on a wide range of scales,” he writes, “it is widely believed that on the most fundamental level the entire universe is governed by the rules of quantum mechanics and even gravity should somehow emerge from it.”

“We are not just saying that the artificial neural networks can be useful for analyzing physical systems or for discovering physical laws, we are saying that this is how the world around us actually works,” reads the paper’s discussion. “With this respect it could be considered as a proposal for the theory of everything, and as such it should be easy to prove it wrong.”


The concept is so bold that most physicists and machine learning experts we reached out to declined to comment on the record, citing skepticism about the paper’s conclusions. But in a Q&A with Futurism, Vanchurin leaned into the controversy — and told us more about his idea.


Futurism: Your paper argues that the universe might fundamentally be a neural network. How would you explain your reasoning to someone who didn’t know very much about neural networks or physics?


Vitaly Vanchurin: There are two ways to answer your question.


The first way is to start with a precise model of neural networks and then to study the behavior of the network in the limit of a large number of neurons. What I have shown is that the equations of quantum mechanics describe pretty well the behavior of the system near equilibrium, and the equations of classical mechanics describe pretty well how the system behaves further away from equilibrium. Coincidence? Maybe, but as far as we know, quantum and classical mechanics are exactly how the physical world works.


The second way is to start from physics. We know that quantum mechanics works pretty well on small scales and general relativity works pretty well on large scales, but so far we were not able to reconcile the two theories in a unified framework. This is known as the problem of quantum gravity. Clearly, we are missing something big, but to make matters worse we do not even know how to handle observers. This is known as the measurement problem in context of quantum mechanics and the measure problem in context of cosmology.


Then one might argue that there are not two, but three phenomena that need to be unified: quantum mechanics, general relativity and observers. 99% of physicists would tell you that quantum mechanics is the main one and everything else should somehow emerge from it, but nobody knows exactly how that can be done. In this paper I consider another possibility that a microscopic neural network is the fundamental structure and everything else, i.e. quantum mechanics, general relativity and macroscopic observers, emerges from it. So far things look rather promising.


What first gave you this idea?


First I just wanted to better understand how deep learning works and so I wrote a paper entitled “Towards a theory of machine learning”. The initial idea was to apply the methods of statistical mechanics to study the behavior of neural networks, but it turned out that in certain limits the learning (or training) dynamics of neural networks is very similar to the quantum dynamics we see in physics. At that time I was (and still am) on sabbatical leave, and I decided to explore the idea that the physical world is actually a neural network. The idea is definitely crazy, but is it crazy enough to be true? That remains to be seen.


In the paper you wrote that to prove the theory was wrong, “all that is needed is to find a physical phenomenon which cannot be described by neural networks.” What do you mean by that? Why is such a thing “easier said than done?”


Well, there are many “theories of everything” and most of them must be wrong. In my theory, everything you see around you is a neural network, and so to prove it wrong all that is needed is to find a phenomenon which cannot be modeled with a neural network. But if you think about it, it is a very difficult task, mainly because we know so little about how neural networks behave and how machine learning actually works. That was why I tried to develop a theory of machine learning in the first place.




How does your research relate to quantum mechanics, and does it address the observer effect?


There are two main schools of thinking on quantum mechanics: Everett's (or many-worlds) interpretation and Bohm's (or hidden variables) interpretation. I have nothing new to say regarding the many-worlds interpretation, but I believe I can contribute to the theories of hidden variables. The hidden variables in the emergent quantum mechanics that I studied are the states of individual neurons, and the trainable variables (such as the bias vector and weight matrix) are the quantum variables. It should be noted that the hidden variables can be very non-local, hence violating Bell's inequalities. Although an approximate space-time locality is assumed, every neuron can be connected to every other neuron, hence the system does not need to be local.


Would you mind explaining how this idea connects to natural selection? Natural selection has a role in the evolution of complex structures/biological cells.


What I'm saying is straightforward. There are more stable structures (or subnetworks) of the microscopic neural network, and there are less stable structures. The more stable structures would survive evolution, whereas the less stable structures would perish. On the smallest scales, I predict natural selection to yield structures with very minimal complexity, such as chains of neurons, but on greater scales, the structures will be more sophisticated. I see no reason why this process should be limited to a specific length scale, therefore the assertion is that everything we see around us (particles, atoms, cells, observers, etc.) is the result of natural selection.


Your first email piqued my interest when you mentioned that you might not comprehend everything yourself. What exactly did you mean? Were you referring to the neural network's complexity or something more philosophical?


Yes, I'm talking about the intricacy of neural networks. I didn't even have time to consider the philosophical implications of the outcomes.


I have a question: does this theory imply that we are living in a simulation?


No, we live in a neural network, but we may never realise it.


Reference(s): arXiv 

Simulation Suggests 68 Percent of the Universe May Not Actually Exist

 According to the Lambda Cold Dark Matter (Lambda-CDM) model, which is the current accepted standard for how the universe began and evolved, the ordinary matter we encounter every day only makes up around five percent of the universe's density, with dark matter comprising 27 percent, and the remaining 68 percent made up of dark energy, a so-far theoretical force driving the expansion of the universe.



But a new study has questioned whether dark energy exists at all, citing computer simulations that found that by accounting for the changing structure of the cosmos, the gap in the theory, which dark energy was proposed to fill, vanishes.

Published in 1915, Einstein's general theory of relativity forms the basis for the accepted origin story of the universe, which says that the Big Bang kicked off the expansion of the universe about 13.8 billion years ago.

The problem is, the equations at work are incredibly complicated, so physicists tend to simplify parts of them so they're a bit more practical to work with. When models are then built up from these simplified versions, small holes can snowball into huge discrepancies.


"Einstein's equations of general relativity that describe the expansion of the universe are so complex mathematically, that for a hundred years no solutions accounting for the effect of cosmic structures have been found, we know from very precise supernova observations that the universe is accelerating, but at the same time we rely on coarse approximations to Einstein's equations which may introduce serious side effects, such as the need for dark energy, in the models designed to fit the observational data." says Dr László Dobos, co-author of the new paper.



Dark energy has never been directly observed, and can only be studied through its effects on other objects. Its properties and existence are still purely theoretical, making it a placeholder plug for holes in current models.

The mysterious force was first put forward as a driver of the universe's accelerated expansion in the 1990s, based on the observation of Type Ia supernovae. 

Sometimes called "standard candles," these bright spots are known to shine at a consistent peak brightness, and by measuring how bright that light appears by the time it reaches Earth, astronomers are able to figure out just how far away the object is.
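
The underlying arithmetic is the distance modulus relation, m - M = 5 log10(d / 10 pc): because every Type Ia supernova peaks at roughly the same absolute magnitude (about -19.3 is the commonly quoted value), the apparent brightness alone pins down the distance. A minimal sketch, with an assumed observed magnitude:

```python
# Minimal sketch: distance from a "standard candle" via the distance modulus,
#   m - M = 5 * log10(d / 10 pc).
# M = -19.3 is the commonly quoted peak absolute magnitude of a Type Ia supernova;
# the observed apparent magnitude m = 20.0 is an assumed example value.
M_PEAK = -19.3
m_observed = 20.0

distance_pc = 10 ** ((m_observed - M_PEAK + 5) / 5)
print(f"Distance: {distance_pc:.3e} pc (~{distance_pc / 1e6:.0f} million parsecs)")
# A supernova that faint sits hundreds of megaparsecs away; supernovae looking dimmer
# than expected at a given redshift are what revealed the accelerating expansion.
```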


This research was instrumental in spreading acceptance of the idea that dark energy is accelerating the expansion of the universe, and it earned the scientists involved the Nobel Prize in Physics in 2011. But other studies have questioned the validity of that conclusion, and some researchers are trying to develop a more accurate picture of the cosmos with software that can better handle all the wrinkles of the general theory of relativity.

A comparison of three models of universal expansion: top left, in red, is the Lambda-CDM model, including dark energy; middle, in blue, is the new Avera model, which accounts for the structure and doesn't require dark energy; and right, in green, is the original Einstein-de Sitter model, which also doesn't include dark energy (Credit: István Csabai et al)

According to the new study from Eötvös Loránd University in Hungary and the University of Hawaii, the discrepancy that dark energy was "invented" to fill might have arisen from the parts of the theory that were glossed over for the sake of simplicity. The researchers set up a computer simulation of how the universe formed, based on its large-scale structure. That structure apparently takes the form of "foam," where galaxies are found on the thin walls of each bubble, but large pockets in the middle are mostly devoid of both normal and dark matter.

The team simulated how gravity would affect matter in this structure and found that, rather than the universe expanding in a smooth, uniform manner, different parts of it would expand at different rates. Importantly, though, the overall average rate of expansion is still consistent with observations, and points to accelerated expansion. The end result is what the team calls the Avera model.
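
The flavour of this averaging argument can be illustrated with a toy model; this is only a cartoon of volume-averaging an inhomogeneous expansion, not the Avera simulation itself. Take one under-dense "void" region that expands faster and one matter-rich "wall" region that expands more slowly, and track the volume-weighted average expansion rate: as the faster-growing voids come to dominate the volume, the average rate drifts upward relative to a uniform matter-only universe.

```python
import numpy as np

# Toy illustration of volume-averaged expansion (a cartoon, not the Avera model):
# a fast-expanding "void" region and a slower, matter-rich "wall" region.
t = np.linspace(1.0, 10.0, 200)            # arbitrary time units

a_void = t                                 # empty region: a ~ t, so H = 1/t
a_wall = t ** (2.0 / 3.0)                  # matter region: a ~ t^(2/3), so H = 2/(3t)
H_void, H_wall = 1.0 / t, 2.0 / (3.0 * t)

f_void0 = 0.2                              # assumed initial volume fraction of voids
V_void = f_void0 * a_void ** 3
V_wall = (1 - f_void0) * a_wall ** 3
w_void = V_void / (V_void + V_wall)        # voids' share of the total volume over time

# Volume-weighted average expansion rate, compared with the uniform matter value 2/(3t)
H_avg = w_void * H_void + (1 - w_void) * H_wall
print(f"H_avg * t at start: {H_avg[0] * t[0]:.2f}  (uniform matter universe: 0.67)")
print(f"H_avg * t at end:   {H_avg[-1] * t[-1]:.2f} (drifts toward the void value 1.0)")
```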

"The theory of general relativity is fundamental in understanding the way the universe evolves, we do not question its validity; we question the validity of the approximate solutions. Our findings rely on a mathematical conjecture which permits the differential expansion of space, consistent with general relativity, and they show how the formation of complex structures of matter affects the expansion. These issues were previously swept under the rug but taking them into account can explain the acceleration without the need for dark energy." says Dobos.

If the research stands up to scrutiny, it could change the direction of the study of physics away from chasing the ghost of dark energy.

The research was published in the Monthly Notices of the Royal Astronomical Society.

NASA releases 12-year time-lapse video of the entire sky

NASA has produced a breathtaking 12-year time-lapse movie of the entire sky, showing how the heavens above us have changed.

The time-lapse, built from a decade of observations by a NASA satellite observatory, captures how the sky has shifted over that span. Photographs can show us cosmic wonders; motion pictures bring them to life. 

NASA's NEOWISE space telescope is behind these fascinating views of the sky's movement and change. NASA's Near-Earth Object Wide Field Infrared Survey Explorer, or NEOWISE, captures images in all directions as it completes one trip halfway around the Sun every six months. 

When the photos are stitched together, they form an all-sky map showing hundreds of millions of objects. Scientists made a time-lapse movie from 18 of these all-sky maps (the 19th and 20th will be revealed in March 2023), demonstrating how the sky has changed over the last ten years.

Each map has a wealth of information about the universe for astronomers. However, when viewed as a time-lapse, they become even more helpful resources for helping students understand it. The maps can be compared using time-domain astronomy to detect distant objects that have changed brightness or position over time. 

"The night sky appears to alter relatively minimally," said Amy Mainzer, principle investigator of NEOWISE at the University of Arizona. "Stars are constantly erupting and flashing throughout the sky." Asteroids fly by. Black holes are tearing up stars. "The universe is a tremendously lively and busy place."

from WISE to NEOWISE

WISE was launched in 2009 as an observatory charged with scanning the sky for, and studying, objects outside our solar system. NEOWISE began as a data-processing project to retrieve WISE asteroid detections. 

Cryogenically cooled detectors on the spacecraft picked up infrared light, which the human eye cannot see and which is emitted by objects such as some of the universe's most luminous galaxies and cool, nearby stars. The WISE mission ended in 2011 when the coolant on board ran out, but the spacecraft and some of its infrared detectors remained operational. 

As a result, NASA reconfigured the device in 2013 to track asteroids and other near-Earth objects. Both the mission and the spacecraft were renamed NEOWISE. Despite the shift, astronomers continue to investigate objects outside our solar system using data from infrared telescopes.

As part of the CatWISE project, a catalogue of objects from 12 NEOWISE all-sky maps was provided in 2020. Brown dwarfs, which are distributed around the galaxy and lurk in the dark near to our Sun, are studied in this collection. 

Even though they form like stars, brown dwarfs do not sustain the fusion process that causes stars to shine. Because of their proximity to Earth, brown dwarfs appear to travel across the sky more quickly than more distant stars moving at the same speed. 
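
That "nearby things appear to move faster" effect is just the small-angle relation between tangential speed and distance: the angular drift across the sky, or proper motion, scales as v/d. A minimal sketch with illustrative numbers:

```python
# Minimal sketch: apparent angular motion (proper motion) scales as velocity / distance.
# Two objects moving at the same tangential speed, one ten times closer (illustrative numbers).
KM_PER_PARSEC = 3.086e13
ARCSEC_PER_RADIAN = 206_265
SECONDS_PER_YEAR = 3.156e7

def proper_motion_arcsec_per_year(v_km_s, distance_pc):
    """Angular drift across the sky for tangential speed v at the given distance."""
    radians_per_s = v_km_s / (distance_pc * KM_PER_PARSEC)
    return radians_per_s * ARCSEC_PER_RADIAN * SECONDS_PER_YEAR

for d in (10.0, 100.0):                           # assumed distances, in parsecs
    mu = proper_motion_arcsec_per_year(50.0, d)   # assumed 50 km/s tangential speed
    print(f"{d:5.0f} pc: {mu:.2f} arcsec/year")
# The nearby object crawls across the sky ten times faster, which is how infrared
# surveys pick out cool brown dwarfs lurking close to the Sun.
```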

Brown dwarfs can be found in the catalogue by looking for things that move among billions of others. Backyard Worlds: Planet 9 is a companion project to CatWISE that invites citizen scientists to pore through NEOWISE data.

Brown Dwarf Mapping

Using WISE's original two all-sky scans, some 200 brown dwarfs were discovered within 65 light-years of our Sun. As a result of the new maps, 60 more Y-dwarfs, the coldest brown dwarfs, have been detected. 

Warmer brown dwarfs may tell a different story about how and when they formed than the Y-dwarfs do. As a result of these discoveries, our solar neighbourhood is now ornamented with a variety of objects. A more complete census of brown dwarfs close to the Sun allows scientists to calculate how efficiently stars form in our galaxy.

A decade of studying the sky has also helped scientists comprehend how stars emerge. Dusty blankets shroud protostars as they evolve into stars, allowing NEOWISE to look inside their obscuring cocoons. 

Over time, the dust clouds that surround protostars gather mass, causing them to flicker and flare. To gain a better understanding of how stars originate, astronomers utilize NEOWISE to track the lifecycles of almost 1,000 protostars through time.

NEOWISE data has also contributed to a better understanding of black holes. As part of the original WISE study, millions of supermassive black holes were identified at the centres of distant galaxies. Recently, NEOWISE data and an echo mapping approach were utilized to determine the size of hot gas discs surrounding faraway black holes. 

These discs are too small for any telescope to resolve directly. Overall, NEOWISE has made significant contributions to our understanding of the solar system, neighbouring stars, and distant, hidden objects like supermassive black holes.


Reference(s): NASA

Our Universe May Have Emerged from a Black Hole in a Higher Dimensional Universe

The big bang poses a big question: if it was indeed the cataclysm that blasted our universe into existence 13.7 billion years ago, what sparked it?

Three Perimeter Institute researchers have a new idea about what might have come before the big bang. It’s a bit perplexing, but it is grounded in sound mathematics, testable, and enticing enough to earn the cover story in Scientific American, called “The Black Hole at the Beginning of Time.” 

What we perceive as the big bang, they argue, could be the three-dimensional “mirage” of a collapsing star in a universe profoundly different than our own.

“Cosmology’s greatest challenge is understanding the big bang itself,” write Perimeter Institute Associate Faculty member Niayesh Afshordi, Affiliate Faculty member and University of Waterloo professor Robert Mann, and PhD student Razieh Pourhasan. 

Conventional understanding holds that the big bang began with a singularity – an unfathomably hot and dense phenomenon of spacetime where the standard laws of physics break down. Singularities are bizarre, and our understanding of them is limited.

“For all physicists know, dragons could have come flying out of the singularity,” Afshordi says in an interview with Nature. 

The problem, as the authors see it, is that the big bang hypothesis has our relatively comprehensible, uniform, and predictable universe arising from the physics-destroying insanity of a singularity. It seems unlikely. So perhaps something else happened. Perhaps our universe was never singular in the first place.

Their suggestion: our known universe could be the three-dimensional “wrapping” around a four-dimensional black hole’s event horizon. In this scenario, our universe burst into being when a star in a four-dimensional universe collapsed into a black hole.

In our three-dimensional universe, black holes have two-dimensional event horizons – that is, they are surrounded by a two-dimensional boundary that marks the “point of no return.” In the case of a four-dimensional universe, a black hole would have a three-dimensional event horizon.

In their proposed scenario, our universe was never inside the singularity; rather, it came into being outside an event horizon, protected from the singularity. It originated as – and remains – just one feature in the imploded wreck of a four-dimensional star.

The researchers emphasize that this idea, though it may sound “absurd,” is grounded firmly in the best modern mathematics describing space and time. Specifically, they’ve used the tools of holography to “turn the big bang into a cosmic mirage.” Along the way, their model appears to address long-standing cosmological puzzles and – crucially – produce testable predictions.

Of course, our intuition tends to recoil at the idea that everything and everyone we know emerged from the event horizon of a single four-dimensional black hole. We have no concept of what a four-dimensional universe might look like. We don’t know how a four-dimensional “parent” universe itself came to be.

But our fallible human intuitions, the researchers argue, evolved in a three-dimensional world that may only reveal shadows of reality.

They draw a parallel to Plato’s allegory of the cave, in which prisoners spend their lives seeing only the flickering shadows cast by a fire on a cavern wall.

“Their shackles have prevented them from perceiving the true world, a realm with one additional dimension,” they write. “Plato’s prisoners didn’t understand the powers behind the sun, just as we don’t understand the four-dimensional bulk universe. But at least they knew where to look for answers.”

BREAKING: The Physics Nobel Prize Winner of 2022 just Proved that the "Universe is actually not real"

The fact that the universe is not locally real is one of the more disquieting discoveries of the last half-century. 

"Real," which indicates that objects have defined attributes independent of observation—for example, an apple can be red even when no one is looking; "local," which means that objects can only be impacted by their surroundings, and that any effect cannot travel faster than light. 

Quantum physics researchers have discovered that these concepts cannot both be true. Instead, the evidence suggests that objects are not only influenced by their surroundings, and that they may lack distinct properties prior to measurement. "Do you honestly believe the moon isn't there when you're not gazing at it?" Albert Einstein famously asked a friend.

This is, of course, deeply contrary to our everyday experiences. To paraphrase Douglas Adams, the demise of local realism has made a lot of people very angry and been widely regarded as a bad move.

The accomplishment has now been attributed to three physicists: John Clauser, Alain Aspect, and Anton Zeilinger. They were awarded the Nobel Prize in Physics in 2022 in equal parts "for experiments with entangled photons, establishing the violation of Bell inequalities, and pioneering quantum information science." ("Bell inequalities" alludes to the early 1960s pioneering work of Northern Irish physicist John Stewart Bell, who established the groundwork for this year's Physics Nobel.) Colleagues felt that the trio deserved this punishment for upending reality as we know it. "This is wonderful news. "It had been a long time coming," Sandu Popescu, a quantum physicist at the University of Bristol, says. "There is no doubt that the award is well-deserved."

“The experiments beginning with the earliest one of Clauser and continuing along, show that this stuff isn’t just philosophical, it’s real—and like other real things, potentially useful,” says Charles Bennett, an eminent quantum researcher at IBM. 

“Each year I thought, ‘oh, maybe this is the year,’” says David Kaiser, a physicist and historian at the Massachusetts Institute of Technology. “This year, it really was. It was very emotional—and very thrilling.”

Quantum foundations’ journey from fringe to favor was a long one. From about 1940 until as late as 1990, the topic was often treated as philosophy at best and crackpottery at worst. Many scientific journals refused to publish papers in quantum foundations, and academic positions indulging such investigations were nearly impossible to come by. In 1985, Popescu’s advisor warned him against a Ph.D. in the subject. 

“He said ‘look, if you do that, you will have fun for five years, and then you will be jobless,’” Popescu says.

Today, quantum information science is among the most vibrant and impactful subfields in all of physics. It links Einstein’s general theory of relativity with quantum mechanics via the still-mysterious behavior of black holes. It dictates the design and function of quantum sensors, which are increasingly being used to study everything from earthquakes to dark matter. And it clarifies the often-confusing nature of quantum entanglement, a phenomenon that is pivotal to modern materials science and that lies at the heart of quantum computing.

“What even makes a quantum computer ‘quantum’?” Nicole Yunger Halpern, a National Institute of Standards and Technology physicist, asks rhetorically. “One of the most popular answers is entanglement, and the main reason why we understand entanglement is the grand work participated in by Bell and these Nobel Prize–winners. Without that understanding of entanglement, we probably wouldn’t be able to realize quantum computers.”

WHOM DOES THE BELL RING?

The trouble with quantum mechanics was never that it made the wrong predictions—in fact, the theory described the microscopic world splendidly well right from the start when physicists devised it in the opening decades of the 20th century.

What Einstein, Boris Podolsky and Nathan Rosen took issue with, laid out in their iconic 1935 paper, was the theory’s uncomfortable implications for reality. Their analysis, known by their initials EPR, centered on a thought experiment meant to illustrate the absurdity of quantum mechanics; to show how under certain conditions the theory can break—or at least deliver nonsensical results that conflict with everything else we know about reality. 

A simplified and modernized version of EPR goes something like this: pairs of particles are sent off in different directions from a common source toward two observers, Alice and Bob, stationed at opposite ends of the solar system. Quantum mechanics dictates that it is impossible to know the spin, a quantum property of individual particles, prior to measurement. When Alice measures one of her particles, she finds its spin to be either up or down. Her results are random, and yet, when she measures up, she instantly knows Bob’s corresponding particle must be down. At first glance, this is not so odd; perhaps the particles are like a pair of socks—if Alice gets the right sock, Bob must have the left.

But under quantum mechanics, particles are not like socks, and only when measured do they settle on a spin of up or down. This is EPR’s key conundrum: if Alice’s particles lack a spin until measurement, then how do they, as they whiz past Neptune, know what Bob’s particles will do as they fly out of the solar system in the other direction?

Each time Alice measures, she effectively quizzes her particle on what Bob will get if he flips a coin: up, or down? The odds of correctly predicting this even 200 times in a row are 1 in 10^60—a number greater than all the atoms in the solar system. Yet despite the billions of kilometers that separate the particle pairs, quantum mechanics says Alice’s particles can keep correctly predicting, as though they were telepathically connected to Bob’s particles.
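
As a back-of-the-envelope check on that figure, here is a minimal sketch in Python, assuming nothing more than 200 independent fifty-fifty guesses (an illustration, not anything taken from the experiments themselves):

    # Rough sanity check of the "1 in 10^60" figure quoted above: the chance
    # of correctly calling 200 fair coin flips in a row is (1/2)^200.
    import math

    odds = 0.5 ** 200
    print(f"(1/2)^200 is roughly 1 in 10^{-math.log10(odds):.0f}")  # ~ 1 in 10^60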

Although intended to reveal the imperfections of quantum mechanics, when real-world versions of the EPR thought experiment are conducted the results instead reinforce the theory’s most mind-boggling tenets. Under quantum mechanics, nature is not locally real—particles lack properties such as spin up or spin down prior to measurement, and seemingly talk to one another no matter the distance.

Physicists skeptical of quantum mechanics proposed that there were “hidden variables,” factors that existed in some imperceptible level of reality beneath the subatomic realm and that contained information about a particle’s future state. They hoped that in hidden-variable theories, nature could recover the local realism denied to it by quantum mechanics.

“One would have thought that the arguments of Einstein, Podolsky and Rosen would produce a revolution at that moment, and everybody would have started working on hidden variables,” Popescu says.

Einstein’s “attack” on quantum mechanics, however, did not catch on among physicists, who by and large accepted quantum mechanics as is. This was often less a thoughtful embrace of nonlocal reality, and more a desire to not think too hard while doing physics—a head-in-the-sand sentiment later summarized by the physicist David Mermin as a demand to “shut up and calculate.”

The lack of interest was driven in part because John von Neumann, a highly regarded scientist, had in 1932 published a mathematical proof ruling out hidden-variable theories. (Von Neumann’s proof, it must be said, was refuted just three years later by a young female mathematician, Grete Hermann, but at the time no one seemed to notice.)

The question of nonlocal realism in quantum mechanics would languish in a complacent stupor for another three decades until Bell decisively shattered it. From the start of his career, Bell was bothered by the quantum orthodoxy and sympathetic toward hidden-variable theories.

Inspiration struck him in 1952, when he learned of a viable nonlocal hidden-variable interpretation of quantum mechanics devised by fellow physicist David Bohm—something von Neumann had claimed was impossible. Bell mulled the ideas over for years, as a side project to his main job working as a particle physicist at CERN.

In 1964, Bell rediscovered the same flaws in von Neumann’s argument that Hermann had. And then, in a triumph of rigorous thinking, Bell concocted a theorem that dragged the question of hidden variables from its metaphysical quagmire onto the concrete ground of experiment.

Normally, hidden-variable theories and quantum mechanics predict indistinguishable experimental outcomes. What Bell realized is that under precise circumstances, an empirical discrepancy between the two can emerge. In the eponymous Bell test (an evolution of the EPR thought experiment), Alice and Bob receive the same paired particles, but now they each have two different detector settings—A and a, B and b. 

These detector settings allow Alice and Bob to ask the particles different questions, an additional trick to throw off their apparent telepathy. In local hidden-variable theories, where each particle’s answers are preordained and nothing links the pair in flight, the particles cannot outsmart this extra step: the correlations they produce across the different combinations of settings are capped at a strict limit. But in quantum mechanics, particles remain connected and far more correlated than they could ever be in local hidden-variable theories. They are, in a word, entangled.

Measuring the correlation many times, for many particle pairs, could therefore reveal which theory was correct. If the correlations stayed at or below a limit derived from Bell’s theorem, local hidden variables remained viable; if they exceeded Bell’s limit, then the mind-boggling tenets of quantum mechanics would reign supreme. And yet, in spite of its potential to help determine the very nature of reality, after being published in a relatively obscure journal, Bell’s theorem languished unnoticed for years.
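
To make Bell’s limit concrete, here is a minimal sketch in Python of the standard CHSH form of the test (an illustrative toy model, not the laureates’ analysis code; the correlation formula E(a, b) = -cos(a - b) is the textbook quantum prediction for a spin-singlet pair). Every deterministic local hidden-variable strategy tops out at a score of 2, while quantum mechanics reaches 2√2, roughly 2.83, at the usual optimal settings:

    # Toy CHSH comparison (illustrative sketch, not experimental analysis code).
    # Alice's two settings yield outcomes A0, A1 in {+1, -1}; Bob's yield B0, B1.
    import itertools, math

    # Local hidden variables: each particle pair carries preordained answers for
    # every setting. Enumerate all such strategies and take the best CHSH score.
    best_local = max(
        abs(A0 * B0 + A0 * B1 + A1 * B0 - A1 * B1)
        for A0, A1, B0, B1 in itertools.product([+1, -1], repeat=4)
    )
    print("local hidden-variable maximum:", best_local)  # prints 2

    # Quantum mechanics: for a spin-singlet pair measured along directions at
    # angles a and b, the predicted correlation is E(a, b) = -cos(a - b).
    def E(a, b):
        return -math.cos(a - b)

    a0, a1 = 0.0, math.pi / 2            # Alice's two detector settings
    b0, b1 = math.pi / 4, -math.pi / 4   # Bob's two detector settings
    S = abs(E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1))
    print("quantum-mechanical value:", round(S, 3))  # prints 2.828

The gap between 2 and 2√2 is exactly what the real experiments chase: any measured correlation above 2 cannot be produced by a local hidden-variable account.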

THE BELL IS RINGING FOR THEE

In 1967, John Clauser, then a graduate student at Columbia University, stumbled across a library copy of Bell’s paper and became enthralled by the possibility of proving hidden-variable theories correct. Clauser wrote to Bell two years later, asking whether anyone had actually performed the test. His letter was among the first feedback Bell had received.

Five years later, with Bell’s encouragement, Clauser and his graduate student Stuart Freedman performed the first Bell test. Clauser had secured permission from his supervisors, but little in the way of funds, so he became, as he said in a later interview, adept at “dumpster diving” to secure equipment—some of which he and Freedman then duct-taped together. In Clauser’s setup—a kayak-sized apparatus requiring careful tuning by hand—pairs of photons were sent in opposite directions toward detectors that could measure their state, or polarization.

Unfortunately for Clauser and his infatuation with hidden variables, once he and Freedman completed their analysis, they could not help but conclude that they had found strong evidence against them. Still, the result was hardly conclusive, because of various “loopholes” in the experiment that conceivably could allow the influence of hidden variables to slip through undetected. 

The most concerning of these was the locality loophole: if the photon source or the detectors could somehow have shared information (a plausible feat within the confines of a kayak-sized apparatus), the resulting measured correlations could still emerge from hidden variables. As Kaiser puts it pithily, if Alice tweets at Bob which detector setting she is using, that interference makes ruling out hidden variables impossible.

Closing the locality loophole is easier said than done. The detector settings must be changed quickly while the photons are in flight—“quickly” here meaning within mere nanoseconds. In 1976, Alain Aspect, a young French expert in optics, proposed a way to perform this ultrafast switching. His group’s experimental results, published in 1982, only bolstered Clauser’s findings: local hidden variables looked extremely unlikely.

“Perhaps Nature is not so queer as quantum mechanics,” Bell wrote in response to Aspect’s initial results. “But the experimental situation is not very encouraging from this point of view.”

Other loopholes, however, still remained—and, alas, Bell died in 1990 without witnessing their closure. Even Aspect’s experiment had not fully ruled out local effects, because it took place over too small a distance. Similarly, as Clauser and others had realized, if Alice and Bob could not be guaranteed to detect an unbiased, representative sample of particles, they could reach the wrong conclusions.

No one pounced to close these loopholes with more gusto than Anton Zeilinger, an ambitious, gregarious Austrian physicist. In 1998, he and his team improved on Aspect’s earlier work by conducting a Bell test over a then-unprecedented distance of nearly half a kilometer. 

The era of divining reality’s nonlocality from kayak-sized experiments had drawn to a close. Finally, in 2013, Zeilinger’s group took the next logical step, tackling multiple loopholes at the same time.

“Before quantum mechanics, I actually was interested in engineering. I like building things with my hands,” says Marissa Giustina, a quantum researcher at Google who worked with Zeilinger.  “In retrospect, a loophole-free Bell experiment is a giant systems-engineering project.” 

One requirement for creating an experiment closing multiple loopholes was finding a perfectly straight, unoccupied 60-meter tunnel with access to fiber optic cables. As it turned out, the dungeon of Vienna’s Hofburg palace was an almost ideal setting—aside from being caked with a century’s worth of dust. Their results, published in 2015, coincided with similar tests from two other groups that also found quantum mechanics as flawless as ever.

BELL'S TEST GOES TO THE STARS

One great final loophole remained to be closed, or at least narrowed. Any prior physical connection between the components of a Bell test, no matter how far in the past, could compromise the validity of its results. If Alice shakes Bob’s hand before departing on a spaceship, they share a past. It seems implausible that a local hidden-variable theory would exploit such a shared history, but it remains possible.

In 2017, a team including Kaiser and Zeilinger performed a cosmic Bell test. Using telescopes in the Canary Islands, the team drew its random choices of detector settings from the light of stars so distant that it had left them hundreds of years earlier, ensuring a centuries-spanning gap in any shared cosmic past that could have rigged the settings. Yet even then, quantum mechanics again proved triumphant.

One of the principal difficulties in explaining the importance of Bell tests to the public—as well as to skeptical physicists—is the perception that the veracity of quantum mechanics was a foregone conclusion. After all, researchers have measured many key aspects of quantum mechanics to a precision of greater than 10 parts in a billion. 

“I actually didn’t want to work on it. I thought, like, ‘Come on; this is old physics. We all know what’s going to happen,’” Giustina says. 

But the accuracy of quantum mechanics could not rule out the possibility of local hidden variables; only Bell tests could do that.

“What drew each of these Nobel recipients to the topic, and what drew John Bell himself to the topic, was indeed [the question], ‘Can the world work that way?’” Kaiser says. “And how do we really know with confidence?” Bell tests allow physicists to remove the bias of anthropocentric aesthetic judgments from the equation, purging from their work the parts of human cognition that recoil at the possibility of eerily inexplicable entanglement or that scoff at hidden-variable theories as just more debates over how many angels may dance on the head of a pin. The award honors Clauser, Aspect and Zeilinger, but it is a testament to all the researchers who were unsatisfied with superficial explanations of quantum mechanics and who kept asking their questions even when doing so was unpopular.

“Bell tests,” Giustina concludes, “are a very useful way of looking at reality.”

Image Description: John Stewart Bell (1928-1990), the Northern Irish physicist whose work sparked a quiet revolution in quantum physics. Credit: Peter Menzel/Science Source