
How Ramanujan’s formulae for pi connect to modern high energy physics

Most of us first hear about the irrational number π (pi)—rounded off as 3.14, with an infinite number of decimal digits—in school, where we learn about its use in the context of a circle. More recently, supercomputers have been used to compute its value to trillions of digits.

Now, physicists at the Center for High Energy Physics (CHEP), Indian Institute of Science (IISc) have found that purely mathematical formulas used to calculate the value of pi 100 years ago have connections to the fundamental physics of today—showing up in theoretical models of percolation, turbulence, and certain aspects of black holes.
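
For a sense of what such a formula looks like, here is a minimal sketch, using only the Python standard library, of Ramanujan's best-known 1914 series for 1/pi. This is the classic textbook series shown for illustration, not necessarily the specific representation analyzed in the IISc work. Each term of the sum contributes roughly eight correct decimal digits.

# Illustration only: Ramanujan's classic 1914 series for 1/pi
# (not necessarily the exact formula studied in the IISc paper):
#   1/pi = (2*sqrt(2)/9801) * sum_{k>=0} (4k)! (1103 + 26390 k) / ((k!)^4 396^(4k))

from decimal import Decimal, getcontext
from math import factorial

def ramanujan_pi(terms: int = 5, digits: int = 40) -> Decimal:
    getcontext().prec = digits + 10          # working precision plus guard digits
    total = Decimal(0)
    for k in range(terms):
        num = Decimal(factorial(4 * k)) * (1103 + 26390 * k)
        den = Decimal(factorial(k)) ** 4 * Decimal(396) ** (4 * k)
        total += num / den
    inv_pi = (2 * Decimal(2).sqrt() / 9801) * total
    return 1 / inv_pi

print(ramanujan_pi(terms=5, digits=40))      # matches pi to about 40 decimal places

Series of this family (notably the Chudnovsky brothers' later variant) are what modern record-setting digit computations are built on.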

The research is published in the journal Physical Review Letters.

Cracking the code of Parkinson’s: How supercomputers are pointing to new treatments

More than 1 million Americans live with tremors, slowed movement and speech changes caused by Parkinson’s disease—a degenerative and currently incurable condition, according to the Parkinson’s Foundation and the Mayo Clinic. Beyond the emotional toll on patients and families, the disease also exerts a heavy financial burden. In California alone, researchers estimate that Parkinson’s costs the state more than 6 billion dollars in health care expenses and lost productivity.

Scientists have long sought to understand the deeper brain mechanisms driving Parkinson’s symptoms. One long-standing puzzle involved an unusual surge of brain activity known as beta waves—electrical oscillations around 15 Hz observed in patients’ motor control centers. Now, thanks to supercomputing resources provided by the U.S. National Science Foundation’s ACCESS program, researchers may have finally discovered what causes these waves to spike.
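
As a purely hypothetical illustration of what "beta waves around 15 Hz" means in signal terms (this is not the ASAP team's model, and the signal and parameters below are invented), the following Python sketch estimates beta-band power in a synthetic recording using Welch's method from SciPy:

# Hypothetical illustration only; not the ASAP network's model.
# The signal and all parameters below are invented for demonstration.

import numpy as np
from scipy.signal import welch

fs = 1000                                    # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)                 # 10 seconds of data
rng = np.random.default_rng(0)

# Synthetic recording: a 15 Hz "beta" oscillation buried in broadband noise.
lfp = 0.5 * np.sin(2 * np.pi * 15 * t) + rng.normal(scale=1.0, size=t.size)

freqs, psd = welch(lfp, fs=fs, nperseg=2 * fs)      # power spectrum, 2-second windows
beta = (freqs >= 13) & (freqs <= 30)                # conventional beta band
beta_power = psd[beta].sum() * (freqs[1] - freqs[0])  # integrate PSD over the band

print(f"Beta-band (13-30 Hz) power: {beta_power:.3f} (arbitrary units)")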

Using ACCESS allocations on the Expanse system at the San Diego Supercomputer Center—part of UC San Diego’s new School of Computing, Information, and Data Sciences—researchers with the Aligning Science Across Parkinson’s (ASAP) Collaborative Research Network modeled how specific brain cells malfunction in Parkinson’s disease. Their findings could pave the way for more targeted treatments.

TACC’s “Horizon” Supercomputer Sets The Pace For Academic Science

As we expected, the “Vista” supercomputer that the Texas Advanced Computing Center installed last year as a bridge between the current “Stampede-3” and “Frontera” production systems and its future “Horizon” system coming next year was indeed a precursor of the architecture that TACC would choose for the Horizon machine.

What TACC does – and doesn’t do – matters because, as the flagship datacenter for academic supercomputing at the National Science Foundation, the center sets the pace for HPC organizations that need to embrace AI and that have not only large jobs requiring an entire system to run (so-called capability-class machines) but also a wide diversity of smaller jobs that need to be stacked up and pushed through the system (making it also a capacity-class system). As the prior six major supercomputers installed at TACC aptly demonstrate, you can have the best of both worlds, although you do have to make different architectural choices (based on technology and economics) to accomplish what is arguably a tougher set of goals.

Some details of the Horizon machine were revealed at the SC25 supercomputing conference last week, which we have been mulling over, but there are still a lot of things that we don’t know. The Horizon that will be fired up in the spring of 2026 is a bit different from what we expected, with the big change being a downshift from an expected 400 petaflops of peak FP64 floating point performance to 300 petaflops. TACC has not explained the difference, but it might have something to do with the increasing costs of GPU-accelerated systems. As far as we know, the budget for the Horizon system, which was set in July 2024 and which includes facilities rental from Sabey Data Centers as well as other operational costs, is still $457 million. (We are attempting to confirm this as we write, but in the wake of SC25 and ahead of the Thanksgiving vacation, it is hard to reach people.)

Mexico Reveals 314-Petaflop Supercomputer Named After Aztec Goddess

The Mexican government will build a supercomputer with a processing capacity seven times greater than the current most powerful computer in Latin America, officials responsible for the project said Wednesday.

Named Coatlicue, after a goddess in Aztec mythology representing the source of power and life, the computer will have a processing capacity of 314 petaflops.

“We want it to be a public supercomputer, a supercomputer for the people,” President Claudia Sheinbaum told reporters.

Laude × CSGE: Bill Joy — 50 Years of Advancements: Computing and Technology 1975–2025 (and beyond)

From the rise of numerical and symbolic computing to the future of AI, this talk traces five decades of breakthroughs and the challenges ahead.


Bill is the author of Berkeley UNIX, cofounder of Sun Microsystems, author of “Why the Future Doesn’t Need Us” (Wired, 2000), former cleantech VC at Kleiner Perkins, and investor in and unpaid advisor to Nodra.AI.

Talk Details.
50 Years of Advancements: Computing and Technology 1975–2025 (and beyond)

I came to UC Berkeley CS in 1975 as a graduate student expecting to do computer theory; Berkeley CS didn’t have a proper departmental computer, and I was tired of coding, having written a lot of numerical code for early supercomputers.

But it’s hard to make predictions, especially about the future. Berkeley soon had a VAX superminicomputer, I installed a port of UNIX and was upgrading the operating system, and the Internet and microprocessor boom beckoned.

Supercomputers decode the strange behavior of Enceladus’s plumes

Supercomputers are rewriting our understanding of Enceladus’ icy plumes and the mysterious ocean that may harbor life beneath them. Cutting-edge simulations show that Enceladus’ plumes are losing 20–40% less mass than earlier estimates suggested. The new models provide sharper insights into subsurface conditions that future landers may one day probe directly.

In the 17th century, astronomers Christiaan Huygens and Giovanni Cassini pointed some of the earliest telescopes at Saturn and made a surprising discovery. The bright structures around the planet were not solid extensions of the world itself, but separate rings formed from many thin, nested arcs.

Centuries later, NASA’s Cassini-Huygens (Cassini) mission carried that exploration into the space age. Starting in 2005, the spacecraft returned a flood of detailed images that reshaped scientists’ view of Saturn and its moons. One of the most dramatic findings came from Enceladus, a small icy moon where towering geysers shot material into space, creating a faint sub-ring around Saturn made of the ejected debris.

Diamond quantum sensors improve spatial resolution of MRI

The JUPITER supercomputer has set a new milestone by simulating 50 qubits, a breakthrough made possible by new memory and compression innovations. A team from the Jülich Supercomputing Centre, working with NVIDIA specialists, has for the first time simulated a universal quantum computer with 50 qubits on JUPITER, Europe’s first exascale supercomputer, which began operation at Forschungszentrum Jülich in September.

This accomplishment breaks the previous record of 48 qubits, set by Jülich scientists in 2019 on Japan’s K computer. The new result highlights the extraordinary capabilities of JUPITER and provides a powerful testbed for exploring and validating quantum algorithms.

Simulating quantum computers is essential for advancing future quantum technologies. These simulations let researchers check experimental findings and experiment with new algorithmic approaches long before quantum hardware becomes advanced enough to run them directly. Key examples include the Variational Quantum Eigensolver (VQE), which can analyze molecules and materials, and the Quantum Approximate Optimization Algorithm (QAOA), used to improve decision-making in fields such as logistics, finance, and artificial intelligence.

Recreating a quantum computer on conventional systems is extremely demanding. As the number of qubits grows, the number of possible quantum states rises at an exponential rate. Each added qubit doubles the amount of computing power and memory required.

Although a typical laptop can still simulate around 30 qubits, reaching 50 qubits requires about 2 petabytes of memory, which is roughly two million gigabytes. ‘Only the world’s largest supercomputers currently offer that much,’ says Prof. Kristel Michielsen, Director at the Jülich Supercomputing Centre. ‘This use case illustrates how closely progress in high-performance computing and quantum research are intertwined today.’
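
A quick back-of-the-envelope sketch in Python shows where those numbers come from: a full n-qubit state vector holds 2^n complex amplitudes, so every extra qubit doubles the memory. The 16-bytes-per-amplitude figure below is an assumption (double-precision complex); the roughly 2 petabytes quoted above implies an encoding closer to 2 bytes per amplitude, i.e. heavy compression or reduced precision.

# Back-of-the-envelope sketch: memory needed to hold a full n-qubit state vector.
# A state vector has 2**n complex amplitudes; 16 bytes per amplitude is an
# assumption (double-precision complex). Compressed or lower-precision encodings,
# as the article hints, reduce this substantially.

def statevector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (30, 40, 45, 50):
    gib = statevector_bytes(n) / 2 ** 30
    print(f"{n} qubits: {2 ** n:>20,d} amplitudes, ~{gib:,.0f} GiB at 16 B each")

# 30 qubits -> ~16 GiB (a laptop); 50 qubits -> ~16 PiB at full double precision.
# Getting down toward the ~2 PB quoted above requires roughly 2 bytes per
# amplitude, i.e. aggressive compression or reduced precision.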

The simulation replicates the intricate quantum physics of a real processor in full detail. Every operation – such as applying a quantum gate – affects more than 2 quadrillion complex numerical values, a ‘2’ followed by 15 zeros. These values must be kept synchronized across thousands of computing nodes in order to precisely replicate the functioning of a real quantum processor.
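
To see why a single gate touches every amplitude, here is a minimal NumPy sketch (not the Jülich team's simulator, just the textbook state-vector update) that applies a one-qubit Hadamard gate to qubit k of an n-qubit state. A distributed simulator performs the same pairwise updates with the vector split across nodes, which is where the synchronization cost comes from.

# Textbook state-vector update, not the Jülich code: one single-qubit gate
# applied to qubit k of an n-qubit state touches all 2**n amplitudes in pairs.

import numpy as np

def apply_single_qubit_gate(state: np.ndarray, gate: np.ndarray, k: int, n: int) -> np.ndarray:
    # View the length-2**n vector as a tensor with one axis per qubit,
    # contract the 2x2 gate against axis k, then flatten back.
    psi = state.reshape([2] * n)
    psi = np.tensordot(gate, psi, axes=([1], [k]))   # gate acts on qubit k
    psi = np.moveaxis(psi, 0, k)                     # restore qubit ordering
    return psi.reshape(-1)

n = 20                                               # 2**20 ~ a million amplitudes
state = np.zeros(2 ** n, dtype=np.complex128)
state[0] = 1.0                                       # start in |00...0>
hadamard = np.array([[1, 1], [1, -1]], dtype=np.complex128) / np.sqrt(2)

state = apply_single_qubit_gate(state, hadamard, k=0, n=n)
print(np.count_nonzero(state))                       # 2: qubit 0 is now in superposition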



Rejuvenating the blood: New pharmacological strategy targets RhoA in hematopoietic stem cells

Aging is defined as the deterioration of function over time, and it is one of the main risk factors for numerous chronic diseases. Although aging is a complex phenomenon affecting the whole organism, it has been shown that manifestations of aging in the hematopoietic system alone can affect the entire body. Last September, Dr. M. Carolina Florian and her team showed the potential of pharmacologically targeting blood stem cells to counter aging of the whole body, suggesting rejuvenation strategies that could extend healthspan and lifespan.

Now, in a study published in Nature Aging, they propose rejuvenating aged blood stem cells by treating them with the drug Rhosin, a small molecule that inhibits RhoA, a protein that is highly activated in aged hematopoietic stem cells. The study combined in vivo and in vitro assays at IDIBELL with innovative machine learning techniques developed by the Barcelona Institute for Global Health (ISGlobal), a center supported by the “la Caixa” Foundation, and the Barcelona Supercomputing Center.
