
Large Language Models in Molecular Biology

Will we ever decipher the language of molecular biology? Here, I argue that we are just a few years away from having accurate in silico models of the primary biomolecular information highway — from DNA to gene expression to proteins — that rival experimental accuracy and can be used in medicine and pharmaceutical discovery.

Since I started my PhD in 1996, the computational biology community has embraced the mantra, “biology is becoming a computational science.” Our ultimate ambition has been to predict the activity of biomolecules within cells, and cells within our bodies, with precision and reproducibility akin to engineering disciplines. We have aimed to create computational models of biological systems, enabling accurate biomolecular experimentation in silico. The recent strides made in deep learning and particularly large language models (LLMs), in conjunction with affordable and large-scale data generation, are propelling this aspiration closer to reality.

LLMs, already proven masters at modeling human language, have demonstrated extraordinary feats like passing the bar exam, writing code, crafting poetry in diverse styles, and arguably rendering the Turing test obsolete. However, their potential for modeling biomolecular systems may even surpass their proficiency in modeling human language. Human language mirrors human thought, providing us with an inherent advantage, while molecular biology is intricate, messy, and counterintuitive. Biomolecular systems, despite their messy constitution, are robust and reproducible, comprising millions of components interacting in ways that have evolved over billions of years. The resulting systems are marvelously complex, beyond human comprehension. Biologists often resort to simplistic rules that work only 60% or 80% of the time, resulting in digestible but incomplete narratives. Our capacity to generate colossal biomolecular data currently outstrips our ability to understand the underlying systems.
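One reason LLMs transfer naturally to molecular biology is that biomolecular sequences can be tokenized much like text. A toy sketch (my own illustration, not from the essay) splits a DNA string into overlapping k-mer "words" and maps them to integer IDs, the same preprocessing step many sequence language models assume:

```python
# Toy illustration: tokenize a DNA sequence into overlapping k-mers,
# then map each k-mer to an integer ID, as a language model would expect.

def kmer_tokenize(seq, k=3):
    """Split a sequence into overlapping k-mer 'words'."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

dna = "ATGGCGTACGTT"  # hypothetical example sequence
tokens = kmer_tokenize(dna)
vocab = {t: i for i, t in enumerate(sorted(set(tokens)))}
ids = [vocab[t] for t in tokens]

print(tokens[:4])  # ['ATG', 'TGG', 'GGC', 'GCG']
print(ids)         # repeated k-mers share the same ID
```

Real biomolecular LLMs use learned tokenizers and far larger vocabularies, but the principle — sequence in, token IDs out — is the same.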

Study proves the difficulty of simulating random quantum circuits for classical computers

Quantum computers, technologies that perform computations leveraging quantum mechanical phenomena, could eventually outperform classical computers on many complex computational and optimization problems. While some quantum computers have attained remarkable results on some tasks, their advantage over classical computers is yet to be conclusively and consistently demonstrated.

Ramis Movassagh, a researcher at Google Quantum AI who was formerly at IBM Quantum, recently carried out a theoretical study aimed at mathematically demonstrating the notable advantages of quantum computers. His paper, published in Nature Physics, mathematically shows that simulating random quantum circuits and estimating their outputs is #P-hard for classical computers (i.e., extremely difficult in a precise computational-complexity sense).
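The hardness result concerns estimating circuit outputs, but even naive classical simulation already illustrates the exponential cost: an n-qubit state vector has 2**n complex amplitudes. A toy sketch (my own construction, not from the paper) simulates a small random circuit by brute force:

```python
import numpy as np

# Toy illustration: brute-force state-vector simulation of a random
# circuit on n qubits needs a 2**n-dimensional complex vector, so
# classical memory and time grow exponentially with qubit count.

def random_unitary(dim, rng):
    """Approximately Haar-random unitary via QR of a Gaussian matrix."""
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))  # fix column phases

def simulate_random_circuit(n_qubits, depth, seed=0):
    rng = np.random.default_rng(seed)
    dim = 2 ** n_qubits
    state = np.zeros(dim, dtype=complex)
    state[0] = 1.0  # start in |00...0>
    for _ in range(depth):
        state = random_unitary(dim, rng) @ state
    return state

probs = np.abs(simulate_random_circuit(4, 5)) ** 2
print(len(probs), round(probs.sum(), 6))  # 16 amplitudes summing to 1
```

At 4 qubits this is trivial; at 50 qubits the state vector alone would need petabytes, which is the intuition (not the proof) behind the #P-hardness claim.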

“A key question in the field of quantum computation is: Are quantum computers exponentially more powerful than classical ones?” Ramis Movassagh, who carried out the study, told Phys.org. “Quantum supremacy conjecture (which we renamed to Quantum Primacy conjecture) says yes. However, mathematically it’s been a major open problem to establish rigorously.”

Navigating The Biases In LLM Generative AI: A Guide To Responsible Implementation

The adoption of artificial intelligence (AI) and generative AI, such as ChatGPT, is becoming increasingly widespread. The impact of generative AI is predicted to be significant, offering efficiency and productivity enhancements across industries. However, as we enter a new phase in the technology’s lifecycle, it’s crucial to understand its limitations before fully integrating it into corporate tech stacks.

Large language model (LLM) generative AI, a powerful tool for content creation, holds transformative potential. But beneath its capabilities lies a critical concern: the potential biases ingrained within these AI systems. Addressing these biases is paramount to the responsible and equitable implementation of LLM-based technologies.

Prudent utilization of LLM generative AI demands an understanding of potential biases. Here are several biases that can emerge during the training and deployment of generative AI systems.

The AI market will be worth $600 billion, Nvidia exec says

At the Goldman Sachs Communacopia and Tech Conference, tech executives explained why AI has rightfully gained so much attention on Wall Street.

Speaking to a standing-room-only crowd at the conference, Nvidia executive Das laid out the company's long-term forecast for the AI market.

According to Das, the total addressable market for AI will consist of $300 billion in chips and systems, $150 billion in generative AI software, and $150 billion in omniverse enterprise software. These figures represent growth over the “long term,” Das said, though he did not specify a target date.
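The three reported segments do sum to the headline figure; a trivial check:

```python
# Nvidia's reported long-term TAM breakdown, in billions of dollars.
segments = {
    "chips and systems": 300,
    "generative AI software": 150,
    "omniverse enterprise software": 150,
}
total = sum(segments.values())
print(f"Total addressable market: ${total} billion")  # $600 billion
```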


According to Nvidia, that $600 billion figure is tied to a major bet on accelerated computing.

Experts alone can’t handle AI — social scientists explain why the public needs a seat at the table

Are democratic societies ready for a future in which AI algorithmically assigns limited supplies of respirators or hospital beds during pandemics? Or one in which AI fuels an arms race between disinformation creation and detection? Or sways court decisions with amicus briefs written to mimic the rhetorical and argumentative styles of Supreme Court justices?

Decades of research show that most democratic societies struggle to hold nuanced debates about new technologies. These discussions need to be informed not only by the best available science but also by the numerous ethical, regulatory and social considerations of their use. Difficult dilemmas posed by artificial intelligence are already emerging.


Even AI experts are uneasy about how unprepared societies are for moving forward with the technology in a responsible fashion. We study the public and political aspects of emerging science. In 2022, our research group at the University of Wisconsin-Madison interviewed almost 2,200 researchers who had published on the topic of AI. Nine in 10 (90.3%) predicted that there will be unintended consequences of AI applications, and three in four (75.9%) did not think that society is prepared for the potential effects of AI applications.

Who gets a say on AI?

Industry leaders, policymakers and academics have been slow to adjust to the rapid onset of powerful AI technologies. In 2017, researchers and scholars met in Pacific Grove, California, for a small expert-only meeting to outline principles for future AI research. Senator Chuck Schumer plans to hold the first of a series of AI Insight Forums on Sept. 13, 2023, to help Beltway policymakers think through AI risks with tech leaders like Meta’s Mark Zuckerberg and X’s Elon Musk.

Ex-Google executive fears AI will be used to create ‘more lethal pandemics’

A former Google executive who helped pioneer the company’s foray into artificial intelligence fears the technology will be used to create “more lethal pandemics.”

Mustafa Suleyman, co-founder and former head of applied AI at Google’s DeepMind, said the use of artificial intelligence will enable humans to access information with potentially deadly consequences.

“The darkest scenario is that people will experiment with pathogens, engineered synthetic pathogens that might end up accidentally or intentionally being more transmissible,” Suleyman said on The Diary of a CEO podcast on Monday.

China approves chatbots for public use in tech war with US

ChatGPT is not officially available in China.

China can’t buy US chips required for advanced artificial intelligence models, but that’s not stopping the country from churning out AI models. After receiving regulatory approval from the country’s internet watchdog, several Chinese tech companies launched their respective AI chatbots last week. This comes in response to OpenAI’s ChatGPT, which, since its launch, has prompted rival tech companies around the globe to launch their own chatbots.

According to a Reuters report, Baidu CEO Robin Li has claimed that over 70 large language models (LLMs) with over 1 billion parameters have been released in China.


US sanctions on semiconductors are choking China’s ability to advance in generative AI. But according to the latest reports, China has given the nod to over 70 LLMs, showcasing its growth in the AI space.

Researchers use AI to find new magnetic materials without critical elements

A team of scientists from Ames National Laboratory has developed a new machine learning model for discovering critical-element-free permanent magnet materials. The model predicts the Curie temperature of new material combinations. It is an important first step in using artificial intelligence to predict new permanent magnet materials. This model adds to the team’s recently developed capability for discovering thermodynamically stable rare earth materials. The work is published in Chemistry of Materials.

High performance magnets are essential for technologies such as electric vehicles and magnetic refrigeration. These magnets contain critical materials such as cobalt and rare earth elements like neodymium and dysprosium. These materials are in high demand but have limited availability. This situation is motivating researchers to find ways to design new magnetic materials with reduced critical materials.

Machine learning (ML) is a form of artificial intelligence driven by computer algorithms that use data and trial and error to continually improve their predictions. The team used experimental data on Curie temperatures and theoretical modeling to train the ML algorithm. Curie temperature is the maximum temperature at which a material maintains its magnetism.
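The workflow described above — learn a mapping from material features to Curie temperature, then query it for new compositions — can be sketched in a few lines. This is a hedged toy version, not the Ames Lab model: the features, the synthetic training data, and the linear fit are all my assumptions standing in for their experimental dataset and algorithm.

```python
import numpy as np

# Hedged sketch: fit a simple linear model mapping hypothetical
# composition features (e.g. fractions of Fe, Co, a rare-earth element)
# to Curie temperature, using synthetic placeholder data.
rng = np.random.default_rng(42)

X = rng.random((200, 3))  # 200 hypothetical materials, 3 features each
# Synthetic Curie temperatures (K) with an arbitrary assumed dependence.
y = 300 + 700 * X[:, 0] + 200 * X[:, 1] - 100 * X[:, 2] \
    + rng.normal(0, 20, size=200)

# Least-squares fit with an intercept column.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Query the model for a hypothetical new composition.
candidate = np.array([1.0, 0.7, 0.2, 0.1])  # intercept + features
print(f"Predicted Curie temperature: {candidate @ coef:.0f} K")
```

The real model is trained on experimental Curie temperatures and far richer descriptors, and likely uses a nonlinear learner; the point here is only the predict-then-screen pattern.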
