The development of increasingly sophisticated sensors can advance various technologies, including robots, security systems, virtual reality (VR) equipment and prosthetics. Multimodal tactile sensors, which can pick up different types of touch-related information (e.g., pressure, texture and material type), are among the most promising for applications that benefit from artificially replicating the human sense of touch.

When exploring their surroundings, communicating with others and expressing themselves, humans perform a wide range of body motions. The ability to realistically replicate these motions and apply them to human and humanoid characters could be highly valuable for developing video games, creating animations and content viewable on virtual reality (VR) headsets, and producing training videos for professionals.

Researchers at Peking University’s Institute for Artificial Intelligence (AI) and the State Key Laboratory of General AI recently introduced new models that could simplify the generation of realistic motions for human characters or avatars. The work is published on the arXiv preprint server.

Their proposed approach for the generation of human motions, outlined in a paper presented at CVPR 2025, relies on a data augmentation technique called MotionCutMix and a diffusion model called MotionReFit.
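As a rough illustration of the MotionCutMix idea, the sketch below composes a new training motion by splicing a randomly chosen block of joints from one clip into another, in the spirit of CutMix for images. The array shapes, the contiguous-joint grouping, and the function name are illustrative assumptions, not the authors’ implementation.

```python
import numpy as np

def motion_cutmix(motion_a, motion_b, num_joints, rng=None):
    """Hypothetical CutMix-style augmentation for motion sequences.

    motion_a, motion_b: arrays of shape (frames, joints, channels),
    assumed to be time-aligned clips of equal length.
    Returns a composite motion in which a random block of joints
    (standing in for a body part) comes from motion_b and the rest
    from motion_a, plus the mask of swapped joints.
    """
    rng = np.random.default_rng() if rng is None else rng
    mask = np.zeros(num_joints, dtype=bool)
    # pick a random contiguous run of joint indices to swap; a real
    # skeleton would use semantically meaningful groups (arm, leg, torso)
    start = rng.integers(0, num_joints)
    length = rng.integers(1, num_joints - start + 1)
    mask[start:start + length] = True

    mixed = motion_a.copy()
    mixed[:, mask, :] = motion_b[:, mask, :]
    return mixed, mask

# usage: two 120-frame clips, 22 joints, 3 channels per joint
a = np.random.randn(120, 22, 3)
b = np.random.randn(120, 22, 3)
mixed, swapped = motion_cutmix(a, b, num_joints=22)
print("joints taken from the second clip:", np.nonzero(swapped)[0])
```

Composites of this kind can multiply the effective size of a motion-editing training set without new motion capture, which is the role a data augmentation step such as MotionCutMix plays ahead of a diffusion model such as MotionReFit.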

The use of virtual reality haptic simulators can enhance skill acquisition and reduce stress among dental students during preclinical endodontic training, according to a new study published in the International Endodontic Journal. The study was a collaboration between the University of Eastern Finland, the University of Health Sciences and Ondokuz Mayıs University in Turkey, and Grande Rio University in Brazil.

The study aimed to evaluate the influence of virtual reality (VR) haptic simulators on skill acquisition and stress reduction in endodontic preclinical education of dental students.

During preclinical training, dental students develop manual dexterity, psychomotor skills and confidence essential in clinical practice. VR and haptic technology are increasingly used alongside conventional methods, enabling more repetition and standardised feedback, among other things.

A new approach to streaming technology may significantly improve how users experience virtual reality and augmented reality environments, according to a study from NYU Tandon School of Engineering.

The research—presented in a paper at the 16th ACM Multimedia Systems Conference (ACM MMSys 2025) on April 1, 2025—describes a method for directly predicting visible content in immersive 3D environments, potentially reducing bandwidth requirements by up to 7-fold while maintaining visual quality.
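The paper itself describes directly predicting which content will be visible; as a purely geometric stand-in for what such a prediction enables, the sketch below culls tiled point-cloud cells against a predicted camera pose so that only potentially visible cells are fetched at high quality. The cone test, cell layout, parameter values, and function name are assumptions for illustration, not the NYU Tandon method.

```python
import numpy as np

def select_visible_cells(cell_centers, cam_pos, view_dir, fov_deg=90.0, max_dist=10.0):
    """Cull point-cloud tiles against a simple view cone.

    cell_centers: (N, 3) centroids of spatial cells in world coordinates.
    cam_pos, view_dir: predicted camera position and view direction.
    Returns indices of cells whose centroid lies inside the cone and
    within max_dist -- the only cells worth streaming at full quality.
    """
    offsets = cell_centers - cam_pos
    dists = np.linalg.norm(offsets, axis=1)
    dirs = offsets / np.maximum(dists[:, None], 1e-9)
    cos_angle = dirs @ (view_dir / np.linalg.norm(view_dir))
    visible = (cos_angle >= np.cos(np.deg2rad(fov_deg / 2.0))) & (dists <= max_dist)
    return np.nonzero(visible)[0]

# usage: 1,000 random cells, viewer at the origin looking along +z
cells = np.random.uniform(-5, 5, size=(1000, 3))
idx = select_visible_cells(cells, cam_pos=np.zeros(3), view_dir=np.array([0.0, 0.0, 1.0]))
print(f"streaming {len(idx)} of {len(cells)} cells")
```

Skipping content the viewer cannot see is what makes bandwidth reductions of the reported magnitude plausible while preserving the quality of what is actually rendered.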

The technology is being applied in an ongoing NYU Tandon project to bring point cloud video to dance education, making 3D dance instruction streamable on standard devices with lower bandwidth requirements.

Researchers have succeeded, for the first time, in displaying three-dimensional graphics in mid-air that can be manipulated with the hands. The team includes Dr. Elodie Bouzbib from the Public University of Navarra (UPNA), together with Iosune Sarasate, Unai Fernández, Manuel López-Amo, Iván Fernández, Iñigo Ezcurdia and Asier Marzo (the latter two members of the Institute of Smart Cities).

“What we see in films and call holograms are typically volumetric displays,” says Bouzbib, the first author of the work. “These are graphics that appear in mid-air and can be viewed from various angles without the need for wearing virtual reality glasses. They are called true-3D graphics.

“They are particularly interesting as they allow for the ‘come-and-interact’ paradigm, meaning that the users simply approach a device and start using it.”

In the paper accompanying the launch of R1, DeepSeek explained how it took advantage of techniques such as synthetic data generation, distillation, and machine-driven reinforcement learning to produce a model that exceeded the current state-of-the-art. Each of these approaches can be explained another way as harnessing the capabilities of an existing AI model to assist in the training of a more advanced version.
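Of these techniques, distillation is the most direct example of one model training another: a “student” is optimized to match the softened output distribution of a “teacher.” The snippet below is a generic sketch of that objective (a temperature-scaled KL divergence over logits), not DeepSeek’s specific recipe; the temperature, shapes, and function names are illustrative.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # numerically stable softmax over the last axis
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Generic knowledge-distillation objective.

    Both inputs have shape (batch, vocab). The student is penalized by
    KL(teacher || student) computed on temperature-softened distributions,
    scaled by T^2 as in standard distillation formulations.
    """
    p_teacher = softmax(teacher_logits, temperature)
    log_p_student = np.log(softmax(student_logits, temperature) + 1e-12)
    kl = (p_teacher * (np.log(p_teacher + 1e-12) - log_p_student)).sum(axis=-1)
    return (temperature ** 2) * kl.mean()

# usage: toy batch of 4 examples over a 10-token vocabulary
teacher = np.random.randn(4, 10)
student = np.random.randn(4, 10)
print("distillation loss:", distillation_loss(student, teacher))
```

Synthetic data generation and machine-driven reinforcement learning extend the same principle: an existing model produces the training signal from which a newer model learns.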

DeepSeek is far from alone in using these AI techniques to advance AI. Mark Zuckerberg predicts that the mid-level engineers at Meta may soon be replaced by AI counterparts, and that Llama 3 (his company’s LLM) “helps us experiment and iterate faster, building capabilities we want to refine and expand in Llama 4.” Nvidia CEO Jensen Huang has spoken at length about creating virtual environments in which AI systems supervise the training of robotic systems: “We can create multiple different multiverses, allowing robots to learn in parallel, possibly learning in 100,000 different ways at the same time.”

This isn’t quite yet the singularity, when intelligent machines autonomously self-replicate, but it is something new and potentially profound. Even amidst such dizzying progress in AI models, though, it’s not uncommon to hear some observers talk about a potential slowing of the “scaling laws”—the observed principle that AI models improve in performance in direct relationship to the quantity of data, power, and compute applied to them. The release from DeepSeek, and several subsequent announcements from other companies, suggests that reports of the scaling laws’ demise may be greatly exaggerated. In fact, innovations in AI development are opening entirely new vectors for scaling—all enabled by AI itself. Progress isn’t slowing down; it’s speeding up—thanks to AI.

From virtual reality to rehabilitation and communication, haptic technology has revolutionized the way humans interact with the digital world. While early haptic devices focused on single-sensory cues like vibration-based notifications, modern advancements have paved the way for multisensory haptic devices that integrate various forms of touch-based feedback, including vibration, skin stretch, pressure, and temperature.

Recently, a team of experts, including Rice University’s Marcia O’Malley and Daniel Preston, graduate student Joshua Fleck, alumni Zane Zook ‘23 and Janelle Clark ‘22, and other collaborators, published an in-depth review in Nature Reviews Bioengineering analyzing the current state of wearable multisensory haptic technology, outlining its challenges, advancements, and real-world applications.

Haptic devices, which enable communication through touch, have evolved significantly since their introduction in the 1960s. Initially, they relied on rigid, grounded mechanisms acting as user interfaces, generating force-based feedback from virtual environments.

Accurate and robust 3D imaging of specular, or mirror-like, surfaces is crucial in fields such as industrial inspection, medical imaging, virtual reality, and cultural heritage preservation. Yet anyone who has visited a house of mirrors at an amusement park knows how difficult it is to judge the shape and distance of reflective objects.

This challenge also persists in science and engineering, where the accurate 3D imaging of specular surfaces has long been a focus in both optical metrology and computer vision research. While specialized techniques exist, their inherent limitations often confine them to narrow, domain-specific applications, preventing broader interdisciplinary use.

In a study published in the journal Optica, University of Arizona researchers from the Computational 3D Imaging and Measurement (3DIM) Lab at the Wyant College of Optical Sciences present a novel approach that significantly advances the 3D imaging of specular surfaces.

Imagine navigating virtual reality with contact lenses or operating your smartphone underwater: this and more could soon be a reality thanks to innovative e-skins.

A research team led by the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) has developed an electronic skin that detects and precisely tracks magnetic fields with a single global sensor. This artificial skin is not only light, transparent and permeable, but also mimics the interactions of real skin and the brain, as the team reports in the journal Nature Communications.

Originally developed for robotics, e-skins imitate the properties of real skin. They can give robots a sense of touch or replace lost senses in humans. Some can even detect chemical substances or magnetic fields. But the technology also has its limits: highly functional e-skins are often impractical because they rely on extensive electronics and large batteries.

A team of engineers led by Northwestern University has developed a new wearable device that stimulates the skin to deliver a range of complex sensations, such as vibrations, pressure, and twisting. This thin, flexible device gently adheres to the skin, offering more realistic and immersive sensory experiences. While it is well-suited for gaming and virtual reality (VR), the researchers also see potential applications in healthcare. For instance, the device could help individuals with visual impairments “feel” their surroundings or provide feedback to those with prosthetic limbs.