
Study illustrates huge potential of human, artificial intelligence collaboration in medicine

AI assisting doctors.


Artificial Intelligence (AI) is increasingly being used in medicine to support human expertise. However, the potential of these applications and the risks inherent in the interaction between human and artificial intelligence have not yet been thoroughly researched. The fear is often expressed that in future, as soon as AI is of sufficient quality, human expertise will become dispensable and therefore fewer doctors will be needed. These fears are further fuelled by the popular portrayal of this as a “competition” between humans and AI. An international study led by MedUni Vienna has now illustrated the enormous potential of human/computer collaboration.

The international study led by Philipp Tschandl and Harald Kittler (Department of Dermatology, MedUni Vienna) and Christoph Rinner (CeMSIIS/Institute for Medical Information Management, MedUni Vienna) now debunks the idea of this alleged competition, highlighting instead the potential of combining human expertise with Artificial Intelligence. The study, published in Nature Medicine, examines the interaction between doctors and AI from various perspectives and in different scenarios of practical relevance. Although the authors restrict their observations to the diagnosis of skin cancers, they stress that the findings can also be extrapolated to other areas of medicine where Artificial Intelligence is used.

AI does not always improve diagnosis

In an experiment designed by the study authors, 302 examiners, including doctors, assessed dermoscopic images of benign and malignant skin changes, both with and without the support of Artificial Intelligence. The AI assessment was provided in three different variants. In the first case, the AI showed the examiner the probabilities of all possible diagnoses; in the second case, the probability of a malignant change; and in the third case, a selection of similar images with known diagnoses, similar to a Google image search. The main finding was that only in the first case did collaboration with AI improve the examiners’ diagnostic accuracy, but there the effect was significant: a 13% increase in correct diagnoses.
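To make the three support variants concrete, here is a minimal Python sketch (not the study’s code): a dummy probability vector stands in for a real dermoscopy classifier’s output, and the class list, malignancy grouping, and embedding-based retrieval are illustrative assumptions.

import numpy as np

CLASSES = ["nevus", "melanoma", "basal cell carcinoma", "benign keratosis"]
MALIGNANT = {"melanoma", "basal cell carcinoma"}   # assumed grouping, for illustration only
probs = np.array([0.55, 0.28, 0.10, 0.07])          # hypothetical classifier output for one image

# Variant 1: show the probabilities of all possible diagnoses
print(sorted(zip(CLASSES, probs.round(2).tolist()), key=lambda cp: -cp[1]))

# Variant 2: show a single probability that the lesion is malignant
print(sum(p for c, p in zip(CLASSES, probs) if c in MALIGNANT))

# Variant 3: retrieve similar images with known diagnoses
# (nearest neighbours in an assumed embedding space)
rng = np.random.default_rng(0)
gallery = rng.normal(size=(100, 16))   # embeddings of archived, already-diagnosed images
query = rng.normal(size=16)            # embedding of the image being assessed
print(np.argsort(np.linalg.norm(gallery - query, axis=1))[:5])

In the study, only the first form of presentation reliably improved diagnostic accuracy; the sketch merely illustrates what each kind of support would look like to an examiner.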

Japanese supercomputer is world’s fastest

The supercomputer Fugaku – jointly developed by RIKEN and Fujitsu, based on Arm technology – has taken first place on Top500, a ranking of the world’s fastest supercomputers.

It swept other rankings too, claiming the top spot on HPCG, a ranking of supercomputers running real-world applications; HPL-AI, which ranks supercomputers by their performance on artificial intelligence workloads; and Graph500, which ranks systems on data-intensive loads.

This is the first time in history that the same supercomputer has achieved number one on Top500, HPCG, and Graph500 simultaneously. The awards were announced today at ISC High Performance 2020 Digital, an international high-performance computing conference.

Restaurants Are in Need of a Helping Hand. Miso Robotics Is Offering Them One. Literally

https://vimeo.com/234073915

Both Flippy and ROAR are AI-enabled, allowing them to take in their surroundings and to learn and evolve over time. They know when to start cooking a well-done burger so that it is done at exactly the same time as a medium-rare burger from the same order, and they can learn how to optimize oil use to minimize waste, for instance.
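The timing logic behind that behaviour boils down to simple backward scheduling. Below is a minimal sketch with assumed cook times (not Miso Robotics’ actual software or numbers): the longest-cooking item starts immediately, and everything else is delayed so the whole order finishes together.

COOK_TIME_SEC = {"well done": 420, "medium rare": 240}   # hypothetical cook times

def start_delays(order):
    # Delay each item so that all of them come off the grill at the same moment.
    longest = max(COOK_TIME_SEC[item] for item in order)
    return {item: longest - COOK_TIME_SEC[item] for item in order}

print(start_delays(["well done", "medium rare"]))
# {'well done': 0, 'medium rare': 180} -> drop the medium-rare patty 3 minutes later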

In a pre-pandemic time of restaurant labor shortages, Flippy kept kitchen productivity high and costs low, a giant deal in an industry known for tiny margins. Introducing Flippy into a kitchen can increase profit margins by a whopping 300%, not to mention significantly reduce the stress managers feel when trying to fill shifts.

But even if restaurants have an easier time finding workers as places reopen, Flippy and ROAR aren’t gunning for people’s jobs. They’re designed to be collaborative robots, or cobots, the cost-effective machines created to work with humans, not against them.

Molecular robot swarms

Rapid progress has been made in recent years to build these tiny machines, thanks to supramolecular chemists, chemical and biomolecular engineers, and nanotechnologists, among others, working closely together. But one area that still needs improvement is controlling the movements of swarms of molecular robots, so they can perform multiple tasks simultaneously.

New Vertical Takeoff and Landing flying taxi has its roots in modern sci-fi films

Flying automobiles have long been a staple of science fiction’s optimistic visions of tomorrow, right up there with rocket jetpacks, holidays on the moon, and robot butlers. And who wouldn’t want to climb into a vehicle capable of rising up into the air above the clogged arteries of traffic experienced on most major boulevards, highways, and freeways?

Now a lofty new air taxi being built by the Israeli startup Urban Aeronautics hopes to cash in on those promises with its new Vertical Takeoff and Landing (VTOL) car, uniting real technology with Jetsons-like futuristic dreams previously seen mainly in films like Blade Runner, The Fifth Element, and Back to the Future, and most recently on TV in Season 3 of HBO’s Westworld.

CityHawk

Tesla’s controversial vision-based full self-driving approach is finally paying off

Like many things about Elon Musk, Tesla’s approach to achieving autonomous driving is polarizing. Bucking the map-based trend set by industry veterans such as Waymo, Tesla opted to dedicate its resources to a vision-based approach to full self-driving instead. This involves a lot of hard, tedious work on Tesla’s part, but today, there are indications that the company’s controversial strategy is finally paying off.

In a recent talk, Tesla AI Director Andrej Karpathy discussed the key differences between Waymo’s map-based approach and Tesla’s camera-based strategy. According to Karpathy, Waymo’s use of pre-mapped data and LiDAR makes scaling difficult, since each vehicle’s autonomous capabilities are effectively tied to a geofenced area. Tesla’s vision-based approach, which uses cameras and artificial intelligence, is not tied in this way, meaning that Autopilot and FSD improvements can be rolled out to the entire fleet and will function anywhere.

This rather ambitious plan for Tesla’s full self-driving system has drawn a lot of skepticism in the past, with critics arguing that a map-based approach is the way to go. Tesla, in response, dug in its heels and doubled down on its vision-based initiative. One consequence is that Autopilot improvements and the rollout of FSD features have taken a long time, particularly since training the neural networks that recognize objects and driving behavior on the road requires massive amounts of real-world data.