
Generative AI is coming to healthcare, but not everyone is excited

Generative AI, which can create and analyze images, text, audio, video and more, is increasingly making its way into healthcare, driven by big tech companies and startups alike.

Google Cloud, Google’s cloud services and products arm, is working with Highmark Health, a nonprofit healthcare company based in Pittsburgh, to develop generative AI tools designed to personalize the patient experience. Amazon’s AWS division says it is working with unnamed customers on ways to use generative AI to analyze medical databases for “social determinants of health.” And Microsoft Azure is helping the nonprofit healthcare network Providence build a generative AI system to automatically classify messages that patients send to their care providers.

Notable generative AI startups in healthcare include Ambience Healthcare, which is developing generative AI applications for clinicians; Nabla, an ambient AI assistant for practitioners; and Abridge, which creates analysis tools for medical documents.

The widespread enthusiasm for generative AI is reflected in the investments flowing into generative AI efforts targeting healthcare. Collectively, generative AI healthcare startups have raised tens of millions of dollars in venture capital to date, and the vast majority of health investors say generative AI has significantly influenced their investment strategies.

But professionals and patients are divided on whether healthcare-focused generative AI is ready for prime time.

Generative AI may not be what people want

In a recent Deloitte survey, only about half (53%) of U.S. consumers said they believed generative AI could improve healthcare, for example by making it more accessible or shortening appointment wait times. Fewer than half said they expected generative AI to make healthcare more affordable.

Andrew Borkowski, chief artificial intelligence officer at the VA Sunshine Healthcare Network, the largest health system in the U.S. Department of Veterans Affairs, believes this skepticism is not unfounded. Borkowski warned that the deployment of generative AI may be premature due to its “significant” limitations and concerns about its efficacy.

“A key problem with generative AI is that it cannot handle complex medical queries or emergencies,” he told TechCrunch. “Its limited knowledge base – i.e. a lack of up-to-date clinical information – and lack of human expertise make it unsuitable for providing comprehensive medical advice or treatment recommendations.”

Several studies lend credence to those claims.

In a paper published in JAMA Pediatrics, OpenAI’s generative AI chatbot ChatGPT, which some healthcare organizations have piloted for limited use cases, was found to err 83% of the time when diagnosing pediatric diseases. And in testing OpenAI’s GPT-4 as a diagnostic assistant, physicians at Beth Israel Deaconess Medical Center in Boston observed that the model ranked the wrong diagnosis as its top answer nearly two-thirds of the time.

Today’s generative AI also struggles with medical administrative tasks, which are an integral part of clinicians’ daily workflows. On MedAlign, a benchmark for evaluating how well generative AI can do things like summarize patient health records and search across notes, GPT-4 failed 35% of the time.

OpenAI and many other generative AI vendors warn against relying on their models for medical advice. But Borkowski and others say those warnings don’t go far enough. “Relying solely on generative AI for healthcare can lead to misdiagnosis, inappropriate treatment, and even life-threatening conditions,” Borkowski said.

Jan Egger, who leads AI-guided therapies at the University of Duisburg-Essen’s Institute for AI in Medicine, which studies the applications of emerging technology for patient care, shares Borkowski’s concerns. He believes that, at present, the only safe way to use generative AI in healthcare is under the close, watchful eye of a physician.

“The results could be completely wrong, and it’s getting harder and harder to maintain awareness of that,” Egger said. “Of course, generative AI can be used to pre-write discharge letters. But it is the doctor’s responsibility to conduct the examination and make the final decision.”

Generative AI can perpetuate stereotypes

One particularly harmful way generative AI in healthcare can go wrong is by perpetuating stereotypes.

In a 2023 study out of Stanford University School of Medicine, a team of researchers tested ChatGPT and other generative AI-powered chatbots on questions about kidney function, lung capacity and skin thickness. The co-authors found that not only were ChatGPT’s answers frequently wrong, but the answers also included several that reinforced long-held untrue beliefs about biological differences between Black and white people, errors that have been known to lead medical providers to misdiagnose health problems.

Ironically, the patients most likely to be discriminated against by generative AI in healthcare are also those most likely to use it.

Deloitte’s survey shows that people who lack healthcare coverage (disproportionately people of color, according to a KFF study) are more likely to try generative AI for things like finding a doctor or getting mental health support. If the AI’s recommendations are tainted by bias, it could exacerbate inequalities in treatment.

However, some experts believe that generative AI is making progress in this regard.

In a Microsoft study released in late 2023, researchers said they achieved 90.2% accuracy with GPT-4 on four challenging medical benchmarks. Vanilla GPT-4 could not reach that score on its own; rather, the researchers said that through prompt engineering (designing prompts to steer GPT-4 toward particular outputs), they were able to boost the model’s score by as many as 16.2 percentage points. (It’s worth noting that Microsoft is a major investor in OpenAI.)
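For illustration, the general pattern behind that kind of prompt engineering can be sketched in a few lines of Python: few-shot worked examples, an explicit step-by-step instruction, and majority voting over repeated runs. This is a minimal sketch, not Microsoft’s actual pipeline; the `query_model` stub is a hypothetical stand-in for a real chat-model API call.

```python
import random
from collections import Counter

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a chat-model API call.

    Returns a single-letter answer; swap in a real client call in practice."""
    return random.choice("ABCD")  # placeholder so the sketch runs end to end

# One illustrative worked example. Real pipelines use several, often chosen
# dynamically so the examples resemble the question being asked.
FEW_SHOT = (
    "Q: Which organ filters blood to produce urine?\n"
    "A) Liver  B) Kidney  C) Spleen  D) Pancreas\n"
    "Reasoning: Filtration happens in the nephrons of the kidney.\n"
    "Answer: B\n\n"
)

def build_prompt(question: str, options: str) -> str:
    """Assemble a few-shot, chain-of-thought prompt for one exam question."""
    return (
        FEW_SHOT
        + f"Q: {question}\n{options}\n"
        + "Think step by step, then reply with a single letter.\nAnswer:"
    )

def majority_answer(question: str, options: str, runs: int = 5) -> str:
    """Query several times and majority-vote to smooth out one-off errors."""
    answers = [query_model(build_prompt(question, options)) for _ in range(runs)]
    return Counter(answers).most_common(1)[0][0]

if __name__ == "__main__":
    print(majority_answer(
        "Which vitamin deficiency causes scurvy?",
        "A) Vitamin A  B) Vitamin B12  C) Vitamin C  D) Vitamin D",
    ))
```

Techniques in this family trade extra model calls for more stable answers, which helps explain how changes to the prompt alone, with no retraining, can move a benchmark score by double digits.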

Beyond chatbots

But querying a chatbot isn’t the only thing generative AI is good for. Some researchers say medical imaging could benefit greatly from its power.

In July, a group of scientists unveiled Complementarity-driven Deferral to Clinical Workflow (CoDoC), a system described in a study published in Nature. CoDoC is designed to figure out when medical imaging specialists should rely on AI for a diagnosis versus traditional techniques. According to the co-authors, it outperformed specialists while reducing clinical workflows by 66%.
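The core idea, deciding case by case whether to trust the model or hand a scan to a human reader, can be illustrated with a toy deferral rule. The Python sketch below uses a fixed confidence threshold, which is an assumption for illustration only; CoDoC itself learns from track records when the model or the clinician is more likely to be right, rather than using a hand-set cutoff.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    source: str   # "model" or "clinician"
    finding: int  # 1 = positive, 0 = negative, -1 = pending human read

def triage(model_prob: float, threshold: float = 0.9) -> Decision:
    """Accept the model's call only when it is confident; otherwise defer.

    `model_prob` is the model's predicted probability of a positive finding.
    The fixed threshold is illustrative; a learned deferral system would set
    the decision rule from the model's and the readers' observed error rates."""
    if model_prob >= threshold:
        return Decision("model", 1)          # confidently positive
    if model_prob <= 1.0 - threshold:
        return Decision("model", 0)          # confidently negative
    return Decision("clinician", -1)         # uncertain: route to a human

if __name__ == "__main__":
    for p in (0.97, 0.55, 0.03):
        print(f"p={p:.2f} -> handled by {triage(p).source}")
```

The appeal of this design is that the AI never has to be right everywhere: it only has to know which cases it handles well and pass the rest along, which is how a combined system can beat both the model and the specialists working alone.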

In November, a Chinese research team demoed Panda, an AI model for detecting potential pancreatic lesions in X-rays. One study showed Panda to be highly accurate in classifying these lesions, which are often discovered too late for surgical intervention.

In fact, Arun Thirunavukarasu, a clinical researcher at the University of Oxford, said there is “nothing unique” about generative AI that precludes its deployment in healthcare settings.

“More mundane applications of generative AI techniques are feasible in the short and medium term, including text correction, automated documentation of notes and correspondence, and improved search capabilities to optimize electronic medical records,” he said. “There is no reason why generative AI technologies, if effective, cannot be deployed in these roles right now.”

“Rigorous science”

But while generative AI shows promise in specific, narrow areas of medicine, experts like Borkowski point to the technical and compliance hurdles that must be overcome before generative AI can be useful and trusted as an all-around assistive healthcare tool.

“There are significant privacy and security concerns with using generative AI in healthcare,” Borkowski said. “The sensitivity of medical data and the potential for its misuse or unauthorized access pose serious risks to patient confidentiality and trust in the healthcare system. Additionally, the regulatory and legal landscape surrounding the use of generative AI in healthcare is still evolving, with questions regarding liability, data protection and the practice of medicine by non-human entities still needing to be resolved.”

Even Thirunavukarasu, who is optimistic about generative AI in healthcare, said there needs to be “rigorous science” behind patient-facing tools.

“Particularly in the absence of direct clinician supervision, pragmatic randomized controlled trials demonstrating clinical benefit should be conducted to justify the deployment of patient-facing generative AI,” he said. “Appropriate governance going forward will be critical for capturing any unintended harms following deployment at scale.”

Recently, the World Health Organization released guidelines that advocate for this kind of rigorous science and human oversight of generative AI in healthcare, as well as for auditing, transparency and impact assessments of such AI by independent third parties. The WHO states in its guidelines that the goal is to encourage a diverse cohort of people to participate in the development of generative AI for healthcare and to have opportunities to voice concerns and provide input throughout the process.

“Until these issues are adequately addressed and appropriate safeguards are put in place, the widespread implementation of generative AI in healthcare could … result in potential harm to patients and the healthcare industry as a whole,” Borkowski said.


