Even before COVID-19, 40% of physicians said they felt burned out. But the pandemic was a tipping point. Working in jerry-rigged PPE in overcrowded, understaffed ICUs, more than 3,600 U.S. healthcare workers died in the first year of the pandemic alone. After bearing witness to the lonely deaths of some 1 million patients, holding the phone as they shared their final minutes with family members via FaceTime, more doctors are deciding to retire early, exacerbating a looming shortage. A report last year by the Association of American Medical Colleges predicts a shortage of up to 124,000 physicians by 2034. That includes a gap of as many as 48,000 primary care physicians, who report higher levels of burnout than other specialties. And it’s not just doctors: In a January 2022 survey by Prosper Insights & Analytics, just 50% of all healthcare workers said they were “happy” at work.
Happiness won’t be bought overnight. Staffing gaps will take time to fill. But in the meantime, proponents say, artificial intelligence (AI) could be used to help ease the burden on maxed-out MDs. “We need to turn every physician into a super-physician,” says Farzad Soleimani, an assistant professor in emergency medicine at Baylor College of Medicine and a partner at 1984 Ventures, a San Francisco-based VC firm. “At the end of the day, what clinicians do is to learn to recognize patterns. That’s the power of AI.”
Of course, there are doubters. An April 2019 Medscape survey of 1,500 doctors across Europe, Latin America, and the U.S. found that a majority were anxious or uncomfortable with AI, with U.S. physicians expressing the most skepticism (49%). Relying on algorithms for patient care also raises ethical, clinical, and legal concerns. Developers may unknowingly introduce biases into AI algorithms or train them on flawed or incomplete datasets. Data used to train AI systems could be vulnerable to hacking. And by turning over aspects of decision-making to machines, physicians could lose their traditional autonomy and authority—and notions of liability will be tested should AI-guided recommendations result in patient harm.
Article written by Adam Bluestein.