If you’re anything like us, dear readers, you have many thoughts about generative AI’s role in health comm. Generative AI (AI is short for artificial intelligence) uses source content, like databases or websites, to create new content. And while some folks think it has great potential to improve health comm, others have serious concerns.
At this point, we have lots of questions — which is why we’ve been eagerly watching for articles that might have some answers. Today, we’re sharing a few of them with you in the latest installment of our Health Comm Headlines series. We hope that they spark discussion with your fellow health communicators. And as always, we’d love to hear what you think — so reach out if you have comments!
- Distilling the Promises of AI in Global Health from the Hype (Johns Hopkins Center for Communication Programs)
This piece gets real about generative AI’s potential in public health and health comm — and its limitations. The article features a fun example of AI-generated content: a song aiming to get men in the Democratic Republic of the Congo more involved in family planning. But it also discusses serious issues, particularly the fact that AI relies on data and some of that data is biased. As the author puts it: “Generative AI may provide the starting point; however, human input is still needed to quality check and provide expertise into context. It is naïve, though, to think these tools won’t factor into future content development.” We couldn’t agree more.
- How AI Is Helping Doctors Communicate with Patients (Association of American Medical Colleges)
This article focuses on health care’s use of chatbots — computer programs that simulate conversations with people. It notes that chatbots interacting with patients have 2 main purposes: monitoring health conditions and answering questions. For example, it describes a chatbot service that reaches out to different types of patients, like people who just returned home after surgery and people with chronic conditions. The idea is that these services can help make sure people get the care they need — like by alerting a doctor to call them for a follow-up if their answers indicate a health concern. (While that’s definitely promising, it’s also important to keep in mind that chatbots’ track record is far from perfect.)
- A.I. May Someday Work Medical Miracles. For Now, It Helps Do Paperwork. (New York Times)
This piece makes the case that currently, one of generative AI’s biggest benefits in health care is that it can… reduce paperwork. While that doesn’t sound overly exciting, it’s actually a pretty big deal and could potentially go a long way toward improving patient-provider communication and reducing provider burnout. As the article explains, doctors spend a lot of time — during patient visits and after hours — taking notes and logging info in electronic health records. But AI can do this for them, which can free up doctors’ time and improve the quality of doctor visits. The article describes an AI tool that not only takes notes during visits but also sends patients a plain language summary immediately afterward. You can bet that piqued our interest!
- AI Might Be Listening During Your Next Health Appointment (Axios)
This article is also about AI tools that can take notes during doctor visits and provide summaries — but rather than highlighting these tools’ benefits, it focuses on a potential drawback: privacy concerns. For example, the article says advocates are concerned that these tools are being released with little oversight and without standards for notifying patients about their use. And consider this: “AI systems are trained on large amounts of real data, raising the question about whether patients’ data may be used for such training in the future.” These are valid concerns that we should all take seriously.