Here at We ❤️ Health Literacy HQ, we’ve been talking a lot about generative AI (short for generative artificial intelligence). By now, you’ve surely heard about — and maybe even tried — programs like ChatGPT or Bard. Generative AI uses source content, like databases or websites, to create new content based on a prompt or question you “ask” it. The results can be quite entertaining: Try asking ChatGPT to write a song about project management, for example. It’ll raise the roof at your next company party. (We speak from experience.)
Aside from being a fun diversion, generative AI has the potential to change the way we work and communicate with each other. For some, that’s great news. Others aren’t so sure: “Never trust anything that can think for itself if you can’t see where it keeps its brain,” as Arthur Weasley once said, probably in anticipation of 21st-century AI.
It’s impossible to predict what impact generative AI will have over the coming years on things like, you know, our jobs as health writers and communicators. What we do know is that there are a few things to keep in mind if you’re going to use generative AI in health comm in 2023:
- Don’t use generative AI as a replacement for audience research, plain language work, or translation. We can’t emphasize this enough: There’s no substitute for involving actual humans when it comes to the nuances of creating clear, trustworthy, and culturally appropriate health information.
- Be transparent. If you’re using content created by a generative AI program, say so — even if you put your own spin on it.
- Check for accuracy. Remember that ChatGPT & Co. are only as good as the sources they draw from — which are vast in number and vary in quality. So as with any information you find online, check your facts by consulting other credible sources.
- Watch out for bias. The source content for generative AI programs was created by humans, so it carries some of our biases, stereotypes, and misconceptions. Check your content carefully for non-inclusive language, harmful stereotypes, and the like.
Think of generative AI like that classmate in high school who let everyone copy their homework. Sure, it was easy. Especially if you spent the night before playing Sonic the Hedgehog instead of writing that paper on The Great Gatsby. But it also meant that you copied some of their mistakes — and besides, the work never quite felt like yours.
It’ll certainly be interesting to see what happens here in the long term — and how (or whether) regulation will play a role. If you’re anything like us, you’ve been thinking a lot about the types of policies that might have prevented some of the less savory effects of social media back in the day. Of course, we all know what they say about hindsight…
If you have thoughts about generative AI and the future of health comm, dear readers, we want to hear them! As always, you can respond to this email or find us on LinkedIn or Twitter.
The bottom line: Generative AI, like ChatGPT or Bard, can make some parts of our work easier. But remember that it’s just another tool for our communicator toolbox.
Tweet about it: Read @CommunicateHlth’s take on generative AI and its impact on our work as #HealthComm professionals: https://rb.gy/ijcwz