HLOL Podcast Transcripts


Artificial Intelligence & Health Communication (HLOL #238)

Helen Osborne: Welcome to Health Literacy Out Loud. I’m Helen Osborne, President of Health Literacy Consulting, founder of Health Literacy Month and author of the book Health Literacy from A to Z. I also produce and host this podcast series, Health Literacy Out Loud.

I’ve been working in and focused on health literacy for many years. There have been a lot of changes along the way. To me, one of the biggest is something I’m just starting to hear about now: the introduction of artificial intelligence, or AI. I’m certainly grappling with how to best use it as a tool in health communication.

Claire Wardle knows a lot about this topic. I’m thrilled she agreed to be a guest on Health Literacy Out Loud.

Claire has a PhD in communication and is co-founder and co-director of the Information Futures Lab and Professor of the Practice at Brown University’s School of Public Health.

Claire is considered a leader in the field of misinformation, verification and user-generated content. Among her many accomplishments, Claire developed an organization-wide training program about eyewitness media for the British Broadcasting Corporation (BBC) and was a Fellow at the Shorenstein Center on Media, Politics and Public Policy at Harvard’s Kennedy School.

Welcome, Claire, to Health Literacy Out Loud.

Dr. Claire Wardle: Thanks for having me.

Helen Osborne: This AI, it’s here. I’m reading about it everywhere. Clue us all in. What is AI, and what do we need to know and do?

Dr. Claire Wardle: AI isn’t new. We’ve had artificial intelligence for a while, and I’ll get to what that is in a second.

Previously, artificial intelligence powered things like social networks. The fact that you only see certain information in your feeds, that’s powered by artificial intelligence. That was only available to big corporations, but in the last six months, tools have been released to the public that allow any of us to play with AI.

One of them that people might’ve heard of is ChatGPT, which you can go and ask a question. You can say, “In what year was the smallpox epidemic?” But that’s the kind of thing you should get from Google. It’s more interesting if you say, “Could you come up with a bedtime story that involves a rabbit, a frog and a pond in the middle of England?” and it will write you a story.

It has the ability to retrieve information, but it also has the ability to create completely new information.

ChatGPT is something that people might’ve heard about. There’s something called DALL-E, which allows you to say, “Create me an image of a teapot on a green sofa,” and it will create that image.

The thing about artificial intelligence is, essentially, it knows how to find patterns. It’s basically using supercomputers to look at huge amounts of data, either images or text, and it’s looking for patterns in that data.

It learns from those patterns, so it’s able to retrieve information, but it’s also able to create new information based on the patterns that it’s learned.

It’s read a lot of children’s stories about frogs, ponds and rabbits, so it creates a story based on the patterns that it’s learned.

Helen Osborne: I think that’s what really overwhelmed me. As you talk about it, that it’s been around for a while, that’s probably why if I’m looking for a pair of sneakers, all of a sudden those very same sneakers show up on my computer. It’s like, “Really?”

Dr. Claire Wardle: Exactly.

Helen Osborne: But now you’re saying it can not only give us back information we somehow entered into it, but it can create something new. I think that’s the part I find awesome-scary.

Dr. Claire Wardle: For example, news organizations even back in 2016 and 2017 were using AI to write stories for sports and finance.

Helen Osborne: They were?

Dr. Claire Wardle: Finance stories look the same. It’s basically reading the market. They said, “Why are we asking somebody to write that when we can ask a computer to write that?”

Artificial intelligence has been available for corporations. It just hasn’t been available to you and me, and that’s now changed.

Helen Osborne: Why did it change and what do we need to know about it? It seems all the rage.

Dr. Claire Wardle: Fundamentally, this is about money. What’s been happening in the background is that Microsoft, Google, OpenAI and others have been trying to create these tools for the public as a way to mass market and mass distribute this technology.

The reason that you’re hearing so much about it, Helen, is because, quite rightly, people are saying, “When it was in the hands of a small number of corporations, it didn’t mean that there weren’t going to be problems. But now what does it mean when everybody is using AI?”

Is it going to become increasingly difficult to say, “Claire, did you write that, or was that ChatGPT?” More importantly, for my students, did my students write that themselves or did they ask ChatGPT to write it?

In the last two months, we’ve seen founders of some of this technology writing open letters saying, “Actually, we were wrong. We should’ve been more careful about this rollout. We need to make sure we’ve got guardrails. We need regulations.”

That’s part of the conversation now. Because artificial intelligence is self-learning, the fear is that if it keeps learning and learning and learning, it could become so smart that it actually becomes a threat to civilization.

There are some people who say yes. There are other people who say, “No, you’re being hyperbolic. It’s not that dangerous.” But the truth is we don’t really know.

Helen Osborne: I find it scary-awesome.

One thing you said is they’re doing it for the financial incentive. I’ve seen two tools I’ve practiced on. One right now is free, though they have a paid model. Another one is a monthly subscription, but that one draws from a narrower database.

Is that where this is going, that this will all be monetized? Right now, it’s available to everybody like the internet used to be.

Dr. Claire Wardle: You’re absolutely right. Even with ChatGPT, the free version runs on GPT-3.5. There’s GPT-4, which you have to pay a monthly subscription for.

But going back to those businesses, Microsoft wants you to go and use Bing as the search engine rather than Google. These existing companies are also now using artificial intelligence to get smarter and more creative.

You probably talk to a lot of health communicators. They might have used the platform Canva, which is a way that you can design nice posters, pamphlets or Instagram images.

Now if you go onto Canva, it will say, “Do you want to use our AI tool to do an even better version of that?” It will cost. By integrating AI into existing products, they’re saying, “We’re even smarter. We can do the job for you because AI will take away the creative burden. You don’t have to be creative. Our AI tools will be creative for you.”

Helen Osborne: I saw something like that just this morning. I was dealing with my newsletter, and now the company that does the newsletter said, “You want to use our AI tool?”

Dr. Claire Wardle: Exactly.

Helen Osborne: I’ve been shying away from this. I am looking on from the sidelines. It’s why I wanted to talk with you so much. I don’t know what I should or shouldn’t be doing, but I also want to talk with you specifically about it as a tool of health communication.

Our listeners to this podcast might be clinicians, public health folks or people in community organizations. They can be anywhere in the world. We all care about health literacy, plain language and communicating in ways patients and the public can understand.

What do you see for us now, those of us who are in the thick of health communication, and what do you want to tell us about what’s ahead?

Dr. Claire Wardle: There are some incredible ways that AI in particular can help. For example, I can put a body of text into ChatGPT and say, “Can you rewrite this at a Grade 5 level?” or, “Can you rewrite this in Spanish?”

It is a very good writer. There is no denying that, because it’s looked at billions of words and it’s become very smart about really effective language. That’s one tool that can be useful.

You just gave another great example. We all, if we’re writing newsletters, want people to open our newsletters. One of the AI tools would say, “Do you want to use our AI to help you with your headline writing?” Yes, please. Because it’s seen so many effective headlines, it might suggest a better way for me to communicate the new information on mpox that came out today.

There are ways that AI, I think, will really help in terms of health literacy. But we also need to be a little bit careful about that.
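To make that concrete for readers who script their workflows, here is a minimal sketch of the “rewrite this at a Grade 5 level” request Claire describes. It assumes the OpenAI Python client (openai 1.x) with an API key in the OPENAI_API_KEY environment variable; the model name and the helper function are our illustration, not anything named in the episode.

```python
# Minimal sketch: ask a chat model to rewrite health text in plain language.
# Assumptions: the OpenAI Python client (openai>=1.0) is installed and
# OPENAI_API_KEY is set; "gpt-4o" is a placeholder model name.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def rewrite_plain_language(text: str, grade_level: int = 5) -> str:
    """Ask the model for a plain-language rewrite at a target reading level."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whatever model your tool offers
        messages=[
            {"role": "system",
             "content": "You are a plain-language health editor."},
            {"role": "user",
             "content": f"Rewrite this at a Grade {grade_level} reading "
                        f"level, keeping the medical facts unchanged:\n\n{text}"},
        ],
    )
    return response.choices[0].message.content


draft = ("Hypertension is frequently asymptomatic, so periodic "
         "blood-pressure monitoring is advisable.")
print(rewrite_plain_language(draft))
```

Whatever comes back is a first pass, not a finished piece; as the conversation turns to next, a human editor still has to verify the content and judge the fit for the audience.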

Helen Osborne: You’re generating all these different questions. Many of us serve as plain language writers and editors. Is our work done? Will the robots take over?

Dr. Claire Wardle: There’s no denying that there are certain jobs that AI is going to take away. In fact, yesterday I heard from a voiceover artist who said, “Nobody wants my services anymore at the lowest level.” She said, “No computer is ever going to take away my creativity, but the people who want to pay small amounts for small jobs are no longer hiring me.”

If you think about all the jobs that you do in terms of health communication, there are some that really require your years of expertise and your deep knowledge of communities. A computer could never replace that.

But there’s some other kind of grunt work that you do every single day that computers will do more of.

I’m less worried about the fact that none of us will have jobs. We will shift, I think, into much more creative, much deeper and much more thoughtful work.

In the same way, when newspapers started using AI to write finance stories, it freed up those journalists to do more interesting investigations. It’s not like those people lost their jobs. They were freed up to do the kind of journalism that a computer can’t do.

That’s, I think, the shift over the next 50 years. We will see a lot of new jobs that we don’t have today, but we will see the end of some other jobs. But for most of us, I think we’ll see grunt-type work disappearing, and we’ll see an ability to do more creative, more interesting work.

Helen Osborne: I’d like to think that maybe the first pass of doing something in plain language might be taken care of by the computer. But we know our audience, we respect our audience and we understand the nuances of health information. I would hope that our role would be to massage that, to look for accuracy, to vet the sources.

Also, I want your opinion about health content. What I have heard is that these bots take any query you put into them and use it as their own.

Dr. Claire Wardle: They’re constantly learning. Exactly.

Helen Osborne: How do we vet that content? I’m not ready to give up my role as a plain language writer. I am just not there. Maybe you are, but I’m going to be very reluctant and skeptical about that. I think humans offer something that computers can’t.

But that’s one level. That’s how we use the words, but now we’re talking about the medical content. What I hear is these computers don’t always get it right.

Dr. Claire Wardle: I think about AI as a tool. Talking about plain language, when you use Word or Google Docs, I bet if you see a red squiggly line, you do a double check. You say, “Oh, goodness. I just used the wrong ‘there.’” Grammarly is another tool that says, “Actually, your grammar could be slightly improved.”

I’m sure, Helen, you’re a perfect writer, but there are already tools that we use to improve our writing. That’s what I think about ChatGPT or AI.

To your point about content, sometimes I’ll just double-check and say, “I think this, this and this. Have I missed anything?” Often I haven’t, but sometimes it says, “Oh, here’s a citation.”

Sometimes it gives me a citation, and I go look and that citation doesn’t exist. It’s what’s called a hallucination.

It’s the same way with Wikipedia. Wikipedia is a glorious, incredible tool, but when we go to Wikipedia, we still say, “Let me just double-check that.” We’ve been taught that Wikipedia is wonderful, but it’s not foolproof. Nothing is foolproof.

That’s how I see AI. It has huge possibilities, but nothing it spits out, for lack of a better word, should be taken as gospel. We should always verify it like we would any other source.
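One lightweight way to act on that “always verify” advice, when a chatbot hands you a citation with a DOI, is to check the DOI against the public Crossref REST API. This sketch is our illustration, not a method named in the episode, and a miss is not proof of fabrication, since some real works are registered elsewhere (for example, with DataCite).

```python
# Sketch: check whether a DOI from an AI-supplied citation actually exists.
# Uses the public Crossref REST API (no key needed); requires `requests`.
import requests


def doi_exists(doi: str) -> bool:
    """Return True if Crossref resolves this DOI to a registered work."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200


# A hallucinated citation often carries a plausible-looking but fake DOI.
print(doi_exists("10.1056/NEJMoa2034577"))   # True: a real NEJM article
print(doi_exists("10.9999/fake.doi.12345"))  # False: Crossref returns 404
```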

Helen Osborne: I’ve actually seen some programs that are specifically for medical writers, where they’re getting their content from published papers and from the government documents that are available. That gave me a dose more confidence. I have not started using them regularly. I am just beginning to learn about them.

Dr. Claire Wardle: I had a conversation with somebody today who said they were really trying to get some wrong answers from ChatGPT around medical information, and it was all correct.

I said, “That actually doesn’t surprise me, because in medicine we have scholarship that’s peer reviewed and checked. It’s been trained on high-quality data.”

I’m more worried about information spaces where it’s been trained on lower-quality data. If it’s just around cultural norms or news articles, that kind of subject is going to be less trustworthy than medicine, where we actually do have very strong data that the AI has been trained on.

Helen Osborne: What about if we’re making recommendations? Certainly patient education, sometimes it’s tailored to that population. Maybe it’s someone who’s had diabetes for a long time or newly diagnosed with something else. Would the computer be spitting out things for them to do?

And who is it protecting? What if we just took that information saying, “You’re newly diagnosed with this disease. Here are the top three things you need to do”? Is that our responsibility, or is that the computer saying that?

Dr. Claire Wardle: It’s the computer saying that, and so it’s the kind of thing, Helen, where if you did that, you’d say, “Actually, that’s pretty good. Those are the three things that I would recommend because I’ve seen that in the literature,” or it might not be.

Again, it depends on where it is drawing from, and it would depend on how settled the science is about something.

On something like diabetes, you might get better results than something like long COVID. Right now, we don’t have settled science around long COVID, so the computer doesn’t have good data to draw on.

It doesn’t have a consensus, so it’s more likely to be confused and it’s more likely to give information that we might not agree with, or we might say, “Hang on. You’re saying that as if it’s clear, and it’s not clear because we’re only a couple of years into long COVID.”

Helen Osborne: Is this cheating to use this? I was thinking about that even for my newsletter. When I saw you can now do it with AI, I thought, “I keep working on my nouns and verbs on this, and I’m doing my best.” Is it cheating if I have a computer do that next pass at it for me? Is it cheating if we do it with plain language or health content?

Dr. Claire Wardle: This is a great question. I would say if you wrote something, Helen, and then you put it through ChatGPT and it tightened things up slightly or it maybe shifted the grade level, I don’t think that’s cheating.

I think cheating comes from if you said, “Dear ChatGPT, please write me a 500-word newsletter around health literacy and long COVID,” and it spat it out, and you published that as your own work.

What’s so interesting for us as educators is that some people will say on their syllabus, “Please state the percentage of the paper that was written by ChatGPT.”

A bit like the citation, you might say, “Twenty-five percent of this paper was actually written by ChatGPT,” or, “I asked ChatGPT for my definitions.” We want the students to be transparent about how they’re using it. Otherwise, it’s a bit like plagiarism.

It’s not that you can’t cite other people. You just have to cite them. We talk about citing ChatGPT or citing whatever AI program you used.

Helen Osborne: Are there official ways to cite that now? I have not seen that.

Dr. Claire Wardle: No. We’re working it out. There are a lot of conversations on college campuses about this.

There is a cautionary tale, a terrible case of a mass shooting recently. The communications officer used ChatGPT to write the thoughts-and-prayers message afterward, and it was found out.

Helen Osborne: Ew.

Dr. Claire Wardle: That’s the thing. How do we feel about the use of ChatGPT in different contexts? If you’re putting out a newsletter and you start saying, “I don’t write this. This is just ChatGPT,” I’m going to lose trust in you.

If you are transparent with me and say, “I use ChatGPT to search for new sources of information I might not have seen before,” I’d say, “Thanks, Helen.” That’s going to strengthen trust.

I think we just have to think about it as a tool. We have to understand that it’s not perfect. I think we need to be transparent about that. I think over the next year or so, those norms around how we cite it will become clearer.

Helen Osborne: I’m writing down your tips here. It’s a tool like anything else. We’re just more aware of it now than we were before, because it’s right out there and it’s more available to the public than it ever was before.

We need to be transparent about what we’re using. It sounds like I don’t need to be quite as leery about this as I am. I’ve been really trying to avoid it. Just do it.

We have to be thinking about what our role will be in the future, and how to cite it.

For the audience who I described for this podcast, what else would you want them to know? We’re all dealing with health content of some sort.

Dr. Claire Wardle: We haven’t really talked about the dark sides of AI. It’s great to have a positive conversation about it, but there are fears about the ways in which ChatGPT could be weaponized.

I think everybody just has to keep one eye open to the fact that this can create imagery. I could say, “I want to see a picture of Helen Osborne behind bars,” and it will create that picture. I can circulate that and say, “Look, Helen is a terrible person.”

Unfortunately, because now we can create imagery out of nothing and we can create text out of nothing that looks very professional, there are . . .

Helen Osborne: We can create voices, too, right?

Dr. Claire Wardle: It can create voices. We have to be aware of that. I would say as people who are working in this space of education and literacy, we should also be talking to our communities about this, about the ways in which AI can potentially cause harm.

But it’s the same with the internet. We’ve been struggling with misinformation for the last decade. People are actually pretty savvy about understanding what’s true and false, but they need to understand what’s possible.

I think if people don’t know that it’s possible to create an image of you behind bars, they’d say, “It must be true because it’s a photo,” without knowing that the technology now exists.

I think we have a role to educate our communities around this technology.

Helen Osborne: I’m so glad I’m talking to you. I feel like I’m a few steps behind. Just today, I wanted to look up some medical thing, so I went to Dr. Google. I’ve gotten much more comfortable about that. Years ago, I would not be that comfortable. I know you can’t trust things on the internet, but I know how to vet that myself and go to the sites that I find credible.

It sounds as though that’s going to be one of our jobs ahead in health literacy and health communication. It will be educating ourselves and then educating our patients and the public about this new technology that is and will be a fact of life.

Dr. Claire Wardle: One hundred percent.

Helen Osborne: Ways to learn more. I wrote down a few sites that you mentioned. I’m going to add them to your Health Literacy Out Loud web page. Is there any example or reference that you want people absolutely to know right now?

Dr. Claire Wardle: I wish that there was a go-to resource. One thing I’ll say is if you see headlines about this, read them. Don’t have your head in the sand. This technology is moving at such speed that if you and I had this conversation in six months’ time, Helen, it might look very different again.

What I would say to people is just keep an eye on this, because it really is something that we are going to, as a society, need to evolve with. The only way we can do that is by learning about the benefits, but also the potential harms.

Helen Osborne: Oh, you’re wonderful. I think that you told me a while ago you actually had something in The New York Times about using this. I want to be able to see that and share that, if we can. We’ll have a link on your Health Literacy Out Loud web page. Can you just give us a little overview of what that is?

Dr. Claire Wardle: In 2019, The New York Times asked me to talk about deepfakes, which use artificial intelligence to create synthetic videos, and they turned me into Adele.

Helen Osborne: The singer Adele?

Dr. Claire Wardle: Yes, the singer. The first 30 seconds, you think you’re hearing from Adele, and then it morphs back into me and I say, “Actually, I’m Claire Wardle.”

It was a bit of a bucket list thing. Not only was I in The New York Times, but I also was there as Adele. Anyway, it’s a fun 2.5-minute video, and people might find it helpful.

Helen Osborne: I certainly can’t wait to see that. Claire, I learned so much from you. Thank you, thank you, thank you for teaching me and teaching all of us, and I think for making me a little more confident to try this a bit more while also appreciating the downsides. AI is just a tool. We’ve learned to manage our other tools.

Dr. Claire Wardle: Exactly.

Helen Osborne: Thank you for being a guest on Health Literacy Out Loud.

Dr. Claire Wardle: It was my pleasure.

Helen Osborne: As we just heard from Claire Wardle, it’s important to appreciate all the new tools that are coming our way, including artificial intelligence. But doing so is not always easy.

For help clearly communicating your health message, please take a look at my book, Health Literacy from A to Z. Feel free to also explore my website, www.HealthLiteracy.com, or contact me directly at helen@healthliteracy.com.

New Health Literacy Out Loud interviews come out the first of every month. Get them all for free by subscribing at www.HealthLiteracyOutLoud.com, or wherever you get your podcasts.

Please help spread the word about Health Literacy Out Loud. Together, let’s tell the whole world why health literacy matters.

Until next time, I’m Helen Osborne.

