Health Literacy

How Do You Know? Measuring the Effectiveness of Health-Literacy Interventions

Article from the Boston Globe’s On Call Magazine, January/February 2005

By Helen Osborne, M.Ed., OTR/L
President of Health Literacy Consulting

How do you know your efforts to improve health literacy at your institution are working? Mary Ann Abrams, MD, MPH, and Gail Nielsen, RTR, BSHCA, SAHRA, have some answers. Abrams is a health management consultant, and Nielsen is a patient-safety administrator and a fellow of the Institute for Healthcare Improvement. Both are committed, ardent health-literacy advocates who lead organization-wide initiatives that affect 11 hospitals as well as hundreds of physicians and clinics within the Iowa Health System.

The heart of health literacy, says Nielsen, is knowing the patient’s ability to understand the provider and then actively intervening to make sure the information patients need is getting through. “When patients do not understand what their health providers are communicating,” Abrams says, “it can affect healthcare costs, effectiveness, and patient safety.” Consequently, it is not enough to present information in a simple format. If that information is still not understood, the effect could be devastating. Evaluating the effectiveness of your communication efforts is essential to providing good healthcare.

Ask, Count, Call

One intervention that Nielsen and other health-literacy experts are taking a close look at is a technique called “teach back.” With this method, patients are asked to repeat (or teach back) in their own words the medical instructions their providers are giving. Researchers hope to learn whether this technique actually changes patients’ knowledge and, if it does, whether the new level of understanding affects health outcomes.

One teach-back study is looking at patients with congestive heart failure (CHF) and measuring their knowledge about how to weigh themselves. (Weight gain with CHF can indicate an impending medical crisis.) In this study, researchers are measuring health knowledge and outcomes by using three specific techniques — asking, counting, and calling.

  • First, they ask patients open-ended questions about problems they have weighing themselves. By not limiting patients’ answers to “yes” or “no,” researchers are finding out what concerns patients have on a day-to-day basis as they try to follow the instructions they’ve been given. For instance, a patient might say, “I don’t have a scale,” or “I can’t see the scale beyond my belly.”
  • Researchers also count the number of patients who have unplanned hospital admissions or unexpected emergency-department visits due to CHF. One assumption is that this outcome may be due in part to patients not knowing how to correctly weigh themselves.
  • Finally, researchers call patients 48 hours after discharge to ask about CHF symptoms. This is intended as a way to measure knowledge gaps between what patients were taught at discharge and what they correctly recall two days later. In addition, researchers make a follow-up call one week after the initial survey.

Using what they learn from these three steps, providers can see what knowledge needs were not met and what concerns need to be addressed in future communications.


The Iowa Health System uses commercially produced survey tools such as those available from Press Ganey. Patients are asked to rate aspects of their healthcare experience on a scale of 1 to 5. While the survey is not specific to health literacy, Abrams feels that five of these questions are particularly useful when examining health communication. Answers to questions that focus on topics such as the following can provide a wealth of information about how well patients understand what they’re being told:

  1. Extent to which nurses kept patients informed about care
  2. Extent to which doctors kept patients informed about care
  3. Explanations about what would happen in tests and treatment
  4. Instructions about medications received
  5. Instructions for caring for themselves at home

Sometimes standardized surveys are written or designed in ways that are too difficult for many people to understand. To make surveys or questionnaires easier for everyone — including those with limited literacy or language skills — Abrams and Nielsen suggest incorporating the principles of plain language. These include using common one- and two-syllable words, short sentences, sufficiently large font size, and adequate white space (or nonprint areas). While it is unknown how these types of changes affect response rates, Nielsen notes that staff are relieved and delighted to see that no important content is lost when questionnaires are simplified.

Patients’ Stories

Personal stories can balance hard data, says Abrams. As an example, she talks about a patient who was instructed to eat only fresh fruits and vegetables. When asked what she ate each day, the patient listed several foods, including canned peas. Only through this story did the provider discover a knowledge gap: the provider intended “fresh” to mean food that is unprocessed and uncooked, while the patient understood “fresh” to mean opening a new can.

Participation and Enthusiasm

Another measure of effectiveness can be the level of staff participation. For example, when the health-literacy initiative began in the Iowa Health System last year, there were just four teams of participants, chosen by their managers. A year later, there are 10 active teams, and more want to get involved. There is a lot of excitement, and many “aha!” moments, when people recognize the good work that is going on, Nielsen says.

How Not to Measure

While there are many standardized and creative ways to measure effectiveness, Abrams and Nielsen agree on a few methods they do not recommend. These include the following:

  • Asking closed-ended questions such as “Do you understand?” or “Do you have any questions?” Nielsen finds that questions like these often discourage people from giving more useful information.
  • Testing patients’ literacy skills at the point of care. These measurements miss the point: the focus should be on improving understanding, not on evaluating reading skills. Also, asking patients whether they can or cannot read may needlessly expose feelings of shame. As participants in a recent New Readers of Iowa conference said, “The doctor’s office is no place for a reading test.”
  • Counting how many patients disclose that they can’t read. After Abrams and her colleagues instituted a number of patient-friendly office changes, staff were disappointed when many patients did not immediately ask for help with paperwork. In hindsight, Abrams believes this is because poor readers carry a “burden of shame” that makes them unwilling to share with strangers that they cannot read. “Don’t be surprised if people don’t disclose. This doesn’t mean that what you are doing is not the right thing to do.”

Abrams and Nielsen acknowledge that health-literacy interventions and measurements are closely entwined. As Nielsen says, “We need to do the right thing, not just do something for the sake of doing something.”

How to Find Out More

Mary Ann Abrams, MD, MPH, is a health management consultant at the Iowa Health System in Des Moines. You can reach Abrams by email at

Gail A. Nielsen, RTR, BSHCA, SAHRA, is a patient-safety administrator and fellow of the Institute for Healthcare Improvement. You can reach her by email at

In Print

Nielsen-Bohlman L, Panzer AM, Kindig DA, eds. Health Literacy: A Prescription to End Confusion. Institute of Medicine. Washington, DC: The National Academies Press; 2004.

Article reprinted with permission from On Call magazine, published by a division of Boston Globe Media.