---
layout: content-style-guide
title: Readability and usability of VA digital content
intro-text: Read this page to learn how we make sure that our core benefit and health care content is easy to find, understand, and use. And learn about our approach to readability scores.
anchors:
  - Our process for core benefit and health care content
  - Our approach to readability scores
---

## Our process for core benefit and health care content

Here’s how we make sure that our core benefit and health care content is easy to find, understand, and use:

- **We research the terms that Veterans and their families use.** We incorporate the terms that people use most to search for information about VA health care and benefits. We also directly test terms with Veterans and their families across multiple usability tests.

- **We follow the VA.gov content style guide.** We’ve built and continue to refine and expand this guide based on a range of trusted standards as well as ongoing testing with and feedback from Veterans.

- **We review our content against content and accessibility quality assurance (QA) standards.** We check our informational content against our QA checklists before publishing. And we review product content against content experience standards before launch.

- **We test our content, and the organization of our content, with Veterans and their family members.** Teams creating content and products across VA.gov regularly test navigation labels, content in forms, and informational content on webpages. We use the results to make each product better and to build our continued understanding of how to make all content easy to find, understand, and use.

- **We monitor content usage and survey feedback.** We monitor analytics and feedback to help determine if Veterans can find, understand, and use the information we provide. We use this data to identify ways to keep making our content better.

## Our approach to readability scores

We’re often asked how we use readability scores (like grade levels) to make sure our content is easy to read.

We appreciate that readability scores provide a clear metric to understand, track, and discuss readability. But like many organizations with experience in creating clear and usable content, we also recognize the limitations of these scores. Here’s how we use readability scores and why we don’t rely on them as the main way to make sure our content is clear and easy to understand.

### How we use readability scores

We use readability scores for 2 main reasons:

- We sometimes use readability scores to help show a basic “before and after” comparison of content that we rewrite into plain language. For example, showing that a plain language edit lowered the grade level of a section of content from grade 18 to grade 7 can help explain the difference a plain language approach can make.
- We use readability scores to help track and surface possible problems with content. For example, we audit our core benefit content each year. As part of this audit, we run a check of the content’s grade level to help surface readability issues that updates to the content throughout the year may introduce. If a page scores higher than a grade 8 reading level on an automatic scan, we use this as 1 metric to decide if the content needs more review or testing. We also consider other metrics, like feedback scores, when making that decision.

Notes:

- We use the Flesch-Kincaid readability formula for our automatic checks of grade level. We chose this formula based on familiarity within VA and ease of access to tools that use it. When we need to check the grade level of specific content with more accuracy, we use a manual method such as SMOG or FRY. (The sketch after these notes shows what a Flesch-Kincaid check computes.)
- We always aim to create content that’s as easy to read as possible. We use grade 8 as the upper limit for our annual audit metric because our automatic scans score content without preparing it first (like removing proper nouns and program names).
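
For context on what an automatic check computes, here’s a minimal sketch of the published Flesch-Kincaid grade level formula: 0.39 × (words ÷ sentences) + 11.8 × (syllables ÷ words) − 15.59. This isn’t the code behind any tool we use. The function names and the vowel-group syllable counter are assumptions for illustration only; real readability tools use dictionaries or more careful syllable rules, so their scores will differ.

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels and
    # subtract a trailing silent "e". Real tools use dictionaries.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text: str) -> float:
    # Flesch-Kincaid grade level:
    # 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

sample = (
    "You may be eligible for VA health care benefits. "
    "Find out how to apply."
)
print(round(flesch_kincaid_grade(sample), 1))  # roughly grade 3
```

Note how much depends on the syllable counter: swapping in a different heuristic shifts the score, which is one reason scores vary even between tools built on the same formula.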

### Why we don’t rely primarily on readability scores

Here are some reasons why we don’t rely on readability scores as our main approach to making sure our content is easy to find, understand, and use.

#### Issues with reliable scoring

- Readability scores aren’t as precise as we might think. And scores can vary between methods and tools. For example, we used 6 different readability methods and 4 different tools to score a page of VA.gov content. Grade level scores for the same content ranged from grade 4 to grade 12. Even scores from different tools based on the same method ranged from grade 6 to grade 8. (The sketch after this list shows how 2 published formulas can disagree on the same passage.)
- Accurate readability scoring requires preparing content for scoring, which includes removing proper nouns and program names. Quick scans using automatic tools don’t prepare content in this way. Preparing and scoring content manually with a formula such as SMOG or FRY is more accurate. But we believe that time is better spent checking content against our standards and testing content with our audiences than doing manual grade level checks of all content.
- Readability methods often struggle to score digital content. Most readability scores rely on larger chunks of content to measure readability, but we aim to keep digital content brief and in small chunks. And the way we format content for ease of reading (like using bulleted lists) can also confuse readability scores.
- Writing to get a specific grade level score can make content harder to understand. Readability scores rely heavily on the length of words and sentences, but word and sentence length are only 2 of the many factors that go into creating clear content. A writer who makes choices simply to lower a grade level score can lessen the empathy, flow, and overall comprehension of content in the process.
- Readability scores don’t measure all types of literacy. Content testing and feedback do a better job of revealing issues that stem from these different types of literacy:
  - **Functional literacy:** The ability to understand and use the words and numbers someone reads for practical purposes, like solving problems
  - **Numeracy:** The ability to understand and work with numbers
  - **Health literacy:** The ability to find, understand, and use information and services to make health-related decisions
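
To make the variance concrete, here’s a sketch (under the same illustrative assumptions as the one above) that scores a passage with the published SMOG formula: 1.0430 × √(polysyllables × 30 ÷ sentences) + 3.1291. SMOG is designed for samples of 30 or more sentences, so applying it to a short passage already stretches its assumptions, which is part of the point.

```python
import math
import re

def count_syllables(word: str) -> int:
    # Same vowel-group heuristic as the earlier sketch.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def smog_grade(text: str) -> float:
    # SMOG: 1.0430 * sqrt(polysyllables * 30 / sentences) + 3.1291.
    # Intended for samples of 30+ sentences, unlike most web content.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)
    return 1.0430 * math.sqrt(polysyllables * 30 / len(sentences)) + 3.1291

sample = (
    "You may be eligible for VA health care benefits. "
    "Find out how to apply."
)
# The same passage that scored around grade 3 with Flesch-Kincaid
# above scores several grades higher with SMOG.
print(round(smog_grade(sample), 1))  # roughly grade 9
```

The point isn’t which formula is right. It’s that reasonable methods disagree enough that a single automatic score shouldn’t be the final word on whether content is clear.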

#### Issues with audience and context

- Grade levels don’t consider the knowledge that adults build throughout life, or the vocabulary of a specific audience. If an adult has a grade 8 reading level, this doesn’t mean they understand concepts the same way as an eighth grader with the same reading level. And specific audiences (like Veterans and their families) may have a greater knowledge of and comfort with certain topics and terms.
- Grade levels don’t consider the need to include and explain complex terms that a government agency or health care provider uses. If we leave out the complex term in these cases, we miss an opportunity to help our audience get familiar with a term they might need to know as they go through a benefit process. We help our audience more when we include the complex term along with a plain language explanation.