Sound Usability? Usability heuristics and guidelines for user-centered podcasts

Jennifer L. Bowie, Ph.D.

Screen Space
2241 Blenheim Ct
Marietta, GA 30066


In this paper, I explore usability for podcasts. I begin with a definition of podcasting. Next, I discuss usability for podcasts, focusing on eight key areas. From here, I build seven usability heuristics for podcasts. With the usability heuristics, I examine the anatomy of a podcast, providing 127 podcast usability guidelines. I conclude with a discussion of future research needs.
Categories and Subject Descriptors
D.3.3 [Information Interfaces and Presentation]: User Interfaces – evaluation/methodology, theory and methods, user-centered design.
General Terms
Measurement, Documentation, Performance, Design, Human Factors, Standardization.
Keywords
Podcast, usability, usability heuristics, usability guidelines, sound, MP3, podcasting, podcaster, audio.

Podcasting started out in 2004 as grassroots and noncommercial. Many of the earliest podcasters were passionate about their topics and fascinated with what the new technology could do. Now, companies of all sizes (including Fortune 500 companies), civic groups, authors and artists, charities, hobbyists, magazines, radio stations, all sorts of private citizens, geeks, and ninjas have podcasts. As podcasts become increasingly popular with individuals, teams, and businesses, it is important to understand the usability of these media. We need to know what works well and what does not. We need to comprehend how our users use podcasts and how they respond to the various parts. In addition, we need to understand how Quesenbery’s five dimensions of usability (effective, efficient, engaging, error tolerant, and easy to learn) apply to podcasts in general and to specific components of podcasts [1].

The actual number of people listening to podcasts is hard to determine, as at best we can rely on self-reported data or downloads (and a downloaded podcast is not necessarily a played podcast). In October 2010, Webster estimated that 70 million Americans had watched or listened to a podcast, or about 27% of the population [2]. This is similar to the proportion of Americans who use iPods (28%) or other MP3 players (23%), and much higher than the proportions who use Hulu (15%), satellite radio (12%), and e-readers (3%) [2]. Thus, we see many people are using podcasts. Podcast listeners tend to be male (53%), are fairly evenly distributed between the ages of 18 and 54, tend to listen in their cars (67%), and tend to use iTunes to subscribe (76%); an earlier study found they tend to be more highly educated (21% have an advanced degree, 7% some graduate study, and 15% a college degree), tend to have higher household incomes, and are more likely to create their own online content [2, 3]. They also tend to follow multiple podcasts; Gay et al. found their listeners listened to ten podcasts on average [4].

Despite the high number of podcast users, there is limited research on the usability of podcasts, although there are plenty of podcasts on the topic of usability. A recent search on iTunes returned 120 podcasts about usability (my own included), but there are few usability studies of podcasts. Avgerinou, Salwach, and Tarkowski published a 2007 proceedings article on information design for podcasts, and Wolfram’s thesis develops a rhetorical heuristic for podcast usability [5, 6]. There are no studies listed in the Journal of Usability Studies on podcasts or even audio. There are, however, some studies of podcast audiences, such as the 2007 article “Astronomy Cast: Evaluation of a podcast audience’s content needs and listening habits” by Gay et al. [4]. Gay et al. provide interesting information on the demographics of the Astronomy Cast audience, along with impacts on audience attitudes, content preferences (including length), desired web resources, and more [4]. Their findings could easily inform a usability study on podcasts and suggest some areas to consider in such a study [4]. For example, they found eleven of their listeners did not listen to the podcast at all; they instead read the transcript online [4]. This adds another factor to our understanding of the usability of podcasts—how do transcripts fit into the usability and use of podcasts?

In this paper, I present the first step in a large survey usability study on podcasting. In this study, I focus on audio podcasts. While podcasts can take video and a variety of other forms (such as PDF), I focus on the most commonly accessed form (according to Gay et al., only 1.4% of their listeners watch video podcasts) [4]. In this first step, I develop a definition of usability for podcasts. From there, I present seven usability heuristics I developed, based on the definition of usability. I then develop these heuristics into usability concerns for the various parts of a podcast, presenting over 125 podcast usability evaluation guidelines. First, I begin with a definition.

Before determining what a usable podcast is, it is important to note what a podcast is. The word “podcast” has been defined as a portmanteau of “Personal-On-Demand” and “narrowcast.” Thus, in the very name of podcast we have an audience- and topic-focused transmission (narrowcast) that is both personal and available on demand. More specifically, podcasts are digital media files distributed over the internet. They are not just audio and video, as seems to be a common assumption. However, for the purpose of this discussion of usability I am limiting my use of the term “podcast” to denote audio podcasts only—in MP3 or MP4 file formats. These files are available via subscription through RSS feed technology. They can be played on anything that can play MP3 files. Podcasts are also both time- and location-shifted—they are “any time, anywhere” media. Users are not limited to a specific time (6 pm Wednesday on NBC) or a particular location (near a radio and listening). Users can listen whenever and wherever works best for them. In short, podcasts are time- and location-shifted narrowcast digital files available through subscription.

Podcasts are also part of complex systems of use. People can listen to podcasts from websites, blogs, podcatcher software like iTunes, and/or on mobile devices like cell phones and MP3 players. People may subscribe to podcasts through feed readers or podcatcher software on their mobile device or computer; they may go back to the same site frequently to listen; or they may find a podcast series online and listen to just a single episode. A user’s ability to navigate the software technology to listen to the podcast may greatly impact the usability of the podcast, so a true understanding of usability will need to take the great variety of access methods and uses into account. In addition, podcasts are often designed to be mobile. Many podcasters assume their listeners are listening on the go in some form—while driving, while running or working out, while taking a train or bus, while doing yard work, or whatever. Thus, the user’s context of use could play heavily into the usability of the podcast and greatly impact how we design and create podcasts.
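The subscription mechanics described above rest on RSS: a podcast feed is an RSS 2.0 XML file whose items carry enclosure tags pointing at the episode MP3 files, and a podcatcher reads the feed to find new episodes. As an illustrative sketch (the feed content, podcast name, and URL are all hypothetical), the core of that exchange looks like:

```python
# Minimal sketch of how a podcatcher reads a podcast RSS feed.
# Each <item> carries an <enclosure> whose url attribute points
# at the episode's MP3 file. Feed content here is hypothetical.
import xml.etree.ElementTree as ET

FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Usability Podcast</title>
    <item>
      <title>Episode 12: Designing Usable Intros</title>
      <enclosure url="https://example.com/audio/ep12.mp3"
                 length="24816000" type="audio/mpeg"/>
    </item>
  </channel>
</rss>"""

def episode_urls(feed_xml):
    """Return the MP3 URLs a subscriber's podcatcher would download."""
    root = ET.fromstring(feed_xml)
    return [enc.get("url") for enc in root.iter("enclosure")]

print(episode_urls(FEED))
```

Because any RSS-aware client can do this, the same episode file may reach listeners through iTunes, a web page, or a phone app, which is why usability cannot assume a single access method.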

While there are many definitions of usability, they tend to touch on a few general areas. Barnum states “usability is determined by the user’s perception of the quality of the product, based on the user’s ease of use, ease of learning and relearning, the product’s intuitiveness for the user, and the user’s appreciation of the usefulness of the product” [6, p. 6]. Similarly, the ISO definition of usability is “the extent to which a product can be used by specified users to achieve specified goals in a specified context of use with effectiveness, efficiency, and satisfaction” [7]. Quesenbery has presented five dimensions of usability: effective, efficient, engaging, error tolerant, and easy to learn [1]. Both Nielsen and Zhang [8, 9] have presented heuristics for usability. From these definitions and heuristics, I developed eight key usability concepts to consider:

  • Effective: Successfully enables users to meet their goals and/or complete the required/desired tasks
  • Efficient: Quickly produces desired results without impacting accuracy or wasting time, actions, and/or resources
  • Engaging & satisfying: Attracts the attention of the user and creates a pleasant, fulfilling, and enjoyable interaction(s)/use while meeting the expectations, goals/tasks of the user
  • Error tolerant: Prevents errors and when errors occur does not unnecessarily penalize users and enables a quick recovery
  • Easy to use & learn: Requires minimal effort and difficulty to use at any point, “supports both initial orientation [use] and deepening understanding of its capabilities” [1, p. 88]
  • Context sensitive: Works within the user’s context (including the environment and circumstances) of use
  • Goal and task oriented: Enables the user to meet goals and complete tasks
  • Useful: Serves the needs and desires of the users

I drew upon these eight concepts to develop podcast heuristics. I deemphasized errors compared to many sets of heuristics, to better fit the functionality of podcasts. Errors are not a common issue in podcasting due to the way podcasts work—technological issues are usually caused not by the podcast itself but by the technology used to listen, and could include slow loading times, a low MP3 player battery, or other MP3 player issues. Merging these eight usability concepts with podcasting, I established seven key podcast usability heuristics (the nickname for each is in brackets), which I present next.

3.1 Informative feedback and error prevention [Feedback]
Users should receive regular and timely feedback on the state of the podcast, including the topic of the podcast and episode, where they are within the podcast, what to expect next, and information that allows them to skip or move through the podcast in ways beyond linear time. Users should be able to identify, at least generally, where they are in the podcast. Episode information should be provided up front, so users can quickly realize whether they are listening to the correct podcast and not waste time listening to the wrong one. Podcasts should prevent users from making errors and clearly indicate the beginning and ending of each episode. Also, they should provide structure, transitions, and other presentation best practices to lighten users’ cognitive load for the content of the podcast. For example, the podcast may include information on how long each segment will take in case users want to skip it, or the information users most likely will want to skip should be placed at the end, where they can easily navigate to the next podcast without missing any key information.

3.2 Satisfying & engaging [S&E]
Podcasts should engage and satisfy the users, as appropriate for the content. This does not, for instance, require a “radio voice” and an expensive studio. Strong content, one or more passionate podcasters, appropriate humor, cleanly edited episodes, helpful transitions, decent theme music, and other such things can lead to an engaging and satisfying listener experience. Obviously, what counts as satisfying and engaging varies greatly by genre and podcast. Given the large number of podcasts out there, this not only aids usability but also can help the podcast attract and keep listeners—and thus survive.

3.3 Easy, effective, and efficient [EEE]
Podcasts should successfully enable users to meet their goals and/or complete the required/desired tasks and quickly produce desired results without impacting accuracy or wasting time, actions, and/or resources. Time is a consideration—including the length of the podcast and the length of its segments. The podcast should require minimal effort from the users, including cognitive load. Often this will be something as simple as listening to the correct podcast when and where they want in the amount of time they have. However, some podcasts may have additional efficiency considerations. One example is a podcast providing business news for the commute home—this podcast should be shorter than the average US commute, which is 25.1 minutes [10, p. 1].

3.4 Considers users and context of use [Users & Use]
A podcast should consider user differences, including experience level with the content, accessibility and access issues, and the user’s language level (beginner, expert, and so on). Podcasts should provide different ways to access information based on needs, uses, and user differences—like transcripts for those with hearing issues or the need to search or scan through the information—allowing users to tailor their experience. They should also consider how users are accessing the podcast and related content (subscriptions, streaming through the associated website or blog, podcatcher software like iTunes, and so on), and provide easy ways to access information. Podcasts should match the user’s model of a podcast within that genre. In addition, podcasts should keep users’ context of use in mind. Podcasts should not require the user to interact with the technology that is playing the podcast if the user is unable to in her context of use. For instance, the user cannot view a referenced graphic while she is driving, so do not force her to do so to understand the content. Nor can she easily adjust the podcast—so great changes in volume or other things that require user interaction can be inappropriate and even dangerous.

3.5 Appropriate design & delivery [D&D]
Superfluous content should be removed to minimize distraction and even competition among components. This does not mean the podcast needs a minimalistic design, but the podcaster should balance minimalism with aesthetics and with the user’s needs for the podcast. For example, a five-minute song that ends a podcast can be appropriate for a music or entertainment podcast, but may be inappropriate for a ten-minute business podcast. Audio podcasts should have content designed for audio delivery—highly visual and overly complex information may be best delivered in other forms. Podcasts should provide access to visual and other content the user may need, but not require the user to access this content while listening. This heuristic includes sound levels, bed music, transitions, voices, sound effects, and other such components and their relation to the purpose of the podcast and the needs of the users. Content is delivered in the best medium and form. Delivery is clean and appropriate. One example of inappropriate design and delivery could be bed music that is too loud and competes with the content. Another could be six minutes in the middle of an interview where the speakers get off topic and ramble—this could be removed, moved to the end of the podcast for users to skip if they desire, or placed outside the podcast (perhaps as an extra file interested users could download).

3.6 Consistency [Consistency]
A podcast should be consistent across and within episodes and consistent with the genre and its conventions. Any breaks from consistency should be explained. Consistency can include publication schedule, length, transcripts, format, organization of the podcast, and topic (a podcast on networking should not become a podcast on gardening). For instance, if advice podcasts tend to be 20–30 minutes in length, then a new advice podcast should follow this convention or explain why it does not. Or, if a podcast is always published with the transcript in the show notes, this should continue.

3.7 Documentation and support [Doc]
While a podcast should never need help documentation, other types of documentation and support may be needed. Transcripts and show notes should be available and easy to access—for reference, searching, and users with accessibility issues. Other support content, such as links, references, images, and content referenced in the podcast should also be available and easy to access. All available documentation and support materials should be noted in the podcast and access information should be provided. For example, if a full transcript is available, the podcaster should mention in the podcast where it is available —perhaps on the associated blog and in the lyrics field of the MP3.

3.8 Severity
In addition to heuristics, it is important to understand the severity of the usability concern or issue.  Drawing on Nielsen’s usability severity rating scale [10] and Zhang’s application [9], I incorporated these five severity ratings:

  • 0: Not at all a usability problem
  • 1: Cosmetic Problem—Lowest priority; fix if time is available after other problems have been fixed. This could include minor editing issues, small differences between the transcript and the recording, or minor rambling.
  • 2: Minor Usability Problem—Low priority to possibly medium priority. Fix after higher priorities have been fixed. Consider needs and goals if deciding between minor problems. This could include some rambling/time off topic, moderate sound level issues, and problematic or nonexistent transitions between parts.
  • 3: Major Usability Problem— High priority. This is causing major usability issues and must be fixed for a usable podcast. This may include technical problems with the podcast including sound issues, sound levels, and other problems like missing parts of the podcast.
  • 4: “Usability catastrophe”—Highest priority. This makes the product unusable and must be fixed before the podcast is published. It is rare that podcasts will have this level of problem, as the interface is not part of the podcast, and thus podcasts will likely be “usable” outside of the podcast text itself. But podcasts can have serious sound issues or other problems that may lead to a usability catastrophe.
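To make the scale concrete, the five ratings can be encoded as a small triage structure, so that findings from an evaluation are fixed in severity order. This is a hypothetical helper, not part of the paper's method, and the example findings below are illustrative:

```python
# Hedged sketch: the five severity ratings (0-4) as a lookup table,
# plus a triage helper that orders findings most-severe-first.
# The findings themselves are hypothetical examples.
SEVERITY_LABELS = {
    0: "Not a usability problem",
    1: "Cosmetic problem",
    2: "Minor usability problem",
    3: "Major usability problem",
    4: "Usability catastrophe",
}

def triage(findings):
    """Sort (description, severity) pairs, most severe first."""
    return sorted(findings, key=lambda f: f[1], reverse=True)

findings = [
    ("Transcript differs slightly from recording", 1),
    ("Bed music drowns out the host", 3),
    ("Rough transition between segments", 2),
]
for desc, sev in triage(findings):
    print(f"{sev} ({SEVERITY_LABELS[sev]}): {desc}")
```

Ordering findings this way matches the scale's intent: level 3 and 4 problems must be fixed for a usable podcast, while level 1 items wait until time allows.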

4 The Anatomy of a Podcast
These heuristics are a starting place for designing a usable podcast and evaluating the usability of a podcast. Each heuristic can be applied in different ways to each part of the podcast. A fully usable podcast should incorporate these not only generally but for each component. So, I developed a checklist of over 125 usability evaluation guidelines, based on applying the heuristics to the components of a podcast. The anatomy of a podcast includes 11 main parts:

  • Album Art:  Image, logo, or other visuals included to be shown on the iPod/MP3 player while listening and in other ways associated with the podcast (on related sites and content, as the icon in iTunes…).
  • Album Text: All text that is part of the actual podcast files. This includes Name (used for episode name), Artist (podcaster), Album Name (Podcast name), Album Track Number (used for episode number), and the Lyrics field (used for show notes or transcript). This text is visible or accessible when one is playing a podcast on most MP3 players.
  • Pre-Intro: Includes episode number, date, topic, and other vital information. This should be very short (approximately five seconds) and the very first part of the episode—like a book title. This tells the listeners what they are listening to before they get too far.
  • Theme Music: Music that is the “theme” of the podcast. Sometimes a single song is the theme and is used in both the intro and outro. Sometimes a different song is used for the intro and outro—the intro song will then likely be the “theme” song. The music should connect to the podcast in some way and may become a recognizable auditory association to the podcast—a bit like a logo is a visual connection to a company.
  • Intro: This is longer than the pre-intro and comes after it. The intro includes similar information to the pre-intro but explains more. If the pre-intro is like a book title, the intro is like the blurb on the book jacket or the introduction of a paper. Intros are often about 30 seconds long, in which the podcaster needs to grab and keep listeners’ attention (by about 45 seconds from the start of the podcast). The intro needs to include an overview of the podcast—what the podcaster will be talking about or doing in the episode. Often this will include information on the podcaster.
  • Musical and Other Transitions: Often podcasts will have music and/or other transitions between sections of the podcast. These can match theme music or be something different. Bells, whistles, animal noises and more have been used.
  • Bed Music: Background music that is played for all or parts of the podcast. It is important that the podcaster is heard over the music. Often these are faded in and out. These are most common in intros and outros.
  • Main Body: This is the “meat” of the podcast. Could be interviews, a researched argument, a story, a rant, ten great songs, whatever.
  • Visuals: Enhanced podcasts can include visuals that play with the audio content and can have chapter markings. Enhanced podcasts are difficult to create (easier with Macs) and do not play on all MP3 players.
  • Outro: The podcast wrap-up and conclusion. Often these include a summary, teasers for the next episode, information on where users can find the links and resources discussed in the episode, contact information, a catch phrase, citations, necessary credits (such as credit for CC-licensed music used), and a fade out to music or something else.
  • Podcast as a Whole: This is the whole podcast file, but not the associated additional text. So, this includes all parts of the audio file, including everything listed above.
  • Additional Texts: These are anything besides the podcast file associated with the podcast. These can include the blog of the podcast, documents associated with the podcast, links to websites, and PDF transcripts. Most podcasts have an associated blog.
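Several of the parts above, particularly the Album Text fields, live in the MP3 file's metadata rather than in the audio itself. As an illustrative sketch (the field names are my own, not a real ID3 schema, and real tagging would go through an ID3 library), a completeness check for the guideline that episode metadata identify both the series and the episode might look like:

```python
# Hypothetical sketch: checking that an episode's "album text"
# (MP3 metadata) includes the fields the Album Text guidelines
# call for. Field names are illustrative, not a real ID3 schema.
REQUIRED_FIELDS = ["name", "artist", "album_name", "track_number"]

def missing_album_text(metadata):
    """Return required fields that are absent or empty."""
    return [f for f in REQUIRED_FIELDS if not metadata.get(f)]

episode = {
    "name": "Episode 12: Designing Usable Intros",
    "artist": "Jane Podcaster",        # the podcaster
    "album_name": "Example Usability Podcast",
    "track_number": "",                # missing: episode number
    "lyrics": "Show notes go here...", # show notes or transcript
}
print(missing_album_text(episode))
```

A check like this would catch the severity 3–4 problem of an episode whose metadata cannot identify the series and episode on an MP3 player's screen.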

5 Podcast Usability Evaluation Guidelines
Applying the seven podcast usability heuristics to the 11 parts of a podcast, I developed more than 125 Podcast Usability Evaluation Guidelines (see Tables 1–12). I assessed each guideline and provided a severity range for it, based on the importance of the guideline to the overall use and usability of a podcast. These guidelines can be used by new podcasters who are trying to create a usable podcast, by seasoned podcasters who want to improve their podcast, and by usability researchers and analysts to determine the usability of a podcast. They can also be considered for other digital texts, especially sound-based texts.
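In tool form, each table row can be treated as a record of concern, heuristic tags, and severity range, so an evaluator can filter the checklist by heuristic or by worst-case severity. A hypothetical sketch (the three guidelines shown are paraphrased from Table 1; the record layout is my own):

```python
# Hedged sketch: guideline rows as (concern, heuristic tags,
# severity range) records, filterable by heuristic nickname.
# Guidelines shown are paraphrased from Table 1 (Album Art).
GUIDELINES = [
    ("Album art identifiable at small sizes", {"Feedback"}, (3, 4)),
    ("Album art stimulates user interest", {"S&E"}, (0, 2)),
    ("Album art consistent across episodes", {"Consistency"}, (2, 3)),
]

def by_heuristic(tag):
    """Return guidelines tagged with the given heuristic nickname."""
    return [g for g in GUIDELINES if tag in g[1]]

print([concern for concern, _, _ in by_heuristic("Feedback")])
```

The severity ranges carry over directly from the tables, so the same records could also drive the triage ordering described under the severity scale.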

Table 1. Album Art Guidelines

Usability Concerns Heuristic Severity
Provides a simple, clean, clear, and straightforward design with podcast name and logo. Feedback, EEE, Users & Use, D&D 1–3
Easily identifiable in smaller sizes (MP3 player screens, podcast thumbnails…) from a few inches to about two feet (for instance, on an MP3 player held at arm’s length). Feedback 3–4
Enables quick identification of the podcast by the album art when the episode comes up. Feedback 3–4
Provides enough information on topic to engage. S&E 0–2
Stimulates user interest. S&E 0–2
Displayed consistently across all episodes. Consistency 2–3



Table 2. Album Text Guidelines

Usability Concerns Heuristic Severity
Is complete and provides user with all the information needed to identify the podcast series and episode (both name and number). Feedback, EEE, Users & Use, D&D 3–4
Includes a transcript or show notes that allows the user to navigate the episode, including time/place location of information should the user want to skip segments or access a particular segment. Feedback, EEE, Users & Use 1–2
Stimulates user interest. Episode title, podcast title and more should be interesting. S&E 0–3
Provides enough information on topic to engage. S&E 0–2
Provides necessary information for users’ context. Users & Use 2–3
Provides information, especially podcast name, episode number and name, consistently across all episodes. Consistency 3
Includes show notes or transcript. Doc 1–2



Table 3. Pre–Intro Guidelines

Usability Concerns Heuristic Severity
Exists. These are common, but not all podcasts have them. Feedback 3–4
Includes episode name, topic if not obvious from the name or the name is too close to another episode’s name, and podcast name. Feedback, EEE, Users & Use 3–4
Stimulates user interest. Episode title, podcast title and more should be interesting and presented in an engaging way (interested, non–monotone voice, for instance). S&E 0–3
Is short and to the point—ideally less than 10 seconds. EEE, Users & Use 1–3
Delivers necessary information (podcast title, episode name, possibly episode number) and nothing more. D&D 2–3
Provides podcast name, episode number and name consistently across all episodes. Consistency 3



Table 4. Theme Music Guidelines

Usability Concerns Heuristic Severity
Provides backup support for the identity of the podcast. Not required, the identity can be handled in other ways, but severity increases if other identifiers are not used. Feedback 1–3
Corresponds to podcast subject area to increase engagement—professional music for a professional podcast, fun music for a kids podcast, futuristic music for a podcast about future technology—as appropriate for the podcast. S&E 0–3
Sets the mood/tone of the podcast. S&E, D&D 0–2
Is generally likeable and interesting. Does not turn users off of the podcast. S&E 1–3
Is of an effective length—not too long (especially) nor too short. EEE, D&D 1–3
Considers users’ context and needs in the selection of music, length of music played, and style of music. Users & Use 1–2
Is the same across all episodes. Consistency 2–3



Table 5. Intro Guidelines

Usability Concerns Heuristic Severity
Includes episode overview/outline; more details required for longer episodes. Feedback 3–4
Provides users information on segments they may find more or less useful and may want to skip, including areas too advanced or too easy. Ideally gives at least a time in the episode, so users can navigate these sections. Feedback, Users & Use 1–3
Orientates the user to her location within the series—for example part two of three. Provides information on the rest of the series.  [only necessary for series] Feedback 2–4
Displays podcaster’s interest or passion in the topic to engage listener. S&E 1–3
Provides enough of a podcast overview to hook the listeners and make them want to stay without going too long. S&E 1–3
Starts in a timely manner (normally within 30 seconds of the start of the podcast). EEE 1–3
Provides key information in a short amount of time. EEE 1–3
Covers topics and episodes title (at least) within 45 seconds of the start of the podcast. EEE
Points out ways to access alternative content to allow users to tailor content. Users & Use 1–2
Considers context and method of use—providing information in ways that best work with the medium. Users & Use 1–3
Provides key content in appropriate tone. D&D 1–3
Precludes superfluous material. D&D 1–3
Matches the introduction of the other episodes. Consistency 2–3
Corresponds to the introduction format of other podcasts in the genre. Consistency 2–3
Points to any additional documentation or support materials. Doc 1–3



Table 6. Musical & Other Transitions Guidelines

Usability Concerns Heuristic Severity
Provides consistent transitions—if music is used, it is used consistently and predictably. The longer the podcast, and the more segments it has, the more severe a lack of transitions becomes. Feedback 1–3
Indicates location in the podcast—including where the episode has been and where it is going (for example, “That wraps up the topic for today. Now we move to listener feedback”). For longer podcasts these verbal transitions are more necessary. Feedback 1–3
Adds interest and possibly fun or entertainment between segments. S&E 0–2
Keeps users hooked by corresponding to the topic and not straying too far from it. S&E 0–3
Provides mental breaks between segments or in longer segments to keep and hold the listener’s overall interest and reduce mental load, without wasting time. S&E, EEE 1–3
Are short and informative: Keeps time in mind and does not waste user’s time. EEE 1–3
Provides information on the next segment, so users can skip if the segment is not appropriate for their levels or needs. Users & Use 1–3
Considers users’ context and needs in the selection of music, length of music played, and style of music. Users & Use 1–2
Considers users’ context and needs in the types of transitions used, frequency of transitions, and even the inclusion of transitions. Users & Use 1–2
Adds aesthetics and sets tone. D&D 1–3
Does not get superfluous. D&D 1–3
Used consistently within the episode. Consistency 1–2
Used consistently across all episodes. Consistency 1–2



Table 7. Bed Music Guidelines

Usability Concerns Heuristic Severity
Used consistently to aid listeners with their location in the podcast—for example only used in the intro and outro, or used with a fade out at the beginning of each section. Feedback 0–2
Adds and keeps interest during the start or end of segments. S&E 0–2
Provides or supports the tone of the podcast—such as scary music during a horror story. S&E, D&D 0–2
Does not overpower the content being presented at the same time—user should be able to easily hear the podcaster/content over the bed music. EEE 3–4
Does not distract from content. EEE 3–4
Considers users with hearing problems that may not be able to differentiate between background noise and key content. Users & Use 3–4
Considers users’ context and needs in the choice to use music, selection of music, length of music played, and style of music. Users & Use 1–2
Adds aesthetics and sets tone. D&D 1–3
Used consistently within the episode. Consistency 1–2
Used consistently across all episodes. Consistency 1–2



Table 8. Main Body Guidelines

Usability Concerns Heuristic Severity
Follows a consistent pattern from episode to episode. Feedback 1–3
Provides location information every few minutes through transitions, overviews, summaries, and placement indicators like “next” and “second.” Feedback 2–4
Employs an interesting and engaging voice and tone—but “radio voice” not needed. Podcaster(s) should make sure they seem interested and even passionate about the material and that will add to the listener’s engagement and satisfaction. S&E 1–3
Provides the listeners with the materials they need or want. S&E 1–4
Keeps on topic and to the point (as necessary, some podcasts may be engaging by going off topic), avoiding distractions and superfluous material. S&E, D&D 0–2
Delivers key and expected information/content. EEE, S&E 1–4
Supplies content effectively—without unnecessary tangents, ramblings, excessive details, and so on. Listener’s time is kept in mind. EEE, S&E 1–4
Presented at the level of the majority of the audience. Users & Use 2–4
Accommodates other key user groups/levels by offering additional content—such as definitions for beginners and short theoretical sections for experts. Users & Use 1–3
Delivers content appropriate to the user’s context and methods of use—for example, highly complex information may not be best in a podcast meant to be listened to while driving. Users & Use 1–4
Matches the body of the other episodes. Consistency 2–3
Corresponds to the body format of other podcasts in the genre. Consistency 2–3
Points to links and further information on any sources, documentation, or additional materials. Doc 1–3



Table 9. Visuals Guidelines

Usability Concerns Heuristic Severity
Used as necessary to provide additional location information. Feedback 0–2
Adds interest and supports the podcast episode. S&E 0–2
Supports tasks where users can access/view visuals, such as workout positions during a workout. EEE 1–3
Illustrates complex concepts with visuals to support user needs and levels. Users & Use 1–3
Provides visuals only in contexts and types of use where the visuals can be accessed—a podcast for commuters should not require the driver to look at visuals, for instance. Users & Use, EEE 1–3
Supports content and aids in delivery. D&D, Doc 1–3
Used consistently within episode and across episodes. Consistency 1–2



Table 10. Outro Guidelines

Usability Concerns Heuristic Severity
Includes episode summary and wrap-up—more details required for longer episodes. Feedback 4
Leads into the next episode to indicate place in the series or episodes. Feedback 2–3
Orientates the user to her location within the series—for example part two of three. Provides information on the rest of the series. [only necessary for series] Feedback 0–2
Reminds user of the satisfying content by summarizing and highlighting key points. S&E 0–2
Hooks the user on the next episode. S&E 0–2
Engages with same passionate/interested tone. S&E 1–3
Wraps up episode effectively and efficiently. EEE 1–3
Points out ways to access alternative content to allow for future user tailoring of content. Users & Use, D&D, Doc 1–3
Fits context and use. Users & Use, D&D 1–2
Matches the outro of the other episodes. Consistency 2–3
Corresponds to the outro format of other podcasts in the genre. Consistency 2–3
Provides basic reference information. Doc 1–2
Points to links, show notes, transcripts, and further information on any sources, documentation, or additional materials. Doc 1–3



Table 11. Additional Texts Guidelines

Usability Concerns Heuristic Severity
Provides needed information on the related podcast, such as the episode number and the section it relates to. Feedback 2–4
Follows location/place practices (in text feedback) as normal for type of text. Feedback 2–4
Provides users methods to further explore the topic and content. S&E 0–4
Follows the rules for efficiency and effectiveness for the type of text. EEE 1–3
Are easy to access. EEE 1–3
Matches appropriate user levels and needs. Users & Use 1–3
Provides information needed to support different user levels. Users & Use 1–3
Accommodates the different access methods of users (text transcripts for the hearing impaired, for example). Users & Use 3–4
Delivers materials in forms that best match use, content, purpose and context. Users & Use, D&D 1–2
Used consistently within episode and across episodes. Consistency 1–2
Designed to support the core content. D&D, Doc 1–2
Easily accessible from information provided in the podcast. Doc 1–3
Are available for users to access, search, and utilize: at least show notes or a transcript outside the podcast file—on an associated blog for instance. Doc 2–3



Table 12. Podcast as a Whole Guidelines

Usability Concerns Heuristic Severity
Utilizes various techniques throughout in a consistent manner to provide feedback on user’s location within the text for navigation and location. Feedback 2–4
Weaves content, podcaster tone/engagement, music, and other pieces of the podcast together into a satisfying and engaging text. S&E 2–4
Provides an edited, clean podcast without distracting sound issues, noises, or repetitions. EEE, D&D 1–4
Fits within time constraints (such as an expected episode length, or commute time for podcast designed for commutes or a workout time for podcasts designed for workouts). EEE 3–4
Provides links and methods to easily access additional material. EEE 1–2
Offers a variety of ways to access key content—through the podcast, through transcriptions and show notes in the album text and corresponding site, links to references, and so on. Users & Use 3–4
Provides additional content for users of different levels—such as definitions (perhaps in the transcript) for beginner users, and links to advanced applications for expert users. Users & Use 1–2
Considers context and use, presenting only appropriate materials for the context and methods of use. Users & Use 1–4
Delivered in correct format—heavily visual material for instance, should not be a podcast. D&D 3–4
Is of an effective length—not too long (especially) nor too short. D&D 1–3
Sound levels are appropriate across the podcast and consistent with other MP3 files. D&D 2–4
Considers the sound as a medium and presents material in best ways for this medium. D&D 1–3
Balances aesthetics and minimalism. D&D 1–3
Parallels in format, content type, arrangement, and delivery other episodes and within the genre. Consistency 2–4
Applies consistent sound levels and sound editing. Consistency 1–3
Provides reference information for listeners to locate and access the music, citations, references, and other outside material used in the podcast. Doc 1–2
Provides at least enough support materials for those with hearing issues to access the text and for those who need an alternative form of access. Ideally provides a form of the text that can be quickly skimmed and searched (via search engines and keyword site and browser searches). Doc 1–2


6 Conclusion: Usable passion
With 70 million or more podcast users in the United States, it is past time we understood the usability of this medium. This project is one step in that direction. Further research must be done applying these heuristics and guidelines to actual podcasts, usability testing podcasts, and conducting other usability studies of podcasts. The seven podcast usability heuristics and 127 Podcast Usability Guidelines offer researchers, analysts, designers, and podcasters a way to understand how usability applies to podcasts. Currently, many podcasters podcast for the love of it. They dedicate time, resources, and money to podcasting, often with no expectation of money or rewards. With these heuristics and guidelines, these podcasters can begin to make their podcasts more usable and thus likely more successful. With these guidelines, podcasters can make their passion usable.


  1. Quesenbury, W. The five dimensions of usability. In Albers, M. and Mazur, B., eds. Content and Complexity: Information Design in Technical Communication. LEA, 2003, 81–102.
  2. Webster, T. 2010. The current state of podcasting. BlogWorld Expo (Las Vegas, October 15, 2010).
  3. Webster, T. 2009. The podcast consumer revealed 2009: The Arbitron/Edison Internet and multimedia study. Edison Research.
  4. Gay, P., et al. Astronomy cast: Evaluation of a podcast audience’s content needs and listening habits. CAP. 1, 1 (Oct. 2007), 24–29.
  5. Avgerinou, M., Salwach, J. and Tarkowski, D. 2007. Information design for podcasts. In C. Montgomerie and J. Seale, eds. Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications. AACE, Chesapeake, VA, 2007, 754–756.
  6. Barnum, C. 2002. Usability Testing and Research. Longman, New York.
  7. International Organization for Standardization. 1998. ISO 9241-11:1998. Ergonomic requirements for office work with visual display terminals (VDTs), Part 11: Guidance on usability.
  8. Nielsen, J. 2005. Ten usability heuristics. Useit.
  9. Zhang, J., et al. 2003. Using usability heuristics to evaluate patient safety of medical devices. Journal of Biomedical Informatics 36 (2003), 23–30.
  10. Nielsen, J. n.d. Severity ratings for usability problems. Useit.
  11. McKenzie, B. and Rapino, M. 2009. Commuting in the United States: 2009. U.S. Census Bureau, U.S. Department of Commerce (Sept. 2009), 1–20.



Some Impacts of “Big Data” on Usability Practice

Stephany Wilkes
461 2nd St.
San Francisco, CA 94107


Two shifts in the technological landscape – the era of “big data” and the popularity of Agile software development methodologies – have made users (and specifically data about them) central to the development process and broadened the definition of user-centered design and usability testing. This paper briefly describes the impact of these shifts on usability practice. Rudimentary data types useful to usability practitioners are introduced, as well as helpful data tools and required skills. The paper concludes with a list of methodological and pedagogical gaps that should be addressed.


Two recent shifts in the technological landscape – the arrival of the era of “big data” and the popularity of Agile software development methodologies – have made users (or rather, quantitative data that describes user behavior) central to software development and broadened the definition of user-centered design and usability testing.

This paper begins with an overview of these two shifts and briefly describes their impact on usability practices. Next, it introduces tools and suggests skills helpful to usability practitioners who are new to the idea of integrating user data in their work. The paper concludes with some benefits and drawbacks of the user data deluge.

This paper is intended for technical communicators and usability and design practitioners who would like to learn more about how to incorporate user data into their work, some common types of data they can expect to encounter along the way, and tools that make this work easier. Much of this content will be familiar to practicing data scientists.


This section describes the rise of “big data” [4], including the growth of mobile device usage and Internet access, and the popularity of Agile software development methodologies. It also introduces some of the implications of these shifts for usability practitioners.

2.1    The Rise of “Big Data”

The professional and popular press has declared that we have entered the era of “big data,” a catchphrase that communicates the fact that more data is being produced than ever before; that hardware can store more data than ever before, more inexpensively; and that software applications and algorithms can better analyze more data than ever before. In short, the necessary technological conditions to produce, store, analyze, and learn from vast quantities of data have been met.

Though difficult to quantify, it is estimated that 1,200 exabytes of digital data were created in 2011 alone. (Helpfully, the Economist points out that one exabyte is equivalent to 10 billion copies of the Economist.) [4] We have more data, in part, because more things create data: sensors, computers, research labs, cameras, phones, and so on. Indeed, the amount of data created surpassed available storage capacity in 2007 [4].

Why should usability practitioners care about this data deluge? Because much of this data is user data, and that has made users more central to product development in the form of detailed data about their behavior. Taken in the aggregate, user data enables researchers to understand human behavior of many kinds (whether book buying, film watching, or dating) at the population level rather than the individual or small study sample level. Entire systems are designed on and driven by user behavior: we are all familiar, for example, with Amazon and Netflix using collaborative filtering to make recommendations to users based on what other users like [4].

Quantitative user data like this may (and may not) be what usability practitioners have in mind when they discuss users being more central to development processes and their feedback being incorporated at numerous stages [5]. There are differences in what quantitative user data and observational human studies can tell researchers. To generalize a bit, user data tends to provide more “what” than “why”: often, data makes it clear that something has happened but cannot identify causes and suggest solutions. Instead, data is analyzed, observations made, multiple solutions tested, and data observed to see which solutions have had a measurable effect.

In addition to learning from product usage data, improving the usability of data itself, within the organizations it drives, is a challenge that usability practitioners and technical communicators are particularly well positioned to address.

2.1.1   Mobile Devices and Data

As of July 2011, an estimated 42% of Americans own a smartphone [?], each one with its own data-creating sensors, cameras, web access, apps, and more. In January 2012, AdAge reported Google data indicating that more people worldwide have an Internet-capable mobile device than have a desktop or laptop computer [2]. In the U.S. the difference is nearly 10% (76% to 68%), although consumers still report accessing the internet on multiple types of devices [2].

Usability practitioners began testing and recommending best practices for mobile devices several years ago, creating a substantial body of research on mobile usability that needn’t be belabored here. It is important, however, that the now-abundant data provided by mobile devices and apps supplement these practices: a single product can be available on several hundred mobile devices globally, and one person may access the same product from a combination of different devices (a smartphone, tablet, and browser on a laptop, for example, as is the case with many users of Catch Notes).

2.2    From Waterfall to Agile

Most technical communicators and usability practitioners are familiar with software design, development, and testing methodologies. Though it is now commonplace that “big sites run controlled experiments to see what works best” [4], this is a recent development powered, in part, by the big data shifts described earlier.

Briefly, the five major phases of Waterfall development proceed linearly and are comprised of Requirements, Design, Implementation, Verification, and Testing. The creation of documentation, especially requirements and design documents, is also a significant aspect of Waterfall methodologies [7]. Product design is completed upfront, and that design is tested with users later, when complete or nearly so. Such extensive phases proved both costly and ineffective [7]. It is difficult, for example, for people to know exactly what requirements they need before reviewing a working prototype and commenting on it, and their requirements change over time. So much design being done early on, and subsequently changed, invalidates a good deal of working hours and increases development costs [7].

For these reasons and others, Waterfall methodologies have lost ground to Agile methodologies during the past decade. While Waterfall assumes that both the problem and solution are known at the beginning of product development, Agile practices acknowledge that while a problem may be known, the best solution(s) to it are unknown. The expectation is that testing and iterative development, with the inclusion of user data, will be required in order to discover the best solution(s) to a problem. For this reason, Agile development principles include the early and frequent delivery of working software (often referred to as minimum viable products, or MVPs), from a couple of weeks to a couple of months at most, to real users [1]. User acceptance is too costly to risk finding out about at the end, and it can’t come at “the end” if there isn’t one anymore. Through these ongoing cycles, hypotheses about the market, feature success, pricing, and more are tested all the time.

These ongoing cycles alter the view of usability as an end-of-the-production-cycle affair for the better. Johnson et al. noted that “Practitioners and scholars alike have been continually frustrated by the fact that, at least in the worlds of engineering and commerce, usability is often seen as an end-of-the-production-cycle affair. That is, we claim that we know better; we are well aware of the strong impact that early, middle, and late usability can have on product usefulness, marketability, and integrity.” [5] Agile methodologies embody these beliefs, albeit in the form of user data more than observational studies of human participants.

2.3    Data  Itself

What do we mean when we talk about user data? Using concrete examples from Catch, the following section introduces common types of data, notes where they are usually stored, and describes why they are useful to usability practitioners and product designers. It covers the most critical steps of logging, storing, analyzing, and visualizing data.

2.3.1   Hunting,  Gathering  and Cleaning

The first, unavoidable step in making use of data is finding it: finding out which data is being collected, where it is stored, what its contents are, what those contents mean, and determining if they have value. Though it sounds simple, the process of finding and documenting data is time consuming, not always pleasant, and surprisingly manual: often, the best way to find data is to ask people in person, and it is not always easy to find the people who truly understand the contents of various databases.

Usability practitioners’ familiarity with interview techniques helps to facilitate easier data hunting and gathering. In addition, researching and documenting data contents are two other areas in which technical communicators are skilled and can reduce organizational data pain. User researchers already familiar with cleaning data (understanding and naming fields in data sets; removing extraneous characters or cases with too many empty or null fields; and so on) will find their work cut out for them and their skills valuable. Fortunately, free tools such as Google Refine have helped to make repetitive data cleaning tasks significantly easier and faster.
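The cleaning steps named above can be sketched in a few lines. This is an illustrative sketch only; the field names, records, and the empty-field threshold are invented for the example and are not from the paper.

```python
# Sketch of routine data cleaning: normalize field names, strip stray
# whitespace, and drop records with too many empty or null fields.
# All names and thresholds here are hypothetical.

def clean_records(records, max_empty_fields=2):
    cleaned = []
    for record in records:
        # Normalize keys: lowercase, underscores instead of spaces.
        normalized = {
            key.strip().lower().replace(" ", "_"): value
            for key, value in record.items()
        }
        # Strip extraneous whitespace from string values.
        for key, value in normalized.items():
            if isinstance(value, str):
                normalized[key] = value.strip()
        # Drop records with too many empty or null fields.
        empties = sum(1 for v in normalized.values() if v in (None, ""))
        if empties <= max_empty_fields:
            cleaned.append(normalized)
    return cleaned

rows = [
    {"User Name": "  kim  ", "Country": "kr", "Device": "HTC Desire"},
    {"User Name": "", "Country": None, "Device": None},  # mostly empty
]
print(clean_records(rows))
```

Tools like Google Refine wrap exactly this kind of repetitive transformation in an interactive interface, so it need not be scripted by hand each time.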

2.3.2   Log   Files   and  the   User-Agent

This section describes some of the most common forms that data takes and where a practitioner might begin to look for it, using rudimentary examples of log and system files in which valuable usability and interaction data can be found. Log files are records of system processes, common but important. Generally, “log file” is taken to mean “server web log file.” Anything from the Internet that hits a server might be logged, and most web-based products, whether mobile app or browser-based, hit a web server at some point. A log file’s utility may not be obvious from this description, but some of the questions that log files can help answer are obvious to usability practitioners: How do users use the product?

At Catch, for example, millions of people use one of our mobile apps, Catch Notes, on more than 300 different mobile devices. This device variety has usability implications, since each device has different capabilities, interaction modalities (side-to-side swiping vs. tapping), and screen sizes, making a display that looks great on a larger device screen appear crowded on a smaller screen. Operating system (iOS and Android, for example) differences also exist, as do differences between versions of those operating systems. Android 2.1 and Android Ice Cream Sandwich are different from one another. Each provides the Catch Notes Android app with different automatic capabilities and even makes our app look different in unexpected ways, such as background shading. It is important for usability and design practitioners to know how these factors and others can change a product’s appearance and functionality, and how this knowledge helps detect and fix potential usability problems.

Fortunately, the rudimentary user-agent string relates what devices and platforms a product is used on, enabling an application to identify itself. As with most data, the user-agent string is not especially helpful on an individual user basis but is most helpful in the aggregate, across many users.

Below is a user-agent string from a mobile device:

Mozilla/5.0 (Linux; U; Android 2.1-update1; de-de;
HTC Desire Build/ERE27) AppleWebKit/530.17
(KHTML, like Gecko) Version/4.0 Mobile

This string provides a great deal of information. It tells us that the user has an HTC Desire smartphone (a touchscreen, 3G phone), which is running the Android 2.1 operating system. It is HSPA/WCDMA (2 Mbps up (HSUPA), 7.2 Mbps down) or GSM/GPRS/EDGE (quad band), and contains features like a 5 mpx camera, Bluetooth 2.0, USB, Wi-Fi (802.11b/g), and GPS. We also know that it can handle audio files in the formats MP3, AAC, M4A, WMA, MIDI, WAV, and OGG. Since our product, Catch Notes, enables users to create voice notes, it is helpful for us to know what audio file formats are supported on the device, especially in relation to the audio file types web browsers do and do not support. This user may back up her notes to the Catch website, for example, and encounter a usability problem when trying to play back a voice note in an audio format supported by the smartphone but not supported by the browser.
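As a minimal sketch of how such fields are extracted, the snippet below pulls the OS, OS version, and model out of the example user-agent string with regular expressions. This is illustrative only; real user-agent parsing is far messier, and production systems use dedicated parsing libraries rather than ad hoc patterns like these.

```python
import re

# The user-agent string quoted in the text above.
UA = ("Mozilla/5.0 (Linux; U; Android 2.1-update1; de-de; "
      "HTC Desire Build/ERE27) AppleWebKit/530.17 "
      "(KHTML, like Gecko) Version/4.0 Mobile")

def parse_user_agent(ua):
    """Extract a few illustrative fields from a user-agent string."""
    fields = {}
    os_match = re.search(r"Android\s+([\d.]+)", ua)
    if os_match:
        fields["os"] = "android"
        fields["os_version"] = os_match.group(1)
    # Device model: the segment between the last ';' and 'Build/'.
    model_match = re.search(r";\s*([^;]+?)\s+Build/", ua)
    if model_match:
        fields["model"] = model_match.group(1)
    fields["mobile"] = "Mobile" in ua
    return fields

print(parse_user_agent(UA))
# → {'os': 'android', 'os_version': '2.1', 'model': 'HTC Desire', 'mobile': True}
```

Run in the aggregate over millions of log lines, fields like these become the counts by device, OS version, and country that the rest of this section describes.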

Simple log data also contains helpful information about geography. Log files indicated that Korean users of Catch Notes, for example, had trouble creating accounts. Our server logs registration starts, errors, and completed registrations, and we noticed high numbers of starts and errors, and low numbers of completed registrations, when we examined registration data by country.

This is an example of a “what” from which we could figure out the “why.” We subsequently found three problems:

  1. We remembered that our username and password fields were ASCII-only. ASCII encodes characters to represent text to computers and is based on the English alphabet. The ASCII-only requirement meant that Korean users could not create usernames and passwords with Korean characters.
  2. The registration instructions had not been translated into Korean, nor was there an instruction to use ASCII characters.
  3. Error messages had not been translated into Korean.

In this way, log files with geography data helped us to find and make a few simple, quick, and inexpensive usability improvements for one of our largest user groups.
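The registration analysis described above amounts to a per-country funnel count over log events. The sketch below illustrates the idea; the event tuples and rates are invented stand-ins, not Catch's actual log format or numbers.

```python
from collections import defaultdict

# Hypothetical (country, event) pairs standing in for parsed log lines.
events = [
    ("kr", "start"), ("kr", "error"), ("kr", "start"), ("kr", "error"),
    ("kr", "complete"),
    ("us", "start"), ("us", "complete"), ("us", "start"), ("us", "complete"),
]

# Count registration starts, errors, and completions per country.
funnel = defaultdict(lambda: {"start": 0, "error": 0, "complete": 0})
for country, event in events:
    funnel[country][event] += 1

for country, counts in sorted(funnel.items()):
    rate = counts["complete"] / counts["start"]
    print(country, counts, f"completion rate: {rate:.0%}")
```

A country whose completion rate sits far below the others, as "kr" does in this toy data, is exactly the kind of "what" that prompts the qualitative hunt for "why."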

2.3.3   Database Records and Privacy

Like server log files, database records contain a great deal of useful information. Database contents depend on what is being collected, and databases are usually where the bulk of an organization’s data reside, so the data hunting step must apply to databases, too. Documentation should include the database technologies in use (MySQL vs. Mongo, for instance), data models, and contents.

An important question for the usability practitioner is “What data can and should be examined?” Catch Notes, for example, is a note-taking product. Users create personal notes, which they can share with others at their discretion. These notes are saved to our database primarily for customer back-up, so that they can be made available in case a smartphone is lost or stolen. We would never, ever look at private note data. Though such data may tell us more about the ways in which people use our product, note data is inherently private, and looking at it would violate our privacy policies and ethics. Note data, then, is a “can” but is not a “should.”

The Catch Notes database also contains tags (labels that look like #hashtag), which users can add to their notes for organization purposes. Counting tags in the aggregate, without any identifying user data, helps us understand the ways in which people use (and by extension, do not use) our product without violating privacy policies.

Knowing how many users create tags would be useful if we wanted to limit the number of user interface elements on small-screen mobile devices. We would have to make tough choices about what stays and what goes, and a tag count would help answer questions like “Can we remove tags, or move them to a sub-screen, without many people noticing?”

Knowing what words users choose when they create tags is also helpful in the aggregate. It helps us develop informed use cases and personas, which in turn help product development. Seeing a high number of tags like #recipes or #meds may also inform, for example, new features we might develop for particular types of notes (checklists and reminders). Finally, tags show the languages in which they were created, which provides a hint as to how use cases might differ culturally. This simple data ultimately gives us a glimpse as to how users conceive of what they are doing and creating, useful for keeping our biases in check and informed by facts.

Only very simple tools are needed to conduct a frequency count of textual data like tags. R is a powerful statistical analysis tool that makes frequency counts a one-line command. Google Refine is another that makes aggregation and frequency counts very easy, is web-based, and does not require command-line know-how. Both are free.
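To show just how simple the tooling can be, here is the same aggregate tag count sketched with Python's standard library (the paper itself points to R and Google Refine). The tag values are invented for illustration.

```python
from collections import Counter

# Hypothetical tag data, stripped of any identifying user information.
tags = ["#recipes", "#meds", "#recipes", "#todo", "#recipes", "#meds"]

# A frequency count is a one-liner here too.
frequency = Counter(tags)

print(frequency.most_common(2))  # → [('#recipes', 3), ('#meds', 2)]
```

The output directly answers the questions raised above: which tags dominate, and how concentrated tag usage is, without ever touching note contents.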

Aggregate tag data gave us a macro, quantitative view of something we later observed at the micro level in a small (eight-person) qualitative study. We observed some of the same tags (#recipes) in our small sample but learned much more about how users attempted and wanted to organize those notes, as compared to how they were actually able to organize their notes. We learned, for example, that tags didn’t scale well: some people wanted, and some types of note content demanded, more hierarchical organization, and for tags to be a subcategorization or a labeling system beneath a hierarchy. In addition, many users had no idea what tags were, one reason why just 17% of all notes had tags at the time.

In summary, user data can help practitioners find, investigate, and correct usability problems without a study with users, and can also help inform the design of studies with users, ensuring certain issues are included in the study.

2.4    Data  Tools

The types of data discussed so far are not new, but free tools with which to quickly store and analyze this data are fairly new. We’ve seen that different types of data may live in different places but that, together, they can tell powerful stories about user behaviors and product usability. Visualization tools are key to drawing out the stories data tells, as seeing data is the best way to get an idea of what’s in it.

Catch uses three tools to combine, learn from, and visualize the basic data described to this point. These tools are Redis, Cube, and D3, and all three are:

  • Open source and free
  • Well documented
  • Easy to learn
  • Automated, pulling data in and pushing displays out (there is no need for manual design tools to make data appear in a certain way each time data must be displayed)
  • Easy to change as data needs and questions about users evolve

2.4.1   Redis

Catch uses Redis primarily as a data aggregation tool. Redis listens for “data events” (server logs being made, for example, or data being added to our database) that we’ve told Redis are important and pulls them in. Though we log everything on our servers and save those logs to disk, we also dump them to Redis. While Redis has many capabilities, we use it primarily for pubsub (publication and subscription). Our server logs are defined as the event “log:servername,” so Redis listens for “log:*”, which translates to “Pay attention to these events and ingest all server logs that are made.” This is how all of our server logs get into Redis.
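The pattern-based subscription described above can be illustrated offline. Redis itself provides this behavior through its PSUBSCRIBE command; the toy in-memory bus below merely stands in for it so the "log:*" idea is runnable without a Redis server, and its class and handler names are invented for the example.

```python
import fnmatch
from collections import defaultdict

# Toy in-memory publish/subscribe bus illustrating pattern subscription.
# Redis's PSUBSCRIBE does this for real; this sketch just shows the idea.
class MiniPubSub:
    def __init__(self):
        self.patterns = defaultdict(list)  # pattern -> list of handlers

    def psubscribe(self, pattern, handler):
        self.patterns[pattern].append(handler)

    def publish(self, channel, message):
        # Deliver to every handler whose pattern matches the channel.
        for pattern, handlers in self.patterns.items():
            if fnmatch.fnmatch(channel, pattern):
                for handler in handlers:
                    handler(channel, message)

bus = MiniPubSub()
received = []
bus.psubscribe("log:*", lambda ch, msg: received.append((ch, msg)))

bus.publish("log:web1", "GET /notes 200")   # matches "log:*", ingested
bus.publish("metrics:cpu", "load 0.3")      # no match, ignored
print(received)  # → [('log:web1', 'GET /notes 200')]
```

The single "log:*" subscription picks up logs from every server, however many there are, which is what makes the pattern form convenient for aggregation.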

Let’s return for a moment to the user-agent string example noted earlier in this paper:

Mozilla/5.0 (Linux; U; Android 2.1-update1; de-de;
HTC Desire Build/ERE27) AppleWebKit/530.17
(KHTML, like Gecko) Version/4.0 Mobile

It is the pubsub piece of Redis that extracts properties from the user-agent string and publishes them to Redis. Distinct pieces of the user-agent string then become pieces of a larger measure that we’ll call “Measurement” for the purposes of this example. “Measurement” looks like this:


“Measurement”: {
  “Os”: “android”,
  “OsVersion”: “2.1”,
  “Model”: “HTC Desire”,
  “Client”: “catch”,
  “ClientVersion”: “4.5.2”,
  “Country”: “gb”,
  “Language”: “en”,
  “Mobile”: true,
  “Time”: “2012-01-12T20:23:40Z”
}


In practice, “Measurement” is not confined to the contents of server logs or of the user-agent string. It also includes data we want to track and measure that resides in our database. These, too, appear in “Measurement” as named events. In this way, data from different places is brought together as a collection of distinct events with one name (“Measurement”) to mean “everything we want to measure right now.” Important user data is essentially consolidated for ease of use, but is not compressed; each of the pieces of “Measurement” can still be examined on its own, as one, two, or more pieces together, and so on. “Measurement” is not all or nothing.
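The consolidation just described, fields from the user-agent string merged with named events from the database into one record whose pieces remain individually addressable, can be sketched as a simple dictionary merge. The split between the two sources here is assumed for illustration; the values come from the "Measurement" example above.

```python
# Fields extracted from the user-agent string (illustrative split).
ua_fields = {"Os": "android", "OsVersion": "2.1",
             "Model": "HTC Desire", "Mobile": True}

# Named events pulled from the database (illustrative split).
db_fields = {"Client": "catch", "ClientVersion": "4.5.2",
             "Country": "gb", "Language": "en",
             "Time": "2012-01-12T20:23:40Z"}

# Consolidate into one "Measurement" record.
measurement = {**ua_fields, **db_fields}

# Consolidated, not compressed: each piece can still be read on its own.
print(measurement["Country"])                        # → gb
print(measurement["Os"], measurement["OsVersion"])   # → android 2.1
```

This is what "not all or nothing" means in practice: the merged record supports both whole-record analysis and per-field slicing, such as the by-country views discussed next.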

2.4.2   Cube

Catch also uses a tool called Cube for time-series and other simple data analysis. Cube is a “subscriber” to the data Redis “publishes.” Redis essentially says “Push this thing I’m listening for, called Measurement, to Cube.” Cube provides an easy way to look at user data in aggregate and drill into different facets of it.

Being able to drill down into facets of interest enabled us to discover the aforementioned registration problems Korean users experienced. We began “slicing and dicing” and viewing the data in the ways available to us, and “Country” (shown in the “Measurement” example above) was one of the elements on which we could slice and dice.

We began with a time-series graph of login events to see the general shape of the data. As we changed the view by country, we noticed that the login data from Korea had a different shape. We looked at a few randomly selected, individual (but fully anonymous) log files from Korea and, thanks to the time-series view, had the horrifying realization that some users attempted to create new accounts for as long as eight to 15 minutes. Obviously, there was a problem, but we only found that it was specific to Korea because of the data views we had. Future tests of changes made should show a reduction in registration problems, as measured by a reduction in the time required to complete attempted new registrations, and an increase in the number of successful logins afterward, with failed registrations decreasing over time.

2.4.3   D3

While Cube is useful for some data visualization during analysis, D3 can more beautifully visualize data for a larger group of internal users. Technically, D3 is a JavaScript library for manipulating data contained in various document types, but this simply enables it to create quantitative, interactive visualizations for a large variety of data sets. This is important for technical communicators because there is finally a tool that makes Tufte-level data visualizations fairly easy to do.

D3 is built on familiar web standards like HTML5 and CSS, making it browser-friendly for end users and enabling interface designers to style data with a standard like CSS rather than a proprietary format or an expensive graphic design tool like Adobe Photoshop. These familiar standards also enable technical communicators to focus on choosing appropriate visualization(s) for the data and audience involved, and on enabling users to access them through commonly available tools (like a browser), rather than on learning proprietary tool skills that don’t extend to other software.

The availability of “big data” means that this work is never really static: because of inexpensive storage and improved analytical tools, many variables can be analyzed not just over time but on an ongoing basis, with new data being pulled in and added to the picture on a daily, weekly, monthly, and yearly basis. For this reason, tools that can do some of this ongoing work for the practitioner are key, and, fortunately, there are several.

In summary, freely available aggregation, analysis, and visualization tools now make user data easier to manage, learn from, and display internally, creating a sort of dashboard view of potential usability issues as they happen.

2.5    Skills Needed

This section further shows why technical communicators are well equipped to address these challenges, where their skills are most needed, and what those skills are.

2.5.1   Researcher

As mentioned earlier, knowledge of qualitative research methods and interview techniques is helpful because researchers will need to find and approach people, observe them, understand the data artifacts they create and/or manage, and extract and record a great deal of tacit knowledge. User researchers must also interview "data consumers." Often, these consumers' data needs, phrased in their own words, do not match the data requests they formulate for engineers. Here, the user researcher is a translator, aiming to understand what data people need, the problems they are trying to solve, and the questions they are trying to answer. This is not a new technique so much as a new application made more urgent by the availability of more data.

Finally, research includes data querying and cleaning, as mentioned earlier. Graduate-level research experience is particularly helpful here.
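As a small, hypothetical illustration of the cleaning step, the sketch below drops malformed records and normalizes a field before analysis; the record shape and field names are invented for the example.

```javascript
// Hypothetical raw event records, as they might come out of a log store:
// some are missing fields, some have inconsistent casing.
const raw = [
  { user: "Alice", action: "SEARCH", ms: 420 },
  { user: "bob", action: "search", ms: null },   // missing duration
  { user: "carol", action: "Click", ms: 180 },
  { action: "search", ms: 95 },                  // missing user
];

// Cleaning: drop records missing required fields, normalize casing
// so later grouping and counting treat "SEARCH" and "search" as one.
const clean = raw
  .filter((r) => r.user && r.action && typeof r.ms === "number")
  .map((r) => ({ ...r, action: r.action.toLowerCase() }));

console.log(clean.length); // only the fully populated records remain
```

In practice this step happens in whatever query language the data store speaks, but the logic — filter out the unusable, normalize the rest — is the same.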

2.5.2   Data-based Study Design

User researchers and technical communicators skilled in study design can determine how best to incorporate quantitative user data before and during a study. They can also help to design what are essentially "ongoing studies," or automatic longitudinal data gathering; many more things can now be tracked over lengthy periods of time, but it is not always clear when these techniques are most appropriate or useful.
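Mechanically, an "ongoing study" often reduces to rolling timestamped events up into period buckets as new data arrives. A minimal sketch, with hypothetical event names and dates:

```javascript
// Roll hypothetical timestamped events up into weekly counts — the
// kind of automatic longitudinal aggregation an "ongoing study" needs.
const events = [
  { name: "export", at: "2012-01-02" },
  { name: "export", at: "2012-01-04" },
  { name: "export", at: "2012-01-10" },
];

// Simplified bucket key: the year plus a zero-based week index
// counted from January 1 (a real system would use ISO weeks).
function weekOf(dateStr) {
  const d = new Date(dateStr + "T00:00:00Z");
  const jan1 = new Date(Date.UTC(d.getUTCFullYear(), 0, 1));
  const week = Math.floor((d - jan1) / (7 * 24 * 3600 * 1000));
  return `${d.getUTCFullYear()}-W${week}`;
}

const counts = {};
for (const e of events) {
  const k = weekOf(e.at);
  counts[k] = (counts[k] || 0) + 1;
}
console.log(counts); // per-week totals, ready to chart over time
```

Run continuously, this produces exactly the evolutionary, rather than snapshot, view of user behavior discussed later in the paper.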

2.5.3   Data Documentation

Documenting the data used, as well as the tools and processes around it, is critical not just for current product development and experiments but for future work more generally. Some physicists, for instance, have called for better care to be taken of data after an experiment has finished [3]. Researchers from major labs, including CERN, formed a working group in 2009 called Data Preservation in High Energy Physics (DPHEP). Data degrading and being orphaned after experiments was only part of the problem: "Even given the raw data, only someone intimately involved in the original experiment can make sense of it" [3]. Documentation needs to include "internal notes that explain the ins and outs of analyses, to subprograms designed to massage numbers for specific experiments, as well as metainfo, the hacks and undocumented software tweaks made by a team in the midst of a project and then quickly forgotten." One of the aims of DPHEP is to create the new post of data archivist, someone within each experimental team who will ensure that information is properly managed [3].

It is not just physics labs that need such data archivists but most organizations, and technical communicators already possess many of the relevant skills. They are, for example, skilled at designing document collections as well as individual templates for documenting the fields (columns, rows) of data sets, their contents, the meaning of those contents (real and intended), data locations, and metadata.
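Such a template can be as simple as one structured record per data set, listing fields, meanings, units, and caveats. The shape below is a hypothetical sketch, not a standard:

```javascript
// Hypothetical data-dictionary entry for one event table — the kind of
// artifact a "data archivist" would maintain alongside the data itself.
const dataDictionary = {
  table: "search_events",
  location: "analytics store (hypothetical)",
  maintainer: "usability-team",
  fields: [
    { name: "user_id", type: "string",  meaning: "anonymized visitor id" },
    { name: "ms",      type: "number",  meaning: "query latency", unit: "milliseconds" },
    { name: "ok",      type: "boolean", meaning: "true if results were returned" },
  ],
  caveats: ["latency not recorded before 2012-01", "ids rotate monthly"],
};

// Even a trivial completeness check catches undocumented fields early.
const documented = new Set(dataDictionary.fields.map((f) => f.name));
console.log(documented.has("ms"));
```

The payoff is exactly the one DPHEP identifies: someone who was not "intimately involved" can still make sense of the data later.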

2.5.4   Internal Data Usability

There is a tremendous need in many data-drowning organizations for information designers who can turn data into usable, aesthetically pleasing products (whether for public consumption, internal consumption, or both) and intuitive interfaces. Usability practitioners are well poised to advise on data usability, ensuring not only that the people who need certain data can access it in the ways they expect (laptop, mobile, in and out of the office) but also that the forms in which they receive it are meaningful, enjoyable, attractive, and useful.

Further, usability practitioners and information designers are needed to suggest how metadata is best integrated with data displays, to communicate to data users what they are seeing and which data is of higher quality and more trustworthy than the rest. Not all data is created equal: some is noisier, dirtier, and less well maintained. Such attributes are important for making decisions based on data.
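One simple way to integrate such metadata is to attach a quality flag to each data series and let the display layer consume it. A hypothetical sketch (the series names, values, and flag vocabulary are invented):

```javascript
// Attach quality metadata to each hypothetical data series so a display
// layer can render lower-quality data differently (dashed lines, warnings).
const series = [
  { name: "pageviews", points: [10, 12, 15], quality: "high" },
  { name: "session_length", points: [3, null, 4], quality: "noisy" },
];

// A display helper turns the quality flag into a user-facing annotation,
// so viewers know which numbers to trust.
function annotation(s) {
  return s.quality === "high"
    ? `${s.name}: trusted`
    : `${s.name}: interpret with caution (${s.quality})`;
}

console.log(series.map(annotation));
```

The point is not this particular flag scheme but that trustworthiness travels with the data all the way to the display, rather than living only in the analyst's head.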

Data usability is an area that is ripe for research by technical communicators and information designers. Best practices are sorely needed, particularly with regard to data visualizations with ever-changing data, metadata, and myriad possible annotations.


3    Conclusion
The increased quantity, availability, and visibility of user data, some of which is usability data, accomplishes what Johnson et al. suggest: that the usability practice "develop a partnership between science and rhetoric, between observation, representation, and statistical analysis, so that we can ultimately offer our best advice applicable to the situation at hand rather than, as science is wont to do, definitively settle questions" [5]. Quantitative user data lends rigor to the usability practice, while the popularity of methodologies like Agile reinforces the idea that things are evolving and never settled, diminishing expectations for decisions to be "final."

This paper has shown some of the benefits of "big data" to usability: specifically, more visibility of users (because data about the user, and thus user behaviors, is front and center) at all times; supplementary study data and data-driven guidance of the design of observational studies; and the ability to track changes over time more easily, providing evolutionary rather than snapshot views of user behavior.

These changes are not without their drawbacks, which should be evaluated and researched further so their implications might be better understood. "Big data" creates more initial work for usability practitioners and a "never done" feeling with ongoing studies and data evaluation. It points to the need for practitioners to create best practices for data usability, not just product usability. Most concerning, user data creates a temptation to replace observational studies altogether; the most quantitative method may not always be the most appropriate. Finally, practitioners must help organizations avoid "data survivor bias": the available data is the sample, but it is not necessarily the universe of data, the literal "all."

User data is, however, worthy of inclusion in the usability practitioner's toolbox. Its prevalence and quantity mark a major shift, and this paper has only begun to touch on its implications.


Acknowledgments
The author is grateful to Andreas Schobel for developing, and collaborating on the creation of, the Redis, Cube, and D3 system described herein.


References
[1] Agile Manifesto. The Agile Manifesto homepage, 2012.

[2] M. Carmichael. Stat of the day: Mobile phones overtake PCs. Ad Age, January 2012.

[3] A. Curry.

[4] The Economist. Data, data everywhere. February 25, 2010.

[5] R. Johnson, M. Salvo, and M. Zoetewey. 2007.

[6] A. Smith. 2011.

[7] Wikipedia. Definition of waterfall model, 2012.
