Using Bayesian Induction Methods in Risk Assessment and Communication

J.D. Applen
University of Central Florida
JD.Applen (at) ucf.edu


INTRODUCTION

Our scholarship in risk assessment in technical communication needs to be expanded to include a consideration of induction and probabilistic reasoning. To that end, we could benefit from the mathematical and heuristic applications afforded by Bayes’s theorem. While technical communication scholars are very good at analyzing the rhetorical situations of previous events and then presenting qualitative assessments about what might happen going forward, Bayesian analysis will allow us to make more telling inferences about future events.

Additionally, there are risk assessment models in technical communication that can be used in support of Bayesian analysis. Most notably, the spheres model of risk communication analysis as characterized by Walsh and Walker (2016) allows us to discern and explicate the arguments and beliefs made evident in the interactions of stakeholders in disparate discourse communities when assessing and communicating risk, and understanding this model can aid us in the iterative process involved in Bayesian analysis.

Bayesian statistical modeling challenges us to engage in an iterative process to better assess risk, and while some are uncomfortable with the idea of thinking probabilistically about unexpected and unforeseen dangers, many believe that this is the best method for making responsible projections in some circumstances. While not easily determined, the numerical values known as priors that are used in Bayes’s formula can be established, changed, and employed through subjective or inductive reasoning, a practice that sets the Bayesian approach apart from other statistical methods. Going forward, I will explain how current analytic methods in statistics contrast with Bayes’s ideas, review some of the relevant risk assessment and communication models in the field of technical communication and how they relate to Bayesian analysis, show how both of these methods inform each other in the identification and presentation of risk, and then provide a set of recommendations for technical communication professionals as their work relates to risk assessment.

TRADITIONAL FREQUENTIST STATISTICS, BIG DATA, AND BAYES’S THEOREM

To better understand the element of subjectivity and induction in the Bayesian approach, we need to review the basic tenets of more commonly used statistical methods, namely traditional hypothesis testing and big data correlations.

In traditional frequentist statistical hypothesis testing, the variables to be measured in a study are determined before data collection begins, and those in the frequentist camp believe that an “objective purity” can be realized if we collect enough data using this method (Silver, 2012, p. 255). In this “frequentist approach,” the more data or measurements that are recorded in a study, the higher the confidence level of the experimental findings. For example, a study designed to test the hypothesis that those who watch more television per day are more likely to develop cardiovascular disease would measure just these two variables, hours of television watched per day and heart disease in a population, over a period of, say, five years, and the confidence level in the results would rise in proportion to the number of subjects in the study.
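
To make the frequentist logic concrete, here is a minimal sketch in Python of a two-proportion z-test for a hypothetical television study; the group sizes and disease rates are invented for illustration, and the only point is that the same observed difference yields more statistical confidence as the sample grows.

    import math

    # Hypothetical data, not from any real study: disease rates in two groups.
    def two_proportion_z(p1, n1, p2, n2):
        """Frequentist two-proportion z-test statistic."""
        pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        return (p1 - p2) / se

    # The same observed rates yield a stronger result as the sample grows.
    for n in (100, 1000, 10000):
        z = two_proportion_z(0.12, n, 0.10, n)  # heavy vs. light TV watchers
        print(f"n per group = {n:>6}: z = {z:.2f}")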

Big data analysis is another analytical method now seeing wide use, due in part to advancements in computational technologies and increased access to large bodies of data. Examining the culture of big data shows that the conventions of hypothesis testing described above have for some given way to the idea that mere correlations between findings, however unlikely, have value in themselves, which keeps us from more thoughtful and accurate conclusions when examining datasets. Simply combining datasets and hoping for correlations to show up, with little regard for the logic behind the connections, has been likened to going on a “fishing expedition” (Mayer-Schönberger & Cukier, 2013, p. 29).

The change in discourse between traditional frequentist statistics and big data practices is evident when “correlation between” is treated with the same level of scientific rigor as “cause and effect,” what Kuhn (1970) would call a new “symbolic generalization” (p. 182). Big data can produce false correlations, and these keep researchers from asking what might be the underlying problem in the statistical model. For example, Silver (2012, p. 253) questions the conclusion of a big data study suggesting that toads curtailed mating activity five days before a major earthquake in Italy because they were able to forecast the earthquake (Grant & Halliday, 2010). The statistical evidence was originally gathered without the idea that it would be connected to earthquakes; the scientists had the data on hand from a study that was originally about other aspects of the toads’ behavior, the L’Aquila earthquake just happened to occur, and the connection was made afterwards. However, there might be other explanations for toads curtailing mating.
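
The “fishing expedition” problem is easy to reproduce. The sketch below, using only Python’s standard library and entirely synthetic data, correlates one random series against a thousand other random series; searching widely enough virtually guarantees that a strong-looking but meaningless correlation turns up.

    import random
    import statistics

    random.seed(1)

    def pearson(x, y):
        """Pearson correlation coefficient for two equal-length series."""
        mx, my = statistics.mean(x), statistics.mean(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy)

    # One random "target" series and 1,000 unrelated candidate series.
    target = [random.gauss(0, 1) for _ in range(20)]
    best = max(
        abs(pearson(target, [random.gauss(0, 1) for _ in range(20)]))
        for _ in range(1000)
    )
    print(f"strongest correlation found by fishing: r = {best:.2f}")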

Researchers can also propose and perform more traditional studies that seek to determine the actual mechanisms behind the correlations and how to present these data so we can begin our analyses. In this regard, Krenchel and Madsbjerg (2014) write that “(W)hat people in the humanities would call thin data,” data that suggest connections without any plausible interpretation, needs to be replaced with “thick data” based on methods that actually uncover the workings of natural phenomena. By claiming that we can plausibly make sense of the world based on uninterpreted thin data, we “radically reduce what data and understanding means” (2014). Just because there is a correlation does not mean that there is any real cause and effect. So toads stop mating five days before a major earthquake; how far can a toad travel in five days? Would it be enough to escape the dangers of a major earthquake? Would it confer any significant evolutionary advantage to them? What percentage of a population of toads dies every year because of earthquakes?

The seemingly neutral and objective nature of large datasets has allowed many to subscribe to the “myth” that big data is unbiased; there are massive collections of datapoints just sitting there waiting for researchers to identify and use them as needed (boyd & Crawford, 2012, p. 663). Yet there is the potential for many kinds of bias when using big data. For example, large datasets may be thought to be random, yet this is not always the case, and researchers need to examine the sources of their datasets. Researchers also have to account for their own biases when interpreting data and when choosing the mathematical formulas into which these data are poured. With these biases also comes the inclination of many to see patterns in correlations of big data, or apophenia (boyd & Crawford, 2012, p. 668), such as the pattern regarding toads, mating, and earthquakes.

In contrast to the frequentist statistical approach and big data analysis, we have Bayesian theory. When there is not much data available that can serve to understand complicated events, Bayesian statistical theory allows us to more accurately define risk and make better policy decisions as it requires us to entertain a range of hypotheses and continually reexamine the underlying assumptions in a study. Employing Bayes’s theorem invites a dynamic sensibility in our analyses that is in accord with some of the risk assessment heuristics in technical communication that are described in a following section.

Bayes’s theorem and its inductive approach to modeling has found a wider audience in recent years in part because of the reputation Nate Silver has gained by using Bayesian analysis to accurately predict the outcomes of events such as the 2012 general election in the United States (O’Hara, 2012). His articles in traditional print media, blog postings, and the publication of a well-received book have also contributed to Silver’s reputation. However, Bayes’s theorem has been around and in use for centuries.

In the eighteenth century, Hume granted that some objects are always associated with other objects, but he was skeptical of the idea that we could ever know the specific causes of phenomena. For example, the fact that the sun has risen every morning does not mean that it will rise tomorrow. This should not be construed to mean that, in all probability, the sun will not rise tomorrow, but we cannot be certain of this in an absolute cause-and-effect manner. Hume was challenging the reigning paradigm of the age, one that assumed that the “design of the world” meant that there was an ultimate cause, a creator, and thinking probabilistically was not accepted as a valid method for understanding the way things worked (McGrayne, 2011, p. 5). Hume’s thinking ushered in a new set of questions for Enlightenment scholars like Thomas Bayes, who sought to determine how he could predict the probability of a future event occurring with data from the past (McGrayne, 2011, p. 8).

The basic theorem that we now attribute to Bayes was worked out around 1740, but his work fell into obscurity. Bayes was not a professional mathematician and he developed his ideas through thought experiments; it was Pierre Simon Laplace who independently theorized the same concept in 1774 and then produced a mathematical formula that more concretely described it and helped it gain currency. Bayes gets credit for the discovery only because of historical convention (McGrayne, 2011, p. x). In essence, both Bayes and Laplace believed that there were absolute connections between things in the universe that could be found out, and in this way they challenged Hume’s skepticism, but to do this they had to work through a series of approximations that would gradually get them “closer and closer to the truth” (Silver, 2012, pp. 242–243).

In Bayesian analysis, we start with a scenario that we want to test. For example, Morris (2016) presents a basic scenario that seeks to determine what percentage of people who have symptoms associated with the flu actually have the flu. Next, we assign a prior to it, a number that reflects what we already know about the phenomenon to which we are assigning risk. In this scenario, the prior could be the percentage of people in a population who currently have the flu, rather than the oftentimes higher percentage of people who show symptoms of the flu but do not have it. Priors are variables in the formula that Laplace came up with to reflect Bayes’s method. Here is Bayes’s formula, as derived by Laplace, with the prior (the percentage of people in a population who currently have the flu) represented as P(A):

P(A|B) = P(A)P(B|A) / P(B)

In Bayes’s formula, the vertical line or “pipe” ( | ) between B and A or A and B indicates a conditional probability. It can be read as “given that.”

P(A|B) is the probability that A is true given that B is true.

As described above, P(A|B) would be the probability of someone actually having the flu given that they have presented some symptoms of the flu. For example, Morris (2016) presents a scenario that lists sore throats and headaches as symptoms of the flu.

P(B|A) is the probability that B is true given that A is true. Given that someone actually has the flu, this is the probability that there are also attendant symptoms that are associated with the disease, a sore throat and a headache.

P(B) is the probability, in our example, that someone in a population has these symptoms. Using the hypothetical example set forth by Morris (2016) for diagnosing the flu, here are some numbers we can use in Bayes’s Theorem.

The probability that someone has the flu given that they have the symptoms of a headache and a sore throat, P(A|B), is what we are trying to determine.

In this example, we have determined that the probability that someone has the symptoms of a sore throat and a headache given that they also have the flu is .9, or ninety percent. This is P(B|A).

The probability of people in a population actually having the symptoms of a sore throat and a headache at any one time regardless of whether or not they have the flu is .2, or twenty percent. This is P(B).

The prior, P(A), is the proportion of people in the population who actually have the flu: .05, or five percent, in this hypothetical case.

Using these numbers in Bayes’s formula, we get the number .225 for P(A|B).

.225 = (.05 × .9) / .2

Twenty-two and one half percent of the people in a population with these symptoms of the flu actually have the flu.
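
For readers who want to check the arithmetic, here is a minimal sketch in Python of the calculation above; the variable names are mine, and the numbers are Morris’s hypothetical values.

    # Bayes's formula applied to Morris's (2016) hypothetical flu numbers.
    p_a = 0.05          # P(A): prior, the share of the population with the flu
    p_b_given_a = 0.9   # P(B|A): probability of the symptoms given the flu
    p_b = 0.2           # P(B): share of the population with the symptoms

    p_a_given_b = p_a * p_b_given_a / p_b
    print(p_a_given_b)  # prints 0.225, i.e., twenty-two and a half percent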

Priors are commonsense or baseline numbers used in Bayes’s theorem that serve to contextualize the correlations. Using data from the Centers for Disease Control, Silver uses the example of false positives to explain priors and the other variables in Bayes’s formula. The chance of a woman between the ages of 40 and 50 having breast cancer is 1.4 percent (2012, p. 245). This would be a prior, or P(A). A woman without cancer receives a false positive mammogram result only about 10 percent of the time. If a patient in this age group actually does have cancer, a mammogram will correctly indicate that this is the case 75 percent of the time. This yields an overall probability of only about 10 percent that a woman between the ages of 40 and 50 with a positive test result actually has breast cancer. By adding priors to statistical analysis, we can come to more accurate conclusions, as priors keep our assumptions in check. Bayes’s theorem allows us to “think through” some kinds of problems better, as we too often look at “the newest or most immediately available statistic” (Silver, 2012, p. 246), such as a positive mammogram result that can produce significant anxiety, but not the overall context.
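
Silver’s numbers can be reproduced with the same formula once P(B), the overall rate of positive results, is expanded by the law of total probability. A minimal sketch, using the figures quoted above:

    # Silver's mammogram example (2012, p. 245). P(B) is computed from the
    # law of total probability over the cancer and no-cancer cases.
    prior = 0.014           # P(A): cancer rate for women aged 40 to 50
    true_positive = 0.75    # P(B|A): positive mammogram given cancer
    false_positive = 0.10   # P(B|not A): positive mammogram without cancer

    p_positive = prior * true_positive + (1 - prior) * false_positive
    posterior = prior * true_positive / p_positive
    print(round(posterior, 3))  # about 0.096, roughly 10 percent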

Bayesian thinking can also be employed in risk assignment when we have no empirical value like the 1.4 percent figure used above to plug into the equation. In order to determine the future probability of something happening, Bayes decided that we could start by inventing or guesstimating a number that would serve as a prior, use it to see if our thinking actually did predict that something would happen with some accuracy, and, if it did not, adjust this prior based on a recalibration of all of the information available to us (McGrayne, 2011, p. 6). The informed guesswork involved in determining priors and their probabilistic nature has been a major concern of the proponents of conventional frequentist statistics, which, as described above, is based on measured or known empirical values found through data collection, and this is one reason why Bayesian thinking has been controversial since its inception.

Howson and Urbach (1989) point out that the criticism of Bayesian subjective assumptions by traditional frequentist statisticians might as well be a criticism of their own technique:

the ideal of total objectivity is unattainable and that classical methods, which pose as guardians of that ideal, in fact violate it at every turn; virtually none of those methods can be applied without a generous helping of personal judgment and arbitrary assumption (p. 12).

For example, in traditional hypothesis testing, deciding what is in fact a viable hypothesis to be tested is a subjective choice (Howson & Urbach, 1989, p. 224). Moreover, citing Mendel’s work in genetics, they argue that all theories are based on probabilistic assumptions and that not every “possible genetic configuration” in a species has been verified (Howson & Urbach, 1989, p. 7).

The scientific experiments used to confirm a hypothesis are also imperfect and can only be “predicted with a certain probability” (Howson & Urbach, 1989, p. 8). Moss and Schneider also see some presumptions of the traditional approach as unrealistic; there are not as many “independent and identical” studies “out there” that can be consulted to further test a hypothesis as frequentists claim, thus they are not as objective or readily available in the short term (Moss & Schneider, 2000, p. 36).

Priors can also be adjusted with new information, and this illustrates the dynamic nature of Bayes’s theorem. For example, before the first plane flew into the World Trade Center, the prior for this event being a terrorist attack was essentially nonexistent because it had never happened before. Since planes were invented, only a few had flown into buildings in New York City, all in accidents, none by terrorists, and none into the World Trade Center. But after the first plane hit one of the towers, the prior indicating the probability of this happening again went up, and because a previous failed terrorist attempt to destroy the World Trade Center had taken place in 1993, the prior, or probability, that a second plane flown by terrorists could hit the World Trade Center could be plausibly increased (Silver, 2012, p. 422).
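
A minimal sketch of this kind of sequential updating appears below. The specific probabilities are my own illustrative inputs in the spirit of Silver’s example, not figures quoted from it: a tiny prior that terrorists are crashing planes into Manhattan skyscrapers, a near-certain first hit if they are, and a minuscule accident rate if they are not.

    def update(prior, p_e_given_h, p_e_given_not_h):
        """Return the posterior probability of hypothesis H after evidence E."""
        p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h
        return prior * p_e_given_h / p_e

    prior = 0.00005            # illustrative prior for a terrorist plane attack
    accident_rate = 0.00008    # illustrative chance of an accidental strike

    after_first = update(prior, 1.0, accident_rate)
    after_second = update(after_first, 1.0, accident_rate)
    print(f"after the first plane:  {after_first:.0%}")
    print(f"after the second plane: {after_second:.2%}")

The posterior after the first plane becomes the prior for the second, which is the updating dynamic Silver describes.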

Godfrey-Smith expresses some concern about the establishment of an initial prior because it is subjective and because two or more researchers will come to different conclusions about what it should be. He acknowledges the “washing out” argument that Bayesians use, which concludes that as more data or insights come in, the prior will be adjusted because of the “weight of the evidence” and the two positions will eventually converge into one (Godfrey-Smith, 2003, p. 209), or, as Howson and Urbach describe it, two scientists starting from different assumptions and who initially come up with different priors will eventually come to “a common view as the evidence accumulates” (1989, p. 380). However, Godfrey-Smith thinks that this “convergence” argument fails to recognize, like the initial establishment of a prior, that the new data might not necessarily be viewed in the same way by researchers with different perspectives. Additionally, if there is in fact a convergence of views and a closing in on what is a probabilistically valid result, the process “could take a very long time” (Godfrey-Smith, 2003, p. 209).

Bayesian analysis invites ongoing reevaluation of priors and assumptions as new information or new insights become available, and this is not unlike the predictive methods of machine learning that are “Bayesian in spirit” (Cantor, 2013, p. 23). To employ Bayes and to expand on the basic technique described above, we start with an initial theory that could be “scientific, raw expert opinion, or the output of some neutral exploration of data” that is a measurement of the “behavior” of some phenomenon we are considering, apply some inductive reasoning, and then see if our theory is accurate; if it is not, we start again with a different set of assumptions. If our theory “implies a bunch of facts that we know to be true,” we can keep adding to and adjusting it with “actual measurements” and eventually, after a series of adjustments that yield results with “narrower variances,” employ it to make “high-confidence predictions” (Cantor, 2013, p. 23). Silver points out that it is not all about the math or thin data and illustrates this by describing the National Weather Service’s practice of keeping “two sets of books,” one that shows how well computers accurately forecasted the weather, and the other that demonstrates how well humans “add value” to the projections made by computers by making adjustments that reflect their past experience: “humans improve the accuracy of precipitation forecasts by about 25 percent over the computer guidance alone, and temperature forecasts by about 10 percent” (Silver, 2012, p. 125).
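
One standard way to show the “narrower variances” that Cantor describes is conjugate updating of a Beta prior over an unknown event rate; the sketch below is mine, with invented observation batches, and is meant only to show the posterior tightening as measurements accumulate.

    import math

    # Beta-Binomial updating: a flat Beta(1, 1) prior over an unknown event
    # rate is revised as batches of (event, non-event) counts arrive.
    a, b = 1.0, 1.0
    batches = [(3, 7), (28, 72), (310, 690)]  # invented observations

    for events, non_events in batches:
        a += events
        b += non_events
        mean = a / (a + b)
        sd = math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
        print(f"posterior mean = {mean:.3f}, standard deviation = {sd:.4f}")

Each batch leaves the estimated rate near .3 while the standard deviation shrinks, which is the narrowing that licenses higher-confidence predictions.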

In risk assessment documentation used by the nuclear power industry in the 1970s, Carolyn Miller points out the degree to which “expertise can substitute for ethos” (2003, p. 175) in the establishment of technical or scientific authority, where ethos becomes logos. Like others, she sees Bayesian analysis as a technique that is best used when a “paucity of data or a state of epistemic uncertainty requires the use of expert judgment.” However, the difficult part of employing Bayes is “selecting experts and aggregating their opinions” (Miller, 2003, p. 182). In applying Bayes to climate science studies, Hulme (2009) has a similar concern and describes the “organised subjective” approach, where the quality of a study is a function of the level of expertise involved in making judgments. In one method, groups of “ten or more” experts can have their opinions aggregated and then applied to the Bayesian formula (Hulme, 2009, p. 86).
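
Neither Miller nor Hulme specifies the aggregation rule, so the sketch below simply assumes an equal-weight linear opinion pool, one common choice; the ten expert probabilities are invented.

    # Aggregating expert priors with an equal-weight linear opinion pool.
    # The individual probabilities are hypothetical.
    expert_priors = [0.02, 0.05, 0.01, 0.08, 0.03,
                     0.04, 0.06, 0.02, 0.05, 0.03]
    pooled = sum(expert_priors) / len(expert_priors)
    print(f"pooled prior: {pooled:.3f}")  # the P(A) fed into Bayes's formula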

Schneider et al. identify Bayesian theory as one of many “powerful means and techniques to conceptualize, quantify, and manage uncertainty” (1998, p. 167), and they provide a set of typologies that break down the “distinct components” of what is not known to provide more acute descriptions of uncertainty. They also expand the range of uncertainty’s components beyond the “technical or physical or biological character” to include its “social, cultural, and institutional” constructions (Schneider et al., 1998, p. 170). In global change risk assessment, when “perceived reality departs qualitatively from expectations” that can be generalized to the notions of the “observing community” of those working in this area, we have an instance of “imaginable surprise” (Schneider et al., 1998, p. 172). The typologies that Schneider et al. present allow for the identification of “epistemological and communal impediments” among community members so they can adopt a more open recognition of “alternative views and theses” to better foresee “surprises” in global change research (Schneider et al., 1998, p. 181).

UNKNOWN UNKNOWNS

As described in the example above, a prior for the probability of terrorists flying a hijacked plane into a building was deemed an “unknown unknown” precisely because terrorists had never done this, but a subjective reexamination of some “signals” that were missed before the event illustrated that this might happen (Silver, 2012, p. 444). In 1993, Islamic extremists had tried unsuccessfully to destroy the World Trade Center with a truck bomb detonated in the parking garage beneath it, so there was an established history of aggression. In July 2001, Condoleezza Rice, the National Security Advisor, was told that there was heightened Al-Qaeda activity. Additionally, Zacarias Moussaoui, a known extremist, was arrested by the FBI for an immigration violation one month before the World Trade Center was attacked in 2001, after he had aroused suspicion by applying for training in a Boeing 747 simulator even though he had never soloed in a plane. An FBI agent tried to sound a warning to higher-ups in the organization but was not given any significant attention. By thinking inductively and putting together some of this information after we had “attached some prior possibility that ‘terrorists might hijack planes and crash them into our buildings,’” the increased potential for this event might have been recognized even though it had never happened before (Silver, 2012, p. 444).

Silver identifies a number of “unknown unknowns” in his work, and they are not always easily reducible to a numerical prior such as the percentage of women in a population between the ages of 40 and 50 who have breast cancer or the number of people who have the flu at any one time in a population. Bayes’s theorem requires us to think “probabilistically about the world,” and it is not a “magic formula”; it consists of a simple equation. As Bayes presented it, its power allows us to move forward in an incremental fashion to reveal absolute truths about nature, and it also compels us to recognize the “epistemological uncertainty—the limits of our knowledge” (Silver, 2012, pp. 248–249). One can say that it is easy to make pronouncements in hindsight about past events like 9/11, but not to try to separate “the signal from the noise” in an attempt to identify the potential of a catastrophic event undermines the entire risk assessment project (Silver, 2012, p. 5).

On a philosophical level, this way of thinking has its antecedents. Theorists of epistemology and the scientific method have challenged us “to look around the spaces of an object” in our analyses (Stocking, 1998, p. 176), to consider the agnotological elements of data to reveal another way of framing a problem (Croissant, 2014), something that Kuhn might call a new “shared commitment” (1970, p. 185). Stocking describes Robert Proctor’s work as “concerned with why some particular lines of inquiry fail to get pursued over time” and uses his characterization of cancer studies as entrenched in the biology of the disease as the root cause, not the possible social causes, thinking that is a function of the “ignorance arrangements” in the field (1998, p. 173). Croissant describes a framework for agnoses and includes at the epistemological level knowledge that is “cognitively unaccessible” because “we do not yet know” (2014, p. 7). This could include “professionals and organizations focusing on what they do well and excluding that which eludes them,” and she makes the point that while Foucault never discussed agnotology and non-knowledge explicitly, he drew attention to “things/bodies/identities that are elided by epistemological formations” (Croissant, 2014, p. 10). The thinking that Bayesian analysis brings to determining the possibility of an unforeseen event can enhance and support risk assessment theory in technical communication.

RISK ASSESSMENT AND COMMUNICATION

To better understand how Bayesian analysis can be used in the field of technical communication, we need to consider some of the contemporary theories of risk assessment and their applications.

Grabill and Simmons challenge the artificial separation between risk assessment and risk communication. In the “technocratic approach” they describe (1998, p. 421), after “decisions have been made” (1998, p. 424) by “experts” (1998, p. 425), technical communicators are merely to pass this information on to the public in the most rhetorically effective manner; in practice, this can mean explaining the risk and trying to show that the risk assessments are indeed accurate, but it can also lead to the “‘oppression’ of (typically citizen) audiences” (Grabill & Simmons, 1998, pp. 423–424). This exclusion of the insights of those other than experts is why risk communication so often fails (Grabill & Simmons, 1998, p. 420), and risk communication can be better practiced if it is constructed by a “web, a network, an interactive process of exchanging information, opinions and values among all parties” (Grabill & Simmons, 1998, p. 425). Technical communicators can ask for institutional changes in power structures to be more inclusive of the audiences being communicated to, and this would support more useful practices that meet with less resistance and generate less mistrust.

In the view of Grabill and Simmons (1998), risk assessment and communication efforts need to be socially constructed; experts should work with others with various degrees of expertise to consider risks and make a decision, and this determination is a negotiated process. To reify social construction, they use Johnson’s (1998) description of the efforts of technical communicators who sought to better understand traffic flow in Seattle by incorporating the opinions of the driving public via focus groups, surveys, interviews, and observations. The technocratic approach would have relied on existing traffic flow data alone.

Walsh and Walker (2016) employ Goodnight’s spheres model in risk communication as a heuristic that allows us to characterize and track dynamic discourses across the boundaries of “genres, communities, and forums” (p. 81). This model consists of three “spheres of argument”—technical, personal, and public—and the intersections of these three “regimes” form “hybrid forums” of deliberation (Walsh & Walker, 2016, p. 74). Imagine a Venn diagram with these three circular bodies, all equal in size, with some overlap to convey their intersections or hybrid forums. Goodnight’s key contribution, according to Walsh and Walker (2016), is the idea that “proliferating uncertainties tend to orbit” in these three spheres in “predictable argument patterns” (p. 73).

Each sphere, according to Walsh and Walker (2016), consists of the beliefs and discourses that are valued within it. In the technical sphere we see forums like “scientific journals” and “conferences” as discursive communities with “reliability” as a value; in the public sphere, “rallies” and “town council meetings” are where ideas are exchanged, with values such as “freedom” and “equality.” Residing in the private sphere we have “personal correspondence” as a forum and “safety” and “happiness” as values (p. 74).

Regarding political uncertainty, Goodnight (2012) writes, “Arguments engage social change when the systems of authority embedded in spheres not only fail to provide resolution but the expectations of implicit norms, conventions of propriety, or explicit rules become part of the debate” (p. 260). Goodnight (2012) attributes the “idea of spheres” to Hans Gerth and C. Wright Mills, as it “creates a contextual understanding of the world where misexpectations open opportunities for social change and risk attaches to all communicated anticipations and deeds and interpretations of words and deeds,” and “uncertainty is a basic enabling and constraining condition” (p. 259). Because there is uncertainty, new forums and discourses are always coming into being, being recognized, and being negotiated in these hybrid spaces, and this dynamic is akin to the iterative activity scientists and others can engage in when we use Bayesian analysis.

Walsh and Walker (2016) identify hybrid forums as the rhetorical spaces of the spheres model for uncertainty management and risk assessment in technical communication (p. 74). These hybrid forums exist at the intersecting boundaries of different spheres and discourse communities and allow policymakers, scientists, and members of the public to make their cases, challenge each other, and work to be heard in a manner that lets people reflect on the nature of the arguments. Porter’s (1986) descriptions of discourse communities and what can be said, where it can be said, and who can say it come to mind in this dynamic (p. 46). The spheres model allows scholars of technical communication to “slow down the hybridization of risk discourses” so we can study this dynamic, and it provides us with “a heuristic to help us track uncertainties as they move and diversify through kairoi, genres, and forums” (Walsh & Walker, 2016, p. 76). For example, both technical and public audience speakers share these hybrid forums, and Callon et al. (2009) have also described these spaces as consisting of “experts, politicians, technicians, and laypersons” (p. 18). The hybrid forum that lies in the intersection of the technical and public spheres could consist of “science blogs, environmental impact statements,” and “popular science magazines” (Walsh & Walker, 2016, p. 74).

What constitutes a reasonable approximation of risk can be identified, resisted, and negotiated in these hybrid spaces: “Uncertainty is a boundary object; it is differentially and willfully misunderstood by communities to scaffold cooperation or resistance” (Walsh & Walker, 2016, p. 79), and this is consonant with the boundary objects that Star describes. For example, Star (2010) discovered that marginalia found on the forms that nineteenth-century physicians gave to family members of epileptics to record what they witnessed—notes like “exposed to night air” and “had too much hot soup”—were in the margins precisely because they did not fit into the approved symptoms or observations in the body of the forms, and thus “a whole folk medicine” was excluded from the research (p. 607). This observation led Star (2010) to ask, “How do forms shape and squeeze out what can be known and collected?” and how are “standards and boundary objects inextricably linked” (p. 607), and this linkage could take place in the hybrid space between technical, public, and private spheres that Walsh and Walker describe.

The spheres model better allows us to see at what point people are agreeing and disagreeing with ideas and values, where those ideas have been generated, where they are being willfully ignored, and where they constitute boundary objects. In this rhetorical space, what counts can be identified and disputed. Ottinger describes how official scientific standards can have a “boundary-policy function” and “structure the debates over who can participate” (2010, pp. 264–265). However, citizen scientists can produce scientific data that, while not based on the official standards used to make environmental policy, can eventually compel officials to take some action. To determine risk, government technocrats used measurements of effluent emissions from a local chemical plant averaged over a twenty-four-hour period as their sole standard for regulatory oversight, but the local citizens collected air samples at peak hours and insisted that their measurements be taken into consideration as a standard. Both the averaged and snapshot methods were inductive, as there was no statistical data that could absolutely correlate effluent with health risks. In this hybrid space between government technocrats and the public, the citizen scientists were able to communicate that short bursts of effluent could potentially have severe effects on the area’s residents (Ottinger, 2010, p. 257), or, as Walsh and Walker describe, “uncertainty-based shifting of authority from scientists to citizens can on occasion enable, rather than stymie, effective policy action” (2016, p. 79).

The “topoi of uncertainty” that both generate and restrict the scope of arguments in the hybrid spaces between Goodnight’s spheres also serve as boundary objects because they allow arguments and characterizations of uncertainties to be “translated” between spheres (Walsh & Walker, 2016, p. 81). To better characterize the three regimes in the spheres model: private or personal uncertainties channel belief and express commitment to that belief, technical uncertainties identify data that provide empirical evidence of belief, and public or political uncertainties convert “belief into conviction” (Walsh & Walker, 2016, p. 81).

Using a case study by Huiling Ding, Walsh and Walker illustrate how the spheres model can be applied to more acutely describe and track the movement and translation of arguments across spheres that influenced the World Health Organization’s advisories on the H1N1 flu and potential pandemic. Ding (2013) showed that there were two uncertainties at the boundaries between communities that the World Health Organization was considering: (1) the likelihood that, without screening passengers as they left Western nations to come to China, there would be greater transmission of H1N1, and (2) the potential for political backlash in the West that would weaken the already troubled global economy of 2009 if there was a slowdown in the movement of people between continents.

Ding (2013) calls our attention to an “open letter” posted in an online space by a Chinese graduate student named Sheng who, while studying in the United States, employed calls to deeply embedded values in Chinese culture that emphasized self-sacrifice and duty to family and country to personally implore other graduate students not to return home during the height of the H1N1 flu pandemic in 2009: “ . . . I urge all overseas returnees to act responsibly for our motherland and for our family members” (as cited in Ding, p. 141). The personal letter had “significant” impact; many Chinese students who were studying overseas canceled their return home, and “most” who did return engaged in self-imposed quarantines (2013, p. 142). De Certeau describes how “the mute processes that organize the establishment of the socioeconomic order” are ubiquitous and how it is important “to discover how an entire society resists being reduced” by them; “consumers” or “dominees” of the society can “manipulate the mechanisms of discipline and conform to them only in order to evade them” (2011, p. xiv). We can see this theory reified in Sheng’s use of this online space and his deployment of the time-honored commonplaces that have traditionally served to uphold the power structure of Chinese society, as his personal message was heard and amplified while it proliferated between communities to challenge the Chinese establishment.

The chief epidemiologist for the Chinese Center for Disease Control (CDC) eventually chose to advise students, based on the effect of this widely circulated “bottom-up travel advice” from Sheng, to, among other things, “avoid contact with high-risk populations,” and this contrasted with the initial communications from official sources that played down the potential risk of the H1N1 pandemic (Ding, 2013, p. 143). Without the hybrid forum that existed between the public/technical and personal regimes and that allowed for the posting of this open letter, it would have been difficult to generate these ideas that bring people together. The political and personal discourse from non-experts or informed citizens outside of official technical institutions was both informative and actionable and served to form an “epidemiological gaze” (Ding, 2013, p. 144). These hybrid spaces can foster innovative risk deliberations that “combine the argument standards of home and target spheres” and “expose the interdependence of two or more terms that are usually opposed in the target sphere,” such as “person and nation, objectivity and subjectivity, fact and value” (Walsh & Walker, 2016, p. 83).

In their study, Walsh and Walker commend Ding’s analysis in general but suggest that she could more acutely define or identify the movement and translation of ideas between the regimes and hybrid forums they identify to better describe the kairotic nature of both of her studies, on H1N1 (2013) and SARS (2009). She moves from the technical sphere to the public sphere, but without specifying how technical uncertainties, such as forecasts based on mathematical models, are eventually transferred into political uncertainties (Walsh & Walker, 2016, p. 81). Ding does identify technical theory, such as the idea that people from countries seeing an outbreak of H1N1 should be screened before they board planes, but she is satisfied with pointing out the hypocrisy of the World Health Organization’s (WHO) shifting policy recommendations without detailing the discursive dynamic of this shift. Here are Ding’s major points. When SARS was first reaching substantial levels in China in 2003, the WHO held that border screenings should be in place before people left China to come to the United States, where there were no documented cases. Conversely, when H1N1 was identified in the United States in 2009, border screenings of passengers who were coming to China from the United States were not considered a viable policy because they would have a negative effect on the economy during a time of global recession, and this dynamic suggests a Western bias in the WHO’s decision-making process.

The spatial elements of kairos in this rhetorical situation have many layers, and the specific movements of ideas and their effects cannot always be reduced to one thought or event that describes the absolute reasons why things changed over time, the temporal dimension of kairos. In a nutshell, however, Walsh and Walker’s analysis extends Ding’s points by demonstrating that, initially, a technical uncertainty was surpassed by a political uncertainty, which was then overridden by a lone actor who channeled his private or personal uncertainties, based on his own commonsense understanding of epidemiology, into an expression of belief shared in a hybrid space between the personal and public spheres, which eventually altered the position of a government technocrat who was making decisions from the technical sphere.

To be fair to Ding, she does identify and detail interactions between mainland Chinese inhabitants and diasporic Chinese studying in the United States who could possibly transfer H1N1 to China. This “transnational virtual community” (2013, p. 141) could be understood as a hybrid space between the personal and public spheres, and the comment from Sheng above constitutes the kind of discourse one might see there as described by Walsh and Walker, but she does not use their terminology. She does identify the medium, ethos, and position of Sheng in his society with clarity and demonstrates how his message changed the way the Chinese authorities communicated the risk of H1N1, thus drawing on the public and civic values that Goodnight’s model helps reveal.

APPLYING BAYES TO RISK COMMUNICATION

If we are using the spheres model to better describe the nature of the uncertainty we have theorized or identified, we can in some cases more accurately assess the uncertainty and generate probabilistically based forecasts using Bayesian analysis.

Using Bayesian analysis, we could consider the possibility or uncertainty that screening passengers before they took flights back home to China from the United States would significantly decrease the transmission of disease. If we assigned and adjusted numerical values for a prior in Bayes’s formulation and applied this technique to Ding’s work, we might be able to recognize that students coming home to China posed a greater threat than the government initially suggested. Chinese students studying in America during the H1N1 outbreak would be in a different position from the technocrats in China who produced health advisories. They would have more access to social media forums in which to present their ideas, and they would perhaps have grown used to more freedom of expression while living in their host country. These students might have an enhanced ethos because they were living where H1N1 was actually taking hold, yet they could also be considered Chinese patriots trying to save the people of China from this disease. Thus, what these students felt in the private sphere could be more readily generated, believed, and passed on to others in the public sphere that shared a hybrid space with health officials in the technical sphere.

We could extend the formula we previously described (Morris, 2016), with the same numbers, to demonstrate that passengers boarding planes and leaving America to return home to China would have a twenty-two-and-a-half-percent chance of having the flu if they had a sore throat and a headache. This could be used to induce the authorities in power to make the case for screening even though there might be a political and economic cost because of the recession. This number would be based on the prior percentage of all people in the general population who had the flu, which was five percent in the earlier example.

But this prior could itself be debated and adjusted in the fashion that Bayesian analysis affords and in the context of the spheres model. For example, students who travel home from universities usually do so right after finals, a period of high stress and sleep deprivation, so they could have weakened immune systems and be more susceptible to contagion than members of the general population. International students are more likely to live on campus than the general population, so they spend time in high-contact areas such as dorms, libraries, and dining halls. In the fall before finals, many American students travel home for Thanksgiving and see family and friends who might have the flu, perhaps contract it, and then bring it back to their universities. This last factor is something that Chinese students living abroad would more likely be cognizant of than public health officials in China, who do not understand the place of this holiday in American culture and thus would not be aware of the post-Thanksgiving bump in contagion transmission. International students studying in America would be more aware of all of these factors if they were hearing public health messages regarding H1N1 at their respective universities and coming across fellow students who were sick; they would be viscerally aware of the disease, something the mainland Chinese public and some of the political authorities might not be.

Using the motivating value of “progress” often found in the public sphere of Goodnight’s model, they could bring these facts and sensibilities into a hybrid space between the public and technical communities to educate, cajole, motivate, put on notice, and implore Chinese health officials to take action. They could show the value of screening students at airports for symptoms of the flu and of having students go into self-imposed quarantine if, once home in China, they find that they have developed these symptoms. In Ding’s study, Sheng was the one individual who did the most to catalyze this phenomenon, someone working out of the personal sphere and motivated by values such as loyalty and safety as he implored his fellow citizens in the public sphere not to return to their homeland until the flu season was over.

Thus, students coming home from American universities might see even higher rates of exposure to contagions such as the flu, and the prior, or P(A), could be debated and adjusted to, say, eight percent of the population in a university community. This would give us the following formulation:

.36 = (.08 × .9) / .2

Thirty-six percent of the students attempting to board planes at airports with the symptoms of a sore throat and a headache would have the flu, up from twenty-two and a half percent in our original formulation. This shows a significant increase in carriers of the flu from only a three-percentage-point increase, from five to eight percent, in the second prior that we settled on. This could temper the political considerations and place a higher value on the science.

The key takeaway from this numerical example is that if we do come up with a scenario and then begin to determine what the prior should be, we have something to start with, something to debate, something to adjust, something to base a recommendation on, something to show others and perhaps convince them to change their thinking. It reveals a dynamic that compels us to focus on the effect of one variable. However probabilistic and subjective, this relatively simple statistical tool is transparent, and in this example, it could convince reasonable people of the increased value of screening passengers.
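
The debate over the prior can be made concrete with a small sensitivity sweep. The sketch below follows the article’s simplification of holding P(B|A) and P(B) fixed while the contested prior P(A) varies; the intermediate values are mine.

    # Sensitivity of the screening posterior to the debated prior P(A),
    # with P(B|A) = .9 and P(B) = .2 held fixed as in the running example.
    p_b_given_a, p_b = 0.9, 0.2
    for p_a in (0.05, 0.06, 0.07, 0.08):
        posterior = p_a * p_b_given_a / p_b
        print(f"P(A) = {p_a:.2f} -> P(A|B) = {posterior:.3f}")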

Coupled with the risk assessment theory posed by Walsh and Walker, we could better understand how the assumptions and values of others need to be considered when initially coming up with a scenario and then deciding on the values of the priors we will use to determine the probabilistic outcome. The essential subjectivity of data can be unpacked in hybrid spaces: who discovers or recognizes the data, where they can be found, and how a conversation can follow that allows us to modify or fine-tune their values can all be identified with the spheres model and applied to Bayesian analysis, or, as Morris (2016) has written, Bayes allows us “to quantify skepticism and enable us to have a clearer understanding” of a situation involving risk. Regarding the spheres model, Walsh and Walker describe a similar affordance that encourages an examination based on the identification and circulation of ideas and texts between spheres and within hybrid spaces, asking “how and when might uncertainties be used to build reflective capacity in audiences and transform doubt into ethical deliberation,” which allows us to find our way out of “dysfunctional or stagnated risk discourse” (2016, p. 83).

Uncertainty that is determined by inductive reasoning needs to be conveyed in an accurate and nuanced way. In their work on the “epistemological framework” of uncertainty, Parrott et al. use the term “genetic determinists” to describe people who believe that regardless of the lifestyle they have chosen, the genes they have inherited will ultimately determine their health (2004, p. 115). In fact, while people might be more or less predisposed to certain medical conditions, “genes seldom absolutely determine health” (2004, p. 119). For this group, messages need to be tailored that “address the interaction of genes with personal behavior, social and ecological environments, and cellular and developmental processes” (Parrott et al., 2004, p. 107).

This process could be enhanced by an examination of all possible arguments and values as contextualized by Walsh and Walker. How can there be a negotiation in the hybrid space between genetic technologists and this specific public? The audience’s “spiritual lives,” or the values informing their fatalistic understanding of genetics, could be considered and ultimately used to tailor a rhetorically effective message to them and engage them with the medical community (Parrott et al., 2004, p. 116). Assessments based on probabilistic reasoning often meet with some skepticism and confusion; involving communication professionals in the early stages of the iterative assessment activities afforded by Bayesian analysis, so they can better convey the probabilistic nature of the science involved, will better allow them to describe risk, to employ the most “culturally appropriate statements” (Parrott et al., 2004, p. 116), and thus to produce more rhetorically effective texts.

Regarding the accurate and disciplined use of terms and phrases when communicating risk, Moss and Schneider see Bayesian analysis as a “paradigm” for “a formal and rigorous language used to communicate uncertainty” (2000, p. 36). In their “recommendations” to authors working in writing teams on the Third Assessment Report (TAR) of the Intergovernmental Panel on Climate Change, they point out that terms that have been “used differently across chapters and reports” undermine the rigor of the analysis and the communication of results to “the general public and the media” (Moss & Schneider, 2000, p. 35). Some of these terms are “high, medium, and low confidence,” “almost certainly,” “doubtful,” and “unlikely” (Moss & Schneider, 2000, p. 35). While they hold that subjective recommendations using Bayesian analysis are “most appropriate,” it is not difficult to see that rigor needs to be applied by writing teams to the rhetorical dimensions of the language, and the iterative nature of Bayesian analysis could challenge writers to carefully examine and reexamine the terms and phrases that they use (Moss & Schneider, 2000, p. 36).
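
One way to operationalize such a shared vocabulary is to bind verbal confidence terms to fixed probability bands. The sketch below is in the spirit of Moss and Schneider’s recommendations; the five bands and their cut points are illustrative rather than quoted from their paper.

    # Mapping a numeric probability onto a calibrated confidence term, so
    # that the same phrase always means the same range across a report.
    # The bands here are illustrative, not Moss and Schneider's exact scale.
    def confidence_term(p):
        if p >= 0.95:
            return "very high confidence"
        if p >= 0.67:
            return "high confidence"
        if p >= 0.33:
            return "medium confidence"
        if p >= 0.05:
            return "low confidence"
        return "very low confidence"

    for p in (0.98, 0.75, 0.50, 0.20, 0.02):
        print(f"{p:.2f} -> {confidence_term(p)}")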

CONCLUSION

As we have seen in some examples above, Bayes’s theorem in its strictest application asks for uncertainty to be distilled into a numerical value, a prior, and though it is “nominally a mathematical formula,” it offers us much more than this as it challenges us to be more comfortable with “probability” and “uncertainty” (Silver, 2012, p. 15) and generates an examination of what might in fact constitute the kinds of information we can consider and rely on in risk assessment. While Marcus and Davis (2013) have pointed out that Bayes’s theorem cannot readily be applied to all situations when there is not any “consensus” about what a prior should be, Guerra-Pujol (2013) points out that “it doesn’t really matter what your priors are as long as you update them regarding the matter you are uncertain about.” Silver (2012) expands on this as he points out that the Bayesian method “encourages us to hold a large number of hypotheses in our head at once, to think about them probabilistically, and to update them frequently when we come across new information that might be more or less consistent with them” (p. 444).

Thus, as Thomas Bayes did when he subjectively determined “the value of his initial belief” (McGrayne, 2011, p. 8), today’s professionals can confront a risk assessment issue by first coming up with hypothetical numbers for priors or conditional probabilities based on induction and seeing how the models would have worked with these data, as Cantor describes above. They can continue to refine their models by entertaining new hypotheses or possible ways of framing data, thus adjusting their priors or conditional probabilities. In the National Weather Service example, we could imagine a situation where the forecasters establish a prior indicating the probability that a hurricane will hit a given region of the country on any day. Given their knowledge of past events, conditional probabilities, and their own inductive reasoning, they could adjust this value and keep adjusting it as new meteorological data come in.

Jaynes (2003) likens inductive reasoning in science to “qualitative plausible reasoning,” noting that “it is not the function of induction to be ‘right,’ and working scientists do not use it for that purpose” (p. 311). However, induction does ask “what predictions are most strongly indicated by our present hypotheses and our present information?” (Jaynes, 2003, p. 310). He describes Bayesian theory as a “unique quantitative expression” of inductive reasoning and points out its value relative to the frequentist model, which is limited to using known datasets to begin an analysis (p. 311). Silver (2012) sees Bayes’s theorem as a way to consider situations that those in other schools of thought have been reluctant to, observing that “The problem comes when, out of frustration that our knowledge of the world is imperfect, we fail to make a forecast at all” (p. 421).

With these considerations in mind, here are some recommendations to technical communication professionals who are working in organizations that are concerned with risk assessment:

  • Understand that uncertainty stems from a potential problem we are facing, but that this is not necessarily a bad thing. It challenges us to assess our commitment to our beliefs about an issue, to be cognizant of different ways of thinking and of what is valued, and to negotiate the differences that can lead to better solutions.
  • See how Bayes’s approach can also be enhanced by an examination of the texts and an awareness of the values in each of Goodnight’s three spheres and their intersecting hybrid communities.
  • Consider how the forums used to communicate risk in each sphere can mediate our understanding of an issue.
  • Become aware of the value of Bayes’s formula, as we can use inductive reasoning to reveal “unknown unknowns.” We do not have to use empirical data that are already in place; assessments based on imagined probabilities can be of value.
  • Know that Bayesian thinking and the examination of the kairos of a rhetorical situation are both dynamic and iterative practices and welcome the inclusion or examination of new information and insights.
  • Recognize that Bayesian thinking allows us to confront and see our way around “dysfunctional or stagnated risk discourse” (Walsh & Walker, 2016, p. 83).
  • Be mindful of the rhetorical nature of the language used to convey risk across boundaries and into different discourse communities.

Communicating complex information that stems from Bayesian analysis and induction requires that we use these points as heuristics. They can help us convey both the potential and the limits of probabilistic reasoning, learn how it might be valued in different communities and possibly support the beliefs of their members, and remain open to the idea that new information and ideas generated within each community should be considered, because risk assessment and communication is an iterative process.

ACKNOWLEDGEMENTS

I would like to thank the two outside reviewers for their thoughtful and constructive commentary.

REFERENCES

boyd, d., & Crawford, K. (2012). Critical questions for big data. Information, Communication & Society, 15(5), 662–679. https://doi.org/10.1080/1369118X.2012.678878

Callon, M., Lascoumes, P., & Barthe, Y. (2009). Acting in an uncertain world: An essay on technical democracy. MIT Press.

Cantor, M. (2013, May 28). Filling in the blanks: The math behind Nate Silver’s The Signal and the Noise. IBM. http://www.ibm.com/developerworks/rational/library/filling-in-the-blanks-1/index.html

Croissant, J. L. (2014). Agnotology: Ignorance and absence or towards a sociology of things that aren’t there. Social Epistemology: A Journal of Knowledge, Culture and Policy, 28(1), 4–25. https://doi.org/10.1080/02691728.2013.862880

de Certeau, M. (2011). The practice of everyday life (3rd ed.). University of California Press.

Ding, H. (2009). Rhetorics of alternative media in an emerging epidemic: SARS, censorship, and participatory risk communication. Technical Communication Quarterly,18(4), 327–350. https://doi.org/10.1080/10572250903149548

Ding, H. (2013). Transcultural risk communication and viral discourses: Grassroots movements to manage global risks of H1N1 flu pandemic. Technical Communication Quarterly, 22(2), 126–149. https://doi.org/10.1080/10572252.2013.746628

Godfrey-Smith, P. (2003). Theory and reality: An introduction to the philosophy of science. University of Chicago Press.

Goodnight, G. T. (2012). The personal, technical, and public spheres of argument: A speculative inquiry into the art of public deliberation. Argumentation and Advocacy, 48(4), 198–210. https://doi.org/10.1080/00028533.2012.11821771

Grabill, J. T., & Simmons, W. M. (1998). Toward a critical rhetoric of risk communication: Producing citizens and the role of technical communicators. Technical Communication Quarterly, 7(4), 415–441. https://doi.org/10.1080/10572259809364640

Grant, R. A., & Halliday, T. (2010). Predicting the unpredictable; evidence of pre-seismic anticipatory behaviour in the common toad. Journal of Zoology, 281(4), 263–271. https://doi.org/10.1111/j.1469-7998.2010.00700.x

Guerra-Pujol, F. E. (2013, August 24). What critics of Nate Silver get wrong. Prior Probability. https://priorprobability.com/2013/08/24/what-critics-of-nate-silver-get-wrong/

Howson, C., & Urbach, P. (1989). Scientific reasoning: The Bayesian approach. Open Court Publishing Co.

Hulme, M. (2009). Why we disagree about climate change: Understanding controversy, inaction and opportunity. Cambridge University Press.

Jaynes, E. T. (2003) Probability theory: The logic of science. Cambridge University Press.

Johnson, R. R. (1998). User-centered technology. A rhetorical theory for computers and other mundane artifacts. State University of New York Press.

Krenchel, M., & Madsbjerg, C. (2014, November 4). Your big data is worthless if you don’t bring it into the real world. Wired. https://www.wired.com/2014/04/your-big-data-is-worthless-if-you-dont-bring-it-into-the-real-world/

Kuhn, T. (1970). The structure of scientific revolutions (2nd ed.). University of Chicago Press.

Marcus, G., & Davis, E. (2013, January 25). What Nate Silver gets wrong. The New Yorker. http://www.newyorker.com/online/blogs/books/2013/01/what-nate-silver-gets-wrong.html

Mayer-Schönberger, V., & Cukier, K. (2013). Big data: A revolution that will transform how we live, work, and think. Houghton Mifflin Harcourt.

McGrayne, S. B. (2011). The theory that would not die: How Bayes’s rule cracked the enigma code, hunted down Russian submarines, & emerged triumphant from two centuries of controversy. Yale University Press.

Miller, C. R. (2003). The presumptions of expertise: The role of ethos in risk analysis. Configurations, 11(2), 163–202. http://doi.org/10.1353/con.2004.0022

Morris, D. (2016). Bayes theorem: A visual introduction for beginners. Blue Windmill Media.

Moss, R. H., & Schneider, S. H. (2000). Uncertainties in the IPCC TAR: Recommendations to lead authors for more consistent assessment and reporting. In R. Pachauri, T. Taniguchi, & K. Tanaka (Eds.), Guidance papers on the cross cutting issues of the third assessment report of the IPCC (pp. 33–51). World Meteorological Organization. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.399.6290&rep=rep1&type=pdf

O’Hara, B. (2012, November 8). How did Nate Silver predict the US election? The Guardian. https://www.theguardian.com/science/grrlscientist/2012/nov/08/nate-sliver-predict-us-election

Ottinger, G. (2010). Buckets of resistance: Standards and the effectiveness of citizen science. Science, Technology, and Human Values, 35(2), 244–270. https://doi.org/10.1177/0162243909337121

Parrott, R., Silk, K., Weiner, J., Condit, C., Harris, T., & Bernhardt, J. (2004). Deriving lay models of uncertainty about genes’ role in illness causation to guide communication about human genetics. Journal of Communication, 54(1), 105–122. https://doi.org/10.1111/j.1460-2466.2004.tb02616.x

Porter, J. E. (1986). Intertextuality and the discourse community. Rhetoric Review, 5(1), 34–47. https://doi.org/10.1080/07350198609359131

Schneider, S. H., Turner, B. L., & Garriga, H. M. (1998). Imaginable surprise in global change science. Journal of Risk Research, 1(2), 165–185. https://doi.org/10.1080/136698798377240

Silver, N. (2012). The signal and the noise: Why so many predictions fail—but some don’t. Penguin Press.

Star, S. L. (2010). This is not a boundary object: Reflections on the origin of a concept. Science, Technology, & Human Values, 35(5), 601–617. https://doi.org/10.1177/0162243910377624

Stocking, S. H. (1998). On drawing attention to ignorance. Science Communication, 20(1), 165–178. https://doi.org/10.1177/1075547098020001019

Walsh, L., & Walker, K. C. (2016). Perspectives on uncertainty for scholars of technical communication. Technical Communication Quarterly, 25(2), 71–86. https://doi.org/10.1080/10572252.2016.1150517

ABOUT THE AUTHOR

J.D. Applen is an associate professor of English at the University of Central Florida. His research interests include the rhetorical dimensions of technical communication, digital archives, digital humanities, electronic literacy, and the rhetoric of science and technology. He is the author of Writing for the Web: Composing, Coding, and Constructing Web Sites; and the co-author of The Rhetorical Nature of XML.
