Registration for each workshop is $25. Workshops are conducted on the first day of the conference (Sunday, October 20th).

You can register for a workshop when you register for the conference.

Facilitators

Dr. Kristine Acosta (Florida International University) and Dr. Luke Thominet (Florida International University)

Description

Large Language Model (LLM) chatbots, such as ChatGPT, have the potential to broaden access to automated textual analysis. A growing body of literature argues for the efficacy of LLMs in supporting qualitative data analysis. For example, Mellon et al. (2022) argued that GPT-3 could nearly match human performance in coding sentiment, tone, and content in open-ended survey questions. Likewise, Hamilton et al. (2023) and Morgan (2023) showed that ChatGPT could recreate themes from their previous manual qualitative analyses, and Omizo (2024) used PaLM 2 as an automated research assistant to help test research designs during qualitative genre coding. This workshop takes up these ideas and explores their application in technical communication research projects.

We previously worked with Claude.ai, an LLM chatbot similar to ChatGPT, on an analysis of academic website descriptions of Writing Studies MA degrees. In that project, we used research methods based on inductive thematic analysis (Braun & Clarke, 2021), engaging GAI across the full process of qualitative analysis, from initial data familiarization to codebook development to code application and verification through intercoder reliability (Geisler & Swarts, 2019).

In this workshop, we will introduce participants to methods for working with LLMs on data analysis. We will also consider ethical and professional critiques of working with LLM chatbots. By the end of the workshop, participants will have several strategies for working with LLM chatbots on qualitative research, as well as a deeper understanding of the technology's limitations. Whether or not participants choose to work with GAI directly in their future research, the workshop will prepare them to offer more detailed analysis and criticism of others' implementations of the technology.

Structure/Format

Part 1 (45 minutes): Introductions | Review of Literature | Using Claude.ai for Qualitative Analysis

Participants will introduce themselves and share personal experiences with and impressions of LLMs. Then, facilitators will share literature on LLM-assisted qualitative research and walk participants through two examples of how to work with Claude.ai on qualitative thematic analysis.

Part 2 (45 minutes): Prompting Practice | Experimenting with Claude.ai

Participants will be provided with a structured dataset as well as example prompts for working with Claude.ai on several phases of qualitative research, including data familiarization, codebook development, code application, and code verification. Participants will then be asked to experiment with their own GAI-assisted analyses of the data.
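
To make these phases concrete, here is a minimal, hypothetical sketch of how a code-application prompt might be assembled. The codebook entries, the text excerpt, and the wording are invented placeholders for illustration, not the workshop's actual materials or prompts:

```python
# Illustrative only: a hypothetical prompt template for the code-application
# phase of LLM-assisted thematic analysis. The codebook entries and the data
# excerpt below are invented placeholders, not workshop materials.

CODEBOOK = {
    "career preparation": "mentions of jobs, employability, or professional skills",
    "research training": "mentions of scholarship, methods, or thesis work",
}

def build_coding_prompt(codebook, excerpt):
    """Assemble a single prompt asking the chatbot to apply existing codes."""
    codebook_lines = "\n".join(
        f"- {code}: {definition}" for code, definition in codebook.items()
    )
    return (
        "You are assisting with qualitative thematic analysis.\n"
        "Apply ONLY the codes defined below; do not invent new codes.\n\n"
        f"Codebook:\n{codebook_lines}\n\n"
        f'Text to code:\n"{excerpt}"\n\n'
        "For each code that applies, quote the supporting phrase and briefly "
        "justify the assignment."
    )

prompt = build_coding_prompt(
    CODEBOOK,
    "Our MA prepares students for careers in industry and for doctoral study.",
)
print(prompt)
```

Constraining the chatbot to an existing codebook, as in this template, mirrors the code-application phase; the verification phase would compare the chatbot's assignments against a human coder's.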

Part 3 (30 minutes): Results and Discussion

In the third stage, we will come back together, compare our results, and discuss other potential applications. We will also engage in an ethical discussion of the use of LLM chatbots for qualitative research.

Preparation

Workshop participants will be provided with resources that introduce LLMs and standard prompting strategies. They will also be given a copy of the dataset being used in the workshop to read beforehand. Finally, they should sign up for a free account with Claude.ai (https://claude.ai) and bring a laptop.

References

  • Braun, V., & Clarke, V. (2021). Thematic Analysis: A Practical Guide. SAGE Publications.
  • Carradini, S. (2024). On the Current Moment in AI: Introduction to Special Issue on Effects of Artificial Intelligence Tools in Technical Communication Pedagogy, Practice, and Research, Part 1. Journal of Business and Technical Communication, 10506519241239638. https://doi.org/10.1177/10506519241239638
  • Deets, S., Baulch, C., Obright, A., & Card, D. (2024). Content Analysis, Construct Validity, and Artificial Intelligence: Implications for Technical and Professional Communication and Graduate Research Preparation. Journal of Business and Technical Communication, 10506519241239952. https://doi.org/10.1177/10506519241239951
  • DeJeu, E. B. (2024). Using Generative AI to Facilitate Data Analysis and Visualization: A Case Study of Olympic Athletes. Journal of Business and Technical Communication, 10506519241239924. https://doi.org/10.1177/10506519241239923
  • Geisler, C., & Swarts, J. (2019). Coding streams of language: Techniques for the systematic coding of text, talk, and other verbal data. University Press of Colorado.
  • Getchell, K. M., Carradini, S., Cardon, P. W., Fleischmann, C., Ma, H., Aritz, J., & Stapp, J. (2022). Artificial Intelligence in Business Communication: The Changing Landscape of Research and Teaching. Business and Professional Communication Quarterly, 85(1), 7–33. https://doi.org/10.1177/23294906221074311
  • Hamilton, L., Elliott, D., Quick, A., Smith, S., & Choplin, V. (2023). Exploring the Use of AI in Qualitative Analysis: A Comparative Study of Guaranteed Income Data. International Journal of Qualitative Methods, 22. https://doi.org/10.1177/16094069231201504
  • Huckin, T. N. (2004). Content analysis: What texts talk about. In C. Bazerman & P. Prior (Eds.), What Writing Does and How It Does It: An Introduction to Analyzing Texts and Textual Practices (pp. 13–32). Lawrence Erlbaum Associates.
  • Mellon, J., Bailey, J., Scott, R., Breckwoldt, J., & Miori, M. (2022). Does GPT-3 know what the Most Important Issue is? Using Large Language Models to Code Open-Text Social Survey Responses At Scale (SSRN Scholarly Paper 4310154). https://doi.org/10.2139/ssrn.4310154
  • Morgan, D. L. (2023). Exploring the Use of Artificial Intelligence for Qualitative Data Analysis: The Case of ChatGPT. International Journal of Qualitative Methods, 22. https://doi.org/10.1177/16094069231211248
  • Omizo, R. M. (2024). Automating Research in Business and Technical Communication: Large Language Models as Qualitative Coders. Journal of Business and Technical Communication, 10506519241239928. https://doi.org/10.1177/10506519241239927
  • Strubberg, B. C., Bennett, K. C., & Nardone, C. F. (2023). How to Navigate Shifting Tides: Mapping Technical Writing Students’ Use of Artificial Intelligence. 2023 IEEE International Professional Communication Conference (ProComm), 111–116. https://doi.org/10.1109/ProComm57838.2023.00023
  • Tham, J., Howard, T., & Verhulsdonck, G. (2022). Extending Design Thinking, Content Strategy, and Artificial Intelligence into Technical Communication and User Experience Design Programs: Further Pedagogical Implications. Journal of Technical Writing and Communication, 52(4), 428–459. https://doi.org/10.1177/00472816211072533
  • York, E. (2023). Evaluating ChatGPT: Generative AI in UX Design and Web Development Pedagogy. Proceedings of the 41st ACM International Conference on Design of Communication, 197–201. https://doi.org/10.1145/3615335.3623035

Facilitators

Huiling Ding (NC State University)

Description

Recruiting today is largely automated, with AI screening out 70 percent of resumes during the first round of candidate evaluation. As an emerging technology, algorithmic resume screeners are little-understood black boxes that process resumes radically differently than human readers do. This workshop will introduce a theoretical framework, the life cycle of AI systems, to help participants understand how automated resume screening works. We will start with a quick overview of the differences between large language models (LLMs) and natural language processing (NLP), covering their different approaches and the issues of transparency and interpretability raised by LLMs. Then we will simulate the AI life cycle by walking participants through computational analysis of job postings and resumes, using natural language processing methods of the kind widely used in the automated resume screening products on today's recruiting and hiring market.

Aiming to open the black box of resume-job matching algorithms, this workshop will achieve three objectives:

  1. Use the life cycle of AI systems to help participants understand how AI tools screen resumes.
  2. Simulate how leading automated resume screening products process job postings and resumes through a computational job-resume matching project.
  3. Explore the pedagogical and ethical implications of these new insights into algorithmic resume screening.

Structure/Format

To guide participants through the life cycle of job-resume matching algorithms, I will walk them through 14 steps designed to simulate the four phases of the AI life cycle: business understanding, data understanding, data preparation, and modeling/feature selection. Then I will introduce a computational, corpus-driven project for job posting analytics and share a pre-built mini-corpus of job postings for technical writers. AntConc will be used to generate top keywords, and natural language processing techniques such as stop-word removal, stemming, and lemmatization will be used for data cleansing and feature refinement. Before doing job-resume matching, participants will produce both resume analytics, using their own resumes, and job posting analytics, using job ads that I will provide.
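
As a rough illustration of the normalization steps named above (stop-word removal and stemming) feeding into keyword counting, consider the following toy Python sketch. AntConc and commercial screeners use far more sophisticated tooling; the tiny stop-word set and suffix list here are crude stand-ins for real resources, chosen only to show how normalization collapses surface variants before counting:

```python
# Toy illustration of stop-word removal + crude suffix stripping before
# keyword counting. Real pipelines use full stop-word lists and proper
# stemmers/lemmatizers (e.g., the Porter stemmer); this is a sketch only.

from collections import Counter

STOP_WORDS = {"the", "a", "an", "and", "of", "in", "for", "with", "to"}
SUFFIXES = ("ing", "ed", "es", "s")  # crude stand-in for a real stemmer

def stem(word):
    """Strip one common suffix; a real stemmer is much subtler."""
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def top_keywords(text, n=5):
    """Lowercase, drop stop words, stem, and count the remaining tokens."""
    tokens = [t.strip(".,;:()").lower() for t in text.split()]
    normalized = [stem(t) for t in tokens if t and t not in STOP_WORDS]
    return Counter(normalized).most_common(n)

job_ad = ("Writing and editing technical documents. The writer edits "
          "documentation and writes reports for engineering teams.")
print(top_keywords(job_ad))  # "writing"/"writes" and "editing"/"edits" merge
```

Note how "writing" and "writes" collapse to a single feature after stemming, which is exactly why normalization matters for feature generation and refinement.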

More specifically, this workshop consists of eight parts:

  1. Introductions (5 minutes)
  2. Introduction: what is AI life cycle and AI ethics; how are LLMs and NLP different; how do AI job-resume matching tools work (10 minutes)
  3. Data understanding and preparation: Data collection, cleansing, segmentation (15 minutes)
  4. Computational job posting analytics with feature generation, selection, and refinement (45 minutes)
  5. Computational resume analytics (10 minutes)
  6. Creating larger categories with text normalization using stemming and lemmatization (10 minutes)
  7. Job-resume match/mismatch analytics (20 minutes)
  8. Wrapping up and reflection: Resume revision strategies, updated job-resume matching results, possible biases and discrimination (5 minutes)
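
Under simplifying assumptions, the match/mismatch step in the outline above can be sketched as keyword overlap between the job posting analytics and the resume analytics. Real screening products use proprietary, more complex scoring, so the function and the sample keyword sets below are illustrative only:

```python
# Hypothetical sketch of job-resume matching as keyword overlap.
# Commercial screeners use proprietary models; this only illustrates the
# idea that a resume is scored against features extracted from the posting.

def match_score(job_keywords, resume_keywords):
    """Fraction of job-posting keywords also found in the resume."""
    if not job_keywords:
        return 0.0
    return len(job_keywords & resume_keywords) / len(job_keywords)

# Invented keyword sets standing in for the workshop's analytics output.
job = {"write", "edit", "document", "api", "agile"}
resume = {"write", "document", "teach", "research"}

score = match_score(job, resume)
print(f"match score: {score:.2f}")  # 2 of 5 job keywords matched -> 0.40
print("missing keywords:", sorted(job - resume))
```

The "missing keywords" list corresponds to the resume revision strategies discussed in the wrap-up: the mismatch output tells a writer which posting features their resume never surfaces.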

This workshop will give technical communication scholars a powerful heuristic for looking “under the hood” at how the black boxes of AI systems work. It will shed light on how resume evaluation has been radically disrupted by AI tools and which metrics resume screening algorithms now use. It has the potential to transform how the field of business, professional, and technical communication teaches the traditional employment project, updating it to better reflect how AI tools screen resumes in today's job market. Doing so will help students write for algorithmic evaluators that read resumes in radically different, data-driven ways compared with their human counterparts in HR and recruiting.

Preparation

Participants should have AntConc installed on their computers before the workshop, along with Microsoft Word, Excel, and a plain-text editor. Instructions for AntConc and the mini-corpus will be emailed to participants before the conference. Participants should be prepared to work in small groups and share results during the workshop.


Facilitators

Chris Lindgren (NC State University) and Dan Richards (Old Dominion University)

Description

A logo establishes a connection with its audience, and in this workshop we will discuss ideas about SIGDOC's current logo and possible new directions. In doing so, we will work through the following three segments.

Structure/Format

Part 1 (35 minutes): Concepting – Design Direction & Mood Boarding

Facilitators will review a brief history of the SIG and its logo, and share keyword and topic information from SIGDOC's proceedings that will inform the direction of the design process.

Part 2 (55 minutes): Sketching – Drafting & Iterating Ideas

Participants will be guided through a design drafting process in which no idea, and no iteration of an idea, is a bad one. No artistic skills are required. The facilitators will review the basic logo design types pertinent to the design direction: Pictorial, Letterforms, Abstract, and Wordmarks. From there, participants will be divided into designated groups, each iterating on one of these logo types.

Part 3 (30 minutes): Share & Discuss

Lastly, participant groups will each briefly share their iterations with the group. Discussion and questions will follow.

Through the workshop, participants will serve as critical contributors to a larger organizational conversation about ethics and identity. They will also walk away with experience in several design platforms and applications.
