[Submitted on 12 Apr 2026]
Title: LLMs for Qualitative Data Analysis Fail on Security-specific Comments in Human Experiments
Authors: Maria Camporese and 2 other authors
Abstract: [Background:] Thematic analysis of free-text justifications in human experiments provides significant qualitative insights. Yet it is costly, because reliable annotations require multiple domain experts. Large language models (LLMs) seem like ideal candidates to replace human annotators. [Problem:] Coding security-specific aspects (mentions of code identifiers, lines of code, and security keywords) may require deeper contextual understanding than sentiment classification. [Objective:] Explore whether LLMs can act as automated annotators for technical security comments by human subjects. [Method:] We prompt four top-performing LLMs on LiveBench to detect nine security-relevant codes in free-text comments by human subjects analyzing vulnerable code snippets. Outputs are compared against human annotations using Cohen's Kappa (chance-corrected agreement). We test different prompts mimicking annotation best practices, including emerging codes, detailed codebooks with examples, and conflicting examples. [Negative Results:] We observed marked improvements only when using detailed code descriptions; however, these improvements are not uniform across codes and are insufficient to reliably replace a human annotator. [Limitations:] Additional studies with more LLMs and annotation tasks are needed.
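The abstract's evaluation metric, Cohen's Kappa, measures agreement between two annotators corrected for chance. A minimal Python sketch of how such a comparison might be computed is below; the labels are hypothetical and this is not the authors' code, only an illustration of the metric, assuming each security-relevant code is scored as a binary present/absent label per comment.

from sklearn.metrics import cohen_kappa_score

# Hypothetical per-comment labels for one security-relevant code:
# 1 = code assigned to the comment, 0 = not assigned.
human = [1, 0, 1, 1, 0, 0, 1, 0]  # human annotator
llm   = [1, 0, 0, 1, 0, 1, 1, 0]  # LLM annotator

# Chance-corrected agreement in [-1, 1]; 1 is perfect agreement,
# 0 is agreement no better than chance.
kappa = cohen_kappa_score(human, llm)
print(f"Cohen's kappa: {kappa:.2f}")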
Submission history
From: Maria Camporese MSc
[v1] Sun, 12 Apr 2026 22:01:40 UTC (6,970 KB)