[Submitted on 17 Jan 2026 (v1), last revised 13 Apr 2026 (this version, v2)]

Title:Powerful Training-Free Membership Inference Against Autoregressive Language Models

Authors: David Ilić and 2 other authors


Abstract: Fine-tuned language models pose significant privacy risks, as they may memorize and expose sensitive information from their training data. Membership inference attacks (MIAs) provide a principled framework for auditing these risks, yet existing methods achieve limited detection rates, particularly at the low false-positive thresholds required for practical privacy auditing. We present EZ-MIA, a membership inference attack that exploits a key observation: memorization manifests most strongly at error positions, specifically tokens where the model predicts incorrectly yet still shows elevated probability for training examples. We introduce the Error Zone (EZ) score, which measures the directional imbalance of probability shifts at error positions relative to a pretrained reference model. This principled statistic requires only two forward passes per query and no model training of any kind. On WikiText with GPT-2, EZ-MIA achieves 3.8x higher detection than the previous state-of-the-art under identical conditions (66.3% versus 17.5% true positive rate at 1% false positive rate), with near-perfect discrimination (AUC 0.98). At the stringent 0.1% FPR threshold critical for real-world auditing, we achieve 8x higher detection than prior work (14.0% versus 1.8%), requiring no reference model training. These gains extend to larger architectures: on AG News with Llama-2-7B, we achieve 3x higher detection (46.7% versus 15.8% TPR at 1% FPR). These results establish that privacy risks of fine-tuned language models are substantially greater than previously understood, with implications for both privacy auditing and deployment decisions. Code is available at this https URL.
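The abstract describes the EZ score only informally: a directional imbalance of probability shifts at error positions, computed from two forward passes (the fine-tuned model and a pretrained reference). The sketch below is one plausible reading of that statistic, not the paper's actual implementation; the function name `ez_score`, the use of a sign-based imbalance, and all array names are assumptions for illustration.

```python
import numpy as np

def ez_score(p_ft_true, p_ref_true, is_error):
    """Hypothetical sketch of an Error-Zone style statistic.

    p_ft_true:  probability the fine-tuned model assigns to each true token
    p_ref_true: probability the pretrained reference model assigns to it
    is_error:   boolean mask, True at positions where the fine-tuned
                model's top-1 prediction differs from the true token

    Both probability vectors come from a single forward pass per model,
    matching the abstract's "two forward passes per query".
    """
    # Restrict attention to the error zone: positions the model got wrong.
    shifts = p_ft_true[is_error] - p_ref_true[is_error]
    if shifts.size == 0:
        return 0.0
    # Directional imbalance (assumed form): net fraction of error positions
    # where the fine-tuned model still raised the true token's probability.
    # Training members are expected to score higher than non-members.
    return float(np.mean(np.sign(shifts)))
```

Under this reading, a sequence memorized during fine-tuning would show mostly positive shifts at its error positions (score near +1), while an unseen sequence's shifts would be roughly balanced (score near 0). The membership decision would then threshold this score, with the threshold chosen to hit a target false-positive rate.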

Submission history

From: David Ilić [view email]

[v1] Sat, 17 Jan 2026 16:59:41 UTC (212 KB)
[v2] Mon, 13 Apr 2026 15:25:39 UTC (215 KB)