Just Fine-tune Twice: Selective Differential Privacy for Large Language Models
Abstract
With the increasing adoption of NLP models in real-world products, it becomes increasingly important to protect these models from privacy leakage. Because private information in language data is sparse, prior work formalized the notion of Selective Differential Privacy (SDP) to protect the sensitive tokens detected by policy functions, and demonstrated its effectiveness on RNN-based models. However, the earlier mechanism requires separating the private and public model parameters and therefore cannot be applied to large attention-based models. In this paper, we propose a simple yet effective just-fine-tune-twice privacy mechanism that first fine-tunes on redacted in-domain data and then on the original in-domain private data, achieving SDP for large Transformer-based language models. We also design explicit and contextual policy functions to provide protection at different levels. Experiments show that our models achieve strong performance while remaining robust to the canary insertion attack. We further show that even in low-resource settings with a small amount of in-domain data, SDP can still improve the model's utility. We will release the code, data, and models to facilitate future research.
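For illustration only, below is a minimal sketch of the two-phase recipe the abstract describes, assuming a HuggingFace-style GPT-2 causal language model. The regex-based explicit policy function, the `redact` helper, and the `PRIVATE_PATTERNS` list are hypothetical stand-ins, and the second phase uses plain SGD with gradient clipping as a placeholder for a real selective DP-SGD optimizer (which would also add calibrated noise); none of this is the authors' released implementation.

```python
# Sketch of "just fine-tune twice": phase 1 on redacted in-domain data,
# phase 2 on the original private data. Policy function and optimizer
# details are simplified illustrations, not the paper's exact method.
import re
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Explicit policy function: regexes flagging sensitive tokens (assumed patterns).
PRIVATE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # SSN-like numbers
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
]

def redact(text: str, mask: str = "<MASK>") -> str:
    """Replace spans detected by the explicit policy function with a mask token."""
    for pat in PRIVATE_PATTERNS:
        text = pat.sub(mask, text)
    return text

def finetune(model, tokenizer, texts, lr, steps, clip=None):
    """One fine-tuning pass; `clip` approximates the per-update gradient
    clipping a DP optimizer would apply (noise addition omitted here)."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for step in range(steps):
        batch = tokenizer(texts[step % len(texts)], return_tensors="pt")
        loss = model(**batch, labels=batch["input_ids"]).loss
        opt.zero_grad()
        loss.backward()
        if clip is not None:
            torch.nn.utils.clip_grad_norm_(model.parameters(), clip)
        opt.step()

if __name__ == "__main__":
    tok = AutoTokenizer.from_pretrained("gpt2")
    lm = AutoModelForCausalLM.from_pretrained("gpt2")
    private_corpus = ["Contact alice@example.com about case 123-45-6789."]
    # Phase 1: standard fine-tuning on the redacted in-domain data.
    finetune(lm, tok, [redact(t) for t in private_corpus], lr=5e-5, steps=10)
    # Phase 2: fine-tune on the original private data; a real run would
    # swap in a selective DP-SGD optimizer rather than clipped SGD.
    finetune(lm, tok, private_corpus, lr=5e-5, steps=10, clip=1.0)
```

A contextual policy function would replace the regexes above with a learned detector (e.g., a named-entity tagger) to catch sensitive content that fixed patterns miss; the overall two-phase structure stays the same.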
People
- Ruoxi Jia
Publication Details
- Date of publication:
- April 15, 2022
- Venue:
- CoRR abs/2204.07667 (arXiv preprint)
- Publication note:
Weiyan Shi, Si Chen, Chiyuan Zhang, Ruoxi Jia, Zhou Yu: Just Fine-tune Twice: Selective Differential Privacy for Large Language Models. CoRR abs/2204.07667 (2022)