Can Language Models be Instructed to Protect Personal Information?

1 Georgia Institute of Technology, 2 Carnegie Mellon University

Figure 1: The PrivQA benchmark consists of textual and visual question-answering tasks designed to assess the ability of multi-modal language models to protect private information. The model developers pre-define the Protected Groups of people (e.g., citizens of Italy) or types of informa- tion (e.g., geolocation) to be protected. Models (e.g. GPT4, Flamingo) utilizes our proposed Self-Moderation technique to selectively respond to questions i.e. abstaining from answering questions in the Protected Group while responding to questions in the Control Group (e.g. non-Italian public figures).

Abstract

Large multimodal language models have proven transformative in numerous applications. However, these models have been shown to memorize and leak pre-training data, raising serious user privacy and information security concerns. While data leaks should be prevented, it is also crucial to examine the trade-off between the privacy protection and model utility of proposed approaches. In this paper, we introduce PrivQA --- a multimodal benchmark to assess this privacy/utility trade-off when a model is instructed to protect specific categories of personal information in a simulated scenario. We also propose a technique to iteratively self-moderate responses, which significantly improves privacy. However, through a series of red-teaming experiments, we find that adversaries can also easily circumvent these protections with simple jailbreaking methods through textual and/or image inputs. We believe PrivQA has the potential to support the development of new models with improved privacy protections, as well as the adversarial robustness of these protections. The PrivQA benchmark can be accessed at https://github.com/ethanm88/llm-access-control.

Benchmarking Models with PrivQA

Red Teaming: Circumventing Access Control Instructions in PrivQA

BibTeX

@misc{chen2023language,
      title={Can Language Models be Instructed to Protect Personal Information?}, 
      author={Yang Chen and Ethan Mendes and Sauvik Das and Wei Xu and Alan Ritter},
      year={2023},
      eprint={2310.02224},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}