Training Differentially Private Neural Networks with Lottery Tickets

Abstract

We propose the differentially private lottery ticket hypothesis (DPLTH), an end-to-end differentially private training paradigm based on the lottery ticket hypothesis, designed specifically to improve the privacy-utility trade-off in differentially private neural networks. DPLTH, using high-quality winning tickets privately selected via our custom score function, outperforms current methods by a margin greater than 20%. We further show that DPLTH converges faster, allowing for early stopping with reduced privacy budget consumption, and that a single publicly available dataset for ticket generation is enough to enhance utility on multiple datasets of varying properties and from varying domains. Our extensive evaluation on six public datasets supports our claims.
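For context, the lottery ticket hypothesis that DPLTH builds on selects a sparse subnetwork ("winning ticket") by keeping only the largest-magnitude weights of a trained network. The sketch below shows this generic magnitude-based selection; it is an illustration of the underlying idea only, not the paper's private score function (the `keep_frac` parameter and helper name are assumptions for the example).

```python
import numpy as np

def winning_ticket_mask(weights: np.ndarray, keep_frac: float = 0.2) -> np.ndarray:
    """Classic lottery-ticket selection: keep the top `keep_frac`
    fraction of weights by magnitude, prune the rest.
    Illustrative only; DPLTH selects tickets privately via a
    custom score function instead of raw magnitudes."""
    flat = np.abs(weights).ravel()
    k = max(1, int(keep_frac * flat.size))
    threshold = np.sort(flat)[-k]          # k-th largest magnitude
    return np.abs(weights) >= threshold    # boolean mask of kept weights

# Example: prune a 4x4 weight matrix down to 25% density
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
mask = winning_ticket_mask(w, keep_frac=0.25)
```

The surviving mask would then define the subnetwork that is retrained, here with a differentially private optimizer such as DP-SGD.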

Publication
European Symposium on Research in Computer Security
Lovedeep Gondara