
Detailed record

Membership inference attacks against compression models

Article written by: Jin, Yong; Lou, Weidong; Gao, Yanghua

Abstract: With the rapid development of artificial intelligence, privacy threats are drawing increasing attention. One of the most common privacy threats is the membership inference attack (MIA). Existing MIAs can effectively expose the potential privacy leakage risks of deep neural networks (DNNs). However, DNNs are usually compressed for practical use, especially in edge computing, and MIAs fail when compression changes a DNN's structure or parameters. To address this problem, we propose CM-MIA, an MIA against compression models that can effectively assess their privacy leakage risks before deployment. Specifically, we first use a variety of compression methods to build shadow models for different target models. We then use these shadow models to construct sample features and identify abnormal samples by computing the distances between sample features. Finally, based on a hypothesis test, we determine whether each abnormal sample is a member of the training dataset. Because only abnormal samples are used for membership inference, time costs are reduced and attack efficiency is improved. Extensive experiments on 6 datasets evaluate CM-MIA's attack capacity. The results show that CM-MIA achieves state-of-the-art attack performance in most cases; compared with the baselines, its attack success rate is higher by 10.5% on average.
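To make the three-step pipeline in the abstract concrete, below is a minimal Python sketch of one plausible reading of it: confidences from compression-based shadow models form per-sample features, samples far from the population are flagged as abnormal, and only those are passed to a hypothesis test. Every name, the distance metric, and the Gaussian test statistic are our assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch of a CM-MIA-style pipeline; all names and choices
# (distance metric, test statistic) are assumptions, not the paper's method.
import numpy as np
from scipy import stats


def sample_features(shadow_confidences):
    """Stack each sample's confidence scores across shadow models.

    shadow_confidences: array of shape (n_shadow_models, n_samples),
    e.g. confidences from models compressed by pruning, quantization, etc.
    Returns an array of shape (n_samples, n_shadow_models).
    """
    return shadow_confidences.T


def find_abnormal_samples(features, quantile=0.95):
    """Flag samples whose feature vector lies far from the population.

    Euclidean distance to the feature mean stands in for the paper's
    inter-sample distance computation (an assumption on our part).
    """
    center = features.mean(axis=0)
    dists = np.linalg.norm(features - center, axis=1)
    return np.where(dists > np.quantile(dists, quantile))[0]


def membership_test(target_conf, out_confs, alpha=0.05):
    """One-sided test: is the target model's confidence on this sample
    significantly higher than that of shadow models trained WITHOUT it?

    Rejecting H0 means the sample behaves like a training member.
    """
    mu, sigma = out_confs.mean(), out_confs.std() + 1e-12
    p_value = 1.0 - stats.norm.cdf(target_conf, loc=mu, scale=sigma)
    return p_value < alpha


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_shadow, n_samples = 8, 1000
    # Synthetic stand-in for confidences from compressed shadow models.
    shadow_confidences = rng.beta(5, 2, size=(n_shadow, n_samples))
    feats = sample_features(shadow_confidences)
    abnormal = find_abnormal_samples(feats)
    # Only abnormal samples are tested, mirroring the abstract's claim
    # that this reduces time costs and improves attack efficiency.
    for idx in abnormal[:5]:
        target_conf = rng.beta(8, 2)  # placeholder target-model confidence
        is_member = membership_test(target_conf, shadow_confidences[:, idx])
        print(f"sample {idx}: inferred member = {is_member}")
```

The key design point the sketch tries to capture is that the shadow models are built with different compression methods rather than different training sets alone, so the features remain informative even after the target DNN's structure or parameters change during compression.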


Language: English