Protecting Privacy in Federated Recommender Systems: Introducing UC-FedRec

Federated recommender (FedRec) systems address privacy concerns in recommender systems by letting users train a shared recommendation model on their local devices, so raw interaction data is never transmitted or collected centrally. However, a common FedRec may still leave users vulnerable to attribute inference attacks, in which personal attributes can be easily inferred from the learned model.
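
To make the threat concrete, the sketch below shows an attribute inference attack in its simplest form: an attacker who obtains the user embeddings learned by a recommendation model trains a classifier to predict a private attribute from them. The embeddings, the attribute, and the leakage pattern are all synthetic and assumed purely for illustration; this is not code from the paper.

```python
# Minimal sketch of an attribute inference attack on learned user embeddings.
# All data here is randomly generated; only the attack pattern is real.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_users, dim = 1000, 32

# Synthetic "learned" user embeddings that leak a hypothetical binary attribute:
# one direction of the embedding space is correlated with the attribute.
private_attr = rng.integers(0, 2, size=n_users)
embeddings = rng.normal(size=(n_users, dim))
embeddings[:, 0] += 1.5 * private_attr          # the leakage direction

X_train, X_test, y_train, y_test = train_test_split(
    embeddings, private_attr, test_size=0.3, random_state=0)

# The attacker simply fits a classifier on embeddings it has observed.
attacker = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("attribute inference accuracy:",
      accuracy_score(y_test, attacker.predict(X_test)))  # well above 0.5 => leakage
```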

Moreover, traditional FedRecs often fail to consider the diverse privacy preferences of users, making it difficult to balance recommendation utility against privacy preservation. The result can be unnecessary recommendation performance loss, private information leakage, or both.

To address these issues, we propose a novel user-consented federated recommendation system (UC-FedRec) that lets users define their own privacy preferences while still enjoying personalized recommendations. At a minimal cost in recommendation accuracy, UC-FedRec offers the flexibility to meet a variety of privacy demands: users keep control over their data and can make informed decisions about the level of privacy they are comfortable with.
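
As a rough illustration of the user-consent idea, the hypothetical sketch below adds to a user's local training objective one adversarial term per attribute the user has chosen to protect, weighted by a self-defined preference. The names (`privacy_prefs`, the adversary heads, the synthetic labels) and the specific adversarial formulation are assumptions made for illustration, not the paper's implementation.

```python
# Hypothetical sketch of a consent-weighted local objective in a FedRec client.
import torch
import torch.nn as nn

dim = 32
user_emb = nn.Parameter(torch.randn(dim))   # the user's local representation

# Self-defined privacy preferences: attribute -> protection weight (0 = no protection)
privacy_prefs = {"gender": 1.0, "age": 0.0, "occupation": 0.5}

# One adversary head per attribute that tries to infer it from the embedding.
# In a full min-max setup these heads would be trained separately to minimize
# their own loss; this sketch only shows the user-side objective.
adversaries = nn.ModuleDict({a: nn.Linear(dim, 2) for a in privacy_prefs})
attr_labels = {"gender": torch.tensor([1]),
               "age": torch.tensor([0]),
               "occupation": torch.tensor([1])}   # synthetic labels for illustration

ce = nn.CrossEntropyLoss()

def local_objective(rec_loss: torch.Tensor) -> torch.Tensor:
    """Recommendation loss minus consent-weighted attribute-inference losses.

    Subtracting the adversaries' losses means that minimizing this objective
    pushes the user embedding to reveal less about each protected attribute.
    """
    privacy_term = torch.zeros(())
    for attr, weight in privacy_prefs.items():
        if weight > 0:
            logits = adversaries[attr](user_emb).unsqueeze(0)   # shape (1, 2)
            privacy_term = privacy_term + weight * ce(logits, attr_labels[attr])
    return rec_loss - privacy_term

# Example use with a placeholder recommendation loss value.
loss = local_objective(rec_loss=torch.tensor(0.8))
loss.backward()                      # gradients flow into user_emb and the heads
print(user_emb.grad.norm())
```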

Our experiments on real-world datasets demonstrate that UC-FedRec outperforms baseline approaches in terms of efficiency and flexibility. With UC-FedRec, users can have peace of mind knowing that their privacy is protected without sacrificing the quality of personalized recommendations.

Abstract: Recommender systems can be privacy-sensitive. To protect users' private historical interactions, federated learning has been proposed to learn user representations in a distributed manner. Using federated recommender (FedRec) systems, users can train a shared recommendation model on local devices and prevent raw data transmissions and collections. However, the recommendation model learned by a common FedRec may still be vulnerable to private information leakage risks, particularly attribute inference attacks, which means that the attacker can easily infer users' personal attributes from the learned model. Additionally, traditional FedRecs seldom consider the diverse privacy preferences of users, leading to difficulties in balancing the recommendation utility and privacy preservation. Consequently, FedRecs may suffer from unnecessary recommendation performance loss due to over-protection and private information leakage simultaneously. In this work, we propose a novel user-consented federated recommendation system (UC-FedRec) to flexibly satisfy the different privacy needs of users by paying a minimum recommendation accuracy price. UC-FedRec allows users to self-define their privacy preferences to meet various demands and makes recommendations with user consent. Experiments conducted on different real-world datasets demonstrate that our framework is more efficient and flexible compared to baselines.
