The metaverse, the concept of a virtual world that mirrors the real one, is gaining momentum. The key to a realistic and engaging metaverse lies in supporting large-scale real-time interaction, and pre-trained Artificial Intelligence (AI) models play a crucial role in achieving this goal. Many of these models are trained collectively by multiple participants through collaborative deep learning (CDL).

However, this collaborative approach introduces security vulnerabilities that threaten both the trained models and the private data sets held by individual participants. Malicious participants can exploit these weaknesses to compromise the integrity of the models or to gain illegal access to private information.

To address these vulnerabilities, this paper proposes a method called adversary detection-deactivation. It restricts and isolates the access of potentially malicious participants and blocks attacks such as those based on Generative Adversarial Networks (GANs) and harmful backpropagation. By analyzing participant behavior and rapidly screening received gradients in a low-cost branch with an embedded firewall, the proposed protocol protects the existing model.
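The article does not spell out the screening rules, but the idea of a low-cost gradient check before aggregation can be sketched as follows. This is a hypothetical illustration, not the paper's actual protocol: the thresholds (`norm_ratio_max`, `cos_min`), the strike-based deactivation policy, and all names are assumptions chosen to show how a "detect, then deactivate" firewall branch might sit in front of gradient aggregation.

```python
import numpy as np

def screen_gradient(grad, reference, norm_ratio_max=5.0, cos_min=-0.2):
    """Hypothetical 'firewall branch': cheap checks on one participant's update.

    Flags a gradient that is abnormally large relative to a trusted
    reference direction, or that points strongly against it (which could
    indicate harmful, model-degrading backpropagation).
    """
    ref_norm = np.linalg.norm(reference)
    g_norm = np.linalg.norm(grad)
    # Rule 1 (assumed): magnitude check — a heavily scaled-up update is suspicious.
    if ref_norm > 0 and g_norm > norm_ratio_max * ref_norm:
        return False
    # Rule 2 (assumed): direction check — strong anti-alignment with the
    # reference suggests an attempt to push the model away from convergence.
    if ref_norm > 0 and g_norm > 0:
        cos = float(grad @ reference) / (g_norm * ref_norm)
        if cos < cos_min:
            return False
    return True

class Aggregator:
    """Aggregates only gradients that pass screening; repeat offenders
    are deactivated (isolated from further rounds)."""

    def __init__(self, max_strikes=2):
        self.strikes = {}          # participant id -> number of rejected updates
        self.deactivated = set()   # participants barred from aggregation
        self.max_strikes = max_strikes

    def step(self, updates, reference):
        """One aggregation round over {participant_id: gradient} updates."""
        accepted = []
        for pid, grad in updates.items():
            if pid in self.deactivated:
                continue  # isolated participants are ignored entirely
            if screen_gradient(grad, reference):
                accepted.append(grad)
            else:
                self.strikes[pid] = self.strikes.get(pid, 0) + 1
                if self.strikes[pid] >= self.max_strikes:
                    self.deactivated.add(pid)
        # Fall back to the reference if every update was rejected this round.
        return np.mean(accepted, axis=0) if accepted else reference
```

For example, a participant who repeatedly submits a large gradient pointing opposite to the trusted direction accumulates strikes and is then excluded from all later rounds, while honest participants' updates continue to be averaged. The thresholds here are placeholders; a real deployment would tune them to the model and data.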

Although the paper's protection analysis focuses on a Multiview CDL case, the principles and techniques it describes apply more broadly. By implementing this adversary detection-deactivation method, the metaverse can offer a more secure and trustworthy environment for collaborative deep learning.
