This paper explores the null space properties of neural networks. We extend the null space definition from linear to nonlinear maps and discuss the presence of a null space in neural networks. The null space of a given neural network tells us which part of the input data makes no contribution to the final prediction, and that part can therefore be used to trick the network. This reveals an inherent weakness in neural networks that can be exploited. One application described here leads to a method of image steganography. Through experiments on image datasets such as MNIST, we show that we can use null space components to force the neural network to choose a selected hidden image class, even though the overall image can be made to look like a completely different image. We conclude by showing comparisons between what a human viewer would see and the part of the image that the neural network is actually using to make predictions, and hence show that what the neural network “sees” is completely different from what we would expect.

In this paper, the authors examine an intriguing aspect of neural networks: their null space properties. They extend the traditional definition of the null space from linear to nonlinear maps, showing that neural networks, too, possess a null space. This null space identifies the part of the input data that makes no contribution to the final prediction, and it can therefore be exploited to deceive a neural network, highlighting an inherent weakness in these systems.
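To make this concrete, here is a minimal NumPy sketch (not the paper's code, and with hypothetical layer sizes) of the simplest case, where the first layer is a plain linear map: any input component lying in the null space of its weight matrix is destroyed before it can reach the rest of the network, so the prediction cannot depend on it. The paper's extension to nonlinear maps is more general than this.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(32, 784))   # first-layer weights (hypothetical shapes)
W2 = rng.normal(size=(10, 32))    # second-layer weights

def forward(x):
    # A toy ReLU MLP returning class logits.
    return W2 @ np.maximum(W1 @ x, 0.0)

# Null space basis of W1 via SVD: the right singular vectors beyond
# the rank of W1 span the directions that W1 maps to zero.
_, s, Vt = np.linalg.svd(W1)
null_basis = Vt[len(s):]          # (784 - 32) = 752 null space directions

x = rng.normal(size=784)          # stand-in for a flattened 28x28 image
z = null_basis.T @ rng.normal(size=null_basis.shape[0])  # arbitrary null vector

# Any null space perturbation, however large, leaves the logits unchanged.
print(np.allclose(forward(x), forward(x + z)))  # True
```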

One fascinating application discussed in the paper is image steganography. Using null space components, the authors propose a method for manipulating a neural network into classifying an image as a specific hidden class, even when the overall image appears completely different to a human viewer. Experiments on well-known image datasets such as MNIST demonstrate the effectiveness of this approach.
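The paper's exact construction is not reproduced here, but the linear-first-layer picture suggests a plausible sketch of the idea: take the visible (row space) component of the stego image from an image of the hidden class, and its invisible (null space) component from the cover image the human should see. Everything below (the shapes, the `see` helper) is illustrative, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(32, 784))   # hypothetical linear first layer
W2 = rng.normal(size=(10, 32))    # hypothetical classifier head

def forward(x):
    return W2 @ np.maximum(W1 @ x, 0.0)  # toy ReLU MLP logits

def see(x):
    # Orthogonal projection onto the row space of W1: the only part
    # of x that survives the first layer.
    return W1.T @ np.linalg.lstsq(W1 @ W1.T, W1 @ x, rcond=None)[0]

hidden = rng.normal(size=784)     # stand-in for an image of the hidden class
cover = rng.normal(size=784)      # stand-in for what a human should see

# Visible component from the hidden image, null space component from
# the cover: the pixels move toward the cover, the logits do not.
x_stego = see(hidden) + (cover - see(cover))

print(np.allclose(forward(x_stego), forward(hidden)))  # True
```

Because 752 of the 784 input dimensions lie in the null space in this toy setup, most of the pixel content of `x_stego` comes from the cover image, while the classifier's output is determined entirely by the hidden image.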

This research underscores the interdisciplinary nature of the concepts explored. Neural networks, rooted in computer science and artificial intelligence, intertwine here with linear algebra and nonlinear maps. The idea of a null space, traditionally defined for linear systems, is extended to nonlinear neural networks, opening new avenues for investigation and discovery.

The implications of leveraging the null space of neural networks reach beyond steganography. By understanding and manipulating the null space, researchers and practitioners can enhance the robustness and security of machine learning systems. Furthermore, this study prompts us to question how neural networks perceive and process information, as it reveals a stark disparity between what a human viewer sees and what a neural network “sees.” This calls for further investigation into the interpretability and transparency of neural networks, an ongoing challenge in the field.
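That disparity can be visualized directly: projecting an image onto the row space of the first layer displays the only part of it the network can use, side by side with the original. Again a toy sketch with hypothetical shapes for 28x28 MNIST-style inputs:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
W1 = rng.normal(size=(32, 784))   # hypothetical linear first layer

def see(x):
    # Projection of x onto the row space of W1.
    return W1.T @ np.linalg.lstsq(W1 @ W1.T, W1 @ x, rcond=None)[0]

x = rng.normal(size=784)          # stand-in for a flattened 28x28 image
fig, (ax_human, ax_net) = plt.subplots(1, 2, figsize=(6, 3))
ax_human.imshow(x.reshape(28, 28), cmap="gray")
ax_human.set_title("what a human sees")
ax_net.imshow(see(x).reshape(28, 28), cmap="gray")
ax_net.set_title("what the network uses")
plt.show()
```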

Overall, this research contributes valuable insights into the null space properties of neural networks, expanding our understanding of their behavior and raising important questions about their vulnerabilities and perceptual differences. By embracing an interdisciplinary approach, researchers can continue to advance our knowledge and develop strategies to address these challenges.