arXiv:2402.10805v1
Abstract: The recent advancements in generative language models have demonstrated their ability to memorize knowledge from documents and recall knowledge to respond to user queries effectively. Building upon this capability, we propose to enable multimodal large language models (MLLMs) to memorize and recall images within their parameters. Given a user query for visual content, the MLLM is anticipated to “recall” the relevant image from its parameters as the response. Achieving this target presents notable challenges, including inbuilt visual memory and visual recall schemes within MLLMs. To address these challenges, we introduce a generative cross-modal retrieval framework, which assigns unique identifier strings to represent images and involves two training steps: learning to memorize and learning to retrieve. The first step focuses on training the MLLM to memorize the association between images and their respective identifiers. The latter step teaches the MLLM to generate the corresponding identifier of the target image, given the textual query input. By memorizing images in MLLMs, we introduce a new paradigm to cross-modal retrieval, distinct from previous discriminative approaches. The experiments demonstrate that the generative paradigm performs effectively and efficiently even with large-scale image candidate sets.

Advancements in Generative Language Models and Cross-Modal Retrieval

In the field of natural language processing, generative language models have recently gained significant attention for their ability to produce coherent and contextually relevant text from a given prompt. These models, such as GPT-3, have shown remarkable performance in tasks like text completion, translation, and question answering. Building on this capability, the authors of this paper propose extending multimodal large language models so that they can memorize images within their parameters and recall them in response to user queries.

Traditionally, cross-modal retrieval refers to retrieving relevant items from one modality (e.g., images) given a query from another modality (e.g., text). This has primarily been approached with discriminative models that embed both modalities into a shared space and rank candidates by similarity to the query. The authors instead introduce a paradigm in which images are “memorized” within the parameters of the multimodal language model and recalled by generating their identifiers.
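
To make the contrast concrete, here is a minimal sketch of the two paradigms. The encoder, model object, and identifier lookup below are hypothetical placeholders for illustration, not the paper's implementation.

```python
import numpy as np

def discriminative_retrieve(query_text, image_embeddings, text_encoder):
    """Classic approach: embed the query and score every candidate image
    by similarity. Assumes rows of image_embeddings are L2-normalized."""
    q = text_encoder(query_text)
    q = q / np.linalg.norm(q)
    scores = image_embeddings @ q          # one similarity score per candidate
    return int(np.argmax(scores))          # index of the best-matching image

def generative_retrieve(query_text, mllm, id_to_image):
    """Generative approach: the MLLM generates the identifier string of the
    memorized image directly; no per-candidate scoring is performed."""
    identifier = mllm.generate(query_text)  # e.g. "003142"
    return id_to_image[identifier]
```

The practical appeal of the generative route is that retrieval cost does not grow with an exhaustive scan over candidate embeddings, which is consistent with the abstract's claim of efficiency on large-scale candidate sets.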

The key idea behind the proposed framework is to assign unique identifier strings to represent images and train the multimodal language model (MLLM) to memorize the association between these identifiers and the corresponding images. This involves two training steps: learning to memorize and learning to retrieve. During the first step, the MLLM learns to establish the connection between images and their identifiers. In the second step, it learns to generate the identifier of a target image given a textual query input.
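
A simplified sketch of the two training stages might look like the following. The `mllm` interface, `tokenize` function, and data pairs are assumptions made for illustration; the paper's actual loss formulation and identifier scheme may differ.

```python
def train_memorize(mllm, image_id_pairs, optimizer, tokenize):
    """Step 1 (learning to memorize): given an image, the model is trained
    to predict the token sequence of that image's identifier."""
    for image, identifier in image_id_pairs:
        loss = mllm.loss(inputs=image, targets=tokenize(identifier))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

def train_retrieve(mllm, query_id_pairs, optimizer, tokenize):
    """Step 2 (learning to retrieve): given a textual query, the model is
    trained to predict the identifier of the target image."""
    for query, identifier in query_id_pairs:
        loss = mllm.loss(inputs=query, targets=tokenize(identifier))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Both stages reduce to standard next-token prediction; only the conditioning input changes, from the image itself to a textual query about it.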

The Challenges and Contributions

The main challenge lies in building visual memory and recall mechanisms inside MLLMs. Unlike text, which can be readily tokenized and generated token by token, images are high-dimensional data that a language model cannot emit directly as output. The proposed framework sidesteps this by representing each image with a unique identifier string, so that recalling an image reduces to generating its identifier.
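
As a concrete illustration, one simple way to realize such identifiers is to give each image a fixed-length string and let the model generate it one token at a time. The zero-padded digit scheme below is an illustrative assumption, not necessarily the identifier design used in the paper.

```python
def assign_identifiers(image_paths):
    """Map each image to a unique, fixed-length identifier string."""
    return {path: f"{idx:06d}" for idx, path in enumerate(image_paths)}

def identifier_to_tokens(identifier):
    """Split an identifier into per-digit tokens so a language model can
    generate it autoregressively, one token at a time."""
    return list(identifier)

ids = assign_identifiers(["cat.jpg", "dog.jpg", "sunset.jpg"])
print(ids["dog.jpg"])                         # '000001'
print(identifier_to_tokens(ids["dog.jpg"]))   # ['0', '0', '0', '0', '0', '1']
```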

The proposed framework makes two main contributions. First, it offers a new perspective on cross-modal retrieval by leveraging the generative capabilities of MLLMs, which can lead to more flexible retrieval systems that go beyond similarity-based search. Second, it broadens multimodal information processing by storing visual content inside language models, which have traditionally handled only text, enabling richer interactions between users and models.

Connections to Multimedia Information Systems and AR/VR

The presented research has strong connections to the wider field of multimedia information systems. Multimedia information systems deal with the storage, retrieval, and processing of various types of media, including text, images, audio, and video. The proposed framework addresses the challenge of integrating images seamlessly into language models, which are a fundamental component of multimedia information systems.

Furthermore, this research has implications for animation, augmented reality, and virtual reality. By enabling language models to memorize and recall images, the framework opens possibilities for more immersive and interactive experiences in these domains. For example, a virtual reality application could use textual prompts to recall stored visual assets on demand, supporting more dynamic and responsive scene construction.

Conclusion

Multimodal large language models (MLLMs) that can memorize and recall images open exciting opportunities for cross-modal retrieval and extend the capabilities of language models. By training MLLMs to associate images with unique identifiers and to generate those identifiers in response to queries, the proposed framework offers a new perspective on information retrieval. It also highlights the interdisciplinary nature of the work, connecting natural language processing, multimedia information systems, and virtual reality. As research in this area continues, we can expect advances in multimodal information processing and more immersive user experiences across multimedia applications.

Read the original article