Product bundling has evolved into a crucial marketing strategy in e-commerce.
However, current studies are limited to generating (1) fixed-size or single
bundles and, most importantly, (2) bundles that do not reflect consistent user
intents, making them less intelligible or useful to users. This paper explores
two interrelated tasks, i.e., personalized bundle generation and the underlying
intent inference based on users’ interactions in a session, leveraging the
logical reasoning capability of large language models. We introduce a dynamic
in-context learning paradigm, which enables ChatGPT to seek tailored and
dynamic lessons from closely related sessions as demonstrations while
performing tasks in the target session. Specifically, it first harnesses
retrieval-augmented generation to identify nearest-neighbor sessions for each
target session. Then, appropriate prompts are designed to guide ChatGPT in performing
the two tasks on neighbor sessions. To enhance reliability and mitigate the
hallucination issue, we develop (1) a self-correction strategy to foster mutual
improvement in both tasks without supervision signals; and (2) an auto-feedback
mechanism that iteratively offers dynamic supervision based on the distinct
mistakes made by ChatGPT on various neighbor sessions. Thus, the target session
can receive customized and dynamic lessons for improved performance by
observing the demonstrations of its neighbor sessions. Finally, experimental
results on three real-world datasets verify the effectiveness of our methods on
both tasks. Additionally, the inferred intents can prove beneficial for other
intriguing downstream tasks, such as crafting appealing bundle names.
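As a minimal sketch of the retrieval and prompting steps described above, nearest-neighbor sessions can be ranked by embedding similarity and packed into a few-shot prompt. All names, the embedding layout, and the demonstration format below are hypothetical illustrations; the paper's actual retriever and prompt templates are not specified here.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def nearest_sessions(target_emb, session_embs, k=3):
    """Ids of the k stored sessions most similar to the target session."""
    ranked = sorted(session_embs,
                    key=lambda s: cosine(target_emb, session_embs[s]),
                    reverse=True)
    return ranked[:k]

def build_prompt(demos, target_items):
    """Few-shot prompt: each neighbor demo pairs its session items with
    bundles and inferred intents; the target session comes last, with the
    bundle slot left open for the model to complete."""
    parts = [
        f"Items: {', '.join(d['items'])}\n"
        f"Bundles: {d['bundles']}\n"
        f"Intents: {d['intents']}"
        for d in demos
    ]
    parts.append(f"Items: {', '.join(target_items)}\nBundles:")
    return "\n\n".join(parts)
```

A retrieved neighbor whose bundles and intents have already been validated then serves directly as a demonstration in the target session's prompt.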
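The auto-feedback mechanism over neighbor sessions can likewise be sketched in outline: when a neighbor session's bundles are known, the model is re-prompted with feedback naming its specific mistakes until its output matches or a round budget runs out. The `llm` callable, the data layout, and the feedback wording are assumptions for illustration, not the paper's actual implementation.

```python
def auto_feedback(llm, neighbor_sessions, max_rounds=3):
    """For each neighbor session with known ground-truth bundles, re-prompt
    the model with feedback on its distinct mistakes until it answers
    correctly or the round budget is exhausted. Returns (prompt, bundles)
    pairs usable as demonstrations for the target session.
    `llm` is a stand-in callable: prompt -> predicted set of bundles."""
    demos = []
    for session in neighbor_sessions:
        prompt = f"Group these items into bundles: {session['items']}"
        for _ in range(max_rounds):
            predicted = llm(prompt)
            missed = session["bundles"] - predicted
            spurious = predicted - session["bundles"]
            if not missed and not spurious:
                break
            # Feedback names the distinct mistakes, not just "wrong answer".
            prompt += (f"\nFeedback: you missed {sorted(missed)} "
                       f"and wrongly produced {sorted(spurious)}. Try again.")
        demos.append((prompt, session["bundles"]))
    return demos
```

The accumulated feedback lines are what make the resulting demonstrations "dynamic lessons": each neighbor carries a record of the model's own corrected errors rather than a static example.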