Article Commentary: Exploring the Social Experience of AI-Generated Music

The article explores whether artificial intelligence (AI) can provide a social experience similar to playing music with another person. While generative AI models, such as large language models, have been successful at producing musical scores, playing music socially involves more than rendering a score: it requires complementing other musicians' ideas and keeping time with them.

In this study, the authors used a neural network architecture called a variational autoencoder, trained on a large dataset of digital scores, and adapted it for a timed call-and-response task with both human and artificial partners. Participants played piano with either a human or an AI partner in various configurations and rated both the quality of each performance and their first-person experience of self-other integration.
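The article does not include the study's implementation, but as a rough illustration of the kind of model involved, here is a minimal variational autoencoder sketch in PyTorch operating on fixed-length piano-roll segments. All class names, dimensions, and design choices below are illustrative assumptions, not the authors' code.

```python
# Minimal VAE sketch for symbolic music (illustrative assumption, not the study's code).
# Assumes fixed-length piano-roll segments of shape (time_steps, pitches),
# flattened into a single input vector. All dimensions are arbitrary choices.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MusicVAE(nn.Module):
    def __init__(self, input_dim=16 * 88, hidden_dim=512, latent_dim=32):
        super().__init__()
        # Encoder maps a score segment to the parameters of a Gaussian posterior.
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)
        # Decoder maps a latent vector back to note on/off logits.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), mu, logvar

def vae_loss(logits, x, mu, logvar):
    # Reconstruction term over note on/off states plus the KL regularizer.
    recon = F.binary_cross_entropy_with_logits(logits, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```

In a call-and-response setting, one plausible use of such a model is to encode the human's "call" into the latent space, perturb the latent vector, and decode a related "response"; the exact mechanism in the study may differ.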

The results showed that while the AI partners were promising, they were generally rated lower than human partners. Notably, though, the artificial partner with the simplest design and the highest similarity parameter did not differ significantly from human partners on some measures. This suggests that interactive sophistication, rather than generative capability alone, is crucial for enabling social AI.

This study highlights the challenges of creating AI systems that can provide a truly social experience in music. While generative models can produce impressive musical scores, they still lack the intuitive understanding and improvisational skills that humans possess. These qualities are essential for successful social interactions in music.

To create more convincing AI partners in music, developers should focus on enhancing the interactive capabilities of these systems. This may involve incorporating real-time feedback mechanisms, responsive improvisation techniques, and adaptive synchronization algorithms. By addressing these factors, AI systems could achieve a higher level of integration and collaboration with human musicians.
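One concrete example of an adaptive synchronization algorithm is the linear phase-correction rule from sensorimotor synchronization research. The sketch below shows how an AI partner could nudge its next note onset toward the human's timing; it is a minimal sketch of one plausible mechanism, with illustrative function and parameter names, not the algorithm used in the study.

```python
# Hedged sketch of adaptive phase correction for a virtual musical partner
# (one plausible mechanism, not the study's algorithm). The AI shifts its
# next onset by a fraction of the most recent asynchrony between its own
# note onset and the human's.

def next_onset(ai_onset, human_onset, period, alpha=0.5):
    """Schedule the AI's next note onset.

    ai_onset:    time of the AI's last onset (seconds)
    human_onset: time of the human's nearest onset (seconds)
    period:      current inter-onset interval, i.e. the beat period (seconds)
    alpha:       correction gain in (0, 1]; higher means faster adaptation
    """
    asynchrony = ai_onset - human_onset  # positive = the AI is running late
    return ai_onset + period - alpha * asynchrony

# Example: the AI played at t = 1.02 s, the human at t = 1.00 s, beat period 0.5 s.
# With alpha = 0.5 the AI's next onset is scheduled slightly early to close the gap.
t_next = next_onset(1.02, 1.00, 0.5)  # -> 1.51 s instead of 1.52 s
```

Running such a rule on every beat keeps the two players phase-locked without requiring the AI to anticipate the music itself, which is why timing correction and generation are usually treated as separate problems.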

Furthermore, future research could investigate how different music genres and contexts shape the perception of AI musical partners. Different genres may demand varying levels of complexity and interaction, and understanding these nuances could help tailor AI systems to specific musical domains.

In conclusion, while AI-generated music shows potential, there is still a long way to go in replicating the social experience of playing music with a human partner. By combining generative models with interactive sophistication, researchers can pave the way for more immersive and collaborative musical experiences with AI.
