Modern mobile and web audio technologies enable large-scale audience participation in concerts in various ways. For example, audience members can use interactive music applications on their smartphones to generate music as a connected ensemble and shape a live performance. However, designing interactions that encourage and sustain audience participation over time remains an ongoing challenge. This research team will apply data mining techniques to the time-evolving networks of audience collaboration to identify audience preferences and interaction patterns, and will use that knowledge to improve engagement in audience-participatory music performances.
The team has already developed a live performance system, Crowd in C[loud], which combines an interactive audience user interface for generating music, an interface for an expert musician to orchestrate audience participation, and an ad hoc social network for musical collaboration (Lee et al. 2016). An audience member can write a short melody, browse other people’s melodies, and perform alongside others as a collaboratively improvising ensemble (video: https://youtu.be/8nnrKJ4Ap0c?t=2m40s). The team will analyze large-scale user-to-user interaction data by formulating the interactions between users, their participation, and the musical attributes of their contributions (e.g., pitch, timbre, duration) as a time-evolving heterogeneous network. The outcome of this work will be an intelligent audience participation system that shows the musician the real-time status of audience engagement and allows the musician to encourage particular types of audience members through social interaction. This work will yield insights into how to better facilitate audience engagement beyond concerts, for example in classrooms, public events, and conferences.
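To make the network formulation concrete, the following is a minimal sketch of how timestamped audience interactions could be represented as a time-evolving heterogeneous graph. The event data, node types, and action labels here are hypothetical illustrations, not the team's actual schema; the sketch uses the networkx library.

```python
# Illustrative sketch: audience interaction as a time-evolving
# heterogeneous network. All event data and labels are hypothetical.
import networkx as nx

# Each event: (timestamp, user, action, melody).
events = [
    (0.0, "user_1", "creates", "melody_a"),
    (1.5, "user_2", "browses", "melody_a"),
    (2.0, "user_2", "creates", "melody_b"),
    (3.2, "user_1", "plays",   "melody_b"),
]

# MultiDiGraph allows several timestamped edges between the same pair.
G = nx.MultiDiGraph()
for t, user, action, melody in events:
    G.add_node(user, kind="user")                 # one node type: audience member
    G.add_node(melody, kind="melody")             # another node type: musical content
    G.add_edge(user, melody, action=action, time=t)

def snapshot(G, t):
    """Subgraph of all interactions up to time t (the 'time-evolving' view)."""
    H = nx.MultiDiGraph()
    H.add_nodes_from(G.nodes(data=True))
    H.add_edges_from((u, v, d) for u, v, d in G.edges(data=True) if d["time"] <= t)
    return H

early = snapshot(G, 2.0)
print(early.number_of_edges())  # prints 3: the events at t = 0.0, 1.5, 2.0
```

Snapshots like this could then feed standard temporal-graph analyses (e.g., tracking how participation patterns shift over the course of a performance).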