So this is what I have been working on for the better part of this year. It was part of my honours thesis, which explored human-robot interaction with an embodied creative system.
Basically, I repurposed the robots and attached an Arduino along with some other components. This lets them listen to their environment and develop simple song compositions from what they have heard, rather than just playing back what was programmed into them. They continually learn new songs by listening to other robots and to people who play songs on the 3-button synthesizer (as seen in the video). They can also evolve their own songs without any human interaction, through a boredom/interest algorithm: once a song has been played multiple times, the robot gets "bored" and enters a compose state. Composing consists of taking a song from its song memory and altering a note at random to generate a new song.
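For anyone curious how that looks in code, here's a simplified sketch of the boredom/compose loop. The song length, boredom threshold, and the playSong() stub are just placeholders for illustration; the actual firmware is more involved:

```cpp
// Simplified boredom/compose loop (placeholder values, not the real firmware).
const int SONG_LENGTH       = 8;  // placeholder song length
const int NUM_NOTES         = 3;  // one note per button on the 3-button synth
const int BOREDOM_THRESHOLD = 5;  // plays before the robot gets "bored"

int song[SONG_LENGTH] = {0, 1, 2, 1, 0, 2, 2, 1};
int playCount = 0;

void playSong(const int melody[]) {
  // playback stub: on the real robots this drives the speaker
}

// Compose: take the current song and alter one note at random.
void composeNewSong(int melody[]) {
  int pos = random(SONG_LENGTH);    // pick a random position in the song
  melody[pos] = random(NUM_NOTES);  // swap in a random note
}

void setup() {
  randomSeed(analogRead(A0));  // seed from an unconnected analog pin
}

void loop() {
  playSong(song);
  playCount++;
  if (playCount >= BOREDOM_THRESHOLD) {  // boredom kicks in
    composeNewSong(song);                // mutate one note -> a "new" song
    playCount = 0;                       // interest in the new song resets
  }
}
```

Since mutation only ever touches one note at a time, songs drift gradually rather than changing wholesale, so each generation stays recognisably related to the last.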
To better situate themselves in areas where they are likely to hear notes and songs, they move towards wherever they think sound is loudest (turning either left or right).
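The sound-seeking behaviour boils down to comparing the levels from two microphones and turning toward the louder side. A simplified version (the pin numbers, dead band, and motor stubs are placeholders):

```cpp
// Simplified left/right sound-seeking (placeholder pins and thresholds).
const int MIC_LEFT  = A1;
const int MIC_RIGHT = A2;
const int DEAD_BAND = 20;  // ignore small differences to avoid jittering

void turnLeft()  { /* motor control stub */ }
void turnRight() { /* motor control stub */ }

void setup() {}

void loop() {
  int left  = analogRead(MIC_LEFT);
  int right = analogRead(MIC_RIGHT);
  if (left - right > DEAD_BAND) {
    turnLeft();    // sound seems louder on the left
  } else if (right - left > DEAD_BAND) {
    turnRight();   // sound seems louder on the right
  }
  delay(50);       // sample at roughly 20 Hz
}
```

The dead band matters more than it looks: without it, near-equal readings make the robot twitch back and forth instead of settling on a direction.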