Next week, Intonic presents an installation at "Mozart and the Mind", a festival celebrating the intersection of Art, Science and Technology in San Diego.
The installation, called ‘Cocktail Part A’, is an interactive system that translates speech signals into music. The algorithm strips the semantic content from the voice signal: it selects samples based on their harmonic or formant content and applies a range of filters and other DSP, so that the original linguistic material is discarded and replaced with pitch and rhythm correlates.
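To make the idea concrete, here is a minimal sketch of one way to reduce speech to pitch and rhythm correlates: autocorrelation pitch estimation plus an RMS loudness envelope. This is an illustration only, not Intonic's actual DSP; the sample rate, frame sizes, and pitch range are assumed values.

```python
import numpy as np

SR = 16000    # sample rate in Hz (assumed)
FRAME = 1024  # analysis frame length
HOP = 512     # hop between frames

def frame_pitch(frame, sr=SR, fmin=80.0, fmax=400.0):
    """Estimate the fundamental frequency of one frame via autocorrelation.

    Returns 0.0 for silent frames. A toy stand-in for the 'pitch
    correlate' the installation extracts from each voice.
    """
    frame = frame - frame.mean()
    if np.max(np.abs(frame)) < 1e-4:
        return 0.0
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)       # lag range for 80-400 Hz
    lag = lo + np.argmax(corr[lo:hi])
    return sr / lag

def analyze(signal, sr=SR):
    """Reduce a speech signal to pitch and loudness (rhythm) tracks,
    discarding the waveform -- and with it the linguistic content."""
    pitches, energies = [], []
    for start in range(0, len(signal) - FRAME, HOP):
        frame = signal[start:start + FRAME]
        pitches.append(frame_pitch(frame, sr))
        energies.append(float(np.sqrt(np.mean(frame ** 2))))  # RMS loudness
    return np.array(pitches), np.array(energies)

# Synthetic 'voice': a 220 Hz tone with an amplitude contour standing in
# for the dynamics of a spoken phrase.
t = np.arange(SR) / SR
voice = np.sin(2 * np.pi * 220 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))
pitch_track, rhythm_track = analyze(voice)
print(float(np.median(pitch_track[rhythm_track > 0.1])))
```

The two output tracks carry no words at all, yet preserve enough of the voice's melody and dynamics to drive sample selection and rhythm in a musical rendering.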
Four participants engage in conversation, which is sampled on Android phones and streamed to the processing software. Meanwhile, a ‘host’ wearing a Muse EEG headset, made by the neurotech company Interaxon, controls the mix of the voices and certain effects according to the detected levels of attention and meditation, which are processed in the cloud using Qusp's Neuroscale platform.
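The host's control loop can be imagined along these lines: two EEG-derived scores map to per-voice gains and an effect depth. The mapping below is invented for illustration (the installation's actual Neuroscale processing is not described), with attention narrowing the mix toward one ‘focused’ voice, a nod to the cocktail-party effect, and meditation deepening a global effect such as reverb.

```python
def mix_gains(attention, meditation, n_voices=4):
    """Map EEG scores in [0, 1] to voice gains and an effect depth.

    Hypothetical mapping, not Intonic's or Qusp's actual algorithm:
    high attention boosts one attended voice and recedes the others;
    meditation sets a global effect amount (e.g. reverb wet/dry).
    """
    focused = 0  # hypothetical index of the voice the host attends to
    gains = []
    for i in range(n_voices):
        if i == focused:
            g = 0.25 + 0.75 * attention   # attended voice rises with attention
        else:
            g = 0.25 * (1.0 - attention)  # other voices recede
        gains.append(round(g, 3))
    effect_depth = round(meditation, 3)
    return gains, effect_depth

print(mix_gains(attention=1.0, meditation=0.5))
# → ([1.0, 0.0, 0.0, 0.0], 0.5)
```

At zero attention all four voices sit at equal gain, so the mix only diverges from an even blend as the headset detects the host focusing.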
It remains to be seen how ‘musical’ the output of this installation will be, but as a human social experiment it is bound to be fascinating.