A musical instrument I made as a university project that uses data from a facial recognition algorithm to play sound. The face detection was written in Python, which piped its data to Max/MSP to generate sound waves. I also connected the sound output from Max to Processing, driving a music visualizer I had been working on.
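As a rough illustration of that pipeline, here is a minimal sketch of detecting a face in Python and streaming its position out to Max. It assumes OpenCV's Haar cascade detector and OSC messaging via the python-osc library on port 7400; the original project may have piped the data differently.

```python
# Detect a face with OpenCV and stream its normalized centre to Max over OSC.
# The port, OSC address, and choice of libraries are assumptions for this sketch.
import cv2
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7400)  # pair with [udpreceive 7400] in Max
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) > 0:
        x, y, w, h = faces[0]
        # Normalize the face centre to 0..1 so Max can scale it freely
        cx = (x + w / 2) / frame.shape[1]
        cy = (y + h / 2) / frame.shape[0]
        client.send_message("/face/center", [float(cx), float(cy)])
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```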
Facial expression is definitely an interesting form of human-computer interaction. There were many decisions to be made about how to map facial movements onto the sound being generated, so the project was also very interesting from a design perspective. Some of those decisions were driven by problems and inaccuracies in the face detection algorithm; for example, I connected pitch to head rotation because rotation more reliably produced an accurate number.
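A mapping like that can be quite simple in code. This is a hypothetical sketch, assuming a yaw estimate in degrees and a MIDI-style note range; the actual ranges and any smoothing used in the project are not documented here.

```python
# Hypothetical mapping: clamp a head-rotation estimate (degrees of yaw)
# and convert it linearly onto a MIDI note range. All ranges are assumed.
def rotation_to_pitch(yaw_deg, lo=-30.0, hi=30.0, pitch_lo=48, pitch_hi=72):
    """Map yaw in [lo, hi] degrees onto MIDI notes [pitch_lo, pitch_hi]."""
    t = (min(max(yaw_deg, lo), hi) - lo) / (hi - lo)  # clamp, then scale to 0..1
    return round(pitch_lo + t * (pitch_hi - pitch_lo))

# e.g. a head turned 15 degrees to one side lands in the upper register:
print(rotation_to_pitch(15.0))  # 66
```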
It was very funny watching people interact with the instrument. Their faces would glaze over for a second and then begin to cycle through a series of exaggerated expressions as they figured out the “interaction space”: which movements correspond to which attributes of the sound.