Users interact with the improvisation machine through a camera and PoseNet body tracking. The position and velocity of tracked body joints are mapped to the machine's parameters through more than a dozen interaction rules.
In response, the machine generates electronic sounds and moving visuals.
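One such position/velocity mapping might be sketched as follows. This is an illustrative assumption, not the system's actual rule set: the keypoint shape mirrors PoseNet's output (a named joint with pixel coordinates and a confidence score), while the rule itself, its names, and the MIDI-style output ranges are hypothetical.

```typescript
// A keypoint as produced by PoseNet: joint name, pixel coordinates,
// and a detection confidence score in [0, 1].
interface Keypoint {
  part: string;   // e.g. "rightWrist"
  x: number;      // pixels
  y: number;      // pixels
  score: number;  // confidence
}

// Frame-to-frame speed of a joint, in pixels per second.
function jointVelocity(prev: Keypoint, curr: Keypoint, dtSeconds: number): number {
  return Math.hypot(curr.x - prev.x, curr.y - prev.y) / dtSeconds;
}

// Hypothetical rule: map vertical wrist position to pitch (higher hand,
// higher pitch) and wrist speed to note intensity, both clamped to a
// 0-127 MIDI-style range. The scaling factors are illustrative.
function mapWristToNote(
  prev: Keypoint,
  curr: Keypoint,
  dtSeconds: number,
  frameHeight: number
): { pitch: number; intensity: number } {
  const clamp = (v: number, lo: number, hi: number) =>
    Math.min(hi, Math.max(lo, v));
  const pitch = Math.round(clamp(127 * (1 - curr.y / frameHeight), 0, 127));
  const speed = jointVelocity(prev, curr, dtSeconds);
  const intensity = Math.round(clamp(speed / 10, 0, 127));
  return { pitch, intensity };
}
```

A rule like this would run once per video frame, with the resulting note parameters handed to the sound and visual generators.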