I am in the 90-day free trial of Ableton Suite, and Max for Live is quite powerful. This week I explored some patches that would be helpful for my project and succeeded in getting a MIDI signal from the Kinect sensor into Ableton! I am so happy! I will work on mapping the different elements for next week.
After the feedback from last class and my experiments over the week, I have settled on some approaches that could further develop my project:
- save patterns in audio – overlay and then play
- user variable – changes the probability of the song
- elements of music should include quantization, looping, and pitch.
Exploration of patches in Max for Live
I experimented with the Connection Kit patches to get the Kinect sensor signal from Ni-Mate into Max for Live. The Camera patch does not work with the Kinect camera. The OSC Monitor sort of worked. I still need to figure out how the OSC Send patch works. The TouchOSC patch only responds to touch input rather than the Kinect.
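To make sense of what the OSC Monitor is showing, it helped me to look at what an OSC packet actually is: an address pattern (Ni-Mate sends one per tracked joint), a type-tag string, and the values, all sent over UDP. Here is a minimal Python sketch of how such a packet is built; the joint name `/Left_Hand` and the x/y/z float layout are assumptions based on my setup, not guaranteed to match every Ni-Mate configuration.

```python
import struct

def pad4(b: bytes) -> bytes:
    """OSC strings are null-terminated, then padded to a 4-byte boundary."""
    b += b"\x00"
    return b + b"\x00" * ((4 - len(b) % 4) % 4)

def encode_osc(address: str, *args: float) -> bytes:
    """Build a raw OSC message: address, type tags, then big-endian floats."""
    type_tags = "," + "f" * len(args)          # e.g. ",fff" for three floats
    payload = b"".join(struct.pack(">f", a) for a in args)
    return pad4(address.encode()) + pad4(type_tags.encode()) + payload

# One joint position, roughly as Ni-Mate might send it (address is hypothetical):
packet = encode_osc("/Left_Hand", 0.42, 1.05, 2.3)
# sending it with socket.sendto(packet, ("127.0.0.1", 7000)) would reach an
# OSC listener such as the Connection Kit's OSC Monitor on that port
```

Seeing the byte layout made the OSC Monitor's output much less mysterious: each line it prints is just one of these address-plus-floats packets decoded.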
Probability Pack + Step Divider
Last week, Luisa’s feedback about creating generative music for the improv section was very helpful, so I looked into this patch. I can map some values from the Kinect to parameters in the patches. Here is a great tutorial on the Step Divider that I benefited from.
Here is a video of me experimenting with the Probability Pack.
To play around with the packs, I created a generative piece as part of my studies:
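The core idea behind the Probability Pack is simple enough to sketch in a few lines of Python: each step in a pattern has its own trigger probability, and every pass through the loop rolls the dice per step. The note numbers and pattern below are my own toy values, not anything taken from the pack itself.

```python
import random

def play_pattern(steps, rng=random.random):
    """Return the notes that actually trigger on one pass through the pattern.

    steps: list of (midi_note, probability) pairs, probability in [0, 1].
    rng:   a 0..1 random source, injectable so the sketch is testable.
    """
    return [note for note, prob in steps if rng() < prob]

# A toy 4-step pattern: the kick (36) always fires, the hat (42) half the time.
pattern = [(36, 1.0), (42, 0.5), (36, 1.0), (42, 0.5)]
for bar in range(4):
    print(play_pattern(pattern))  # kicks every bar, hats vary bar to bar
```

Mapping a Kinect value onto those per-step probabilities is essentially what I want the "user variable" idea from my list above to do.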
Getting Kinect Sensor Data to Ableton
It has been such a journey to receive the input from Kinect.
Following Luisa’s suggestion last week to send data from Ni-Mate to Max for Live instead of Max, I made a patch to read the Kinect data into Ableton. It only worked occasionally, which was quite frustrating. Here is a demo showing that it is possible.
I discovered there is a MIDI output section in Ni-Mate. However, it could not transmit the data to Ableton. I tried to follow the tutorial, but I was stuck trying to get it to work.
I went to another office hour with Billy, and he recommended using the ctlout object in Max to send MIDI, and it worked! During the report session, Luisa mentioned that MIDI messages have some limitations, such as no clear state detection, which is why the Live Object Model would be better. So I will keep working with both Max and Max for Live to figure out which is better for building my project.
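For my own reference: what ctlout emits is a three-byte MIDI Control Change message, which is also where the limitation Luisa mentioned comes from, since the value is only 7 bits (0-127) and carries no state. A quick Python sketch of squashing a normalized Kinect coordinate into that format; the controller and channel numbers here are arbitrary illustrative choices, not necessarily what my patch uses.

```python
def kinect_to_cc(value: float, controller: int = 1, channel: int = 0) -> bytes:
    """Map a normalized 0..1 sensor value to a raw MIDI Control Change message.

    Byte layout: status byte (0xB0 | channel), controller number, 7-bit value.
    """
    value = min(max(value, 0.0), 1.0)   # clamp noisy out-of-range sensor data
    cc_value = round(value * 127)       # squash to 7-bit resolution
    return bytes([0xB0 | channel, controller, cc_value])

# the full range of a hand position collapses to just 128 distinct values:
print(kinect_to_cc(0.0))   # b'\xb0\x01\x00'
print(kinect_to_cc(1.0))   # b'\xb0\x01\x7f'
```

That 128-value ceiling is exactly why continuous gesture data feels coarse over plain CC, and why the Live Object Model route may be worth the extra work.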
I also searched for other methods. LiveGrabber stood out to me, but some of its links don’t work, so I couldn’t really try it. The demo video looks quite cool, although it is not suited for my project.
Max has a presentation mode to hide all the raw patching (Luisa showed me how). It was mind-blowing. She suggested that I could adapt it later for my project.
I found this amazing project here, which is quite similar to what I want to build.
Next Steps
- Map the music elements as a rough draft to make sure it works
- Figure out how to make the improv section generative
- Scale, key, pitch, quantization, looping, etc.