Sillycone & Dnoter
Hey Dnoter, I've been tinkering with a new generative model that uses probabilistic grammars to create evolving rhythmic patterns—kind of like a music‑based Markov chain with a twist. It seems to produce some pretty organic textures. What do you think about feeding it a real‑time audio input to shape its output?
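For the curious, here is a minimal sketch of the kind of probabilistic grammar being described: weighted production rules expanded recursively into hit/rest patterns, so each bar comes out slightly different. The rule names and weights below are illustrative assumptions, not the actual model.

```python
import random

# Toy probabilistic grammar: each symbol expands to one of several
# productions, chosen by weighted random draw. "hit" and "rest" are
# terminals; everything else expands further. Rules and weights are
# made up for illustration.
RULES = {
    "bar":  [(0.6, ["half", "half"]),        # split the bar in two
             (0.4, ["hit", "rest", "half"])],
    "half": [(0.5, ["hit", "hit"]),
             (0.3, ["hit", "rest"]),
             (0.2, ["rest", "hit"])],
}

def expand(symbol):
    """Recursively expand a grammar symbol into terminal rhythm events."""
    if symbol not in RULES:                  # terminal symbol
        return [symbol]
    weights = [w for w, _ in RULES[symbol]]
    productions = [p for _, p in RULES[symbol]]
    chosen = random.choices(productions, weights=weights, k=1)[0]
    out = []
    for s in chosen:
        out.extend(expand(s))
    return out

if __name__ == "__main__":
    # Every call yields a different, organically varying pattern.
    for _ in range(4):
        print(" ".join(expand("bar")))
```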
That’s exactly the kind of thing that gets my brain buzzing. Feeding live audio gives the grammar a real‑time voice to echo and distort. It’ll feel like a conversation between the model and the world, so long as you keep the feedback loop tight; too much lag and the groove falls apart. Try slicing the input into short windows, then let the model choose a rhythm from that palette. It’ll sound like the music is listening back to itself, which is exactly what I’m chasing.
That sounds like a cool experiment: basically turning the model into a kind of musical chatbot. Keep the buffer small, maybe a 64 ms window, and let the grammar pick a rhythm for each slice. The key will be keeping the feedback latency shorter than a single beat; otherwise the conversation will feel more like a delayed echo than a dialogue. Good luck, and let me know if the model starts singing back at you!
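As a rough sketch of that slice‑and‑respond loop, under stated assumptions (48 kHz sample rate, 64 ms windows, 120 BPM): chop the incoming signal into windows, let the model pick a rhythm for each slice, and flag any round trip that takes longer than a beat. The `choose_rhythm` placeholder and the `read_window` callback stand in for the real grammar and audio driver.

```python
import time
import numpy as np

SAMPLE_RATE = 48_000                               # assumed sample rate
WINDOW_MS = 64                                     # ~64 ms slices, as suggested
WINDOW_SAMPLES = SAMPLE_RATE * WINDOW_MS // 1000   # 3072 samples per slice
BPM = 120                                          # assumed tempo
BEAT_SECONDS = 60.0 / BPM                          # latency budget: one beat

def choose_rhythm(window: np.ndarray) -> list[str]:
    """Placeholder for the grammar: derive a pattern from slice energy."""
    energy = float(np.mean(window ** 2))
    return ["hit", "rest"] if energy < 0.1 else ["hit", "hit", "rest"]

def process_stream(read_window):
    """read_window() is assumed to return the next WINDOW_SAMPLES floats,
    or None when the stream ends."""
    while True:
        start = time.perf_counter()
        window = read_window()
        if window is None:
            break
        pattern = choose_rhythm(window)
        latency = time.perf_counter() - start
        if latency > BEAT_SECONDS:                 # feedback slower than the beat
            print(f"warning: {latency * 1000:.1f} ms lag, groove will smear")
        yield pattern

if __name__ == "__main__":
    # Stand-in for live input: a few windows of quiet noise, then end of stream.
    fake = iter([np.random.randn(WINDOW_SAMPLES) * 0.05 for _ in range(3)] + [None])
    for pattern in process_stream(lambda: next(fake)):
        print(pattern)
```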
That’s going to sound like a living groove. Keep the jitter low and the buffer tight. I can already hear it starting to sing back; just tweak the harmonic density if it gets too clunky. Keep me posted.
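One way to read “tweak the harmonic density” in code, purely as a sketch with an assumed events‑per‑bar target and step size: nudge a density weight down when a bar gets cluttered and back up when it thins out.

```python
def adjust_harmonic_density(current_weight, events_per_bar, target=6, step=0.05):
    """Pull a 0..1 density weight toward a target events-per-bar count.
    The target and step size are illustrative assumptions."""
    if events_per_bar > target:
        return max(0.0, current_weight - step)   # too clunky: thin it out
    if events_per_bar < target:
        return min(1.0, current_weight + step)   # too sparse: fill it in
    return current_weight

# e.g. after each bar: weight = adjust_harmonic_density(weight, len(pattern))
```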
Sounds like it’s coming alive. I’ll monitor the latency and tweak the harmonic weights on the fly; stay tuned!