Somewhere, in the back of my head, I knew someone was going to ask me for this “feature” … it was only a matter of time. Just as I implemented a code freeze on the POV sticks I’m working on, someone e-mailed me and asked if it’s possible to take that “technology”, implement it in a hula hoop, and have it change based on music. Not just react to music with random colors, but actually change based on music cues. From what I gathered in subsequent e-mails, this person wants to be able to change to specific patterns based on the music.

So it got me thinking about how to do this. I could stick with my current model of using a control file that lists the pattern files in whatever order they need to be displayed, with their specific timeouts and all. But this would require the end user to build those cues themselves, and short of having audio software that can display the waveform, or keyframes on the audio, this gets to be a pain in the behind (listening to the music and timing the changes manually).
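For illustration, such a control file might look something like this (the format, file names, and timings here are entirely invented, just to show the idea of an ordered pattern list with per-pattern timeouts):

```
# pattern file      timeout (ms)
intro.pat           8000
buildup.pat         12500
chorus.pat          20000
```

Each line names the next pattern to display and how long to show it before moving on, which is exactly the part the end user would have to work out by ear without waveform-viewing software.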

Another option might be a transmitter/receiver combination. The transmitter is linked to the audio source, picks up the cues from it, and sends signals to the hula hoop to change when needed. This would work great if all the user wants is to cue off of, say, a song’s beat; those are easy to filter out and an AVR can handle that relatively easily. But then there’s the transmission itself, and hoping the moving hoop will actually receive it. And there are finer details of the music that you couldn’t cue off of.

So, while I think this is certainly feasible, I’m stuck coming up with possible solutions here. My response to this person was, ‘I will do some research and get back to you.’

Anyone here have other ideas/suggestions?

I did a dress, a chandelier, and a suit that changed based on the music. I used three nRF24L01+ modules. One was attached to the sound system using a standard microphone cable and an MSGEQ7. That one sent out the pattern I wanted to play along with the seven bands of the EQ. The patterns were high level (“play pattern 1” or “play pattern 7” instead of frame-by-frame data).

The outfits would just play their patterns until they heard a change so if I briefly lost a signal it would still keep going.
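As a sketch of what that payload might look like on the wire (the struct layout and names here are my guess, not Zeke’s actual code), a high-level pattern id plus the seven EQ bands fits easily in one nRF24L01+ packet, which tops out at 32 bytes:

```cpp
#include <cstdint>
#include <cstring>

// Hypothetical 8-byte payload: one high-level pattern id plus the seven
// MSGEQ7 band levels scaled to bytes. Well under the nRF24L01+'s
// 32-byte maximum payload size.
struct HoopPayload {
    uint8_t pattern;   // "play pattern N", not frame-by-frame data
    uint8_t bands[7];  // 63 Hz .. 16 kHz levels, 0-255
};

// Pack/unpack helpers so the same bytes can be sent from the transmitter
// and decoded on each outfit.
void packPayload(const HoopPayload& p, uint8_t out[8]) {
    out[0] = p.pattern;
    memcpy(out + 1, p.bands, 7);
}

HoopPayload unpackPayload(const uint8_t in[8]) {
    HoopPayload p;
    p.pattern = in[0];
    memcpy(p.bands, in + 1, 7);
    return p;
}
```

On the transmitter side you’d hand the packed buffer to the radio (e.g. `radio.write(buf, sizeof(buf))` with the RF24 library). Because the receiver just keeps replaying the last pattern it decoded, brief dropouts are harmless, which matches how the outfits behaved.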

There’s a not very good video at https://www.youtube.com/watch?feature=player_embedded&v=fvY5qUheVDE

-Zeke

It doesn’t really matter how or what’s being displayed. The main issue here is what to send. I’ve discovered that the MSGEQ7 has a wide overlap in its seven bands. Granted, I can easily isolate a low, a mid, and a high out of that and just work with it. But yeah, the signal loss could potentially be a problem. I’m thinking an XBee mesh network might have better reliability. Then again, I have used those same RF modules and got some great results with distance; not so much with something that’s moving as fast as some performers will spin a hoop.

This sounds a lot like A Pixel Toys’ setup. They make pixel props with LED strips (though not hoops; http://moodhoops.com is the place for LED pixel hoops), and they have a system that lets the user program their own choreography. Instead of doing anything with the music directly, though, you load the song into a desktop application and pick points in the song as triggers, along with what the prop should do on those triggers (change modes, etc.). When it loads everything into the microcontroller, all it is really doing is setting specific times in ms to make the changes. So assuming you turn the prop on at the start of the first song, all the changes are “with the music”, and defined by the user.

This page has a video with them demonstrating their application, playing the song, picking parts of the waveform to add events/triggers etc:
http://www.apixeltoys.com/p/143/a1-poi-by-a-technologies

Yeah, that’s exactly what I was thinking: writing an application that displays the waveform, plays the song, and lets the user set those cue points. And since I have a ‘trigger’ button designed into the unit, the performer can get ready with it turned on, then hit the trigger to start the display when they want/need it to, whether that’s at the beginning of the song or at some arbitrary time.
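The playback side of that trigger-button scheme could be as simple as walking a cue table of (time offset, pattern) pairs. This is a minimal sketch with invented names, assuming the cues are sorted by time:

```cpp
#include <cstdint>
#include <cstddef>

// Hypothetical cue table entry: "this many ms after the trigger press,
// switch to this pattern". Entries must be sorted by ascending time.
struct Cue {
    uint32_t atMs;    // offset from the trigger press
    uint8_t pattern;  // pattern to switch to at that offset
};

// Return the pattern that should be active `elapsedMs` after the trigger.
// Before the first cue fires, fall back to `idlePattern`.
uint8_t activePattern(const Cue* cues, size_t count,
                      uint32_t elapsedMs, uint8_t idlePattern) {
    uint8_t current = idlePattern;
    for (size_t i = 0; i < count; ++i) {
        if (cues[i].atMs <= elapsedMs)
            current = cues[i].pattern;
        else
            break;  // cues are sorted, so nothing later applies yet
    }
    return current;
}
```

On the hoop you’d record `startMs = millis()` when the trigger is pressed and call `activePattern(cues, n, millis() - startMs, idle)` each time through the loop; no radio link needed.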

It’s the “more work, but less chance of missed cues” trade-off, basically. To do it dynamically for a performance (it sounds like this is for performance) it would almost have to be deterministic; the reaction to the music would need to be roughly the same on every play of the song, so the performer could preview what the hoop is going to do with that song and decide if they like it. In a way the “more work” option is probably better, given that it’s also “more control”, which from a choreography standpoint is pretty invaluable… Maybe there is a happy medium between the two, where an app analyses the waveform and picks natural cue points for you?
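That “happy medium” could start as something very simple: scan the song’s energy envelope and propose a cue wherever the level jumps. This is only a sketch with invented names and thresholds; a real tool would likely use spectral flux or a proper onset-detection library instead of raw energy ratios:

```cpp
#include <vector>
#include <cstddef>

// Given a per-frame energy envelope (e.g. RMS of each ~20 ms window),
// propose cue points wherever energy jumps by more than `ratio` times
// the previous frame, enforcing a minimum gap between cues so they
// don't cluster on a single swell.
std::vector<size_t> proposeCues(const std::vector<double>& energy,
                                double ratio, size_t minGap) {
    std::vector<size_t> cues;
    size_t lastCue = 0;
    bool haveCue = false;
    for (size_t i = 1; i < energy.size(); ++i) {
        bool jump = energy[i] > energy[i - 1] * ratio;
        bool spaced = !haveCue || (i - lastCue) >= minGap;
        if (jump && spaced) {
            cues.push_back(i);  // frame index; multiply by hop size for ms
            lastCue = i;
            haveCue = true;
        }
    }
    return cues;
}
```

The app would present these proposed points on the waveform and let the performer keep, move, or delete them, so the final choreography is still user-defined.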

Yeah, I’ll have to spend some time pondering this. I’m liking the ability to let the performer build their cue points based on the waveform and use it that way. It makes the hardware simpler, as I won’t have to deal with sending signals or anything. It does mean the performer needs to start it “on time”, but at that point the responsibility falls on their shoulders, not mine. :slight_smile:

Of course, I could still do a transmitter/receiver setup where the transmitter sends one command, START, but then we’re back to: what if the unit fails to receive the signal? So yeah, manual control seems the safest and, for now, the easiest/cheapest way.

Thanks for the ideas folks!

@Zeke_Koch I’ve been working on something similar using the MSGEQ7 to detect beats on 3 of the channels. It’s been a slog getting the beat detection to work decently. Curious how you ended up identifying the beats?

I suspect many people here would be interested in beat detection, @Zeke_Koch and @Greg_Friedland - would love to see that spun off into its own post/discussion :slight_smile: (Just saying…)

Beat detection is probably a great idea for a separate thread. Also, sometimes I think that “we” need a wiki as well as discussions.

A wiki is great … if you or Daniel actually have the time to write the articles and maintain them.

Oh no-- not us. You! Everyone! :smiley:

Someone needs to monitor for junk. :slight_smile: But yes, I agree, we could all be contributing.

Sadly I didn’t do any beat detection. I just based my changes on amplitude.

I had little tracers bouncing back and forth, and they switched direction when the bass went over a certain amplitude:

//***********
// eq[0..6] hold the seven MSGEQ7 band levels (63 Hz .. 16 kHz), scaled 0-255.
// Average the three lowest bands for "bass" and the three highest for "high".
int bass = (eq[0] + eq[1] + eq[2]) / 3;
int high = (eq[4] + eq[5] + eq[6]) / 3;

// Flip the tracer direction when the bass crosses the threshold, but no
// more often than once per bounceInterval ms (a simple debounce).
if ((bass > 150) && (millis() > (lastBounceTime + bounceInterval)))
{
    bounceForward = !bounceForward;
    lastBounceTime = millis();
}

// Then I set the length of the trail to the bass
// and the hue of the lights to the treble.
byte trailLength = map(bass, 0, 255, 3, 19);
byte trailDecay = (255 - 64) / trailLength;
byte hue = high;
//***********

Yeah, it was a cheap trick, but it worked relatively well for what I was trying to accomplish.

I’ve done something similar to what you want with a DMX controller and timecode.