I want your help. If you:

  1. Love the idea of LEDs sync’ing with your music…

  2. Think pulse-to-the-beat-type setups are cool, but know that there’s room for a smarter music synchronization algorithm (think instrument isolation based on frequency profiles, concurrent animations in spatial zones around the LED array, and using color palettes consistent with the mood/genre of the song)…

and…

  3. Think you have the coding chops (Arduino/FastLED) to be part of the solution…

Then please add a comment below and let me know!

If it seems like a good fit, I’ll send you a Glowdeck board for free, and we’ll work together to finish this first-of-its-kind music sync system.

And by the way, our audio signal won’t be coming from the electret microphone you see in the photo (or in this video: http://j.mp/gdmusicsync) - we’ll be working with a clean audio signal, fed directly from the incoming Bluetooth audio stream to two analog pins on the microcontroller (one for each of the left/right stereo channels). Combine that with the audio metadata that’s already being retrieved from the Bluetooth module, live ambient light sensing of Glowdeck’s environment, and high-fidelity sound on-board…and I think we’ve got an awesome development environment for making something really special.

At one point in the prototyping stage I used the MSGEQ7 to stream 7-band frequency readings to the MCU. It’s definitely a useful IC, but after I saw what an FFT library (along with the math functions built into the MK20DX256 - i.e., the same microcontroller the Teensy 3.1 uses) could do without any additional hardware, I ditched it in favor of a lower BOM cost.
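(For anyone who wants to play with the software-only approach before a board arrives: this isn’t the Glowdeck code, just a minimal sketch of pulling out one frequency band with a Goertzel filter instead of the MSGEQ7 or a full FFT library. The pin, sample rate, and target band are assumptions.)

```cpp
// Single-band energy detector via the Goertzel algorithm - a sketch of the
// "no extra hardware" idea, NOT the Glowdeck firmware. Pin, sample rate and
// target band are assumptions.
const int   AUDIO_PIN      = A2;     // mid-rail-biased audio input (assumed)
const int   N              = 128;    // samples per block
const float SAMPLE_RATE_HZ = 9600.0; // set by the delay in the sampling loop
const float TARGET_FREQ_HZ = 120.0;  // band of interest, e.g. kick drum

float goertzelMagnitude() {
  const int   k     = (int)(0.5f + (N * TARGET_FREQ_HZ) / SAMPLE_RATE_HZ);
  const float coeff = 2.0f * cos((2.0f * PI * k) / N);
  float q0 = 0, q1 = 0, q2 = 0;

  for (int i = 0; i < N; i++) {
    int sample = analogRead(AUDIO_PIN) - 512;  // remove the DC bias
    q0 = coeff * q1 - q2 + sample;
    q2 = q1;
    q1 = q0;
    delayMicroseconds(104);                    // ~1 / 9.6 kHz between samples
  }
  // Magnitude of the target bin for this block of samples
  return sqrt(q1 * q1 + q2 * q2 - coeff * q1 * q2);
}

void setup() { Serial.begin(115200); }

void loop() {
  Serial.println(goertzelMagnitude());  // bigger = more energy near 120 Hz
}
```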

As for your ideas - we’re definitely on the same page. The existing code is set up to read values from the left and right channels independently and output LED patterns that correspond to the space around a channel whenever the disparity between the two channels’ readings rises above a certain threshold (i.e., if a vocal comes in 100% panned left, that triggers a left-side-only animation - the effect is pretty neat with songs like Bohemian Rhapsody by Queen, where everything is hard-panned).
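(Here’s roughly what that disparity trigger could look like as a FastLED sketch - the pins, threshold, data pin, and strip layout are placeholders, not the actual Glowdeck firmware.)

```cpp
#include <FastLED.h>

// Placeholder pins/values - not the actual Glowdeck wiring.
const int LEFT_PIN = A0, RIGHT_PIN = A1;
const int NUM_LEDS = 60;
const int PAN_THRESHOLD = 200;   // disparity (0-1023 scale) that counts as "hard pan"
CRGB leds[NUM_LEDS];

int envelope(int pin) {
  // Crude envelope: peak of a short burst of samples around the mid-rail bias.
  int peak = 0;
  for (int i = 0; i < 64; i++) {
    int s = abs(analogRead(pin) - 512);
    if (s > peak) peak = s;
  }
  return peak;
}

void setup() {
  FastLED.addLeds<WS2812B, 6, GRB>(leds, NUM_LEDS);  // data pin 6 is a placeholder
}

void loop() {
  int left  = envelope(LEFT_PIN);
  int right = envelope(RIGHT_PIN);

  fadeToBlackBy(leds, NUM_LEDS, 40);

  if (left - right > PAN_THRESHOLD) {
    // Hard-panned left: light only the left half of the strip.
    fill_solid(leds, NUM_LEDS / 2, CHSV(160, 255, 255));
  } else if (right - left > PAN_THRESHOLD) {
    // Hard-panned right: light only the right half.
    fill_solid(leds + NUM_LEDS / 2, NUM_LEDS / 2, CHSV(0, 255, 255));
  } else {
    // Centered content: drive both sides from overall loudness.
    uint8_t level = map(constrain(left + right, 0, 1023), 0, 1023, 0, 255);
    fill_solid(leds, NUM_LEDS, CHSV(96, 255, level));
  }
  FastLED.show();
}
```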

Huh, now this is something headed in the right direction. After spending the summer tooling around with the MSGEQ7 I was left feeling very underwhelmed. FFT is on my to-do list, but the Teensy I hadn’t considered - very interesting idea. Interest piqued.

I don’t know if the Openmusiclabs FHT library works on the Teensy, but it sure worked on a Nano. No MSGEQ7 required. And no, I don’t have the chops.

I’ll have to check out the FHT library you mentioned. If it worked on a Nano then I could probably port it to work with Glowdeck (if porting is necessary at all). Thanks for the tip.

FHT is the way to go on an MCU. It’s a heck of a lot lighter, and it works on real numbers (floating point math on a micro makes baby unicorns cry). There are a few FHT libraries out there for AVR micros. Google is your friend here.
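(For the curious, the usage is roughly like the sketch below - I’m going from memory of the Open Music Labs ArduinoFHT interface, so double-check the names against the library’s own example sketch; the pin and input scaling are assumptions.)

```cpp
// Rough usage sketch of an AVR FHT library (names recalled from the
// Open Music Labs ArduinoFHT examples - verify against the library docs).
#define LOG_OUT 1        // ask the library for log-magnitude output
#define FHT_N   128      // 128-point transform -> 64 frequency bins
#include <FHT.h>

const int AUDIO_PIN = A0;   // assumed mid-rail-biased audio input

void setup() {
  Serial.begin(115200);
}

void loop() {
  // Fill the input buffer with signed, scaled samples.
  for (int i = 0; i < FHT_N; i++) {
    int sample = analogRead(AUDIO_PIN) - 512;  // remove DC bias
    fht_input[i] = sample * 16;                // scale up for better resolution
  }
  fht_window();    // apply the window function
  fht_reorder();   // bit-reversal reordering
  fht_run();       // the transform itself
  fht_mag_log();   // log magnitudes land in fht_log_out[0 .. FHT_N/2 - 1]

  Serial.write(fht_log_out, FHT_N / 2);  // dump the spectrum (binary)
}
```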

Looks cool!

(Just as a heads-up, I’m back-burner-working on a much dumber audio-to-Arduino board, but certainly nothing to compete with this grand vision!)

Sounds cool Mark - is it just an Arduino board with an audio-in jack/feed or will there be some audio processing hardware on there too?

Either way, I’ll speak for myself when I say it’d be awesome if the hardware debuted with some corresponding FastLED functions for audio sync’ing animations!

Just a little analog RC low-pass filtering to make the audio signal more code-friendly for (slow) AVR boards that are mostly busy doing other things. No ‘binning’ or anything fancy like that. Mostly I’m just exploring an idea that’s been lurking in my head for a while: that if all you want is beat-sync, you can do 80% of the work with a few analog hardware components, and the rest in some lazy software. Of course, speaking of lazy, I am nowhere near making any of this actually work yet!
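(In that lazy-software spirit, the digital half might be as small as this - a guessed-at threshold-with-holdoff on the already-low-passed input; the pin and constants are made up, not a tested design.)

```cpp
// The "lazy software" half: the RC network has already taken out the highs,
// so a beat is just a level that jumps above a slow-moving average.
const int BASS_PIN   = A3;   // low-passed audio in (assumed)
const int MARGIN     = 60;   // how far above average counts as a beat
const int HOLDOFF_MS = 120;  // ignore re-triggers for a bit after each hit

unsigned long lastBeat = 0;
int runningAvg = 0;

bool beatDetected() {
  int level = analogRead(BASS_PIN);
  runningAvg += (level - runningAvg) / 16;   // cheap low-pass of the envelope
  if (level > runningAvg + MARGIN && millis() - lastBeat > HOLDOFF_MS) {
    lastBeat = millis();
    return true;
  }
  return false;
}

void setup() { pinMode(13, OUTPUT); }

void loop() {
  if (beatDetected()) {
    digitalWrite(13, HIGH);   // blink the built-in LED on each detected beat
    delay(20);
    digitalWrite(13, LOW);
  }
}
```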

I think we already have everything we need library-wise for ‘audio-triggered’ animations; mostly now people need to design what they want their ‘audio events’ to be, and how they want the animations to react to each event. Once that’s designed, making the audio processing layer actually trigger the relevant events seems pretty simple.

I think that the big BIG thing here is still the specific design of the logic schema. That is, never mind how the audio event detection works – just assume that it magically does. Now: what’s the complete list of audio events that you want to receive, and how do you want to specify what-triggers-what? For example, is “HARD_PAN_RIGHT” an event that you want to receive? HARD_PAN_LEFT? SILENCE? What else? Basically, figure out how you wish you could script it, and then back into that with what you build to support it.
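(A sketch of what that “script it first” layer might look like - the event names and handler table are hypothetical, not an existing FastLED or Glowdeck API.)

```cpp
// Hypothetical event names and bindings - a mock-up of "script it first".
enum AudioEvent { BEAT, SILENCE, HARD_PAN_LEFT, HARD_PAN_RIGHT, VOCAL_ONSET };

typedef void (*AnimationHandler)();

void pulseAll()       { /* flash the whole array on a beat */ }
void fadeToBlack()    { /* wind everything down during silence */ }
void flashLeftHalf()  { /* left-side-only animation */ }
void flashRightHalf() { /* right-side-only animation */ }

struct Binding { AudioEvent event; AnimationHandler handler; };

// The user-editable "what-triggers-what" table:
Binding bindings[] = {
  { BEAT,           pulseAll       },
  { SILENCE,        fadeToBlack    },
  { HARD_PAN_LEFT,  flashLeftHalf  },
  { HARD_PAN_RIGHT, flashRightHalf },
};

// The audio layer (which we assume "magically" works) calls this on detection:
void dispatch(AudioEvent e) {
  for (unsigned i = 0; i < sizeof(bindings) / sizeof(bindings[0]); i++) {
    if (bindings[i].event == e) bindings[i].handler();
  }
}

void setup() {}

void loop() {
  dispatch(BEAT);   // stand-in for the detection layer firing an event
  delay(500);
}
```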

I’ve seen a few approaches, but the best were MSGEQ7-based. As Mark said, the main challenge is to create the logic - what shall happen, when, based on what? I started by creating animations with lots of parameters and then linked them to spectral analysis data. Trial and error, again and again. The trick is to find a system that gives a constant change in the pattern - without turning into wild chaos AND without appearing boring. If you have any precise questions, I’m happy to help. You probably know them all, but just in case you don’t, here are some short videos from my test setups:



It’s been in the back of my mind for months to make this pretty universal animation sound-reactive, but it seems that I enjoy the feeling of looking forward to it more than actually doing it. :wink:

Hi @Mark_Kriegsman, I started out with RC filters, and you will need them, but the wall I hit was the ADC: having 10 bits to work with (a value of 0-1023) is a little limiting, as right out of the gate you only get half that for audio (if we assume the audio is AC centered about 0 V and not a varying DC level). The next challenge I hit was sampling rate. To pick up on a 60 Hz (or lower) beat and the tshhh of a hi-hat you need a fair bit of spectrum, but ole uncle Nyquist comes knocking if you push it (and I think the MSGEQ7 suffered here), and the output just doesn’t match what you hear.
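(The back-of-envelope numbers, in case they help - the rates below are rough/typical figures, not measurements from any particular board.)

```cpp
// Why the stock ADC path struggles (illustrative figures only):
const float HIGHEST_FREQ_HZ   = 8000.0;                // hi-hat sizzle, roughly
const float NYQUIST_MIN_RATE  = 2.0 * HIGHEST_FREQ_HZ; // 16 kHz needed to capture it
const float STOCK_ADC_RATE_HZ = 9600.0;                // ~default AVR analogRead() throughput

void setup() {
  Serial.begin(115200);
  Serial.println(NYQUIST_MIN_RATE);   // 16000 - what we'd need
  Serial.println(STOCK_ADC_RATE_HZ);  //  9600 - what we typically get
  // Everything above ~4.8 kHz aliases into the readings, which is one reason
  // the output "doesn't match what you hear". And of the 0-1023 ADC range,
  // a mid-rail-biased AC signal only swings about +/-512 counts.
}

void loop() {}
```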

I also found that your eyes and ears will allow a 30ms sync discrepancy, but beyond that, it doesn’t feel real.

It’s a really big challenge for an Arduino.

BTW: FHT is a cool thing in theory, but I haven’t yet seen anything I would call “snappy”. It all felt like it had too much latency, so it didn’t hit the perfect moment. But I would be very happy if someone could show me a fast FHT-based animation (let’s say 100+ fps minimum). In my experience (as an old-fashioned light-show and fire-effect controller) it makes a HUGE difference to the audience whether the light effects are merely parallel to the music or really perfect on the spot. The moment when the light becomes part of the music… The visible difference is all hands up…or not.

“I also found that your eyes and ears will allow a 30ms sync discrepancy, but beyond that, it doesn’t feel real.”
Hi @Stuart_Taylor, I disagree with that.
The ideal is 0 ms sync discrepancy and you don’t see the difference, but you FEEL it clearly.
How can I say that, and how is 0 ms possible? Simple: you grab the signal directly before the amplifier, and all the time the sonic waves need to travel from the loudspeaker to the ear (defined mainly by distance) is the time you have for data processing and light controlling. It’s like the light controller operates in the future from the perspective of the listener. And 10 ms feels very different than 0 ms. 30 ms is far out of range from my perspective.

Hi @Stefan_Petrick, 30 ms isn’t ideal - you could call it the uncanny valley: it’s almost there, but you are right that it’s not hands-in-the-air amazeballs. My background also includes big light and sound rigs (big well-known clubs and outdoor events), and 30 ms was what we could “get away with”, especially when we used big pyro with a scene (crowd blinding), where the detonation has to be timed to the audio and lighting.

Wikipedia says

For television applications, audio should lead video by no more than 15 milliseconds and audio should lag video by no more than 45 milliseconds. For film, acceptable lip sync is considered to be no more than 22 milliseconds in either direction.

The Media and Acoustics Perception Lab says

The results of the experiment determined that the average audio leading threshold for a/v sync detection was 185.19 ms, with a standard deviation of 42.32 ms

The ATSC says

At first glance it seems loose: +90 ms to -185 ms as a “Window of Acceptability”

and

Undetectable from -100 ms to +25 ms
Detectable at -125 ms & +45 ms
Becomes unacceptable at -185 ms & +90 ms

(– Sound delayed, + Sound advanced)

So, I stand by my rule-o-thumb :stuck_out_tongue:

Sure, we used delay lines on every rig, but again, I was talking about an AVR, and not £££s of pro equipment. It’s limited for this task.

You know, if FastLED released an audio feature, the group would be full of:
“I have 500 WS2811s hanging off an ATtiny, and it won’t keep sync to my audio, what gives” posts :wink:

I think it makes sense to separate 3 scenarios here:
A) a ± “latency” that is obviously disturbing
B) the latency one can “get away with”
C) synchronicity
Again, I’m talking about the difference people cannot describe, but feel. C) is not so hard to achieve. Let’s calculate with 3 ms/m - that already gives 12 ms at a 4 m living-room distance. With a 16 MHz AVR and 256 slow WS2812s I got around 80 fps (including all calculations and audio reading), which is not so bad - so at 12.5 ms per frame, minus the 12 ms of sound travel over those 4 m, there was a relative latency of about 0.5 ms - pretty close to C).
I’m writing all this because I’m convinced by experience that it is possible even on weak hardware and that it makes a difference to the audience - even while most people couldn’t say why. Most of them just get the feeling that they want to dance. :slight_smile:
On stages with huge DMX setups I used to use an external drum computer (with features like tap, speed micro-adjustment, pause, shift, …) to trigger everything. So I synced the drum computer to the music like a turntable and then moved the trigger beat backward in time until the effect “snapped in”. With all humility, I’ve seen just very few light shows which had the same emotional impact on the audience. But basically it is so simple.
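(The same budget, spelled out as code for the record - the distance and frame rate are the numbers from the post above, and 3 ms/m is a round-up of the ~2.9 ms it actually takes sound to travel a metre.)

```cpp
// Latency budget: sound travels ~343 m/s, i.e. roughly 3 ms per metre.
const float DISTANCE_M     = 4.0;            // living-room listening distance
const float SOUND_MS_PER_M = 3.0;            // rounded from ~2.9 ms/m
const float FRAME_TIME_MS  = 1000.0 / 80.0;  // 80 fps -> 12.5 ms per frame

void setup() {
  Serial.begin(115200);
  float travelMs  = DISTANCE_M * SOUND_MS_PER_M;  // 12 ms of sound in flight
  float feltLagMs = FRAME_TIME_MS - travelMs;     // ~0.5 ms left at the ear
  Serial.println(feltLagMs);
  // As long as this stays near zero, the lights land "on the beat" from
  // where the listener is standing.
}

void loop() {}
```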

I’m in the process of building animated LEDs into car headlights, and one of the areas I want to include is music animations. It’s a bit of a challenge, but I’m getting there slowly. The next job is Bluetooth control of the patterns, speed and colours. See https://photos.google.com/share/AF1QipNqyoqa6KoNnuETmaTE9IK5u2UWK0uOXe2Dp7_GE-pXmT1Leqgl9IXuK7UrHllWrw?key=NHk3ZGpFc3dDTmdYQjhmdFBOblBIOF91SW1PMkxB and https://photos.google.com/share/AF1QipP0UqPhiObytV6ftCsxdAFLHf0dZmCS_KbzeGyWs2oyLp6Xu_cOOqgY_h6fT7r8RQ?key=ZFRvVzg1X2gzc0xQX2FDSVZPcHBmRUtKV0VWbm1B

After Bluetooth / smartphone control I’ll be working on music animations.

I’m using Teensy 3.2s as controllers and FastLED’s parallel-output functionality to control 16 individual elements on the front of the car.