2-Dimensional 'Blur'

This video shows the new 'blur2d' capability now available in the FastLED library.

Amazing. The math behind it is beyond me!

From a certain perspective, it's the same idea as fire: pixels trend toward a resting state, and an effect modifies them along the way.

I’m trying to figure out how this works. If I just set one white pixel in the middle of the array, would it spread out and fade over subsequent frames? Looks beautiful.

The math is pretty simple.

Here’s the 1-D version first: For each pixel, remove a certain amount of brightness from it (say, 20%), and increase the brightness of the left neighbor and right neighbor pixels by half that much each.

Example:
0 0 0 100 0 0 0 -> 0 0 10 80 10 0 0
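Written out as code, that 1-D rule might look like this (a simplified integer sketch of the idea, not FastLED's actual blur1d, which operates on CRGB pixels; this version uses a separate output array for clarity):

```cpp
// Sketch of the 1-D blur rule above: each pixel gives up 20% of its
// brightness, half of that to each neighbor. Light that would seep
// past either end of the array is simply dropped.
void blur1dSketch(const int* in, int* out, int n) {
    for (int i = 0; i < n; ++i) out[i] = 0;
    for (int i = 0; i < n; ++i) {
        int seep = in[i] * 10 / 100;   // 10% to each neighbor
        out[i] += in[i] * 80 / 100;    // this pixel keeps 80%
        if (i > 0) out[i - 1] += seep;
        if (i + 1 < n) out[i + 1] += seep;
    }
}
```

Running it on the example array `0 0 0 100 0 0 0` yields `0 0 10 80 10 0 0`, matching the figures above.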

The two dimensional version is the same idea, but it spreads the light to the eight adjacent 2-D neighbor pixels.

That’s all the math there is behind it.

Now, there are two implementation details worth noting. First, it’s written in such a way that it doesn’t need a ‘scratch buffer’; it operates directly on the pixels in place. ARM chips might have enough RAM for a scratch buffer, but AVR chips almost certainly won’t, so this is important.

Second, if you try your hand at implementing this with the no-scratch-buffer constraint, you’ll find that doing it symmetrically in one dimension isn’t too hard, but doing it symmetrically, in one pass, in two dimensions, without a scratch buffer seems impossible. And you’re right: it is. You can’t satisfy all of those constraints at once. But here’s the insight that makes the implementation practical without a backing scratch buffer: doing a symmetrical 2-D blur is mathematically identical to doing a 1-D blur on each row and then doing a 1-D blur on each column.

0 0 0
0 100 0
0 0 0
-Do 1-D blur on rows ->
0 0 0
10 80 10
0 0 0
-Do 1-D blur on columns ->
1 8 1
8 64 8
1 8 1

Note that the resulting array is blurred symmetrically in 2-D!

So by dividing the work into two passes (rows, then columns), we’re able to keep all the other constraints: 2-D blur, with symmetrical results, without the need for a backing scratch buffer. Each pixel is handled twice, but on AVR this approach is preferable due to RAM constraints, and on ARM it’s fast enough not to matter much.
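The two passes can be sketched like this (a simplified integer sketch of the idea, not FastLED's actual blur2d, which works on CRGB pixels with fract8 scaling; the single `carry` variable is what stands in for the scratch buffer):

```cpp
// In-place 1-D blur pass over n values spaced 'stride' apart, starting
// at p. 'carry' holds light already seeped forward into the current
// cell, which is the trick that avoids needing a scratch buffer.
void blurLine(int* p, int n, int stride, int blurPercent) {
    int carry = 0;
    for (int i = 0; i < n; ++i) {
        int* cur = p + i * stride;
        int seep = *cur * (blurPercent / 2) / 100;   // to each neighbor
        int keep = *cur * (100 - blurPercent) / 100; // what stays put
        if (i > 0) *(p + (i - 1) * stride) += seep;  // seep backward
        *cur = keep + carry;   // add light seeped forward from the left
        carry = seep;          // seep forward into the next cell
    }
    // light seeping past the last pixel falls off the edge
}

// Symmetrical 2-D blur as two 1-D passes: rows first, then columns.
void blur2dSketch(int* grid, int w, int h, int blurPercent) {
    for (int y = 0; y < h; ++y) blurLine(grid + y * w, w, 1, blurPercent);
    for (int x = 0; x < w; ++x) blurLine(grid + x, h, w, blurPercent);
}
```

Running it on the 3x3 example above (a single 100 in the center, 20% blur) reproduces the 1/8/64 pattern shown.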

The reason this works to ‘dim’ the overall matrix is rounding. Since everything on the pixels is done with integer math, sometimes there’s a loss of precision and results are rounded down. E.g.,
0 15 0 should become:
1.5 12 1.5 but gets rounded down to:
1 12 1
and note that the total light has dropped from 15 overall to 14. Repeat this process and all the pixels will eventually fade to black.
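A quick sketch of that fade-out effect (again plain integer math, not the library code): each pass loses a little light to rounding down, and repeating the pass drains the array to black.

```cpp
// One 20% 1-D blur pass with integer math, as in the example above.
// Because the 80% kept and the 10% seeped to each side are each rounded
// down independently, a little total light is lost on every pass.
// Returns the total brightness remaining afterwards.
int blurPass(int px[], int n) {
    int next[64] = {0};  // small scratch copy purely for clarity (n <= 64)
    for (int i = 0; i < n; ++i) {
        int seep = px[i] * 10 / 100;   // 10% to each side, rounded down
        next[i] += px[i] * 80 / 100;   // keep 80%, rounded down
        if (i > 0) next[i - 1] += seep;
        if (i + 1 < n) next[i + 1] += seep;
    }
    int total = 0;
    for (int i = 0; i < n; ++i) { px[i] = next[i]; total += px[i]; }
    return total;
}
```

Starting from `0 0 0 15 0 0 0`, one pass leaves a total of 14 (the 1/12/1 case above), and after a few dozen passes everything has faded to zero.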

I have a couple of other 1-D and 2-D (and 3-D??) ‘video filters’ that I’ll be sharing as well; all share the basic design that in addition to whatever effect they provide, they also result in an overall reduction in light, so they can be used the way the blur is being used in this demo: as a way of fading out older images as new ones are drawn on top.

@Mark_Ortiz : Yep, you got it! And you can control the amount of blur in each frame.

@Seph_Aliquo : there’s definitely a common draw-blend-iterate thread shared with this and the fire. The big difference is that the fire has an underlying ‘heat’ buffer that it’s doing the simulation on, while this is operating directly on the LED pixels, but the idea is clearly related!

I should mess with my Object3D library - I bet using a point cloud of a mesh with the blur would look pretty sweet.

I was trying a simple experiment on my Due. It’s funny: I have a program called splash fade that looks very similar, though done completely differently (with 10 memory-hogging frame buffers!!). One thing I did do was shift the hue while fading, and also fade R, G, and B at different rates… and then I see you have a new fade-using-color function! :slight_smile:

How did you drive the SmartMatrix / capture the video, @Mark_Kriegsman ? I finally have a SmartMatrix in my hands, but when using the FastLED example code I get a badly flickering result when capturing it with 50, 25 or 24 fps video. Basically I found it even impossible to make a good photo as long as the shutter speed is not perfectly synchronized with the scan rate of the multiplexing. So which trick did you exploit for your flicker-free video?

All I can do is tell you what I did, not which parts of it were the “magic” that worked, but it wasn’t super complicated:

  1. I shot the video with an iPhone 5 (which doesn’t have any adjustable-FPS options).
  2. I shot in a dimly lit but not dark room.
  3. I used the Focus/AE Lock on a bright section of the animation-- which is why the background is almost completely black.
  4. I put a piece of white paper over the Smart Matrix.
  5. The camera (iPhone) was mounted about two feet from the Smart Matrix.

The other thing to note is that most of this animation is pretty dim-- there aren’t very many full-brightness pixels at all.

I should do some experiments and see what the magic formula is for photographing or shooting video of LEDs. I suspect that a key part is keeping it all relatively dim. I often setBrightness(32) before shooting video of regular LED strips. The SmartMatrix is a different animal.

OH! Oh one other thing! Note that I used serious color correction on the SM. The SM’s green LEDs are more than twice as bright as the other color LEDs. I wonder if slashing the flood of green light output affects the AE sensors in the camera positively. I bet it helps.

Thanks for the answers, Mark! With “normal” PWM-controlled LEDs I have no trouble getting good pictures and videos; I do it very similarly to your recommendations. It is just this row-scanning multiplex thing with the SM. Basically there are only 2 rows visible at a time, modulated by @Jason_Coon ’s binary code modulation, so every line is lit 1/16 of the time. With a short shutter speed like 1/1000 s you see nothing but 2 lines. At 1/200 s you get a half-filled display captured. So filming it at anything but a frame rate perfectly synchronized to the scan rate causes interference. I think in your video the effect is hidden by the amount of black and by the fast-moving animation itself. Try to photograph or video a plain color fill or a very slow noise, and you will be surprised. Even in Jason’s videos the interference is present.

Another aspect is that it requires thinking about the SM like a CRT, where you basically need to wait for a retrace signal, when the screen has been scanned completely, before writing new data. Otherwise fast animations have the problem that the picture rips apart, because you see part of the new and part of the old frame. You can see that when scrolling a text quickly on the SM: there are no vertical lines anymore. So like in the good old times we need a WaitForRetrace() function, or a flag indicating that it’s a good moment to push out the data now…

I had similar problems with the Rainbowduino and Colorduino, which both use multiplexed output too.

I think dimming the scene may lead to longer exposures for still photos. I don’t know how to explicitly do this with video!

Yes, a longer shutter time helps to soften the result a bit. But then you still have lines that are brighter than others. With video you can force that by selecting a higher aperture value; every step on the scale (like 5.6-8-11) doubles the exposure time. Other possible ways are to decrease the light sensitivity by selecting a lower ISO value, to use a physical grey filter, or to just dim the LEDs, paying for that with a reduced dynamic range.

@Mark_Kriegsman I am trying to use the blur2d() function with the RGB Shades, a.k.a. a 16x5 matrix, with a custom XY mapping. I looked and confirmed that the blur-columns code uses an XY function in the library .cpp file. How can I point it to my custom XY map rather than have it use the stock XY in FastLED? Thanks!

I believe that there’s no XY function defined in FastLED. There is a function prototype declared in colorutils.cpp, but you have to provide the function yourself.

There is a ‘standard’ XY function provided in several of the examples, but as you’ve pointed out, you can provide your own XY mapping.

I think that the blur2d function basically requires you to provide a function named “XY” that matches the prototype:
uint16_t XY( uint8_t, uint8_t);

Is that not working correctly for you?
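For reference, a typical XY function matching that prototype, along the lines of the one in FastLED’s XYMatrix example, looks something like this (the dimensions and layout flag here are hypothetical, chosen for the 16x5 case above):

```cpp
#include <stdint.h>

// Hypothetical matrix dimensions and wiring for illustration.
const uint8_t kMatrixWidth  = 16;
const uint8_t kMatrixHeight = 5;
const bool kMatrixSerpentineLayout = true;

// Maps an (x, y) coordinate to an index in the LED array. In a
// serpentine (zigzag-wired) matrix, odd rows run backwards.
uint16_t XY(uint8_t x, uint8_t y) {
    if (kMatrixSerpentineLayout && (y & 0x01)) {
        return (uint16_t)y * kMatrixWidth + (kMatrixWidth - 1 - x);
    }
    return (uint16_t)y * kMatrixWidth + x;  // simple row-major order
}
```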

OK, I understand what’s going on now. Thanks for the explanation. I used blur1d() instead, and now the artifacts are gone. The 2d version was using the wrong XY() no matter what I tried. In fact, it seems that the 1d version is a bit ‘faster’?

The 1d version touches each pixel once. The 2d version (necessarily) touches each pixel twice.

I’m concerned about the XY situation.
If you comment out your XY function completely, the code should no longer compile. If it does compile, then you’ve got a second copy of an XY function in there somewhere, and I don’t know how that’s getting in there. As I said, I don’t think that there even is an XY function defined in the library anywhere. I’d really appreciate it if you can help me figure out where it’s coming from. Can you post your entire code somewhere, like pastebin or http://gist.github.com?

Well, I do have two XY functions, because I am using one sketch for multiple LED devices. One is a matrix without a serpentine layout, another is the glasses, and the other is a strip.

After commenting out the XY(), the library compiles and uploads to the board, but the animation will not run once loaded. Add back the XY() with the proper kmatrixLayout for a non-serpentine matrix, and I get a blur2d() output for a serpentine matrix.

wait… I said that last part backwards.

Add back the XY() with the proper kmatrixLayout for a serpentine matrix, and I get a blur2d() output for a non-serpentine matrix.

You have two XY functions defined at once? (Selected with #if?) or just one XY function with different behaviors depending on #defines?

I’m suspecting either Arduino IDE #define issues, or that you may have an outdated copy of the library. Does your copy of colorutils.cpp have the `__attribute__((weak))` commented out at the end of line 357? It should be commented out – and the sketch should not compile if you’re not providing an XY function.

Can you post your entire code so I can see? (I understand if not, but I think that might be faster.)

I had two: XYshades() and XY(). Certain animations used certain functions. I combined them into one XY() and used if statements for direction and the blur2d() now works as expected.

I removed the XYshades() and combined it like this to the XY(): https://gist.github.com/bonjurroughs/70fe94289957e896ca32
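A combined XY() along those lines might look something like this sketch (the device names, dimensions, and wiring here are all hypothetical, not taken from the gist):

```cpp
#include <stdint.h>

// Hypothetical device selector: one sketch driving several LED layouts,
// all funneled through the single XY() that blur2d() resolves.
enum Device { MATRIX, SHADES, STRIP };
Device currentDevice = MATRIX;

const uint8_t kMatrixWidth = 16;  // hypothetical width

uint16_t XY(uint8_t x, uint8_t y) {
    if (currentDevice == STRIP) {
        return x;  // a strip is effectively one-dimensional
    }
    if (currentDevice == SHADES && (y & 0x01)) {
        // serpentine rows on the glasses (hypothetical wiring)
        return (uint16_t)y * kMatrixWidth + (kMatrixWidth - 1 - x);
    }
    return (uint16_t)y * kMatrixWidth + x;  // row-major otherwise
}
```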