
FastLED Temporal Dithering
FastLED users can easily scale the brightness and power consumption of their projects with FastLED.setBrightness(…). FastLED v2.1 now includes automatic “temporal dithering”, which helps preserve color and light output when the brightness is turned down. To take advantage of temporal dithering:

  • Code your animations as if they were going to run at normal full brightness; use the full range of 0…255 for RGB and HSV values as much as you wish.

  • Use FastLED.setBrightness( 0…255) to adjust the brightness of your whole animation. The FastLED temporal dithering will ‘kick in’ automatically as you lower the master brightness control.

  • In place of the standard “delay(…)” function, use “FastLED.delay(…)”. FastLED will use the ‘delay time’ to keep the LEDs refreshed with dithered pixel values. (A minimal sketch putting these three points together follows this list.)
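Putting those three points together, a minimal sketch might look like this (the pin, strip type, strip length, and timing here are just placeholder assumptions):

#include <FastLED.h>

#define NUM_LEDS 60      // placeholder strip length
#define DATA_PIN 6       // placeholder data pin

CRGB leds[NUM_LEDS];

void setup() {
  FastLED.addLeds<WS2812B, DATA_PIN, GRB>(leds, NUM_LEDS);
  FastLED.setBrightness(32);   // low master brightness; dithering engages
}

void loop() {
  uint8_t hue = millis() / 10;            // animate with the full 0…255 range
  fill_rainbow(leds, NUM_LEDS, hue, 7);
  FastLED.delay(20);   // shows the LEDs repeatedly while waiting
}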

This would require dithering to be left on, right?

Other details:

To disable temporal dithering, for POV or light painting projects, use
FastLED.setDither( 0 );

Temporal dithering has no effect at full brightness (255). It exists to preserve high quality color and accurate light output when the master brightness control is turned down to save power or manage brightness.

The more often your code calls FastLED.show(), or FastLED.delay(), the higher-quality the dithering will be, because FastLED can refresh the LEDs faster and more often.

If you are refreshing the LEDs less frequently (e.g., if you have hundreds of LEDs, or computationally intensive animations), and you are running at a low brightness level, you may see the dithered pixel output as flickering, and you may want to turn it off if the effect is distracting. It’s not magic; it’s up to you what looks good in your projects.

And yes, if you turn off dithering, the library reverts back to ‘flooring’ integer values, instead of dithering them.
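To illustrate the difference (this is just the idea, not the library’s internal code):

// Scaling an 8-bit value by an 8-bit brightness floors away the fraction:
uint8_t scale_floor(uint8_t value, uint8_t brightness) {
  return (uint16_t(value) * brightness) >> 8;   // e.g. 10 * 64 / 256 = 2.5 -> 2
}
// With dithering enabled, successive frames alternate between the two
// nearest outputs (2, 3, 2, 3, ...) so the average light output is ~2.5.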

(haha OOPS! The diagram says 42, when it should say 46 ! It’ll be corrected in the version on the wiki.)

Yeah, I already knew it wasn’t going to work for POV (got pictures to show) … I just wanted to make sure others knew that. Not like there’s a sudden influx of people building POV displays. :slight_smile:

such great work!

Do folks really use blocking delays much? I get a little angry with myself when I find em in my code…

Sometimes you have to. When you’re constantly calling LEDS.show() in a fast loop, you need to slow it down a bit. With my POV system, I slow things down by 10 usecs. I don’t actually have to, but I like being nice to the MCU. :slight_smile:

Why do you have to? I could see not wanting to call LEDS.show() faster than X, but you can put a non-blocking delay around that too.

PS. Apologies for hijacking a wonderful post about a cool feature…

Yes, one should always try to use non-blocking delays. That doesn’t mean that everyone knows how to, nor that it’s always worth rewriting code to do so. I already run all of my code with non-blocking delays, but there are several times where adding yet another check and the necessary variables adds too much overhead versus doing a quick delay(1) (or in my case _us_delay(10)).
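For anyone who hasn’t used the pattern, here’s a minimal millis()-based non-blocking timing sketch (the frame interval and updateAnimation() are hypothetical placeholders):

const uint32_t FRAME_INTERVAL_MS = 20;   // assumption: ~50 animation steps/sec
uint32_t lastFrame = 0;

void loop() {
  if (millis() - lastFrame >= FRAME_INTERVAL_MS) {
    lastFrame = millis();
    updateAnimation();   // hypothetical: advance the animation one step
  }
  FastLED.show();        // keep refreshing so dithering stays smooth
}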

Hmm, that explains the heavy flickering I’m seeing on one visualisation. I guess I can turn it on and off as required.

Mat: how often are you calling show()? How many updates per second, or put another way, how many ms per update? And are you using any delay(…) calls?

Flicker increases as update rate drops, and also as brightness drops.

Flicker decreases as updates get more frequent (and also as brightness increases).

The particular visualisation affected is an FFT spectrum analyzer which actually has no delays; it runs through a loop checking to see if there’s a new batch of samples ready. I disabled dithering and there was no effect. That said, the value for this comes from an analog sample, so it’s probably noise. I’m just surprised that it results in a particular dithering pattern that makes it look like dots/lines are crawling across the display. That’s what led me to think it was dithering related.

If I turn dithering off, my plasma animation definitely looks worse, so I’m getting good use out of your work!

How does this compare with the dithering in FadeCandy? In FadeCandy, dithering is used at lower brightness levels to make fading look smoother. Is this possible as well? From the above explanation I get that dithering is only applied when altering the master brightness (I would like it at the individual pixel level).

I’ve read the FadeCandy code, and you’re right that it’s being used for something different there. (We’ve had some interesting discussions about it with Micah, the creator of FadeCandy, too.)

I’ll reread the FC code again, and post a summary comparison because it’s interesting.

Both FC and FastLED take 8 bits of input for each pixel channel value. FC stretches it out so that the low end of the 8-bit range is ‘lower’ while still preserving dynamic range within it. In effect, it’s using 8-bit input as an index into a 16-bit lookup table (LUT), and then (trying to) use the 16 bit value as the LED brightness.

I say ‘trying to’ not to disparage the FC code (at all!), but rather because at the 400Hz update rate that FC uses, only 11 bits are useful. FastLED has a variable update rate (it’s up to you how often you call show() ), but I think we have broadly similar limits. FastLED has 8 bits of data per pixel, plus 8 bits of master brightness, but I think it’s nearly impossible to get more than 12 useful bits out of it, and calling it 10 or 11 is probably more correct.
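As a rough sketch of that LUT idea (this is the concept, not FadeCandy’s actual code):

#include <math.h>
#include <stdint.h>

uint16_t lut[256];   // 8-bit input -> 16-bit target brightness

void buildLut(float gamma) {
  for (int i = 0; i < 256; i++) {
    lut[i] = (uint16_t)(65535.0 * pow(i / 255.0, gamma) + 0.5);
  }
}

// usage: buildLut(2.5); then uint16_t target = lut[pixelValue];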

Given your previous comments and code with respect to gamma correction curves, let me think about this for a bit. ‘Long tail’ gamma correction is what you’re thinking about here, right?

Really what we’d want in an ideal world is 16-bit-per-channel pixel values (CRGB16). On a 48-, 72-, or 96- MHz ARM-based MCU, we can do useful stuff there – and we do plan to incorporate that into FastLED.

But on an 8- or 16-MHz AVR MCU we are literally out of clock cycles already doing what we’re doing now with FastLED v2.1: brightness control, color corrections, AND dithering, all still at maximum ‘wire speed’, and without requiring any additional SRAM usage.

Let me think about this all a bit. We are ‘out of clock cycles’ on AVR, but maybe this discussion can help us think about the fully-16-bit future on ARM.


Indeed, at the lower end you’ll see the steps when fading the LEDs (I use fixed-point Perlin noise), so it would be nice if the values in the lower range could be dithered. I use FadeCandy now and this method gives me the right appearance; however, I would like to be able to make a standalone work (without a PC).

I can understand that this is too much to ask of AVR; you already did an incredible job with all these implementations.

@Kasper_Kamperman : what does your preferred gamma curve look like these days? I like the idea of stretching the 8-bit input into a 10- or 11-bit range, but there aren’t enough cycles to use a lookup table; it’ll have to be a (simple) arithmetic approximation – but that might be good enough, and it’s worth exploring.
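For example, squaring the 8-bit value gives a gamma of roughly 2.0 in a 16-bit range, with no table and no extra SRAM – just a sketch of the kind of approximation I mean:

uint16_t gamma2_approx(uint8_t x) {
  return (uint16_t)x * x;   // 0..65025, tracking 65535 * (x/255)^2 closely
}

A curve nearer gamma 2.5 would need another multiply or two, and on AVR every cycle counts.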

@Mark_Kriegsman I still use the one in my code online:

However I recently also used this one: http://ledshield.wordpress.com/2012/11/13/led-brightness-to-your-eye-gamma-correction-no/

This also gives good results. Unfortunately the original site (neuroelec) is offline; it had some more code in the comments (an 8-bit table).

On this site I’ve found a formula as well, didn’t test it though (I’m on holiday, so I don’t have access to hardware):
http://www.picbasic.co.uk/forum/showthread.php?t=16187&s=8aa3f766376acc90e9e7cc6a1de4be9a&p=112225#post112225

I am trying to figure out how to increase/decrease brightness with a button press. Are there any examples, or can it be done?