Curiosity is getting the best of me here, but I’m wondering. On an RGB LED [1], we all know and accept that some colors will be brighter than others; it’s the nature of the chemistry. Now, most, if not all, manufacturers will also tell you what the mcd rating is for their product. For example, I received several samples with my LED reel today. Let me post their values:

P/N R3528UBGRW-B-
R: 125-150mcd
G: 420-504mcd
B: 100-120mcd

P/N R3528UBGRW-BH-
R: 220-265mcd
G: 950-1140mcd
B: 200-240mcd

P/N R3535URGBBW-2.8B
R: 550-650mcd
G: 1400-1600mcd
B: 300-400mcd

P/N L5050URGBW-2.8B
R: 700-800mcd
G: 1800-2000mcd
B: 400-500mcd

This begs the question: can those values be used to “tune” what FastLED is feeding a string or panel, assuming one knows what the values are?

For example, between green and blue, all four parts are in the same ballpark: blue comes in at roughly 21-24% of green’s brightness. But between red and green, it’s a different story: red ranges from 23% to 39% of green, and that’s a big swing. I can imagine colors on one will not look the same as on the other.

So what if FastLED could adjust for that and get a better ratio between the three colors, based on the manufacturer’s values? Is that even possible?

I’m not talking about a single string/panel that has different types of LEDs, I’m talking of two completely different strings/panels, each using a separate leds[] array. Can one be adjusted so it matches the other more closely?

I’ll post pictures of these things later so you can see what they are, in case you don’t want to hit Google. :slight_smile:

[1] In reality, this doesn’t only apply to LEDs but for the sake of this group, we’ll limit it to that.

@Ashley_M_Kirchner No, you’re a man just slightly ahead of his time. I’ll say more in a bit.

I hate it when my brain runs too far ahead of me … hard to catch up to.

Please forgive my feverish friend. The Nyquil. The avr-gcc. You know how it is.

But Ashley is right: RGB LEDs are not innately color-balanced, and if you want real control and consistency, you want to adjust for that.

When I’m feeling lazy about it, I typically just cut the green in half, and call it close enough. Obviously, real math is the right answer, though.

Oh, don’t even get me started on avr-gcc. Like when it decides that, just because you’ve updated the value of a variable in a particularly tight and timing-sensitive bit of code, you don’t really need to be using that updated value, so it will just save you the time and let you keep using the original value.

You know, for laughs.

Oh I came across that more often than I’m willing to admit. Persnickety little bugger.

Was that value in the timing-sensitive code being updated by an interrupt by chance? If so, be sure to add the “volatile” keyword, otherwise the compiler won’t know the variable is being modified by a separate “thread” and will optimize accordingly.

@Tod_Kurt : Yeah, we’ve had to use volatile all over the place, and also make sure we get every asm input constraint, every asm output constraint, and every clobber code just exactly right. Even then, we’ve moved into the land of pushing-the-hardcoded-limits-of-gcc. Who even knew there was a hardcoded limit to how many arguments you could pass to an ‘asm’ block? (And who knew we’d be crazy enough to try things that would cause us to hit that limit… Well, OK, that one was more predictable. :smiley: )

No, the value was in a cycle-counted inner loop for pushing out LED data, where I had already disabled interrupts. (The cycle counting isn’t for optimization; I need cycle-accurate timing: only 10 cycles per bit at 8 MHz, and we’re packing quite a bit into those 10 cycles. :slight_smile: ) It was basically a failure of gcc’s register/variable allocation when crossing between C code and asm blocks, which I only needed to do because gcc has an arbitrary limit of 30 operands in asm blocks, and read/write operands take up two slots. Tell me how that is an intelligent decision when there are platforms out there with 32 or more registers!

N.B. if writing performance-optimized code, you want to be careful about littering volatile around, as it basically kills any register placement of your values, which on some platforms (looking at you, ARM) can make your timing extra wiggly because memory access times aren’t deterministic.

But, what I’m doing is likely considered abuse in some circles and doesn’t even get glanced at by the bulk of what people are doing these days, so I’m not surprised the issues I hit up against in gcc aren’t a higher priority for getting fixed.

(This reply brought to you by NyQuil and trying to decide what else I can squeeze into the 14 cycles/pixel that I still have open before unleashing this next update)