Are there plans to include 12-bit/16-bit support in FastLED? I am currently using the Adafruit breakouts for the 12-channel 16-bit and 24-channel 12-bit controllers (TLC59711/TLC5947 chips) with the Adafruit libraries. Cycling through the entire colorspace with even color distribution on the 16-bit chip is really slow.
I wonder whether FastLED with 16bit colorspace would be an improvement.
I’d like to volunteer even though I’m lacking assembler expertise and am very new to hardware programming.
Is there a roadmap/TODO-list for a 12bit/16bit implementation? I’d grab things that I could do.
There are plans to support 16bit color - there’s some major rearchitecting work that I’m currently doing on a private branch that needs to be finished before I can start doing that, however. (Note: even when 16 bit support is in place, I am unlikely to directly support the tlc chips - see https://github.com/FastLED/FastLED/issues/18 for a discussion of why)
Well, the TLC59711 has 16 bit on 12 channels. From the link you provided I gather that 12-bit support is unlikely (TLC5940). But since you'd support 16-bit math, are there any other obstacles to the TLC59711?
It’s not about 12 vs 16 bit support - it’s about the actual chipset. (Even 12 bit chipsets will use 16bit CRGB objects when that implementation is done)
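A minimal sketch of what "12-bit chipsets using 16-bit CRGB objects" might look like, assuming a simple truncating shift (the struct and function names here are hypothetical, not FastLED's actual API):

```cpp
#include <cstdint>

// Hypothetical 16-bit pixel; FastLED's eventual CRGB16 may look different.
struct CRGB16 {
    uint16_t r, g, b;
};

// A 12-bit chipset only has 4096 grayscale steps, so a controller could
// keep the top 12 of the 16 bits and drop the rest when pushing data out.
static inline uint16_t to12bit(uint16_t v16) {
    return v16 >> 4;  // 0x0000..0xFFFF -> 0x000..0xFFF
}
```

The point being that the pixel data type stays 16-bit regardless of the chipset; only the output stage narrows it.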
Is the TLC59711 architecture comparable to the TLC5940's? Does it have the same (mis)features? E.g. the datasheet states that the TLC59711 has an “Internal/External Selectable GS Clock”.
I apologize for my questions; I’m a newcomer to hardware. So if that chip is out of the question for you, I won’t pester you with questions anymore - well, just one:
What 16bit chips do you recommend & support?
Basically, the TLC59xx series requires a lot more scaffolding to manage: there are control lines beyond just data/clock, and they all seem to require some level of timer/interrupt handling on the host MCU to drive them properly - things that I explicitly do not want the library juggling. It’s bad enough that I have to write platform-specific versions of the pin/SPI/UART code for every MCU I support; I don’t want to add interrupts/timers on top of that.
I’m not aware of any 16-bit LED chipsets out there at the moment (that I’m likely to put support in for). There’s the LPD1886, which is 12-bit and which I put support in for a while ago (though it is currently untested - and it also doesn’t appear that anyone is selling the LPD1886).
There’s also some benefit to doing RGB work in 16-bit even when pushing out to 8-bit LED chipsets, which is the biggest part of the reason why I want to eventually move to supporting 16-bit RGB. In theory we should also be able to (ab)use that extra data to better drive the dithering engine.
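One way the extra 8 bits could feed a dithering engine - purely a sketch of the idea, not how FastLED's actual dithering works - is a temporal dither that carries the low-byte error forward across frames:

```cpp
#include <cstdint>

// Hypothetical temporal-dither step: emit the high byte of a 16-bit
// channel value, accumulating the low byte as error so it spills over
// into later frames as occasional +1 bumps. (Ignores saturation at 255
// for brevity.)
static uint8_t ditherStep(uint16_t v16, uint16_t& acc) {
    uint8_t out = v16 >> 8;        // base 8-bit value
    acc += (uint8_t)(v16 & 0xFF);  // accumulate the low-byte error
    if (acc >= 256) {              // error reached a full 8-bit LSB:
        acc -= 256;                // keep the remainder...
        ++out;                     // ...and bump this frame's output
    }
    return out;
}
```

Averaged over many frames, the output then approximates the full 16-bit value rather than the truncated 8-bit one.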
Part of the problem, I suspect, is that everyone still appears to be enamored with WS281x-style data lines, which are slow (800 kHz). Moving to 16-bit data would effectively and immediately halve the pixel throughput on an already slow chipset. (Even the LPD1886 had this problem - though they tried to mitigate it a little by being 12-bit instead of 16-bit.)
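The arithmetic behind that claim, as a quick back-of-the-envelope helper (the 1.25 µs bit time follows from the 800 kHz rate; the function is just for illustration):

```cpp
#include <cstdint>

// At a WS281x-style 800 kHz data rate, each bit takes 1.25 us on the
// wire. Doubling the bit depth doubles the time per pixel, which halves
// the maximum refresh rate for the same strip length.
static double frameTimeMs(int numLeds, int bitsPerChannel) {
    const double kBitTimeUs = 1.25;            // 1 / 800 kHz
    double bitsPerLed = 3.0 * bitsPerChannel;  // three channels: R, G, B
    return numLeds * bitsPerLed * kBitTimeUs / 1000.0;
}
```

For a 100-LED strip, that's 3 ms per frame at 8 bits per channel versus 6 ms at 16 bits, before any inter-frame reset time.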
Well, I’m not enamoured with any of those chips (except SPARC perhaps, since that’s what I grew up with - but my mother has been dead for a long time now, and so, in processor years, SPARC seems to be too).
Which other control lines are you referring to? The block diagram in the TLC59711 datasheet just shows VCC, GND, DATA, and CLK.
I am currently using this chip with SPI, which doesn’t require more than a clock and a data line (besides VCC and ground), according to the datasheet.
Thanks a lot for your work.
Heh - SPARC is nothing compared to modern ARM processors, especially when you factor in power consumption.
If the TLC59711 is really better in terms of the amount of BS needed to drive it (which it looks like it may be), I’ll look into adding support for it! It looks like the TLC5947 and TLC59711 are much saner than the TLC5940 was, so I’ll revisit adding them. (I figured they were in line with the TLC5940, which looks like a pain in the ass to support.)
Thank you. To support these chips, 12-bit (TLC5947) and 16-bit (TLC59711) variants of the FastLED code are necessary. Modifying your clever inline assembler routines is far beyond me :-/
I should have something for you to try testing in a couple of days. (Hopefully)
Great! Here is an attempt at a port of hsv2rgb_rainbow() to 16 bit:
http://paste.debian.net/hidden/17c3f1d9/
I’ve verified that chunk by porting your code to Perl, expanding it to 16 bit, and then porting it back to C++ (wtf? well, it works for me).
Do you have a branch on github for 16bit stuff so I could submit pull requests?
BTW, there are some occurrences of the number 255 (and 171, 85, …) which could be turned into their respective #defines (K255, K171, …).
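A sketch of what that suggestion buys, assuming the constants are named once so a 16-bit port only has to change the defines (the scale helper is hypothetical, not part of FastLED):

```cpp
#include <cstdint>

// Name the rainbow break-point constants once instead of scattering
// magic numbers through the code. A 16-bit port can then rescale them
// in one place: an 8-bit value maps onto 16 bits as v * 257, since
// 255 * 257 == 65535.
#define K255 255
#define K171 171
#define K85  85

static inline uint16_t scale8to16(uint8_t v) {
    return (uint16_t)v * 257;  // 0 -> 0, 255 -> 65535
}
```

That keeps the 8-bit and 16-bit variants of hsv2rgb_rainbow() structurally identical, differing only in the constants.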
Tried your hsv2rgb_rainbow, and its color distribution is an eye opener and such an improvement! Thank you.
Glitches: uint8_t desat should be uint16_t desat at line 191.
I believe Mark has the beginnings of 16-bit versions of some of this. Here’s my rough plan for the ordering of things:
- supporting 12/16bit LEDs using 8bit CRGB data
- supporting 8 bit rgbw chipsets (this involves making a bunch of the code even more generic relative to the type of rgb data object used - and will be some of the heaviest lifting of all of this)
- creating/supporting 16-bit CRGB/CRGBW
At the end of this there will be at least four pixel data types - CRGB, CRGBW, CRGB16, and CRGBW16 - but I have a fair bit of work to do to make sure that this doesn’t require four copies of every piece of code.
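One conceivable way to avoid four copies of every routine - purely speculative, and not how FastLED is actually structured - is to parameterize the pixel type on its channel width and write the algorithms generically:

```cpp
#include <cstdint>

// Hypothetical generic pixel: the channel type is a template parameter,
// so one definition covers both the 8-bit and 16-bit variants.
template <typename T>
struct PixelRGB {
    T r, g, b;
};

using MyCRGB   = PixelRGB<uint8_t>;   // hypothetical stand-ins for
using MyCRGB16 = PixelRGB<uint16_t>;  // CRGB / CRGB16

// A generic operation written once for both widths.
template <typename T>
PixelRGB<T> averagePixels(const PixelRGB<T>& a, const PixelRGB<T>& b) {
    // Widen before adding so narrow channels don't overflow mid-sum.
    return { (T)(((uint32_t)a.r + b.r) / 2),
             (T)(((uint32_t)a.g + b.g) / 2),
             (T)(((uint32_t)a.b + b.b) / 2) };
}
```

RGBW types could follow the same pattern with a fourth channel, at the cost of some template machinery in the controller code.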
…and all instances of uint8_t twothirds and brightness_floor need to be changed as well.
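For anyone following along, the class of bug behind these glitches is silent truncation - in a 16-bit port, intermediates like desat can exceed 255, so a leftover uint8_t declaration quietly drops the high byte (illustrative values only; the variable names follow the paste, not FastLED itself):

```cpp
#include <cstdint>

// 65535 - 30000 = 35535, which does not fit in 8 bits.
uint8_t  desat8  = (uint8_t)(65535 - 30000);   // truncates to the low byte
uint16_t desat16 = (uint16_t)(65535 - 30000);  // keeps the full value
```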