Keep in mind I was displaying static images, not a video stream or animation. That would be a more involved process.
However, to answer your questions: I didn’t use any general format, I created my own. I pre-processed images into the data I wanted. This allowed me to take any random image (JPG, PNG, GIF, etc.) and process it into the specific RGB data I wanted. Essentially I read each individual pixel, extracted its RGB values, and saved that into a single stream of data in a new file that I then put on the SD card. I also kept a “control” file that the program would read to learn which images were on the SD card and what order to display them in.
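If you want to do the same kind of pre-processing yourself, a quick-and-dirty desktop tool could look something like the sketch below. This is just an illustration, not the tool I actually used; it assumes the single-header stb_image library to decode the image, then dumps the raw R,G,B bytes into a new file that you’d copy onto the SD card.

```cpp
// Quick-and-dirty desktop pre-processor (illustration only): decode any
// common image format with stb_image and dump its pixels as one flat
// stream of R,G,B bytes, ready to copy onto the SD card.
#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"

#include <cstdio>

int main(int argc, char **argv) {
    if (argc != 3) {
        std::fprintf(stderr, "usage: %s input.(jpg|png|gif) output.rgb\n", argv[0]);
        return 1;
    }

    int w, h, channels;
    // Ask for exactly 3 channels so every pixel comes back as R,G,B.
    unsigned char *pixels = stbi_load(argv[1], &w, &h, &channels, 3);
    if (!pixels) {
        std::fprintf(stderr, "could not decode %s\n", argv[1]);
        return 1;
    }

    std::FILE *out = std::fopen(argv[2], "wb");
    if (!out) {
        stbi_image_free(pixels);
        return 1;
    }

    // One long run of bytes: R,G,B for pixel 0, R,G,B for pixel 1, and so on.
    std::fwrite(pixels, 1, (size_t)w * h * 3, out);

    std::fclose(out);
    stbi_image_free(pixels);
    return 0;
}
```

The control file then just lists those output files in the order you want them displayed.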
When it came time to read the data, it was already “sanitized” and in the exact format I needed, so I could read it directly into the FastLED leds array and push it out. I didn’t need anything “in between” to re-parse the data for me. That made it so much faster; all I was doing was reading data and pushing it out. The bottleneck became the SD reads themselves. The SD buffer is only so big, so every few read cycles it would need to refill, which took a few extra microseconds, but nothing visible to the naked eye.
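The reason it can go straight into the leds array is that FastLED’s CRGB is just three bytes (r, g, b) back to back, which is exactly the layout of the pre-processed file. In rough terms (leds, NUM_LEDS, and img are placeholder names, not my actual code):

```cpp
CRGB leds[NUM_LEDS];   // FastLED's pixel buffer: 3 bytes per LED (r, g, b)

// sizeof(CRGB) == 3, so NUM_LEDS * 3 bytes from the file fill it exactly,
// with no parsing or conversion in between.
img.read((uint8_t *)leds, NUM_LEDS * sizeof(CRGB));
```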
As for the full code, I’d have to dig it up for you. I’m house-sitting all this week with only my laptop, and that code is buried at home. However, in its simplest form, the loop did something similar to this:
- open the file on the SD card
- read in a specific amount of data (I was reading in exactly the amount of pixel data I needed for the length of strip I was using)
- push that buffer out to the strip and call FastLED.show()
- cycle back to read the next set of data
You have to add checks for:
- did the file open OK, or did you hit an error? And if you hit an error, then what?
- did you reach the end of the file? If so, then what? Rewind back to the beginning, or move on to the next file?
Probably other checks that I’ve forgotten. Oh, since I also had the control file open, I needed sanity checks for that too. Then there was the ‘stop’ button that would instantly terminate the display, even if it was in the middle of displaying an image. There was a lot going on.
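From memory, a bare-bones skeleton of that loop looks roughly like this. It’s a rough reconstruction, not my actual code (the LED count, pins, chipset, and file name are all placeholders), and it only covers the open-failure and end-of-file checks from the list above; the control file handling and the stop button would sit on top of it.

```cpp
#include <SD.h>
#include <FastLED.h>

#define NUM_LEDS   144        // placeholder: pixels per strip / image column
#define DATA_PIN   6          // placeholder: LED data pin
#define SD_CS_PIN  4          // placeholder: SD card chip-select pin

CRGB leds[NUM_LEDS];
File img;

void setup() {
  FastLED.addLeds<WS2812B, DATA_PIN, GRB>(leds, NUM_LEDS);

  if (!SD.begin(SD_CS_PIN)) {
    // No SD card: fall back to a built-in pattern, or just stop here.
    while (true) {}
  }

  img = SD.open("image1.rgb");   // placeholder file name
}

void loop() {
  // Did the file open OK? If not, decide what to do (skip it, halt, retry...).
  if (!img) {
    return;
  }

  // Read exactly one strip's worth of pixel data straight into the leds array.
  int got = img.read((uint8_t *)leds, NUM_LEDS * sizeof(CRGB));

  if (got < (int)(NUM_LEDS * sizeof(CRGB))) {
    // End of file: rewind to the beginning (or close this file and open
    // the next one listed in the control file).
    img.seek(0);
    return;
  }

  // Push the buffer out to the strip, then cycle back for the next chunk.
  FastLED.show();
}
```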
But as I said, this was static images, not video. It was for a POV project. Towards the end of development, I took a series of photographs of me spinning the POV stick: the first 8 images were dynamically generated in code (so that the end user would always have something to display if there’s no SD card present); the rest are all static images repeated over and over again as I swung the stick around.
I should revisit this project. LEDs have gotten smaller and I can increase the density for a higher quality image display. Hrm …