Hi, I posted a question about displaying OPC files using FastLED and didn’t get any answers, so I thought I would look for a different solution.

Basically I want to play animations from my Arduino onto my LED strip.

I have done this many times with PixelPushers and also following the Adafruit DotStar tutorials.

But I would prefer to use FastLED to send data to the pixels.

Are there any examples of displaying video / OPC / or any other moving images from SD card to the LEDs?

Thanks,

Phil

Some folks here have done that - they may be able to pipe up with code - but most people using FastLED are using it to generate the animations on the fly on the microcontroller, not simply to act as a display buffer for OPC/SD card/etc… - and that’s what @Mark_Kriegsman and I have focused on with the library and example code.

I believe folks who do POV type stuff (eg @Ashley_M_Kirchner_No ) have probably done code that streams images off of SD card.

I have! Doing POV is a relatively easy task, in theory: read the data into a buffer, push the buffer out to the string. How that’s done is what determines how well the POV works.

Reading an “animation” still breaks down into reading individual frames of that animation and pushing that data out to the strip (or strips). The number of pixels and the controller used will determine how fast you can push that data out and how “seamless” your animation will look.

@Ashley_M_Kirchner_No this sounds exactly like what I want to do.

I currently use this example.

But it’s not very flexible and currently only works with APA102 LEDs.

Do you have example code that would get me going?

Also, what file format do you use to hold the pixel data?

Cheers.

Phil

Keep in mind I was displaying static images, not a video stream or animation. That would be a more involved process.

However, to answer your questions: I didn’t use any general format; I created my own. I pre-processed images into the data I wanted, which let me take any random image - JPG, PNG, GIF, etc. - and turn it into the specific RGB data I needed. Essentially I read each individual pixel, extracted its RGB values, and saved that into a single stream of data in a new file that I then put on the SD card. I also kept a “control” file that the program would read to know which images were on the SD card and what order to read and display them in.
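A minimal sketch of that pre-processing step, as I understand it: with a real image you’d first decode the JPG/PNG/GIF with an image library; here the pixels are assumed to already be decoded into memory, and the (hypothetical) `writeRawRgb` helper just flattens them into the raw R,G,B byte stream described above.

```cpp
#include <cstdio>
#include <cstdint>
#include <vector>

// One decoded pixel. In a real pre-processor these would come from an
// image decoding library; here they are supplied directly.
struct Pixel { uint8_t r, g, b; };

// Writes pixels as a flat stream of R,G,B bytes - the "sanitized" format
// that can later be read straight into an LED frame buffer.
bool writeRawRgb(const char* path, const std::vector<Pixel>& pixels) {
    FILE* f = fopen(path, "wb");
    if (!f) return false;
    for (const Pixel& p : pixels) {
        uint8_t bytes[3] = { p.r, p.g, p.b };
        if (fwrite(bytes, 1, 3, f) != 3) { fclose(f); return false; }
    }
    fclose(f);
    return true;
}
```

Each pixel costs exactly 3 bytes on disk, so the file size is simply pixel count × 3.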

When it came time to read the data back, it was already “sanitized” and in the exact format I needed, so I could read it directly into the FastLED leds array and push it out. I didn’t need anything “in between” to re-parse the data for me. That made it so much faster: all I was doing was reading data and pushing it out. The bottleneck became the SD reads themselves. The SD buffer is only so big, so every few read cycles it would need to refill, and that took a few extra microseconds. But nothing that was visible to the naked eye.
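That “no re-parsing” read can be sketched like this, run host-side with C stdio rather than the Arduino SD library. The `CRGB` struct here is a stand-in for FastLED’s `CRGB`, which is likewise three bytes (r, g, b), so a single `fread()` fills a whole frame buffer in one go:

```cpp
#include <cstdio>
#include <cstdint>

// Stand-in for FastLED's CRGB: three packed bytes per pixel, matching the
// raw R,G,B layout of the file, so no conversion is needed after reading.
struct CRGB { uint8_t r, g, b; };

// Reads one frame (numLeds pixels, 3 bytes each) straight into the LED
// buffer. Returns the number of whole pixels actually read.
size_t readFrame(FILE* f, CRGB* leds, size_t numLeds) {
    return fread(leds, sizeof(CRGB), numLeds, f);
}
```

Because the on-disk layout matches the in-memory layout, the buffer is ready for `FastLED.show()` the moment the read returns.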

As for code, I’d have to dig it up for you. I’m house sitting this whole week and only traveling with my laptop and that code is buried at home. However, in its simplest form, the loop did something similar to this:

  • open the file on the SD card
  • read in a specific amount of data (I was reading in the exact amount of pixel data I needed for the length of strip I was using)
  • push the buffer out to the strip and call FastLED.show()
  • cycle back to read the next set of data

You have to add checks for:

  • did the file open OK, or did you encounter an error? And if you hit an error, then what?
  • did you reach the end of the file? And if so, then what: rewind back to the beginning, or go to the next file?
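Folding those checks into the loop might look something like this host-side sketch (on the Arduino you’d use the SD library’s `File` and call `FastLED.show()`; `showFrame()` below is just a placeholder, and `NUM_LEDS` is kept small to keep the example short):

```cpp
#include <cstdio>
#include <cstdint>

// Stand-in for FastLED's CRGB struct (3 bytes per pixel).
struct CRGB { uint8_t r, g, b; };

const size_t NUM_LEDS = 2;   // tiny for the example; 400 in a real project
CRGB leds[NUM_LEDS];

void showFrame() { /* placeholder for FastLED.show() */ }

// Plays `frames` frames from a raw RGB file, rewinding at end of file.
// Returns false if the file could not be opened - the "then what?" here is
// simply to bail out; a real sketch might fall back to a built-in pattern.
bool playFile(const char* path, int frames) {
    FILE* f = fopen(path, "rb");
    if (!f) return false;                          // open failed
    for (int i = 0; i < frames; ++i) {
        size_t got = fread(leds, sizeof(CRGB), NUM_LEDS, f);
        if (got < NUM_LEDS) {                      // hit end of file...
            rewind(f);                             // ...loop back to the start
            if (fread(leds, sizeof(CRGB), NUM_LEDS, f) < NUM_LEDS) break;
        }
        showFrame();                               // FastLED.show() for real
    }
    fclose(f);
    return true;
}
```

The rewind-on-EOF branch is what makes a short file loop forever; going to the next file instead would mean consulting the control file at that point.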

Probably other checks that I’ve forgotten. Oh, since I also had the control file open, I needed sanity checks for that too. Then there’s the ‘stop’ button that would instantly terminate the display, even if it’s in the middle of displaying an image. There was a lot going on.

But as I said, this was static images, not video. It was for a POV project. Towards the end of development, I took a series of photographs of me spinning the POV stick. The first 8 were dynamically generated in code (so that the end user would always have something to display if there’s no SD card present); the rest are all static images repeated over and over again as I swung the stick around.

I should revisit this project. LEDs have gotten smaller and I can increase the density for a higher quality image display. Hrm …

This is all good info. Since I’m already using data from OPC files in my other project, I think I’ll try to figure out the file structure and then follow steps similar to yours. I’m only pushing about 400 pixels, so the data isn’t going to be huge.

Cheers

400 pixels, 3 bytes each, that’s 1,200 bytes per frame. If you’re wanting to push 30 frames a second, that’s 36,000 bytes every second. Shouldn’t be that hard.
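That back-of-the-envelope math as a tiny helper (3 bytes per RGB pixel, frames per second as given):

```cpp
#include <cstdint>

// Bytes needed for one frame of an RGB strip: 3 bytes per pixel.
constexpr uint32_t bytesPerFrame(uint32_t numPixels) {
    return numPixels * 3;
}

// Sustained read rate needed from the SD card for a given frame rate.
constexpr uint32_t bytesPerSecond(uint32_t numPixels, uint32_t fps) {
    return bytesPerFrame(numPixels) * fps;
}
```

For 400 pixels at 30 fps that works out to 1,200 bytes per frame and 36,000 bytes per second - well within what an SD card can deliver.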