Be forewarned: I am either an idiot or overthinking this. Thank you in advance for your forbearance.
How do I convert an int16_t into a uint8_t?
Specifically, I want to do something like this:
uint16_t i = 0;
for (;;)
{
  leds[x] = CHSV(colour, sin16(i), 255);
  i++;
}
i.e., fade the saturation of a pixel in and out in a sine wave pattern. The input to sin16() is an unsigned 16-bit int (with 0 representing 0 radians and 65535 representing how many radians?) and the output is a signed 16-bit int, which I want to turn into a uint8_t to plug into the CHSV's saturation. Will simply casting it do what I want - turning -32767 into 0 and 32767 into 255?
sin16 effectively takes a value from 0 to 2π radians, expressed fractionally as 0-65535 (e.g. 32768 would be "π radians") - it's a way to get fractional values without using floating point.
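To make the fractional-angle idea concrete, here's a tiny stand-alone C++ sketch (my own helper name, not part of FastLED) that maps the 0-65535 scale back to radians:

```cpp
#include <cstdint>

const double PI = 3.14159265358979323846;

// Map a uint16_t "fractional angle" (0..65535 covering 0..2*pi)
// back to radians: 0 -> 0, 16384 -> pi/2, 32768 -> pi, etc.
double frac_to_radians(uint16_t frac) {
    return (frac / 65536.0) * 2.0 * PI;
}
```

So incrementing a uint16_t and letting it wrap around naturally sweeps you through full sine cycles, with no floating point anywhere in the integer version.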
As for the output, what you can do is: ((sin16(i) >> 8) + 128) & 0xFF
What that's doing is effectively dividing the output of sin16 by 256, which gives you a value from -128 to 127; adding 128 shifts you to 0 to 255, and then masking off the bits keeps it in range (just to be sure - that last mask may not be entirely necessary, actually - head's a bit unclear today).
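Wrapped up as a helper (my naming - the shift/add/mask is the suggestion above), it looks like this. One caveat worth a comment: right-shifting a negative signed int is technically implementation-defined in pre-C++20 C++, though essentially every compiler you'll meet, including avr-gcc, does an arithmetic shift:

```cpp
#include <cstdint>

// Convert a sin16-style value (-32768..32767) to 0..255.
// (s >> 8) gives roughly -128..127 (arithmetic shift assumed),
// +128 moves that to 0..255, and & 0xFF clamps just in case.
uint8_t sin16_to_sat(int16_t s) {
    return (uint8_t)(((s >> 8) + 128) & 0xFF);
}
```

e.g. sin16_to_sat(-32768) gives 0, sin16_to_sat(0) gives 128, and sin16_to_sat(32767) gives 255 - a full-range saturation sweep.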
Slightly idiot - in my day job I operate at a much higher level and I've lost the ability to reason through such bit-twiddling :-).
I was originally going the other way around: adding 32767 to the output of sin16(), storing the result in a long, then shifting back down to a uint8_t. I'm not sure what I was doing wrong, but I was losing precision somewhere. This is obviously the way to do it.
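For what it's worth, the add-then-shift direction does work if the offset is 32768 (not 32767) and the arithmetic is done in a wide enough type - a sketch of that variant (again my naming, not the library's):

```cpp
#include <cstdint>

// Offset the sin16-style value to the unsigned range 0..65535 first,
// then keep the top byte. Widening to int32_t before adding sidesteps
// both overflow and the shift-of-negative question entirely.
uint8_t sin16_to_sat_wide(int16_t s) {
    return (uint8_t)(((int32_t)s + 32768) >> 8);
}
```

Using 32767 instead shifts the whole range down by one, which is the sort of off-by-one that shows up as lost precision at the endpoints.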
Is there a call for trig methods that natively return uint8_t values? It's all I ever seem to want to use.
"usin8(…)" is on my list for a soon-ish version of the library.
Because ditto.