RGB LED - Colour Values - hex

I have a series of RGB LEDs hooked up to my Arduino board, and I'm trying to change their colour values. The only problem is that I can't seem to find anything on a hex-to-RGB converter.
Also, the RGB values aren't like the conventional values you get, like (255, 255, 255) = white. They appear to be in some kind of byte format (e.g. 0x0ff), which I'm not familiar with at all.
Could someone point me in the right direction on how to convert a hex colour like '9cb261' into RGB byte values?
Thanks

Hex is just a shorthand way of writing the same numbers, in a format that's a little easier to read if you're concerned about which bits are set and which are not.
The number "255" tells you that there are 2 "100s", 5 "10s", and 5 "1s". Put another way, it's 2 "10^2", 5 "10^1", and 5 "10^0".
Hex is the same idea, but instead of 10 we use 16 as the base. Since each place can now hold values from 0 to 15, we add the characters a-f after 0-9.
Using a short example, "9c" means 9 instances of "16^1" plus c (12) instances of "16^0". This yields 144 + 12, or 156.
The "0x" prefix just tells you that the following string is to be interpreted as a hex string.
To break apart your example, the hex color 9cb261 is just 3 bytes (9c, b2, 61).
If we convert the bytes back to decimal, it's (9*16+12, 11*16+2, 6*16+1), or (156, 178, 97).
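In code, the split is a one-liner; for example, in Python (hex_to_rgb is just an illustrative name):

```python
def hex_to_rgb(hex_str):
    """Split a 6-digit hex colour into its (R, G, B) byte values."""
    hex_str = hex_str.lstrip('#')
    # Each channel is a two-digit slice, parsed in base 16.
    return tuple(int(hex_str[i:i + 2], 16) for i in (0, 2, 4))

print(hex_to_rgb('9cb261'))  # (156, 178, 97)
```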
There's a full write-up at Wikipedia's Hexadecimal article.

Related

Why does RGB use 6 hex digits?

I understand that RGB encodes a color with two hex digits for each of the Red, Green, and Blue components. For instance, #ff0000 is pure red. And I understand that each hex digit represents a number from 0-15, or 4 bits of information. But how is it possible to represent every color with just 24 bits? Why use two digits each for Red, Green, and Blue? Why aren't there, for instance, three digits per component?
I don't really know if any of this is related to hardware, or whether we can even produce colors in such detail, but a third hex digit per component would give us colors with 4096 times more detail (16^3 = 4096, so if we have X color combinations currently, we'd have 4096 * X). We simply don't need that: the human eye won't be able to notice the difference even in our current color system, let alone one much bigger.
So you'd be sacrificing efficiency for nothing, like watching a movie at 600 frames per second when your eye can only process 60.

Why are colors represented by hexadecimal values in CSS? Is there an historical explanation?

In some programming languages, colors are represented by hexadecimal values. For example, using CSS, to change the text color of a header to maroon-ish, you could type:
h1 {
color: #8B1C62;
}
I'm wondering what the reason is for using a base-16 numeral system to represent colors. You could hypothetically use any numeral system to represent the same values, no?
When did this convention start? Does anybody know where I can read about the history of this phenomenon?
The primary use of hexadecimal notation is as a human-friendly representation of binary-coded values in computing and digital electronics.
Each hexadecimal digit represents 4 bits, i.e. half a byte.
A byte can hold values from 0 to 255 in decimal, but it is easier to read as two hexadecimal digits, from 00 to FF.
A 6-digit color code therefore holds 256 x 256 x 256 combinations of red, green, and blue (8-bit RGB).
Read more about color, color spaces, and hexadecimal:
http://www.smashingmagazine.com/2012/10/04/the-code-side-of-color/
http://en.wikipedia.org/wiki/Hexadecimal
http://en.wikipedia.org/wiki/RGB_color_model
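These relationships are easy to check, for example with Python's hex literals:

```python
# One hex digit covers 4 bits (0-15); two digits cover a full byte (0-255).
assert 0xF == 15
assert 0xFF == 255
# Three byte-sized channels give the 16,777,216 colors of 8-bit RGB.
assert 256 ** 3 == 16777216 == 0xFFFFFF + 1
print("all checks pass")
```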
You can certainly represent colors in any numeral system. Here's what your maroon-ish color looks like in various different systems:
Binary: 10001011 00011100 01100010
8 bits each for red, green, and blue. That's nice, but who wants to type all those numbers?
Decimal: 9116770
Fewer numbers to type, but how do you manipulate R, G, and B individually? And it feels kind of weird to refer to a color as nine million, one hundred sixteen thousand, seven hundred seventy.
Hexadecimal: 8B 1C 62
Even fewer numbers to type, and we can manipulate R, G, and B easily. Seems like a good candidate for representing colors, but let's try one more.
Base-256: ï [^\] b
Nice: we only have to type one character per color component. But I can never remember what number comes after ï or before the file separator control code, so I'd have to whip out the ASCII table every time I write or read a color. But what if we wrote the components in decimal instead?
Base-256, redux: 139,28,98
Much nicer. Not too many characters to type, and it's very clear which numbers represent R, G, and B.
Thus...
The two common ways to represent color values are hexadecimal and base-256-ish, because... it's easy!
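A small Python illustration of the trade-off: hex slices cleanly into channels, while the packed decimal form needs shifts and masks to recover the same parts.

```python
color = 0x8B1C62          # the maroon-ish color, as one packed integer
assert color == 9116770   # the same number written in decimal

# In hex, each channel is a visible two-digit slice; from the decimal
# form you need bit shifts and masks to pull the channels back out.
r = (color >> 16) & 0xFF
g = (color >> 8) & 0xFF
b = color & 0xFF
print(r, g, b)  # 139 28 98
```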

How do browsers handle rgb(percentage); for strange numbers

This is related to CSS color codes:
For hex codes, we can represent 16,777,216 colors, from #000000 to #FFFFFF.
According to W3C Specs, Valid RGB percentages fit in a range from (0.0% to 100.0%) essentially giving you 1,003,003,001 color combinations. (1001^3)
According to the specs:
Values outside the device gamut should be clipped or mapped into the gamut when the gamut is known: the red, green, and blue values must be changed to fall within the range supported by the device. User agents may perform higher quality mapping of colors from one gamut to another. For a typical CRT monitor, whose device gamut is the same as sRGB, the four rules below are equivalent:
I'm doubtful that browsers can actually render all these values (but if they do, please tell me and ignore the rest of this post).
I'm assuming there's some mapping from rgb(percentage) to hex (but again, I'm not really sure how this works).
Ideally I'd like to find out the function rgb(percentage) -> HEX.
If I had to guess, it would probably be one of these three:
1) Round to the nearest HEX
2) CEIL to the nearest HEX
3) FLOOR to the nearest HEX
Problem is, I need the mapping to be accurate, and I have no idea where to search.
There's no way my eyes can differentiate color at that level, but maybe there's some clever way to test each of these 3.
It might also be browser dependent. Can this be tested?
EDIT:
Firefox seems to round from empirical testing.
EDIT:
I'm looking through Firefox's source code right now,
nsColor.h
// A color is a 32 bit unsigned integer with four components: R, G, B
// and A.
typedef PRUint32 nscolor;
It seems Firefox only has room for 256 values for each of R, G, and B, hinting that rounding might be the answer, but maybe something's being done with the alpha channel.
I think I found a solution for Firefox anyway; thought you might like a follow-up:
Looking through the source code I found a file:
nsCSSParser.cpp
For each rgb percentage, it does the following: it takes the percentage component, multiplies it by 255.0f, stores it in a float, and passes it to the function NSToIntRound. The result of NSToIntRound is stored in an 8-bit integer before being combined with the other two components and an alpha channel.
Looking for more detail on NSToIntRound:
nsCoord.h
inline PRInt32 NSToIntRound(float aValue)
{
return NS_lroundf(aValue);
}
NSToIntRound is a wrapper function for NS_lroundf
nsMathUtils.h
inline NS_HIDDEN_(PRInt32) NS_lroundf(float x)
{
return x >= 0.0f ? PRInt32(x + 0.5f) : PRInt32(x - 0.5f);
}
This function is actually very clever; it took me a while to decipher (I don't have a strong C++ background).
Assuming x is positive, it adds 0.5f to x and then casts to an integer.
If the fractional part of x was less than 0.5, adding 0.5 doesn't change the integer part and the fraction is truncated;
otherwise the integer part is bumped by 1 and the fraction is truncated.
So each component's percentage is first multiplied by 255.0f,
then rounded and cast to a 32-bit integer,
and finally cast again to an 8-bit integer.
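Here's that pipeline sketched in Python (the names are mine; the real code is the C++ quoted above, and the percentage is taken here as a 0.0-1.0 fraction):

```python
def ns_to_int_round(x):
    # Mirrors NS_lroundf: add or subtract 0.5, then truncate toward zero.
    return int(x + 0.5) if x >= 0.0 else int(x - 0.5)

def percent_to_byte(fraction):
    # The percentage component, as a 0.0-1.0 fraction, scaled by 255.0
    # and rounded, matching the nsCSSParser.cpp steps described above.
    return ns_to_int_round(fraction * 255.0)

print(percent_to_byte(0.5))  # 128: 127.5 rounds up
print(percent_to_byte(1.0))  # 255
```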
I agree with most of you that say this appears to be a browser dependent issue, so I will do some further research on other browsers.
Thanks a bunch!
According to W3C Specs, Valid RGB percentages fit in a range from (0.0% to 100.0%) essentially giving you 1,003,003,001 color combinations. (1001^3)
No, more than that, because the precision is not limited to one decimal place. For example, this is valid syntax:
rgb(23.456% 78.90123456% 0%)
The reason for this is that, while 8 bits per component is common (hence hex codes), newer hardware supports 10 or 12 bits per component, and wider-gamut colorspaces need more bits to avoid banding.
This bit-depth agnosticism is also why newer CSS color specifications use a 0 to 1 float range.
Having said which, the CSS Object Model still requires color values to be serialized at 8 bits per component. This is going to change, but the higher-precision replacement is still being discussed in the CSS working group. So for now, browsers don't let you get more than 8 bits per component of precision.
If you are converting a float or percentage form to hex (or to a 0-255 integer), the correct method is rounding. Floor or ceiling will not space the values evenly at the top or bottom of the range.
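A quick way to see the uneven endpoints, assuming the question's 0.1% steps:

```python
import math

# 0.0% to 100.0% in 0.1% steps, as fractions: 1001 values in all.
steps = [i / 1000 for i in range(1001)]

def bucket_count(fn, target):
    # How many percentage steps land on byte `target` after scaling to 0-255?
    return sum(1 for p in steps if fn(p * 255) == target)

# Compare the size of the endpoint buckets (byte 0 and byte 255).
floor_ends = (bucket_count(math.floor, 0), bucket_count(math.floor, 255))
round_ends = (bucket_count(round, 0), bucket_count(round, 255))
print(floor_ends)  # (4, 1): floor starves the top of the range
print(round_ends)  # (2, 2): rounding gives both ends equal buckets
```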

In CSS, can HSL values be floats?

The CSS3 spec only specifies that:
The format of an HSLA color value in the functional notation is 'hsla(' followed by the hue in degrees, saturation and lightness as a percentage, and an <alphavalue>, followed by ')'.
So am I to understand that these values would be interpreted not as integers but as floats? Example:
hsla(200.2, 90.5%, 10.2%, .2)
That would dramatically expand the otherwise small (relative to RGB) range of colors covered by HSL.
It seems to render fine in Chrome, though I don't know if they simply parse it as an int value or what.
HSL values are converted to RGB values before they are handed off to the system; it's up to the device to clip any resulting RGB value that falls outside the "device gamut" (the range of colors the device can display) to a displayable value. The spec gives browsers an algorithm for converting HSL values to RGB, quoted below. Rounding behavior is not specified by the standard, and there is more than one reasonable way to round, so implementations may differ.
HOW TO RETURN hsl.to.rgb(h, s, l):
SELECT:
l<=0.5: PUT l*(s+1) IN m2
ELSE: PUT l+s-l*s IN m2
PUT l*2-m2 IN m1
PUT hue.to.rgb(m1, m2, h+1/3) IN r
PUT hue.to.rgb(m1, m2, h ) IN g
PUT hue.to.rgb(m1, m2, h-1/3) IN b
RETURN (r, g, b)
From the proposed recommendation
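As a sanity check, here's the spec's pseudocode transcribed into Python (the function names are mine; h, s, and l are fractions in 0..1, so divide a CSS hue by 360 first):

```python
def hue_to_rgb(m1, m2, h):
    # The spec's hue.to.rgb helper, used for each channel.
    if h < 0: h += 1
    if h > 1: h -= 1
    if h * 6 < 1: return m1 + (m2 - m1) * h * 6
    if h * 2 < 1: return m2
    if h * 3 < 2: return m1 + (m2 - m1) * (2 / 3 - h) * 6
    return m1

def hsl_to_rgb(h, s, l):
    # Mirrors hsl.to.rgb above: compute m1/m2, then each channel.
    m2 = l * (s + 1) if l <= 0.5 else l + s - l * s
    m1 = l * 2 - m2
    return (hue_to_rgb(m1, m2, h + 1 / 3),
            hue_to_rgb(m1, m2, h),
            hue_to_rgb(m1, m2, h - 1 / 3))

print(hsl_to_rgb(0, 1, 0.5))  # (1.0, 0.0, 0.0): pure red
```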
In other words, you should be able to represent the exact same range of colors in HSLA as you can represent in RGB using fractional values for HSLA.
AFAIK, every browser casts them to ints. Maybe. If I'm wrong, you won't be able to tell the difference anyway. If it really matters, why not take screenshots and open them in Photoshop, or use an on-screen color meter? Nobody here is going to have a definitive answer without testing it, and it takes two minutes to test... so...
I wouldn't know exactly, but it makes sense to just put in some floating-point numbers and see if it works. It takes two seconds to try with a decimal and without.

Linearly increasing color darkness algorithm

I want to write a function in Ruby that, given a number between 1 and 500, outputs a 6-digit hex color code that gets linearly darker for higher numbers. This doesn't seem that hard, but I'm not sure where to begin. How can I implement this?
edit
Hue seems like a more reliable way to go. I'd like to give a reference color, say a shade of green, and then darken or lighten it based on the input number.
input: 10
output: color code (in rgb or HSV) that is a light shade of the reference color
input: 400
output: color code (in rgb or HSV) that is a fairly dark shade of the reference color
edit 2
The only reason I need to use between 1 and 500 is because that's the input I have to work with. It's alright if some numbers that are close together map to the same color.
The 6 digit hex color code is in RGB.
What you want is to work in HSV: pick a Hue and Saturation, and gradually decrease the Value.
Convert from HSV to RGB to output the color.
See here for an example.
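A minimal sketch of that approach in Python, using the standard library's colorsys (the 1-500 input range and the linear ramp are from the question; darken is my name):

```python
import colorsys

def darken(rgb, n, n_max=500):
    # Scale V in HSV by how far n is through the 1..n_max range:
    # n = 1 keeps full brightness, n = n_max goes to black.
    r, g, b = (c / 255 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    v *= (n_max - n) / (n_max - 1)
    r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v)
    return '#%02x%02x%02x' % tuple(round(c * 255) for c in (r2, g2, b2))

print(darken((156, 178, 97), 1))    # '#9cb261': the reference shade of green
print(darken((156, 178, 97), 500))  # '#000000': fully dark
```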
Basic linear interpolation?
// Pseudocode
function fade_colour(source, factor)
    const max = 500
    const min = 1
    foreach component in source
        output[component] = round(source[component] * (max - factor) / (max - min))
    endforeach
    return output
endfunction
Why not just return a gray level then, #ffffff to #000000? 500 levels of darkness aren't really distinguishable anyway, and grays give you 256 levels.
If you only want to darken your reference color, it's easy. Given an R,G,B color that is the brightest you want to go, multiply each of the three values by (500 - input) and divide by 499. Convert each value to two hex digits and concatenate them, with a # at the front.
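That recipe is short enough to sketch in Python (darken_hex is my name; integer division stands in for "divide by 499"):

```python
def darken_hex(rgb, n):
    # Per the recipe above: scale each channel by (500 - n) / 499,
    # for n in 1..500, then format as two hex digits per channel.
    scaled = [c * (500 - n) // 499 for c in rgb]
    return '#' + ''.join('%02x' % c for c in scaled)

print(darken_hex((156, 178, 97), 1))    # '#9cb261': brightest
print(darken_hex((156, 178, 97), 500))  # '#000000': darkest
```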