In CSS we can use several different methods to define a color:
Color word: red
Hexadecimal: #FF0000
Red/Green/Blue channels: rgb(255, 0, 0)
Hue/saturation/lightness: hsl(0, 100%, 50%)
I do realize that using named colors is not a good idea, as different browsers have their own idea of what aquamarine looks like.
Ignoring alpha channel and browser support, are there any differences performance-wise between these 4 methods?
If we were trying to squeeze every last bit of optimization out of our CSS, which one would be preferred, if any? Are the color values converted to a specific format internally, or does the performance of it depend on anything else (like which rendering agent or browser is used)?
Looking for a "technical" answer if possible, references appreciated.
If we assume a modern browser making full use of the GPU, then the internal color representation will be RGB floats. Ignoring the color name - which is probably just a map to hex anyway - I think that hex and channels will be the fastest. HSB will undoubtedly be the slowest, as the conversion from HSB to RGB requires some work - about 50 lines of C code.
However, I think that for the purposes of CSS this is a completely irrelevant question. Even for HSB to RGB, the amount of work on one color is trivial. By way of support for this: I have several programs, including some running on mobiles, which do color manipulation at a per-pixel level on largish images, doing RGB->HSB->(some manipulation)->RGB. Even performing this operation 100,000 times on an iPad only results in a delay of a couple of seconds, so on this relatively slow platform your typical worst-case conversion can safely be assumed to take less than 0.0001 seconds. And that's being pessimistic.
So just use whatever is easiest to code.
ADDED: to support the "don't worry about this" option. Internally a GPU will manipulate colors as an array of floats, so in C terms:
float color[4];
or something similar. So the only conversion being done for the numeric options is a simple divide by 255.
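As a concrete illustration (a sketch only, not any engine's actual code), the numeric forms just need this kind of normalization:

/* Sketch: turn an 8-bit channel value (as written in hex or rgb()
   notation) into the float a GPU typically works with. */
float normalize_channel(unsigned char c)
{
    return c / 255.0f;   /* 255 -> 1.0f, 0 -> 0.0f */
}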
On the other hand, conversion of HSB to RGB takes considerably longer - I'd estimate, from having written code to do it, about 10 to 20 operations. So in crude terms HSB is considerably slower, BUT 20 (or even 20,000) operations on a modern GPU isn't worth worrying about - it's imperceptible.
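For a sense of what that conversion involves, here is a rough C sketch of the standard HSL-to-RGB formula; it is only meant to show the amount of arithmetic per color, not any browser's actual implementation:

#include <math.h>

/* Rough HSL -> RGB sketch: h in degrees, s and l in [0,1], outputs in [0,1]. */
void hsl_to_rgb(float h, float s, float l, float *r, float *g, float *b)
{
    float c  = (1.0f - fabsf(2.0f * l - 1.0f)) * s;        /* chroma */
    float hp = fmodf(h, 360.0f) / 60.0f;                    /* hue sector 0..6 */
    float x  = c * (1.0f - fabsf(fmodf(hp, 2.0f) - 1.0f));
    float m  = l - c / 2.0f;
    float rp, gp, bp;

    if      (hp < 1.0f) { rp = c; gp = x; bp = 0; }
    else if (hp < 2.0f) { rp = x; gp = c; bp = 0; }
    else if (hp < 3.0f) { rp = 0; gp = c; bp = x; }
    else if (hp < 4.0f) { rp = 0; gp = x; bp = c; }
    else if (hp < 5.0f) { rp = x; gp = 0; bp = c; }
    else                { rp = c; gp = 0; bp = x; }

    *r = rp + m;  *g = gp + m;  *b = bp + m;   /* hsl(0,100%,50%) -> (1,0,0) */
}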
Here are the results including color names, short hex, hex, rgb, rgba, hsl, and hsla. You can run the test yourself here.
I used the same tool from jsperf.com that the others did, and created my own test for different color formats. I then ran the test on IE11, Edge17, FF64 and Chrome71 and gathered all the results in a compact Excel spreadsheet.
Top three are green, bottom three are red, best and worst are bold.
I don't know why Chrome favors the named-colors format so strongly, but it made me repeat the test many times with the same and different parameters. The results remained constant.
The results are not conclusive enough to declare any one format the absolute best, but my conclusion is as follows.
I will keep using hex over named colors and lowercase over uppercase, and start using short hex over long hex when possible.
Feel free to update results if they change with new versions of browsers.
Typically, CSS optimization is all about minimizing the number of bytes going over the wire. The hexadecimal colors tend to be the shortest (in your example, #f00 could be used instead of #ff0000).
I realize this isn't exactly answering the question you've asked but I haven't seen any browser tests which attempt to measure how different color representations affect rendering speed.
I too was curious about this (it's a Friday afternoon). Here's a JSPerf for the various CSS colour methods:
http://jsperf.com/css-color-names-vs-hex-codes/18
Edit: Each process has to get down to a binary value for r, g, and b. Hex and rgb() values are already set up for that, so I guess they might actually be roughly the same speed. The rest have to go through a process to reach a hex/rgb value first.
#FF0000 = memory values of: 1111 1111 0000 0000 0000 0000
rgb(255,0,0) = memory values of: 1111 1111 0000 0000 0000 0000
Both cases are most likely stored in three int variables. So the real question is: which is faster to parse into binary values for these integers, hex or decimal? I think hex, but I can't back that up. Either way, the code then just takes the binary values of these variables.
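To make the comparison concrete, here is a small C sketch of roughly what the two parses boil down to (illustrative only; real engines tokenize and validate far more carefully):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* "#FF0000": one base-16 conversion, then shifts and masks. */
    long hex = strtol("FF0000", NULL, 16);
    int r1 = (hex >> 16) & 0xFF, g1 = (hex >> 8) & 0xFF, b1 = hex & 0xFF;

    /* "rgb(255, 0, 0)": three base-10 conversions after tokenizing. */
    int r2 = atoi("255"), g2 = atoi("0"), b2 = atoi("0");

    printf("%d %d %d / %d %d %d\n", r1, g1, b1, r2, g2, b2);
    return 0;
}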
I've wondered how QR codes work, so I did some research and tried to draw my own in a table in Word.
On Wikipedia I found this picture
I understand the configuration, but how a letter is actually stored doesn't make sense to me.
Take the example letter "w".
On even rows black is 0, and on odd rows it is 1.
So the example should give the binary number 01110011, which would be 115, but "w" is number 32.
So how do I get the right number?
I don't know much about this topic, but I found a video where someone explains it. From what I understood, the cells are read in an order of numbers that depends on the arrow direction (there are 4 options, which you posted yourself). You simply follow those numbers and write down the 1s and 0s, which results in an 8-bit number. The video has much more detail.
It is also worth pointing out that the bits are read MSB-first. Following your example (considering just the numbers, not the colors, since you mislabeled those), the arrow points up, so you read from the bottom-right toward the top-left, which gives 01110011. With the most significant bit at the left, that is 115.
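As a quick sanity check of the MSB-first reading, here is a tiny C sketch that folds the example bits into a byte:

#include <stdio.h>

int main(void)
{
    /* The module values from the example, read MSB-first. */
    int bits[8] = {0, 1, 1, 1, 0, 0, 1, 1};
    int value = 0;
    for (int i = 0; i < 8; i++)
        value = (value << 1) | bits[i];   /* leftmost bit is most significant */
    printf("%d\n", value);                /* prints 115 */
    return 0;
}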
I am using Images.jl in Julia. I am trying to convert an image into a graph-like data structure (v,w,c) where
v is a node
w is a neighbor and
c is a cost function
I want to assign an expensive cost to those neighbors that do not have the same color. However, when I load an image, each pixel has the type RGBA{U8}(1.0,1.0,1.0,1.0). Is there any way to convert this into a number like Int64 or Float?
If all you want to do is penalize adjacent pairs that have different color values (no matter how small the difference), I think img[i,j] != img[i+1,j] should be sufficient, and infinitely more performant than calling colordiff.
Images.jl also contains methods, raw and separate, that allow you to "convert" that image into a higher-dimensional array of UInt8. However, for your apparent application this will likely be more of a pain, because you'll have to choose between using a syntax like A[:, i, j] != A[:, i+1, j] (which will allocate memory and have much worse performance) or write out loops and check each color channel manually. Then there's always the slight annoyance of having to special case your code for grayscale and color, wondering what a 3d array really means (is it 3d grayscale or 2d with a color channel?), and wondering whether the color channel is stored as the first or last dimension.
None of these annoyances arise if you just work with the data directly in RGBA format. For a little more background, they are examples of Julia's "immutable" objects, which have at least two advantages. First, they allow you to clearly specify the "meaning" of a certain collection of numbers (in this case, that these 4 numbers represent a color, in a particular colorspace, rather than, say, pressure readings from a sensor)---that means you can write code that isn't forced to make assumptions that it can't enforce. Second, once you learn how to use them, they make your code much prettier all while providing fantastic performance.
The color types are documented here.
Might I recommend converting each pixel to greyscale if all you want is a magnitude difference?
See this answer for a how-to:
Converting RGB to grayscale/intensity
This will give you a single value for intensity that you can then use to compare.
Following #daycaster's suggestion, colordiff from Colors.jl can be used.
colordiff takes two colors as arguments. To use it, you should extract the color part of the pixel with color, i.e. colordiff(color(v), color(w)), where v would be an RGBA{U8}(0.384,0.0,0.0,1.0) value.
I plan to be changing the color of a few hundred thousand divs a second and was wondering what the fastest way to do it was.
What are the best formats in terms of performance? rgb triples? hex codes? color words (black, chartreuse)?
I've run this jsPerf, and these are the general results:
basic color keywords are quite fast, and they are the fastest in Chrome. The extended list of keywords is a lot slower in some browsers, though.
hsl is just the worst, except in IE, where it was actually the fastest (but then again, IE) (apparently this was just a single case; I couldn't reproduce it afterwards)
#RGB and #RRGGBB are both relatively fast in every browser (#RGB is slightly faster in general)
rgb() is generally slow in every browser
In general, I think #RGB is the fastest format for every browser (on average).
Hex codes would be the fastest. When you say, for instance, "black", it is read and then changed to its hex code, #000000.
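As a conceptual sketch of that idea (the function and table below are made up for illustration; real engines build their keyword tables differently), a name-to-value lookup might look like this:

#include <string.h>

/* Hypothetical lookup table mapping a CSS color keyword to a packed 0xRRGGBB value. */
struct named_color { const char *name; unsigned int rgb; };

static const struct named_color table[] = {
    { "black",      0x000000 },
    { "red",        0xFF0000 },
    { "chartreuse", 0x7FFF00 },
};

unsigned int lookup_color(const char *name)
{
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (strcmp(table[i].name, name) == 0)
            return table[i].rgb;
    return 0; /* unknown names fall back to black in this sketch */
}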
I have some vector data that has been manually created; it is just a list of x,y values. The coordinates of the points are not perfectly accurate - they can be off by a few pixels without making any perceivable difference.
So now I am looking for some way to watermark this data, so that if someone steals the vector data, I can prove that it has indeed been stolen. I'm looking for a method reliable enough that even if someone takes my data and shifts all the points by some small amount, I can still prove that it's been stolen.
Is there any way to do that? I know it exists for bitmap data but how about vector data?
PS: the vector graphic itself is rather random - it cannot be copyrighted.
Is the set of points all you can work with? If, for example, you were dealing with SVG, you could export the file with a certain type of XML formatting, a <!-- generated by thingummy --> comment at the top, IDs generated according to such-and-such a pattern, extra attributes specifically yours, a particular style of applying translations, etc. Just like you can work out from a JPEG what is likely to have been used to create it, you can tell a lot about what produced an SVG file by observation.
On the vectors themselves, you could do something like consider them as an ordered sequence and apply offsets given by the values of two pseudo-random sequences, each starting from a known seed, for X and Y translation, in a certain range (such as [-1, 1]). Even if some points are modified, you should be able to build up an argument from how things match the sequence. How to distinguish precisely what has been shifted could do with a bit more consideration, too; if you were simply doing int(x) + random(-1, 1), then if someone just rounded all values your evidence would be lost. A better way of dealing with this would be to, while still rendering at the same screen size, multiply everything by some constant like 953 (an arbitrary near-1000 prime) and then adjust your values by something in that range (viz, [0, 952]). This base-953 system would be superior to a base-10 system because it's much (much much) harder to see what's happening. If the person changes the scaling, it would require a bit more analysis of values, but it should still be quite possible. I've got a gut feeling that that's where picking a prime number could be a bit helpful, but I haven't thought about it terribly much. If in danger or in doubt in such matters, pick a prime number for the sake of it... you may find out later there are benefits to it!
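As a minimal C sketch of the seeded-offset idea (the seed, the use of rand(), and the ranges are arbitrary illustrations, not a recommendation of a specific generator):

#include <stdlib.h>

/* Scale coordinates up by 953 and add a reproducible per-point offset
   in [0, 952]. Anyone holding the seed can regenerate the same offset
   sequence later and check how well a suspect copy matches it. */
void watermark_points(long *x, long *y, int n, unsigned int seed)
{
    srand(seed);                          /* known seed -> reproducible sequence */
    for (int i = 0; i < n; i++) {
        x[i] = x[i] * 953 + rand() % 953;
        y[i] = y[i] * 953 + rand() % 953;
    }
}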
Combine a number of different techniques for best results, of course.
I have an incomplete QR code (about 30%). Is it possible to decode just the fragment of it? I would really like a code snippet - the language doesn't matter.
If you mean, can you decode the entire contents of a QR code even if part of the code is obscured or changed, then yes you can -- sometimes.
QR codes can be encoded with varying levels of redundancy, known as levels L, M, Q and H, which correspond to about 7%, 15%, 25% and 30% redundancy. This means you can lose up to that much of the barcode and still decode it. The more you lose, the harder it is to decode, but it remains possible within those limits.
Note that certain regions of the QR code can't be lost. The finder patterns (the squares at the corners) must be findable; they can tolerate some distortion, but there's no error correction to help with that. Also, the regions around the finder patterns encode format and version. They have a different redundancy (2x encoding using BCH, not Reed-Solomon), but if you lose too much of those tiny areas you won't be able to decode, regardless of the main error correction.