Has anyone worked with https://contrastchecker.com/?
I just tried #FF0000 as foreground and #FFFFFF as background. It says AA 12 pt, AAA 12 pt, and AAA 18+ all fail. But then it says under "colors" that I passed and am fully color compliant? How can the colors fail the type test but pass the colors test?
So, there are a few different metrics at play here.
The TYPE test measures the contrast between foreground and background. It starts from the relative luminance of the type in the foreground and the luminance of the background; those two luminance values are then used to calculate the contrast ratio.
The COLOR test measures the hue difference between foreground and background. Hue difference is a different calculation than contrast, so one can fail while the other passes.
For the most part, it's best practice to make your color choices based on results from the TYPE test (contrast), but there are cases where a passing contrast result is still less accessible in practice. (A sketch of the contrast calculation follows below.)
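For reference, here's a minimal sketch of how a WCAG contrast ratio like the one behind the TYPE test is computed; the function names are my own:
// Relative luminance per WCAG 2.x: linearize each 0-255 sRGB channel,
// then weight by the spectral coefficients.
function relativeLuminance([r, g, b]) {
  const lin = (c) => {
    c /= 255;
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}
// Contrast ratio = (lighter + 0.05) / (darker + 0.05), ranging from 1:1 to 21:1.
function contrastRatio(fg, bg) {
  const l1 = relativeLuminance(fg);
  const l2 = relativeLuminance(bg);
  return (Math.max(l1, l2) + 0.05) / (Math.min(l1, l2) + 0.05);
}
contrastRatio([255, 0, 0], [255, 255, 255]); // ~4.0, which fails AA for small text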
The Paciello Group makes a really great contrast analyzer app:
https://developer.paciellogroup.com/resources/contrastanalyser/
I highly recommend it as it has a feature to simulate the effects of different types of colorblindness on the selected foreground / background combination.
An Example:
Red (#FF0000) Foreground and Black (#000000) Background
- Passes at AA small and AAA large text with a ratio of 5.3:1
- Fails color difference with a value of 255 (minimum is 500)
- Fails brightness difference with a value of 76 (minimum is 125)
- 3/5 simulated types of colorblindness show the type as nearly invisible!
Even though the type test PASSES, the result is not accessible!
Red (#FF0000) Foreground and White (#FFFFFF) Background
- Passes only at AA large text with a ratio of 4:1
- Passes color difference with value of 510 (minimum is 500)
- Passes brightness difference with a value of 179 (minimum is 125)
- 5/5 simulated types of colorblindness show very legible text!
Even though the type test FAILS, the result is more accessible!
As indicated by the tooltip over the "COLOR" test, it tests not only the color difference (relevant for color-blind people, for instance) but also the brightness difference (which is the contrast):
This is based on brightness and color difference. A pass grade here means you are fully color compliant.
This is based on an old W3C Working Draft named "Techniques For Accessibility Evaluation And Repair Tools". See Checkpoint 2.2: "Ensure that foreground and background color combinations provide sufficient contrast when viewed by someone having color deficits or when viewed on a black and white screen."
This test is no longer recommended and has been replaced by the latest WCAG recommendations (and their painful, but necessary, calculations).
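For comparison, the old AERT checkpoint behind the COLOR test boils down to two simple sums over the 0-255 channel values; a sketch, again with my own function names:
// AERT brightness: perceived brightness on a 0-255 scale.
function aertBrightness([r, g, b]) {
  return (299 * r + 587 * g + 114 * b) / 1000;
}
// Pass requires a brightness difference >= 125 AND a color difference >= 500.
function aertPasses(fg, bg) {
  const brightnessDiff = Math.abs(aertBrightness(fg) - aertBrightness(bg));
  const colorDiff = fg.reduce((sum, c, i) => sum + Math.abs(c - bg[i]), 0);
  return brightnessDiff >= 125 && colorDiff >= 500;
}
aertPasses([255, 0, 0], [255, 255, 255]); // true: the differences are ~179 and 510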
Result of my code: [screenshot omitted]
Basically, the issue is that the transparent parts of my image are not blending correctly with what is drawn before them. I know I can do a
if (alpha <= 0.0) { discard; }
in the fragment shader; the only problem is that I plan on having a ton of fragments and don't want that branch running for every fragment on mobile devices.
Here is my code related to alpha and depth testing:
var gl = canvas.getContext("webgl2", {
  antialias: false,
  alpha: false,
  premultipliedAlpha: false,
});
gl.enable(gl.BLEND);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA); // standard alpha blending
gl.enable(gl.DEPTH_TEST);
gl.depthFunc(gl.GREATER); // a fragment passes only if its depth exceeds the stored depth
Also, these are textured gl.POINTS I am drawing. If I change the order in which the two images are drawn into the buffer, the problem doesn't exist, but they will be rotating dynamically at runtime, so a fixed draw order is not an option.
It's not clear what your issue is without more code, but it looks like a depth test issue.
Assuming I understand correctly, you're drawing 2 rectangles? If you draw the red one before the blue one, then depending on how you have the depth test set up, the blue one will fail the depth test where the X area is drawn.
You generally solve this by sorting what you draw, making sure to draw things further away first.
For a grid of "tiles" you can generally sort by walking the grid itself in the correct direction instead of "sorting"
On the other hand, if all of your transparency is 100% draw-or-not-draw, then discard has its advantages and you can draw front to back. In that case, a pixel drawn (not discarded) by the red quad will cause the corresponding pixel of the blue quad to be rejected by the depth test. The depth test is usually optimized to run before the fragment shader for a given pixel; if the depth test says the pixel will not be drawn, there is no reason to even run the fragment shader for that pixel, so time is saved.
Unfortunately, as soon as you have any transparency that is not 100% opaque or 100% transparent, you need to sort and draw back to front, as in the sketch below. Some of these issues are covered in this article.
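As a rough sketch of the sort-and-draw-back-to-front approach (the sprites array and the drawSprite helper are hypothetical; this assumes each sprite knows its distance from the camera):
// Draw the farthest sprites first so nearer transparent ones blend over them.
sprites.sort((a, b) => b.distanceFromCamera - a.distanceFromCamera);

gl.enable(gl.BLEND);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
gl.depthMask(false); // keep depth testing, but don't write depth for transparent draws
for (const sprite of sprites) {
  drawSprite(gl, sprite); // hypothetical helper that issues one draw call
}
gl.depthMask(true);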
A few notes:
You mentioned mobile devices, and your code sample uses WebGL2. There is no WebGL2 on iOS.
You said you're drawing with POINTS. The spec only requires POINTS of 1 pixel in size to be supported. It looks like you're safe up to points of size 60, but to be safe it's generally best to draw with triangles, as there are other issues with points.
You might also be interested in sprites with depth.
I'm trying to generate RGB colors with the same perceived brightness.
The function R*0.2126 + G*0.7152 + B*0.0722 is said to calculate the perceived brightness (or equivalent grayscale color) for a given RGB color.
Assuming we use the interval [0,1] for all RGB values, we can calculate the following:
yellow = RGB(1,1,0) => brightness=0.9278
blue = RGB(0,0,1) => brightness=0.0722
So, in order to make the yellow tone just as dim as the blue one, I can simply perform this calculation on each of yellow's RGB components:
dim_yellow = yellow * 0.0722 / 0.9278
However, when doing the opposite thing, thus "scaling" up the blue color to the same perceived brightness as the original yellow, the B component obviously exceeds 1, which cannot be displayed on a computer screen.
I guess the missing brightness from the excess B component could be "redistributed" to the R and G components, faking a brighter blue color. So what is the best general method to calculate those final RGB values?
THESE AREN'T THE MATHS YOU'RE LOOKING FOR
The function R*0.2126 + G*0.7152 + B*0.0722 is said to calculate the perceived brightness (or equivalent grayscale color) for a given RGB color.
No, this is incorrect, or at least incomplete. Yes, 0.2126, 0.7152, and 0.0722 are the spectral coefficients for R, G, and B, but that is not the complete story.
First, don't use the term brightness in this context. Brightness is not a measure of light; it is a perception, not a measurable quantity. When we are talking about light and colorimetry, use the term "luminance" (L or Y). Luminance is a linear measure of light, not perception.
Perceptual lightness, or L* (Lstar) from CIELAB, is based on human perception of changes in luminance. It is close to a power curve of about 0.43.
sRGB, the colorspace typically used for computer monitors and the web, is not linear like light, and it is also not exactly like the perceptual L* curve. sRGB's transfer curve is close to a 1/2.2 power curve: the sRGB data/signal is raised to a power of about 0.455 on encoding, and the monitor then applies a power of 2.2 on display.
WHAT'S BROKEN
Your math isn't working because you are not taking the transfer curves into account. You must linearize the sRGB values before applying the coefficients; since the coefficients sum to 1, white then comes out at a luminance of exactly 1.
#FFFF00 in sRGB equals 0.9278 in luminance, but this is an sRGB value of 96.76% or an L* value of 97.14%
#0000FF in sRGB equals 0.0722 in luminance, but this is an sRGB value of 29.79% or an L* value of 32.3%
Here's a chart of some values, expanding on your example: [chart omitted]
So, to answer the rest of your question: getting a blue that matches a luminance higher than the monitor can produce for pure blue requires desaturating it, adding R and G to increase the lightness.
In that chart, the fully saturated but darker red and green are brought down to match the 7% blue luminance; then, at 18% luminance (as in an 18% grey card), the blue has to be desaturated to bring its luminance value up.
HOW TO CALC
First, you need to linearize the sRGB components, and THEN apply the coefficients, if you need to determine luminance. If you come up with new values by doing math on the linearized components, you then need to re-apply the gamma encoding to get back to sRGB.
I've discussed this in several other answers, such as this here.
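A minimal sketch of that round trip in JavaScript, using the standard sRGB transfer function (0-1 component values assumed; function names are my own):
// Decode a 0-1 sRGB value to linear light.
function sRGBtoLinear(c) {
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}
// Encode linear light back to a 0-1 sRGB value.
function linearToSRGB(c) {
  return c <= 0.0031308 ? 12.92 * c : 1.055 * Math.pow(c, 1 / 2.4) - 0.055;
}
// Luminance Y: apply the coefficients to the LINEARIZED components.
function luminance(r, g, b) {
  return 0.2126 * sRGBtoLinear(r) + 0.7152 * sRGBtoLinear(g) + 0.0722 * sRGBtoLinear(b);
}
luminance(1, 1, 0); // 0.9278 for #FFFF00
luminance(0, 0, 1); // 0.0722 for #0000FF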
I recommend using the HSV color model instead of RGB, since you can easily achieve what you want by modifying only the Value (brightness) component.
The wiki page also describes how to convert RGB to HSV and back.
EDIT:
Try using the CIELAB color space, since it approximates human vision.
A color can be represented as a mixture of red, green, and blue.
Ex: (255, 51, 153) = pink
Is there any good formula to get distinct colors by changing one variable?
Something like (10x, 22x, 2x^2), so that x = 1, 2, 3, 4, ... gives clearly separate colors like red, green, cyan, blue, etc.
Perhaps you'd be more interested in using HSL/HSV colors. Define the saturation and lightness and adjust the hue to get different colors. Check out the HSL and HSV wiki to learn more. A 15 to 30 degree adjustment of hue will result in a distinctly different color without messing with saturation or lightness.
An example of HSL in CSS is as follows.
<h1 style="color:hsl(0, 100%, 50%);">HSL Test</h1> <!-- this will be red -->
The first value, 0, is red; advancing by 120 degrees brings you to green, another 120 brings you to blue, and the last 120 brings you back to red, since the system is based on the 360 degrees of a circle. So 0 and 360 are the same hue, just like 60 and 420. The next two values are percentages from 0% to 100% that define the intensity of that property. They're hard to explain, so I made a quick fiddle that demonstrates this.
So, to answer your question: there is a good formula to adjust color; it just depends on how exactly you want to change it. In the RGB world you can make things darker by lowering the values uniformly and brighter by raising them, and you can increase an individual color's presence by adjusting its value as expected. However, cycling the entire color wheel is difficult (although entirely possible) using RGB values.
The real lesson is that there are a number of ways to define a specific color, and each one offers different ways to traverse the spectrum. HSL and HSLA are very intuitive for many people, since their values don't really have to be guessed at: pick a specific hue off the color wheel (remember ROYGBIV as you imagine a value from 0-359), define a saturation based on how bold you want the color to be, and then a lightness based on how bright. It's far more useful than RGB in the large majority of cases, as you'll see in that fiddle. Making a subset of the entire color spectrum with JavaScript only takes a few lines of code, as shown below.
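For example, here's a sketch that walks the hue wheel in 30-degree steps to generate visually distinct colors (the function name is my own):
// Generate n distinct colors by stepping the hue around the wheel
// while keeping saturation and lightness fixed.
function distinctColors(n, step = 30) {
  const colors = [];
  for (let i = 0; i < n; i++) {
    colors.push("hsl(" + ((i * step) % 360) + ", 100%, 50%)");
  }
  return colors;
}
distinctColors(4); // ["hsl(0, 100%, 50%)", "hsl(30, 100%, 50%)", ...]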
There is a similar question here
This JavaScript library can help you: Name the Color Javascript Lib
A Demo of the library
I use the R colorspace package to convert a three-dimensional point into a LAB color. A LAB color is defined by three coordinates: the first ranges from 0 to 100, and the other two range from -100 to 100.
But searching with Google, I do not find a cuboidal representation of the LAB color space. Why?
Short answer
The LAB color space contains colors that are impossible to reproduce in nature or on a screen (according to this page), so its usable gamut does not fill the cuboid.
Elaboration on converting RGB to LAB
I guess the reason you ask is that you want to produce some kind of printed material and want to be sure the colors turn out right. I am merely an enthusiastic amateur in this field, but I think this paragraph from the Wikipedia article on the Lab color space explains some of the complications:
There are no simple formulas for conversion between RGB or CMYK values and L*a*b*, because the RGB and CMYK color models are device dependent. The RGB or CMYK values first need to be transformed to a specific absolute color space, such as sRGB or Adobe RGB. This adjustment will be device dependent, but the resulting data from the transform will be device independent, allowing data to be transformed to the CIE 1931 color space and then transformed into L*a*b*.
That is, in order to create a Lab color cube, you must first find the transformation from your monitor-specific color space into an absolute color space (the usual pipeline is sketched below). This is surprisingly difficult, since the mapping is not linear or of any other simple form, and the transformation is not likely to be perfect either, since the RGB and LAB spaces do not span the same subspace (speculating here). I once talked to a printmaker about this, and he said that although the human eye has only 4 types of receptors (RGB color plus light intensity), you need about 17 color components to generate the full spectrum of visible colors on paper. Both RGB and LAB compromise on that, optimized for different purposes.
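To make that pipeline concrete, here is a rough sketch of the usual sRGB (D65) to XYZ to L*a*b* conversion in JavaScript; the constants are the standard published ones, not anything specific to a particular monitor:
// Assumes 0-1 sRGB components. Step 1: linearize. Step 2: matrix to CIE XYZ.
// Step 3: XYZ to L*a*b* relative to the D65 white point.
function srgbToLab(r, g, b) {
  const lin = (c) => (c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4));
  [r, g, b] = [lin(r), lin(g), lin(b)];
  const x = 0.4124 * r + 0.3576 * g + 0.1805 * b;
  const y = 0.2126 * r + 0.7152 * g + 0.0722 * b;
  const z = 0.0193 * r + 0.1192 * g + 0.9505 * b;
  const f = (t) => (t > 0.008856 ? Math.cbrt(t) : 7.787 * t + 16 / 116);
  const fx = f(x / 0.95047), fy = f(y / 1.0), fz = f(z / 1.08883);
  return { L: 116 * fy - 16, a: 500 * (fx - fy), b: 200 * (fy - fz) };
}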
Bottom line
You can calibrate your screen to establish the transformation needed to convert the screen's RGB to the LAB colors of human vision, and then go on to make a color cube. However, it will only apply to that particular monitor and will not be perfect. You are best off test-printing different color profiles and choosing the one you like best.
Because there is no such thing. The CIELAB colour space has a Cartesian representation (of infinite size), but the (finite) gamut that we can perceive is not cubic, it has a complicated shape. Varying the two coordinates a* and b* independently in a pre-defined range may seem convenient, but this is fundamentally not the way human perception works.
I'm writing a program that works with images and at some point I need to posterize the image. This means I need to bin the colors, but I'm having trouble deciding how to tell how close one color is to another.
Given a color in RGB, I can think of at least 2 ways to see how different they are:
|r1 - r2| + |g1 - g2| + |b1 - b2|
sqrt((r1 - r2)^2 + (g1 - g2)^2 + (b1 - b2)^2)
And if I move into HSV, I can think of other ways of doing it.
So I ask, ignoring speed, what is the best way to tell how similar two colors are? Best meaning most accurate to the human eye.
Well, if speed is not an issue, the most accurate way would be this: take some sample images and apply the filter to them using various cutoff values for the distance (the distance being determined by one of the equations on the Color_difference page that astander linked to; that means using one of the color spaces listed there for the calculations and then converting back to sRGB or similar for display, which also means converting the image into that color space first if it isn't already in it). Then have a large number of people examine the images to see what looks best to them, and go with the cutoff value whose images the majority agrees look best.
Basically, it's largely a matter of subjectivity; in fact, it also depends on how stylized you want the images, and you might even want to add some sort of control so that you can alter the cutoff distance on the fly.
If speed does become a bit of an issue and/or you want more simplicity, then just use your second choice for the distance calculation (which is simply the CIE76 equation; just make sure to apply it in the L*a*b* color space, not RGB), with the cutoff being around 2 or 2.3.
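In code, CIE76 is just the Euclidean distance between two L*a*b* points; a sketch, assuming you already have Lab triples from some prior conversion:
// CIE76 delta E: Euclidean distance in L*a*b* space.
// A delta E around 2.3 is roughly one just-noticeable difference.
function deltaE76(lab1, lab2) {
  const dL = lab1.L - lab2.L;
  const da = lab1.a - lab2.a;
  const db = lab1.b - lab2.b;
  return Math.sqrt(dL * dL + da * da + db * db);
}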
What do you mean by "posterize the image"?
If you're trying to cluster the colors into bins, you should look at cluster analysis.
Just a comment if you are going to move to HSV (or similar spaces):
Diffing on H: the difference between 0° and 359° is numerically big, but perceptually it is negligible, since hue wraps around (see the sketch below).
An H difference is perceptually small when V or S is small.
For computer vision apps, what matters is usually not perceptual difference (used mostly by paint manufacturers) but whether the colors belong to the same object/segment or not. That means we might partially ignore V, since it can change with lighting conditions.
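A sketch of a hue distance that respects the wraparound mentioned above:
// Hue is circular: 0 and 359 degrees are 1 degree apart, not 359.
function hueDistance(h1, h2) {
  const d = Math.abs(h1 - h2) % 360;
  return Math.min(d, 360 - d);
}
hueDistance(0, 359); // 1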