How can I make a variable-generated RGB color darker?

Problem Description

I have two variables for RGB colors:

X = (randint(0, 255), randint(0, 255), randint(0, 255))

Variable X is a random RGB color.

Y= ???

How can I make variable Y generate a slightly darker version of color X, to create contrast?

Sample:

[sample image]

Regards!

Tags: python, rgb

Solution


If you want to be at least somewhat perceptually accurate with repeatable results, then you want to use a perceptually uniform color or appearance model. HSL and HSV are not perceptually uniform models (not even close).

Contrasting Reasons

The next question then is what is your ultimate goal? For instance:

  • Is this for colors on a computer monitor? Or Print?
  • Do you want to maintain the same hue?
  • What about saturation?
  • What is the purpose for the contrast?
    • For text for readability?
    • For defining user interface elements?
    • Purely for decoration or design reasons?

Your answers to these questions drive the approach. To answer, I am going to make these assumptions:

  • For computer or mobile device displays.
  • Independent and repeatable control of hue, saturation, and lightness relative to the random color.
  • The contrast is needed for functional reasons such as text readability.

Color Made Easy

First of all, color is not strictly "real"; it is only a perception of various wavelengths of light. And whereas light in the real world works linearly, our perception is decidedly non-linear, and very context sensitive.

As a result, to get a desired color result through calculation, we need to predict the perception of that color, and that is the motivation for color appearance models such as CIECAM02, Jzazbz, and uniform colorspaces like CIELAB and CIELUV.

For accuracy, use something like CIECAM02 or CIECAM16, though these may be more complicated than you actually need. CIELAB and CIELUV are simpler and easier to integrate, and there is a substantial code base of color libraries that support LAB and LUV.

LUV, LAB, and L-Stars

LAB, aka L*a*b*, is commonly used for reflected colors; though it can be used for illuminants, it has some issues that make it less favorable for them.

LAB has the channel L*, also known as "Lstar", which is perceptual lightness: an encoding of luminance (Y) based on our perception. Then a* and b* encode opponent colors: a* for red/green, and b* for blue/yellow.

These independent channels form a 3D cartesian space, but LAB can also be transformed into a polar coordinate space, LCh, lightness, chroma, and hue.

LUV aka L*u*v* is not very useful for reflected colors, but has some features that make it ideal for illuminants, and that extends to self-illuminated displays. US mil-spec defines colored illuminants using uʹvʹ, and LUV was used substantially in television post-production before more modern models came along. (CIECAM02 covers both reflected and emitted colors well and more accurately, but again is significantly more complicated).

LUV has the exact same L* for perceptual lightness, and for the color coordinates, u*v* which are based on the uʹvʹ coordinates of the 1976 UCS chromaticity diagram.

In addition to LUV's 3D cartesian space, it can also be transformed into polar coordinates: LCh (lightness, chroma, hue) and also Lsh (lightness, saturation, hue). Having access to independent control of saturation is a useful advantage, as it allows us to keep perceived colorfulness constant while we independently change lightness and hue (though LUV is not as uniform as other, more advanced models).

What About Contrast?

Once you convert your RGB values to an appropriate uniform space, then you can calculate the color difference. In the case of LAB or LUV, it is the simple Euclidean distance between two colors, in other words, the square root of the sum of the squared differences, so:

∆E*uv = ( (L*₁ − L*₂)² + (u*₁ − u*₂)² + (v*₁ − v*₂)² )^0.5
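The distance above can be sketched in Python (the function name is my own; it takes (L*, u*, v*) tuples):

```python
import math

def delta_e_uv(color1, color2):
    """Euclidean distance between two (L*, u*, v*) colors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(color1, color2)))
```

For example, two neutral grays at L* 50 and L* 100 differ by exactly their ∆L* of 50.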

Also, there is an interesting LUV implementation at https://www.hsluv.org

But Wait, There's More!

While calculating the Euclidean distance to find the difference between two colors will give you a general idea of contrast, our perception of contrast is more complicated than that.

For instance, spatial frequency can have a stronger effect on our contrast perception than a given pair of colors, and higher spatial frequencies (i.e. small and thin fonts) are perceived with lower contrast.

And the color's hue and saturation difference plays a separate part. Our brain processes hue and chroma separately from luminance. The more complicated modern appearance models do try to encompass all of these many aspects of our perception.

And importantly, your perception of a color is extremely context dependent. For instance, take a look at this image:

[Image: two yellow dots on a checkerboard]

The two yellow dots are exactly the same color as each other, as are the two grays the dots sit on. They look different due to the different context of the surrounding image.

Readability Contrast

For text on a background, there are a few special considerations. One is the critical contrast, that is, the point that a reader obtains the best reading speed and comprehension. This is at least ten times the contrast needed for mere legibility.

And the "kind" of contrast that is important for readability is luminance or lightness/darkness contrast. Contrast s of saturation or hues does not necessarily help readability, other than some saturated hue combinations can hamper readability, such as red #f00 against blue #00f. So if readability is your aim, ensure the ∆L* is sufficient.

And spatial frequency plays a part here as well, along with the many other aspects of vision such as adaptation, continuous contrast, HK effect, etc.

Full disclosure: we're developing a new contrast method for readability contrast called APCA for web standards use, in part it uses a difference of perception-encoded luminance values.

Implementation

  1. Convert your RGB values to linear, meaning there cannot be any "gamma" encoding.
    • If you are generating a random set of RGB values, but are going to send them to an sRGB monitor, then assume the sRGB colorspace and D65.
    • 8 bit values need to be 0-255, but we will need them as decimal floating point 0.0-1.0
  2. Convert the linear RGB values to CIEXYZ space. Depending on your needs, you could just assume your random values 0.0-1.0 are linear values.
    • Don't forget to make them "gamma encoded" though before sending them to a monitor.
  3. Convert from XYZ to CIELUV.
  4. Optionally convert to polar "Lsh" space for independent control of saturation and hue.
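If your starting RGB values instead come from gamma-encoded sRGB (image files, CSS colors, screenshots), step 1 can be sketched like this (my function, using the IEC piecewise decode rather than a simple power curve):

```python
def srgb_to_linear(v8):
    """Decode one 8-bit sRGB channel (0-255) to linear light (0.0-1.0),
    using the piecewise sRGB transfer curve."""
    v = v8 / 255.0
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4
```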

While in LUV space, you can choose the color(s) that meet your contrast needs, and then convert back to RGB by reversing the transforms.

And again, it is the ∆L* that is most important, and in both LAB and LUV you can change the L* without touching the color coordinates.

Maths and Stuff

There is a great resource at Bruce Lindbloom's site, filled with all the math you need to do these operations.

8-bit integer RGB values are normally 0-255, but for the conversions we are going to do, we need them as decimal floating point 0.0-1.0, and since you are just generating them randomly, let's just assume they are "linear" RGB values.

    import math
    import random

       # random values, with the low end raised to prevent very dark colors
    Rlin = random.uniform(0.012, 1.0)
    Glin = random.uniform(0.04, 1.0)
    Blin = random.uniform(0.004, 1.0)

Since we assume these are already linear, convert to XYZ.

Transform linear RGB to XYZ to LUV & LCH/Lsh

The functions below are modified from my JS-based SeeLab project, and the data for the two matrices comes from brucelindbloom.com. I have not tested the code in Python, as I just moved it over from my JS project, so it's "asIs, void where static, yourMileage may be a variable..."

        # sRGB to XYZ matrix, D65 (from brucelindbloom.com)
    X = Rlin * 0.4124564 + Glin * 0.3575761 + Blin * 0.1804375
    Y = Rlin * 0.2126729 + Glin * 0.7151522 + Blin * 0.0721750
    Z = Rlin * 0.0193339 + Glin * 0.1191920 + Blin * 0.9503041

    Lstar = 100.0 * math.pow(Y, 0.425)  # Create lightness (shortcut version)
       # Note: Lstar above is an unofficial shortcut using a simple
       # power curve instead of the piecewise curve, scaled to the
       # official 0-100 range. The official equation for Lstar is this ternary:
       # Lstar = math.pow(Y, 1/3) * 116.0 - 16.0 if Y > (216/24389) else Y * (24389/27)

        # Process to LUV
    Uref = 0.19783982482140775648
    Vref = 0.46833630293240970476
    divisor = (X + 15.0 * Y + 3.0 * Z)
    UCSu = ((4.0 * X) / divisor)
    UCSv = ((9.0 * Y) / divisor)
    Ustar = 13.0 * Lstar * (UCSu - Uref)
    Vstar = 13.0 * Lstar * (UCSv - Vref)

       # Optional — create polar coordinates LCh and/or Lsh  
    UVchroma = math.pow(Ustar * Ustar + Vstar * Vstar, 0.5)
    UVhue = 180.0 * math.atan2(Vstar,Ustar)/math.pi if UVchroma > 0.01 else 0.0
             # if UVchroma less than 0.01, clamp hue to 0
    UVhue = UVhue + 360.0 if UVhue < 0.0 else UVhue  # Make hue positive if it isn't 
    UVsat = UVchroma / Lstar  # Create the saturation correlate

Thus your three linear RGB values end up as:

  • Lstar perceptual lightness 0.0 - 100.0
  • Ustar & Vstar the chromatic opponent coordinates
  • UVchroma Colorfulness independent of lightness (ignore if using saturation)
  • UVhue color hue in degrees (around the whitepoint of the UCS)
  • UVsat Saturation, i.e. colorfulness in relation to lightness.
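For convenience, the forward transform above can be wrapped into one function (my sketch; it uses the official piecewise L* instead of the 0.425 shortcut, and guards the divide-by-zero at pure black):

```python
import math

def linear_rgb_to_lsh(rlin, glin, blin):
    """Linear sRGB (0.0-1.0) -> (Lstar, saturation, hue in degrees)
    via XYZ and CIELUV, following the steps above."""
    X = rlin * 0.4124564 + glin * 0.3575761 + blin * 0.1804375
    Y = rlin * 0.2126729 + glin * 0.7151522 + blin * 0.0721750
    Z = rlin * 0.0193339 + glin * 0.1191920 + blin * 0.9503041
    # official piecewise Lstar
    Lstar = 116.0 * Y ** (1 / 3) - 16.0 if Y > 216 / 24389 else Y * (24389 / 27)
    divisor = X + 15.0 * Y + 3.0 * Z
    if divisor == 0.0 or Lstar == 0.0:   # pure black carries no chroma
        return 0.0, 0.0, 0.0
    Ustar = 13.0 * Lstar * (4.0 * X / divisor - 0.19783982)
    Vstar = 13.0 * Lstar * (9.0 * Y / divisor - 0.46833630)
    UVchroma = math.hypot(Ustar, Vstar)
    # clamp hue to 0 for near-neutral colors, and keep it positive
    UVhue = math.degrees(math.atan2(Vstar, Ustar)) % 360.0 if UVchroma > 0.01 else 0.0
    return Lstar, UVchroma / Lstar, UVhue
```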

So, if you wanted to keep the hue and colorfulness constant, and only affect the lightness, then adjust only the variable Lstar and convert back to XYZ in the following order, first to last:

  • Lstar, to Y, everything is related to lightness
  • UVsat, multiply by Lstar to get Chroma
  • UVchroma, UVhue, these now give you Ustar & Vstar
  • Ustar, Vstar, with Lstar and Y these give you X and Z.

Then convert XYZ to linear RGB, then apply the sRGB TRC (aka gamma), and finally convert the 0.0-1.0 values to 0-255.

Transform LUV to XYZ to sRGB

    Y = math.pow((Lstar + 16) / 116, 3) if Lstar > 8.0 else Lstar / (24389/27)

    UVchroma = UVsat * Lstar if (UVsat * Lstar) > 0.1 else 0.0
    Ustar = UVchroma * math.cos(UVhue * (math.pi/180))
    Vstar = UVchroma * math.sin(UVhue * (math.pi/180))
    
    fA = Y * (((39.0 * Lstar) / (Vstar + 13.0 * Lstar * Vref)) - 5.0)
    fB = Y * -5.0
    fC = (((52.0 * Lstar) / (Ustar + 13.0 * Lstar * Uref)) - 1.0) / 3
    fD = -1/3

    X = (fA - fB) / (fC - fD)
    Z = X * fC + fB

       # XYZ to linear sRGB matrix, D65
    linRGB = [X * 3.2404542 + Y * -1.5371385 + Z * -0.4985314,
              X * -0.9692660 + Y * 1.8760108 + Z * 0.0415560,
              X * 0.0556434 + Y * -0.2040259 + Z * 1.0572252]

       # Take the linear sRGB, gamma encode it, then convert to 8-bit int
       # NOTE: This is the "simple" version. Clamp to 0.0-1.0 BEFORE the
       # pow(), since a fractional power of a negative number is an error.
    newRGB = [int(math.pow(max(min(linRGB[0], 1.0), 0.0), 1/2.2) * 255),
              int(math.pow(max(min(linRGB[1], 1.0), 0.0), 1/2.2) * 255),
              int(math.pow(max(min(linRGB[2], 1.0), 0.0), 1/2.2) * 255)]
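The reverse path can likewise be wrapped up (my sketch; it uses the equivalent u′v′ form of the LUV inverse instead of the fA…fD variables above):

```python
import math

def lsh_to_linear_rgb(Lstar, UVsat, UVhue):
    """(Lstar, saturation, hue in degrees) -> linear sRGB (0.0-1.0),
    reversing the steps above. Out-of-gamut results may fall outside
    0.0-1.0 and should be clamped before gamma encoding."""
    if Lstar <= 0.0:
        return 0.0, 0.0, 0.0
    Y = ((Lstar + 16.0) / 116.0) ** 3 if Lstar > 8.0 else Lstar / (24389 / 27)
    UVchroma = UVsat * Lstar
    Ustar = UVchroma * math.cos(math.radians(UVhue))
    Vstar = UVchroma * math.sin(math.radians(UVhue))
    UCSu = Ustar / (13.0 * Lstar) + 0.19783982   # back to u', v'
    UCSv = Vstar / (13.0 * Lstar) + 0.46833630
    X = Y * 9.0 * UCSu / (4.0 * UCSv)
    Z = Y * (12.0 - 3.0 * UCSu - 20.0 * UCSv) / (4.0 * UCSv)
    # XYZ to linear sRGB, D65
    return (X * 3.2404542 + Y * -1.5371385 + Z * -0.4985314,
            X * -0.9692660 + Y * 1.8760108 + Z * 0.0415560,
            X * 0.0556434 + Y * -0.2040259 + Z * 1.0572252)
```

With both directions in hand, a darker Y for a given X is just, e.g., `lsh_to_linear_rgb(Lstar * 0.5, UVsat, UVhue)`.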

It's important to know that the conversion from linear to gamma-encoded 8-bit sRGB shown above uses the "simple" method, and not the piecewise method from the IEC standard. The simple method should be fine for your use case. I discuss the math for the piecewise encoding in this answer, essentially this:

For each linear channel value v (0.0-1.0):

    V = 12.92 × v                    if v ≤ 0.0031308
    V = 1.055 × v^(1/2.4) − 0.055    otherwise

Wikipedia also has an interesting discussion of the piecewise transform.
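As a sketch, the piecewise encode in Python (function name is my own; it clamps out-of-gamut values before encoding):

```python
def linear_to_srgb8(v):
    """Encode one linear channel (0.0-1.0) to an 8-bit sRGB value
    using the IEC piecewise curve; clamps out-of-gamut input first."""
    v = max(min(v, 1.0), 0.0)
    V = 12.92 * v if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055
    return round(V * 255)
```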

And you might be interested in my answer to this other contrast question here on Stack.

Cheers.
