When RGB is Not Enough, Redux

Here’s a simple example of when the RGB color model fails to accurately model the real-life interaction of light and color.

[Photo: a low-pressure sodium vapor lamp]

If you drive through any of the tunnels that cut through the Appalachian Mountains on the U.S. East Coast, you’ll likely be greeted by the ugly yellow-orange glow of low-pressure sodium vapor lamps.

LPS lamps emit nearly monochromatic light at a single yellow-orange wavelength (about 589 nm). As a result, in situations where they are the only light source, such as deep within a mountain tunnel, everything loses its own color and instead becomes a shade of yellow-orange. A car which had a lovely blue hue in the full-spectrum light of the sun will suddenly look near pitch-black once you enter the limited-spectrum lighting of the tunnel. A white car, which reflects light at many wavelengths, will be much more visible, but still entirely yellow-orange. A red car or a green car would probably be just a little more visible than the blue car, but you’d be unable to tell that they were red or green if you hadn’t seen them in daylight.

If we wanted to make a 3D animation of a car going through a tunnel, it wouldn’t be enough simply to make all the lights in the tunnel yellow-orange. In the RGB model, yellow-orange light is just a mix of red and green light; if we sample a pixel from the photograph above, we find that its color makeup is R: 98%, G: 67%, B: 0%.

Unlike in real life, where a red car would appear to be a very dark yellow-orange, a red car in our computer model would still look like a red car. A green car would be a darker shade of green under this faux yellow-orange light, but it would still be green.
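To make that concrete, here’s a minimal sketch of how a typical RGB renderer shades a surface: each channel of the light is multiplied by the corresponding channel of the surface color. (The car colors below are made up for illustration.)

    # RGB shading: reflected = light * surface, channel by channel.
    def shade(light, surface):
        return tuple(l * s for l, s in zip(light, surface))

    sodium = (0.98, 0.67, 0.00)   # the yellow-orange sampled from the photo

    print(shade(sodium, (0.05, 0.05, 0.80)))  # blue car:  ~(0.049, 0.034, 0.0) -- near black
    print(shade(sodium, (0.80, 0.05, 0.05)))  # red car:   ~(0.784, 0.034, 0.0) -- still red
    print(shade(sodium, (0.05, 0.80, 0.05)))  # green car: ~(0.049, 0.536, 0.0) -- still green

The blue car happens to come out about right, but the red and green cars keep their hues: exactly the mismatch described above.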

If we’re only concerned with the emotional impact this unnerving yellow-orange scene will create in the viewer, we’d probably end up just faking it in post-processing: desaturate all the colors and throw a solid yellow-orange layer with a multiplicative blending mode on top.

That would certainly succeed in turning everything the same hue, and would probably be satisfactory for most artistic purposes, but it wouldn’t be an accurate model of the real world. Even a more complex filter, which measured each pixel’s color-space distance from yellow-orange in order to derive its luminosity, wouldn’t be quite right.
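For the record, here’s roughly what that post-processing fake looks like per pixel, and where it goes astray (a sketch; the luma weights are the standard Rec. 601 ones, and the yellow-orange constant is the sampled value from above):

    # Fake sodium lighting in post: desaturate, then multiply by yellow-orange.
    SODIUM = (0.98, 0.67, 0.00)

    def fake_sodium(r, g, b):
        luma = 0.299 * r + 0.587 * g + 0.114 * b   # Rec. 601 desaturation
        return tuple(luma * c for c in SODIUM)

    # The blue car comes out at ~14% brightness -- far brighter than the
    # near-black it would be under a real sodium lamp.
    print(fake_sodium(0.05, 0.05, 0.80))  # ~(0.13, 0.09, 0.0)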

The situation grows increasingly complex if we try to model scenes with multiple hues of strongly colored lighting at once. For example, suppose we wanted to recreate the (incredibly disturbing) psychedelic paddleboat scene from the classic 1971 film Willy Wonka & the Chocolate Factory. To properly model multiple light sources illuminating an object at different times, each source emitting a different mix of wavelengths of light, and each material reflecting certain wavelengths but absorbing others… yikes! Accomplishing such a task with the RGB model would be an astounding feat for even a seasoned professional!

When RGB is Not Enough

Over a year ago, my color studies class visited the Krannert Center for the Performing Arts at the University of Illinois for a guest lecture/demo on theatre lighting and the interaction of light with colored objects.

By the end of the demo, I had realized something: RGB just isn’t enough to describe the full range of color interaction. And it’s not just RGB that’s deficient; HSV, HSL, CMYK, etc. all suffer from the same limitation. In fact, any color model which tries to describe a color as a single point will fall short.

Why?

Color is conveyed through photons of various wavelengths. The full visible spectrum of wavelengths covers the familiar rainbow: the longest wavelengths we can see appear red; shorter wavelengths appear orange, yellow, green, and blue, with the shortest wavelengths we see appearing violet.

Generally speaking, any given light source will emit photons at multiple wavelengths (see, e.g., the diagrams in Wikipedia’s article on “emission spectrum”). A light which appears blue, for instance, might emit some violet, green, and even red photons, although the majority of the photons emitted will be blue.

Similarly, any given object will absorb photons at some wavelengths while reflecting photons at others. An object which appears green (say, a leaf) reflects primarily green photons, while absorbing most of the red and other photons.

It’s the fact that color exists across this whole spectrum that makes it impossible for RGB to model the full interaction of colored lights and objects: the RGB model completely ignores the effect of photons at all wavelengths except three.
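A faithful simulation therefore has to carry a whole spectrum around rather than three numbers. Here’s a toy version of the idea, with wavelengths sampled every 10 nm and completely made-up spectra:

    # Light and surfaces as power/reflectance per wavelength sample.
    WAVELENGTHS = range(380, 781, 10)   # 380-780 nm in 10 nm steps

    # A blue-looking lamp that still leaks a little at every wavelength,
    # and a leaf that reflects mostly around green (~500-570 nm).
    lamp = {w: 1.0 if 440 <= w <= 490 else 0.15 for w in WAVELENGTHS}
    leaf = {w: 0.8 if 500 <= w <= 570 else 0.05 for w in WAVELENGTHS}

    # Reflection is still a pointwise product, like RGB's per-channel
    # multiply -- just with ~40 samples instead of 3.
    reflected = {w: lamp[w] * leaf[w] for w in WAVELENGTHS}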

Consider this scenario: a tightly focused spotlight shines through three filters. Each filter is made of a special material which fully absorbs photons in a specific, tiny range of wavelengths, while letting all the rest pass through; the first absorbs reddish photons (say, in the range of 640 to 650 nm), the second absorbs greenish photons (540 to 550 nm), and the third absorbs bluish photons (470 to 480 nm). The spotlight uses a special bulb which emits light equally at all wavelengths in the visible spectrum.

When we look at the spotlight by itself, the light looks white. What color would it appear to be if we looked at it through all three filters?

If we were modelling this scenario on a computer using the RGB color model, the answer would be: black; void; nothing. No light at all would make it through all three filters. The first filter would absorb all the red, letting through the green and blue (the light would appear cyan at this point). The second would absorb all the green, letting through the blue. The third would absorb all the blue, letting through nothing at all.
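In code, the RGB version of that chain is unambiguous (a sketch; each filter is modeled as zeroing one channel):

    # White RGB light through three channel-killing filters.
    def filter_light(color, transmission):
        return tuple(c * t for c, t in zip(color, transmission))

    light = (1.0, 1.0, 1.0)                        # white
    light = filter_light(light, (0.0, 1.0, 1.0))   # absorbs red   -> cyan
    light = filter_light(light, (1.0, 0.0, 1.0))   # absorbs green -> blue
    light = filter_light(light, (1.0, 1.0, 0.0))   # absorbs blue  -> black
    print(light)                                   # (0.0, 0.0, 0.0)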

In the real world (or at least, a real world with these magical light bulbs and filters), you would see… white light. It would be only insignificantly dimmer than the light that went in – even if we ignore such things as intensity falloff with distance – but it would still appear to be white light, even though all the photons at those specific red, green, and blue wavelengths had been filtered out. There would still be plenty of photons at other wavelengths to trigger the light receptor cells in our eyes; it’s doubtful we could even tell that photons with wavelengths of 640-650 nm, 540-550 nm, and 470-480 nm were missing.
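Run the same scenario with a sampled spectrum and almost all of the light survives. A sketch, using 1 nm samples over 380-750 nm and the notch ranges above (total energy is a crude stand-in for perceived brightness, but it makes the point):

    # The same filters in a spectral model: three 10 nm notches barely matter.
    WAVELENGTHS = range(380, 751)                     # visible range, 1 nm steps
    NOTCHES = [(640, 650), (540, 550), (470, 480)]    # the filters' absorption bands

    spectrum = {w: 1.0 for w in WAVELENGTHS}          # equal-energy white bulb
    for lo, hi in NOTCHES:
        for w in range(lo, hi + 1):
            spectrum[w] = 0.0                         # each notch removes its band

    remaining = sum(spectrum.values()) / len(spectrum)
    print(remaining)                                  # ~0.91: ~91% gets through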

I need to get back to work, so I’ll get to the point(s). (“It’s about time,” you say.)

  • RGB is Good Enough for most uses in CG.
  • For special cases where increased realism/control is desired, it would be possible to simulate color interaction using the full visible spectrum.
  • A new color format with values for each of ROYGCBVM (or some subset thereof) would be an excellent compromise between data size and designer control (a rough sketch follows this list).
  • The colors would need to be converted to RGB to be displayed on a monitor (or CMYK for print), but that could be saved for the last step.
  • Such a color format would be more future-proof, allowing for alternative photon-emitting devices and inks without excessively distorting color.
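
Here’s a rough sketch of what such a format might look like. Everything about it is invented for illustration, especially the mixing weights in the RGB conversion; a real converter would use measured primaries.

    # An 8-band color: Red, Orange, Yellow, Green, Cyan, Blue, Violet, Magenta.
    BANDS = "ROYGCBVM"

    class SpectralColor:
        def __init__(self, *values):
            assert len(values) == len(BANDS)
            self.v = list(values)

        def __mul__(self, other):
            # Light times reflectance, band by band -- same idea as RGB,
            # just with finer resolution.
            return SpectralColor(*(a * b for a, b in zip(self.v, other.v)))

        def to_rgb(self):
            # Made-up display weights; the lossy step, saved for last.
            r, o, y, g, c, b, v, m = self.v
            return (r + 0.8 * o + 0.5 * y + 0.5 * m,
                    g + 0.3 * o + 0.3 * y + 0.5 * c,
                    b + 0.5 * c + 0.7 * v + 0.5 * m)

    sodium_lamp = SpectralColor(0.3, 1.0, 1.0, 0.1, 0.0, 0.0, 0.0, 0.0)
    red_paint   = SpectralColor(0.9, 0.4, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0)
    print((sodium_lamp * red_paint).to_rgb())   # a dark orange-red, as in the tunnel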

Out-of-Tuning

Quick thought: A game where the music gets more out of tune and broken as you tumble towards defeat.

Meteos did this, sort of. It would shift between different music clips depending on the scenario: if your blocks were almost to the top of the screen, the music would turn frantic. But it was all pre-recorded clips mixed in real time, not music generated on the fly.

Super Mario Bros. did something almost sort of similar: when time was running out, the song would speed up to double-time to encourage you to go faster.

But I’m thinking of… well, something more like Eternal Darkness’s insanity meter. I don’t know if they did anything with the music in that game (I only played it briefly), but it would be fitting if, as your character started to go insane, notes in the music started to shift off-pitch a bit, or it occasionally hit a wrong note, etc.

This would definitely be easiest with MIDI- or MOD-style music, rather than MP3 or OGG, where all the waveforms are mixed together and hard (or impossible) to un-mix.
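With note-level data, the effect is only a few lines. A sketch (the melody, probabilities, and detune curve are all invented; insanity runs from 0.0 to 1.0):

    import random

    def midi_to_hz(note, detune_cents=0.0):
        # Equal temperament, A4 (MIDI note 69) = 440 Hz.
        return 440.0 * 2 ** ((note - 69 + detune_cents / 100.0) / 12.0)

    def play_note(note, insanity):
        # Occasionally substitute a wrong note as insanity rises...
        if random.random() < 0.1 * insanity:
            note += random.choice([-2, -1, 1, 2])
        # ...and let every note drift up to a quarter tone (50 cents) off.
        detune = random.uniform(-50.0, 50.0) * insanity
        return midi_to_hz(note, detune)

    melody = [60, 62, 64, 65, 67]   # a made-up C-major run
    for insanity in (0.0, 0.5, 1.0):
        print([round(play_note(n, insanity), 1) for n in melody])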