We can probably agree that this rectangle is pink.
Now, keep the rectangle on display, and stand up so that you’re looking down at your screen from a sharp angle. If you have a laptop, you can tilt your screen to achieve the same effect.
On many LCD screens the rectangle becomes bright blue. (More recent models with higher pixel density, including the MacBook Pros with Retina displays, have fixed this problem.)
Well, it’s not simply because your screen is flat. If you take a pink marker and color in a square on a piece of paper, you can tilt it all you like and the color won’t change.
To make matters even more confusing, try looking at the rectangle from the left or from the right: nothing changes. If you look from below, the rectangle glides into a shade of red instead of blue.
To understand why this happens, we need to talk about two aspects of your visual world: (1) brightness and (2) color. In particular, we need to talk about how your computer communicates that information to your eyes when displaying an image.
* * *
How bright an object is is not the same as how bright it looks.
When we say an object is bright, we mean that it’s giving off a lot of light. So, if Light Bulb B emits twice as much light as Light Bulb A, it would make sense to say that “Light Bulb B is twice as bright as Light Bulb A.” If we think of light as a stream of tiny particles called photons, then this is the same thing as saying that twice as many photons are coming from Bulb B as from Bulb A. To add some mathematician jargon to the mix, there is a linear relationship between how bright an object is and the number of photons it emits: five times the photons means five times as bright, half the photons means half as bright, and so on. So far, so good.
Now, when you look at a light bulb, photons wash into your eyes. Every fraction of a second, your eyes send a status report to your brain, which interprets the raw information and comes up with a conclusion about how bright the bulb is. If you stare at Bulb A for ten seconds, then stare at Bulb B for ten seconds, your eyes will have received twice the number of photons from B as from A, so you’d think that your brain would conclude that B is twice as bright.
Strangely, no. Your brain actually prevents you from noticing that “brightness” and “photon count” have a linear relationship. If the two bulbs are very bright to begin with, the message from your brain will actually be: “B’s not that much brighter than A.” If the two bulbs are very dim to begin with, the message will actually be: “B’s WAY brighter than A!” More generally: your brain is much more sensitive to differences between dark shades than it is to differences between light shades. There is not a linear relationship between how bright an object seems to us and how many photons it emits; there is instead what is called a logarithmic relationship. In other words: the brighter two objects are, the less pronounced the difference between them appears.
This is a very good thing. If our brains didn’t curb how bright an object can seem, the world would be overwhelmingly dazzling.
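If you like, you can see the logarithmic relationship in a toy sketch (the base-2 logarithm here is an arbitrary illustrative choice, not a physiological constant):

```python
import math

def perceived(photons):
    # Toy model: perceived brightness grows with the *logarithm* of the
    # photon count, not the count itself.
    return math.log2(photons)

# Bulb B emits 100 more photons than Bulb A, in two scenarios:
dim_diff = perceived(200) - perceived(100)         # "B's WAY brighter!"
bright_diff = perceived(10100) - perceived(10000)  # "B's not much brighter."
```

The same 100-photon difference feels enormous between two dim bulbs and nearly invisible between two bright ones.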
Looking at an image on your computer screen adds an extra layer of complexity to this whole process. Light from the object goes to your camera, which stores the information and passes it to your computer when you hit “download.” Your computer stores the image and displays it on command. Light isn’t going directly from the object to your eyes: it’s going from your computer screen to your eyes. Your computer, by “storing the image,” has essentially memorized what it looks like for you: all the colors, all the patches of dark and light.
Your computer has limited memory resources. It would be a waste to spend those limited resources on subtle distinctions between bright shades, because your logarithmic brain can’t distinguish between them anyway. So, when storing an image, your computer allocates more memory to keeping dark shades distinct than it does to keeping light shades distinct: it saves images logarithmically.
We didn’t evolve to stare at logarithmic computer screens, though; your brain applies its usual logarithmic correction to any photons you receive. If your computer sends you photons from an image that is already logarithmic, then your brain will over-compensate, and you will see too many dark shades and not enough light shades. The image will look distorted. To prevent this from happening, your computer converts its logarithmically stored image to a linear brightness scale, before displaying it to your eyes. The process of preparing an image for view is called gamma correction, and as long as you look at your screen face-on, it works.
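Here’s a rough sketch of that round trip, assuming a simple power-law gamma of 2.2 (real standards such as sRGB use a slightly more complicated curve with a linear toe):

```python
GAMMA = 2.2  # a common illustrative value; real displays vary

def encode(linear):
    # Storage: squash linear light (0.0 to 1.0) into a 0-255 code so that
    # more of the range is spent on dark shades.
    return round(255 * linear ** (1 / GAMMA))

def decode(stored):
    # Display: undo the squashing, recovering linear light for your eyes.
    return (stored / 255) ** GAMMA

half_as_many_photons = encode(0.5)  # stored as 186, not 128
```

Notice that half the photons of full white lands at code 186, well above the halfway mark: the codes below it are reserved for the dark shades your brain is so sensitive to.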
And what if you don’t look face-on? Well, try looking at this text from above and below. From above, the letters get bleached out, and from below they get darker and bolder. For a more dramatic example, let’s use this gradient, which has a range of brightnesses. Same procedure: look face-on, then from above, then from below.
Same thing. From above, you see more bright shades, and from below, you see more dark shades.
This is because of the gamma correction. When applying it, your computer makes two assumptions: (1) that your eye will apply its usual logarithmic correction, and (2) that you are looking at your screen face-on. (1) is guaranteed, but (2) is not, particularly when articles are published about what happens when you look at your screen from an angle. If you stand up and look down, the gamma correction is insufficient, which is why you see too many light shades. As you crouch and peer up at your screen, the gamma correction is too strong, which is why you see too many dark shades. It’s only “just right” face-on.
We’ve been thinking of light as a stream of particles. To talk about color, it’s easiest to think of light as a wave, instead:
If it disturbs you to switch so readily between the particle interpretation and the wave interpretation, don’t worry. It should. The marriage of these two interpretations disturbed the 20th century physicists who thought it up. It continues to surprise physicists today. For the purpose of this article, we’re just going to accept it as a fact of nature.
Color is determined by how stretched or scrunched a light wave is. The distance between two peaks is biggest for red waves (they are the most stretched out) and smallest for purple waves (they are the most scrunched). Here’s a visual:
If you’ve ever mixed paints, you know that certain colors can be mixed to create certain other colors. Clearly, colors are related to – and even made of – each other. The same is true of colored light, although it mixes differently than colored paint. Mixing all the colors of light yields white, instead of the gross brown color you get when you mix all your paints.
Your computer screen is a master color-mixer: using only red light, green light, and blue light as ingredients, it creates all the colors that you see. Look at this Venn diagram:
This guides how your computer mixes colors. As you can see, mixing red and green yields yellow, mixing red and blue yields magenta, and mixing blue and green yields a bright blue called cyan. Mixing all three in equal amounts yields white. Alternatively, you can get white with blue + yellow, with magenta + green, or with red + cyan. In other words: if you take blue away from white, you get yellow; if you take magenta away from white, you get green; if you take red away from white, you get cyan. These pairs (blue-yellow, magenta-green, red-cyan) are called color opponents, because the absence of one means the presence of the other.
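In 8-bit RGB terms, “taking a color away from white” is just a subtraction. A little sketch of the opponent pairs:

```python
def opponent(color):
    # A color's opponent is what's left when you take it away from white,
    # i.e. (255, 255, 255) minus the color, channel by channel.
    r, g, b = color
    return (255 - r, 255 - g, 255 - b)

blue, magenta, red = (0, 0, 255), (255, 0, 255), (255, 0, 0)

opponent(blue)     # (255, 255, 0): yellow
opponent(magenta)  # (0, 255, 0):   green
opponent(red)      # (0, 255, 255): cyan
```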
Armed with this understanding of how colors add and subtract, let’s take a look at your screen again. Your screen is made of lots of “picture elements” (pixels), each of which is made of three subpixels: a red subpixel, a green subpixel, and a blue subpixel. Each subpixel is a gatekeeper for its representative color: the red subpixels, for example, control how much red is able to pass through the screen to your eyes. If each subpixel lets all of its light through, the pixel looks white. If the red and blue subpixels let their light through, but the green shuts up shop, we see magenta. Similarly, if only the red subpixel lets its light through, we see a red pixel. The color of each pixel on your screen depends on what its constituent red, blue, and green subpixels are doing.
Each subpixel has remarkably precise control over exactly how much light passes through: it’s not just “all” or “none”. To understand how it does this, we can think of each subpixel as two layers of fences. The first layer has vertical slits. If the outgoing light wave is also oriented vertically, it can slot right through the fence. When it gets to the second fence, though, it’s in trouble, because the second fence layer has horizontal slits: the light wave is blocked.
The subpixel’s function as a gatekeeper comes from the layer of molecules that is sandwiched between the two fences. These molecules can rotate the light wave anywhere between zero and ninety degrees. At zero degrees, none of the light will make it through the second fence, since it will all still be vertically oriented. At ninety degrees, all of the light will make it through, since it will have started out vertically oriented and ended up horizontally oriented. Anywhere in between, and some of the light will make it through the second fence, but not all.
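In this idealized two-fence picture, the fraction of light that survives follows what physicists call Malus’s law (a sketch of the idea, not a spec for any particular screen):

```python
import math

def transmitted(rotation_deg):
    # Light leaves the first fence vertically oriented; the molecule layer
    # rotates it by rotation_deg; only the horizontal component of the
    # wave clears the second fence (Malus's law).
    return math.sin(math.radians(rotation_deg)) ** 2

transmitted(0)   # 0.0: fully blocked
transmitted(45)  # about 0.5: half the light gets through
transmitted(90)  # 1.0: wide open
```

Note that a 45-degree rotation is exactly what a subpixel running at “half brightness” would use.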
The fact that each subpixel can control exactly how much of its color gets through – that it’s not all-or-nothing – is how we get such a huge variety of colors. The total color we see from one pixel is determined by the precise light-rotating technique of each component subpixel.
Like the gamma correction, though, this process assumes that you’re looking at your screen face-on. When you look at your screen from an angle, the colors that were supposed to be blocked actually leak through.
Let’s get to the punchline. Why does the viewing angle matter? Well, the outer layer has horizontal slits. So, the light that emerges from your screen will always be horizontally oriented. Picture a gigantic fan, held parallel to the ground: it extends in all directions left and right, but not up or down. This is how light emerges from your computer screen: flat, parallel to the ground. So, if you move your head so that you’re above or below the screen, you dramatically reduce the amount of light that reaches you. Remember, though, that from this angle, blocked colors leak through!
Look back at our pink rectangle from before. Each pixel in that rectangle has the same color: a result of the combined efforts of red, green, and blue subpixels. Each red subpixel is letting 100% of its light through (in other words, is rotating the light the full 90 degrees, so that it can slip out of the second gate and into our eyes), while the green and blue subpixels are blocking half of their light: they rotate it part of the way, and only let 50% through. When you look from above, what was once blocked is now leaking: you see more green and blue than you did from face-on. Red was never blocked, though, so there’s no red left to leak: you don’t see it from above. The net effect: a combination of green and blue, and no red. That’s cyan! That explains why we see cyan from above.
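As a rough caricature of that leakage (just the logic of the paragraph above, not a model of a real display):

```python
# Fraction of each subpixel's light let through, face-on: pink.
face_on = {"red": 1.0, "green": 0.5, "blue": 0.5}

# From above, you mostly see what each subpixel *blocked*:
leaked = {name: 1.0 - amount for name, amount in face_on.items()}
# leaked == {"red": 0.0, "green": 0.5, "blue": 0.5}: green + blue, no red. Cyan.
```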
But shouldn’t that mean that we see cyan from above and below? Yes – if it weren’t for the gamma correction. If you recall, the gamma correction controls brightness. Face-on, the “bright” color is red (since more of it is let through) and the dimmer, darker colors are green and blue. From above, the opposite is true.
Now, recall that from above, the gamma correction undercompensates: there’s too much sensitivity to light shades. That reinforces the cyan we see. From below, though, the gamma correction overcompensates: there’s too much sensitivity to dark shades! Too much sensitivity, in this case, to red.
And that’s what we see.