Finding your camera’s “truth”
Something I teach, write, and lecture on frequently (ad nauseam?) is the photographer’s obligation to understand, not fight, the camera’s vision. Some people seem to get this; others, not so much. So I’ve decided to try a slightly different tack.
Visual “Truth” is relative
Without getting too philosophical, it’s important to understand that, like your camera, your view of the universe is limited and interpreted. In other words, there is no absolute visual truth. Instead, we (you, me, and our cameras) each have our own view of the world that’s based on many factors–some we can control, others we can’t. When you look through a viewfinder, the more you turn off your visual biases and understand your camera’s, the more successful your photography will be.
Before lamenting your camera’s limitations, pause to consider that what you and I see is incredibly limited as well. The portion of the electromagnetic spectrum visible to the human eye is a minuscule part of the infinite continuum of electromagnetic radiation bombarding each of us, every instant of every day. For example, X-ray machines “see” waves in the one nanometer (one billionth of a meter) range; TVs and radios “see” waves that are measured in centimeters; humans, on the other hand, only see waves between (about) 400 and 750 nanometers.
Using this knowledge, astronomers peer into space with tools designed to see objects at wavelengths invisible to us. X-rays allow doctors to view bones hidden beneath opaque skin, and night vision technology uses “invisible” (to us) infrared radiation (heat) to see objects in complete darkness. In other words, in the grand scheme of things, there’s no single absolute visual standard–it’s all relative to your frame of reference.
The camera has its own frame of reference. While it records more or less the same visible spectrum our eyes do, the camera is missing an entire dimension: depth. Not only that (since we’re not talking about movies here), a camera only returns a snapshot of a single instant. And we all know about limited dynamic range and depth of field.
Despite these differences, photographers often go to great lengths to force their camera to record what their eyes see. Not only is this impossible, it doesn’t take advantage of the camera’s ability to see things in ways we don’t.
Our visual input is interpreted before we perceive it, in much the same way a camera’s input is processed before it is output (to a monitor, printer, or whatever). Visual processing happens in our brain, which makes adjustments for things like color temperature, perspective, motion, and so on.
Likewise, every photograph must be processed (interpreted) in some way before it can be viewed, either by the camera (if the camera gives you a JPEG or TIFF), or by the photographer, using Photoshop or some other processing software.
In most ways, the eye’s ability to capture light exceeds that of even the best cameras. On the other hand, the camera can do a few things our eyes can’t: In the image above, captured a year ago at Pfeiffer Beach on the Big Sur coast, I used my camera’s ability to accumulate light to reveal things that, while invisible to my eye, were still quite real.
According to the EXIF data (try getting your eye/brain to record that), the sun had set twenty minutes prior, but my camera was still able to see in the limited light. This twenty-second exposure revealed more detail than my eye registered. In doing so it smoothed the surf into a gauzy mist, and captured reflected color lost in my visual darkness.
Another thing I really like about my camera’s take on this scene is the way it reveals the transition of light and color as the view moves away from the sun. Though the eye does register it, our brains, influenced by the subconscious misperception that a cloudless sky is a uniform sky, often overlook subtle differences like this. But capture it in an image and the transition is both striking and beautiful.
So what about the blurred water?
People who criticize blurred water images as “false” because that’s not the way water is completely miss the point (I won’t get into the whole cliché argument here, which has more validity). My question to them is, how would you choose to capture water? (It’s a trick question.) When they answer “frozen sharp,” I ask them how many times they’ve actually seen a wave or water droplet suspended in midair. (Checkmate.)
The point is, a still camera simply “sees” motion differently than we do. Rather than holding our images to an unattainable human standard, we should feel free to appreciate and convey our cameras’ unique perspective. In this Pfeiffer Beach scene, I like the way smoothing the water to an ethereal gauze more accurately conveys the inviting mystery of the sea.
What is real?
Is this image real? While it’s nothing like what I saw, it’s still a very accurate rendering of my camera’s reality. Understanding my camera’s vision enabled me to share a perspective that expands my limited vision and transcends human reality. Pretty cool.