It was just after midnight when we parked near the cemetery on the outskirts of Snohomish, Washington, and began our assignment. As Team 7, Erin and I had been tasked with searching the grounds of the cemetery: 25 acres of grass, graves, and tombstones.
After putting on radio harnesses and helmets with headlamps, we each grabbed an extra handheld flashlight and began to systematically grid search over well-kept lawns. Our bright lights stabbed out in the dark, sweeping across open spaces and hovering more closely around shrubs and trees. We joked that it would not be long before someone called the police to report us.
Tonight, my teammate and I were searching for a missing teen – possibly lost or injured, or possibly just running away from home. As we neared the east side of the cemetery, we encountered a border of trees along a stream. Realizing this would be a high-probability area, we probed carefully and attempted to access the stream where possible.
On the open lawns, we spaced ourselves widely, knowing that our lights would clearly illuminate the spaces between us. By 2:00 AM, we had covered the northern two-thirds of the cemetery grounds. Eerie wisps of light fog had begun to settle here and there when suddenly, a faint silvery shape floated up from a tombstone 20 yards ahead of us.
As the hairs on my neck stood at attention, we both froze, trained our lights, and strained our eyes at the ghostly apparition wafting before us. When our hearts resumed beating, we somehow mustered an ounce of courage, and hesitantly walked forward.
Only to realize we were looking at a mylar balloon, tethered as a memorial to the gravesite just before us.
But, in those trembling seconds before we recognized the balloon, what did Erin and I really see? Did our brains tell us we were seeing a ghost?
How does the science of visual perception help us understand what searchers see (or don’t see) on our missions?
Most people have the misperception that the eye functions just like a camera – faithfully transmitting an image of the world to the visual cortex in our brains. Decades of careful research, however, including Lettvin’s seminal paper “What the Frog’s Eye Tells the Frog’s Brain,” have shown that visual perception is not a camera system but a cognitively biased information-processing system.
What this means (simplified drastically) is two things:
(1) You see what your brain expects to see.
When you are walking through the spooky graveyard in the middle of a dark night, and all of a sudden see a silvery apparition floating before you, your visual cortex checks the “Ghost!” box, and sends that information on up the line. It matches your cognitive expectations with the visual input. At that moment, you’re totally convinced you’ve seen a ghost.
Similarly, on two of the three searches when we encountered a dead body lying on the ground, my partners’ brains transformed a totally non-normal sight into a more normal one. “What’s a mannequin doing out here?”
That your brain actively interprets what your eyes detect can work against you on a search. If your brain is expecting to see just one more of hundreds of brush-covered logs in the woods that you are searching, your odds of detecting a person lying in the brush may sadly be reduced.
(2) You don’t see what you don’t expect to see.
This second limitation was well illustrated by a simple training exercise that I ran several years ago. I prepared a 200-yard trail by randomly placing 20 objects that might have been left by a lost hiker. These included items like a small water bottle, a small day pack, a ball cap, and a set of keys. None of the clues were concealed; most were in plain sight along the side of the trail. The SAR students were then instructed to walk the route, being alert for clues, and to record each item they spotted. The weather was good, the students were rested, and the pace was relaxed.
How did our searchers do?
Not too well.
On average, students missed over 30% of the clues. Some even missed the key chain placed directly in the middle of the trail – they walked right over it. These disappointing find rates show up over and over in such training exercises. Couple this with darkness, fatigue, and stress, and you can begin to understand the challenges and limitations faced by searchers. If you’re lost in the woods waiting for SAR to find you, this is sort of depressing to think about.
In formal search theory, the side-to-side detection range of a searcher is modeled by the concept of Effective Sweep Width. Estimates of sweep width are used to assess how easy it would have been to detect what we’re looking for and how effectively an area has been covered. Research has shown that sweep width is affected by:
- Terrain and foliage
- The visibility of the lost subject
- Searcher fatigue
- Darkness
- Searcher age
Intuitively, the concept of sweep width is easy to understand: If we’re looking for a giant orange beach ball in an open field on a sunny day, our range of detectability will be quite high. (Sadly, we don’t often encounter these conditions on real missions.) Conversely, if fatigued teams have been searching all night in the dark for an unconscious hunter in full camo, obscured by brush . . . well, you can imagine how that goes.
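For readers who like to see the numbers, here is a minimal sketch of how sweep width feeds into search planning, using the standard exponential detection model from search theory (coverage equals effective sweep width times track length divided by segment area, and probability of detection is roughly 1 − e^(−coverage)). The figures below are purely hypothetical, chosen only to show the shape of the relationship.

```python
import math

def coverage(sweep_width_m, track_length_m, area_m2):
    # Coverage = (effective sweep width x total distance searched) / segment area
    return sweep_width_m * track_length_m / area_m2

def pod(cov):
    # Exponential (random-search) detection function: POD = 1 - e^(-coverage)
    return 1.0 - math.exp(-cov)

# Hypothetical numbers: a 25-acre segment (~101,000 square meters),
# two searchers each walking about 3 km of track, and an assumed
# 20 m effective sweep width in open terrain.
area_m2 = 25 * 4046.86       # acres converted to square meters
track_m = 2 * 3000           # total meters of track walked by the team
w_m = 20                     # assumed effective sweep width in meters

c = coverage(w_m, track_m, area_m2)
print(f"Coverage: {c:.2f}  POD: {pod(c):.0%}")
# With these numbers, coverage is about 1.2 and POD about 70%.
# Halve the sweep width (darkness, fatigue, camo) and POD drops to roughly 45%.
```

The point of the exercise is not the exact percentages, but how quickly detection probability falls when sweep width shrinks and the area stays the same.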
Understanding visual perception also has important implications for how we train searchers. We know that it is important to “visually prime” our searchers’ brains to detect what we seek. On a recent search for criminal evidence, we showed searchers a picture of exactly the brand (and color) of knife that had been used in a murder. We also cautioned them that the knife might be seen edge-on, stuck in the ground, or lying at an odd angle. On this search, that pre-exposure helped our searchers see what their brains expected to see – they found the knife in a grassy field and helped put a murderer in jail.
To improve the “search” in Search and Rescue, it’s important to understand the mechanisms and limitations of human visual perception.
On the positive side, humans are excellent at detecting color and motion. We’re pretty good at recognizing what’s familiar to us. During daylight hours, we also have pretty good “foveal” (central) vision – tightly-packed retinal cones that allow us to see with great acuity. We can train searchers to take advantage of these abilities.
On the negative side, human visual perception is hampered by poor night vision, by lapses in attention, and by visual distractors. Our vision is also affected by a brain that works hard to interpret what our eyes tell us – sometimes getting things right, but sometimes getting things wrong. We can train searchers to mitigate the impact of these factors. What the searcher’s eye tells the searcher’s brain is both wonderful and imperfect, and can help us better understand both our successes and our failures when looking for the missing.