Getting around by sound: Human echolocation

By Greg Downey

As any fan of the adventures of Daredevil knows, being blind in comic books can give you superpowers.  Matt Murdock was blinded in an accident with radioactive waste that befell him because he tried to save a blind pedestrian from the truck carrying the waste (ah, the irony…). Murdock developed a kind of ‘radar’ sense that allowed him to prowl Hell’s Kitchen, rooting out the miscreants and lowlifes who, like the blind Man Without Fear, preferred to lurk in the dark.

Although his personal life proved that nice guys often finish, if not last, certainly with a heavy burden of angst and personal tragedy, Daredevil built upon the observation that deprivation of one sense can lead to heightened ability in others.

Although the Man Without Fear may seem implausible, in fact researchers have examined a number of blind individuals who seem to develop extraordinarily acute echolocation, a kind of active sonar that they use by clicking to produce echoes from their surroundings.  In a recent issue of PLoS ONE, Lore Thaler from the University of Western Ontario, with Stephen Arnott and Melvyn Goodale, reports on brain imaging research that tries to sort out how individuals who can echolocate – who have what one blind activist calls ‘flash sonar’ – accomplish this perception neurologically. Do they use an especially acute sense of hearing, or do they develop another kind of sense, one able to transform echoes into spatial perception?

What the researchers found, in short, is that blind individuals who could echolocate did not really have better ‘hearing’; on normal tests of hearing acuity, they scored the same as two sighted subjects who could not echolocate.  However, when a recording had echoes, parts of the brain associated with visual perception in sighted individuals became extremely active, as the echolocators were able to extract information from the echoes that was seemingly not accessible to the control subjects who were sighted.

Although I want to review the results from the recent article, I’m actually interested more generally in several things I think we can learn from human echolocation:

  • Sensing is broader than perception; that is, our nervous system may react to many things that we are not consciously aware it is noting in any meaningful way.  Sensation is ‘bigger than’ consciousness (and I realize I’m using both terms loosely).
  • Human echolocation highlights, again, that the anthropology of the senses needs to realize that the theory of ‘five senses,’ an idea that, it seems to me, we really get from Aristotle (although I have no interest in digging any deeper into the intellectual history of the concept), is a bit of cultural common sense and not at all a scientific approach or a reflection of verifiable psychological or phenomenological reality.
  • Human echolocation is a capacity of any human being, but the extraordinary skill shown by exemplary practitioners like Daniel Kish and Ben Underwood requires much more than just a human nervous system and the right training: the skill requires a community that ‘gets it’ and supports the capacity.

For me, echolocation is not simply an oddity or a quirky case for the vast human carnival.  Rather, human echolocators tell us something fundamental about the brain’s extraordinary flexibility and its power to squeeze perception out of a range of information streams, some of which are normally non-conscious to us.  But plasticity follows very specific patterns, which helps to explain why the Canadian researchers report in PLoS ONE that they found activity in certain parts of the visual cortex.

The echolocators

If you’re not already fascinated by echolocation, the following clips, I hope, will ignite some spark of ‘gee whiz-ism’ in even the most blasé reader.

In this first video clip, CBS covers the story of Ben Underwood, a young man who could do remarkable things with echolocation.  Ben passed away in 2009 at the age of 16, when the cancer that had led to his blindness returned.  I remember the first time I saw this video clip and, even though I was already aware of human echolocation, some of what this young man could accomplish simply blew me away.  It’s one thing to hear about subjects’ accuracy locating a 25 cm target at 4 meters distance, another to watch a young man bean a reporter with a throw pillow across a living room.  At some points, it’s simply hard to believe how acute Ben’s echolocation was, easy to wonder if the video is some kind of hoax.

The second clip is of researcher, activist, and echolocation teacher Daniel Kish, who, in addition to having an extraordinary capacity to maneuver by echolocation, is also one of the subjects studied in the research reported in PLoS ONE. Kish was partially blind at birth and fully blind after 13 months of age, when both eyes were removed due to retinoblastoma.

Kish is so remarkable, and such an interesting advocate for encouraging the blind to live as independently as possible, that I’ve included a number of links to his websites and to stories about him at the end of this piece (such as a link to World Access for the Blind).

As the authors of the PLoS ONE article write, their blind subjects used echolocation constantly in daily life, to get around unfamiliar places, to explore cities, hike outdoors, mountain bike, and even play basketball.

Senior author Mel Goodale, Canada Research Chair in Visual Neuroscience, and Director of the Centre for Brain and Mind, says, “It is clear echolocation enables blind people to do things otherwise thought to be impossible without vision and can provide blind and visually-impaired people with a high degree of independence.”  (from Science Daily)

How do they do it?

One of the interesting wrinkles in the study of echolocation by the blind is that, until some path-breaking research in the 1940s, blind individuals and psychologists alike were not sure how the blind were able to get around as well as they did.  Many blind people reported what psychologists came to call ‘facial vision,’ a sensation supposedly of pressure on the face that let them know that they were approaching an obstacle while walking.

Karl M. Dallenbach

Early experiments led by Karl Dallenbach (Cotzin and Dallenbach 1950; Supa et al. 1944; Worchel and Dallenbach 1947; click here for a black and white, silent video of their experiments) on ‘facial vision’ found that blind individuals could detect a board placed in their paths as they walked toward it, but that blindfolded individuals with normal sight quickly also started to detect the obstacle on repeated trials.  By thirty trials, the blindfolded subjects were as accurate as the blind in walking up to an obstacle without touching it.

But the researchers also found that the channel for perception was not touch or a mysterious kind of facial ‘sight,’ but instead linked to the sound emanating from the subjects and reflecting off the target obstacles. When Dallenbach, Supa and colleagues put their subjects in socks on carpeted floors, or interfered with their hearing, ‘facial vision’ dropped off.  The sound of hard-soled shoes on the floor was likely the source of their perception (see Stoffregen and Pittenger 1995 for an excellent review of early research on echolocation).

Stoffregen and Pittenger (1995: 209) point out that echolocation is a distinctive form of perception because the sense is a ‘closed-loop system,’ that is, ‘stimulus energy that is generated by the animal propagates into the environment, is structured by the environment, and returns to receptors.’  In other words, to echolocate, people and other animals must produce as well as perceive sound. They are using active rather than passive sonar.  By comparing the outgoing energy with the incoming reflection, echolocators can perceive their environment (although some echolocators evidently can also pull information from passively perceived echoes, that is, from sounds that they themselves do not create).
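For readers who like to see the principle in code: comparing an outgoing pulse with its incoming reflection is, in engineering terms, a matched filter. Cross-correlating the received signal with the known click and finding the peak recovers the echo delay, and hence the distance. Here is a minimal sketch in Python/NumPy; the click shape, sample rate, distance, and noise levels are all illustrative assumptions of mine, not anything from the studies discussed here:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 44_100                  # sample rate, Hz (assumed)
v = 343.0                    # approximate speed of sound in air, m/s

# Outgoing 'click': a short noise burst
click = rng.standard_normal(64)

# Simulate a faint echo from a reflector 1.5 m away:
# round-trip delay of 2 * 1.5 / v seconds, in samples
true_delay = int(round(2 * 1.5 / v * fs))
received = np.zeros(4096)
received[true_delay:true_delay + click.size] += 0.2 * click  # echo
received += 0.01 * rng.standard_normal(received.size)        # background noise

# Matched filter: cross-correlate the received signal with the known click
corr = np.correlate(received, click, mode="valid")
est_delay = int(np.argmax(corr))

distance = est_delay * v / (2 * fs)   # recovered distance, ~1.5 m
print(est_delay, true_delay, distance)
```

The point of the sketch is only the comparison step: because the system knows exactly what it emitted, even a weak echo buried in noise produces a sharp correlation peak at the right lag.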

The active nature of this sense means that the perceiver can query the environment, as we see in the video clips, pushing out more sonic energy, clicking more frequently or loudly, to generate greater amounts of reflected incoming perceptual data.

Echolocation relies upon the fact that sound travels at around 340 metres per second in air, so if you produce a noise close to your ears, and that sound reflects back off a solid surface, a slight time delay — called the ‘pulse-to-echo’ gap — will separate the original sound arriving in the ear from its echo. The time lag between the original sound and the echo, however, can be minuscule, so small that, subjectively, you may not realize that any pulse-to-echo gap exists, as the two sounds perceptually blend into one.  Nevertheless, your sense of hearing can distinguish the two sounds, in spite of the fact that you have no awareness of their distinctiveness.

From reports of consciously observed sound sensitivity, humans should not be able to perceive objects at short distances (less than two metres) because the sound and echo are subjectively indistinguishable (reported in Stoffregen and Pittenger 1995: 189).  However, early research by Kellogg (1962) found that, in fact, blind subjects were able to detect the distance to an obstacle between 30 and 120 cm to within 10 cm; in other words, although humans can’t subjectively hear the sound separation, their ability to judge distances suggests that they were sensitive to pulse-to-echo delays of around .0003 seconds (that’s .3 millisecond—no kidding).
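The arithmetic behind these figures is simple enough to check yourself. A back-of-the-envelope sketch in Python, assuming sound at roughly 343 m/s in air (whether you count one-way or round-trip travel over the 10 cm resolution explains why the just-detectable delay comes out between roughly 0.3 and 0.6 ms):

```python
V_SOUND = 343.0  # approximate speed of sound in air, m/s

def pulse_to_echo_gap(distance_m, v=V_SOUND):
    """Round-trip delay between a pulse and its echo off a reflector."""
    return 2.0 * distance_m / v

# At 2 m, the range below which pulse and echo subjectively blend:
gap_2m = pulse_to_echo_gap(2.0)      # ~0.0117 s, i.e. roughly 12 ms

# Kellogg's subjects judged distance to within ~10 cm; moving a
# reflector 10 cm farther away changes the round-trip delay by:
delta = pulse_to_echo_gap(0.40) - pulse_to_echo_gap(0.30)  # ~0.00058 s

print(f"gap at 2 m: {gap_2m * 1000:.1f} ms")
print(f"delay change for a 10 cm shift: {delta * 1000:.2f} ms")
```

Either way, the delays involved are far below anything subjects report being able to hear as two separate sounds.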

The bat’s pulse-to-echo sensitivity is even finer: bats can detect delays of around 10 to 12 nanoseconds.  That gap is so short that it’s actually quicker than the action potential of neurons, prompting one neuroethologist, as Stoffregen and Pittenger (1995: 189) cite, to observe that ‘the neural mechanisms responsible’ for the acuity ‘are entirely obscure’ (Camhi 1984: 180).

The pulse-to-echo gap, however, is not the only acoustic property that can give an impression of space: the volume, pitch, interference pattern, and timbre of the echo can all be affected by reflection, and all can carry information about natural settings.  We sometimes judge how far away a familiar sound is by how loud it is, remaining calm in spite of traffic noise, for example, because the din is sufficiently quiet to signal our distance from an intersection.  (For a much longer discussion of the information potentially available in reflected sound, see Stoffregen and Pittenger 1995).

Ben Underwood’s apparent ability to play video games by sound on the video clip above, for example, cannot be through echolocation (he even shows off by facing away from the television as he plays).  Although I’m stupefied by his ability – who knew that the sound was even reliable enough to give this kind of information?! – this ability to locate on-screen targets would have nothing to do with echolocation.

How would echolocation feel?

The gap between conscious perception and activity-based sensation brings me to my first neuroanthropological point.  In a couple of earlier pieces, I’ve explored the gap between what we can consciously ‘see’ and what sort of information our visual systems (there are at least three) give to our nervous system for its use (see Downey 2007).  In one chapter, I pointed out how the perception-sensation gap posed problems for culture theory, but here I want to focus on the relationship between sensation and consciousness: we have to realize that we ‘sense’ many things that we do not consciously perceive.

In his landmark work, ‘What Is It Like to Be a Bat?’, philosopher Thomas Nagel holds up the bat, especially its perception of the world through echolocation, as ‘a fundamentally alien form of life’: ‘bat sonar, though clearly a form of perception, is not similar in its operation to any sense that we possess, and there is no reason to suppose that it is subjectively like anything we can experience or imagine’ (emphasis in original, 1974: 438).  Ironically, Nagel was just plain wrong, but why he was wrong is so damn interesting!

As philosophers Schwitzgebel and Gordon (2000) point out, normally sighted humans do echolocate, although not consciously and to a much lesser degree than people like Daniel Kish and Ben Underwood.  Nevertheless, we know of cases where people intentionally produce sounds to get information about the environment, such as carpenters knocking on sheetrock to locate supporting studs or on wood to see if it’s ‘sound,’ doctors percussing the abdomen to assess internal organs, blacksmiths and others striking metal to learn more about its quality, and so forth (Stoffregen and Pittenger 1995: 182).

The odd thing is that, even though we all likely have some sense of space from sound (the reason that an acoustically odd space can surprise us, that we can hear a bottle filling, and that we can get a sense of a concert hall in a sound recording from that space), we do not often perceive this awareness consciously, Schwitzgebel and Gordon (2000) assert.  (Of course, arguably, neither does the bat in the rarefied way that we often use ‘conscious.’)

Even in Dallenbach’s early experiments, the very term ‘facial vision’ highlights how easy it is to confound the source of the information, and how sensation occurs with very limited consciousness.  In those early experiments, even blind individuals who depended upon incidental echolocation every day sometimes argued that they were feeling the presence of objects through the forehead or face.  One even insisted that the sound was ‘distracting,’ although the evidence clearly showed that, without the sound, perception of objects dropped off precipitously.

In other words, Schwitzgebel and Gordon argue that we have a perceptual conduit of which we are not terribly aware that actually conveys information to us.  Or, as Stoffregen and Pittenger (1995: 183) put it, we ‘may engage in echolocation on a regular basis without having any conscious awareness that we are doing so.’  Extraordinary echolocators like Kish, Bushway and Underwood are in part so surprising because we do not see that we too are doing something similar, albeit far less developed and largely unconscious.

Schwitzgebel and Gordon suggest that the failure to recognize the presence of echolocation, and the misunderstanding of the perception as ‘facial vision’ even among those blind individuals who depended upon the sense, reveal that we can have a ‘gross and pervasive error’ about our own perceptions.  As a person who studies athletes, I don’t find the possibility of this error surprising; research often shows that, in extreme cases, individuals are not doing what they say they are doing, such as ‘keeping your eye on the ball’ when batting in cricket, as my colleague John Sutton argues.

In fact, my point would be slightly different: that we are frequently unaware of how we are sensing, as our attention focuses much more on what we sense.  The blind individuals using echolocation weren’t that unusual when they misunderstood how they were sensing, because what they were sensing was far more important.  Experiments on phenomena like hearing-sight confounds – when you think you hear something that is not present audibly because of what you see – or on non-conscious bodily senses – like equilibrium and proprioception – all show that our nervous system does a lot of sensory processing without consciousness getting involved.  Rosenblum and colleagues (2000: 202) even opine that ‘this lack of conscious access to the sensory experience of echolocation might speak to the true perceptual nature (or automaticity) of the function.’ For this reason, the anthropological study of the senses needs to take into account the range of non-conscious sensory phenomena.

This conclusion allows us to offer a different sort of answer to the question that Nagel asks about the bat.  Although we cannot know what it is like to be a bat, we suspect that the bat is not perceiving its own shrieks; like echolocating humans (or normally sighted humans, for that matter) the bat is likely perceiving objects through echolocation, not perceiving the fact of echolocating itself.  Certainly, the sense modality affects the phenomenal qualities of everything in the environment, and on some level we cannot fully grasp the phenomenal reality of another species, but this is not because of the absolute alien-ness of echolocation.

In addition, as Stoffregen and Pittenger (1995: 186) highlight, many tests of perceptual acuity, because they ask subjects to report explicitly, cannot accurately assess perceptual ability that is more closely linked to activity, rather than awareness.  We may make errors perceiving things to report about them that we do not make in behaviour (such as when optical illusions do not fool our ability to judge distance).

Underestimating echolocation

Systematic misrecognition of echolocation, however, might extend to experimental settings as well.  I suspect that many of the experiments on the sense actually underestimate the capabilities of echolocating individuals because most of the tests are stationary or use recordings made in a single location.

If you watch the video of Ben Underwood, when he passes an object and can’t tell for certain what it is, he walks alongside the object, clicking.  This moving-sensing combination allows Ben to fill in a sonic image, getting a more three-dimensional and richer sense of what he passed by, just as changing your perspective on an object you’re looking at allows you to better discern what the object is by giving you more information.

In research reported in the early 1960s, Kellogg (1962, cited in Stoffregen and Pittenger 1995: 185) observed that during echolocation, blind subjects moved their heads, especially rotating them, whereas sighted subjects, given the same tasks while blindfolded, did not vary their head positions. Blind participants, more familiar with echolocation, may have realized on an intuitive level that they could gain additional information from shifting the relative position of their ears to the sound source (or they simply knew better how to make use of this information).  Asking subjects to hold their heads still was an ecologically invalid — that is, true only in a laboratory setting — erosion of their ability.  Similarly, Rosenblum and colleagues (2000) found that echolocation improved in briefly trained, sighted subjects when the individuals were allowed to move.

Neuhoff (2004) reviews research on spatial locations of sound sources more extensively, arguing that many of the research paradigms, although necessary to isolate different sonic factors that contribute to people’s ability to audio-locate, may actually cut down on important strategies for increasing accuracy. Schenkman and Nilsson (2010: 497), for example, found that echoes from a conference room supported more accurate echolocation than those from an anechoic chamber, suggesting that rich sensory environments were less ambiguous and difficult than artificially isolated sounds.  In fact, in the most artificial situations (extremely short sound bursts recorded in anechoic chambers), normal, sighted subjects had their best detection rates, exactly the opposite of the blind subjects who could make use of more complicated information.  Ironically, some of the test situations might disproportionately level the field between skilled and unskilled echolocators.

For example, in experiments in which subjects were asked to locate a sound in horizontal and vertical space by facing it, normal subjects were more accurate when sounds were close to the front and at face height; the more peripheral the sound source, the less accurate the localization.  Being able to move the head would allow a listener to quickly ratchet up accuracy as a sound continued.  Some early experiments even excluded individuals who vocalized at all (subjects could only use the sound of their footsteps), in one case excluding the most adept blind individual because she made some type of audible sound.

Echolocating in a big magnetic tube

All of this brings me to the current paper by Thaler, Arnott and Goodale (2011), including some of the challenges that they had to overcome in research design.  As if echolocation weren’t interesting enough (even if it’s not Daredevil), Thaler and colleagues managed to come up with a research design that allowed them to use fMRI to better understand the neurological correlates of human sonar.

Thaler and colleagues worked with Daniel Kish (referred to in the paper as ‘EB,’ as in ‘early-onset blindness’), who was 43 at the time of the research.  The other blind subject, Brian Bushway (called ‘LB’ — I’m thinking for ‘late-onset blindness’), was 27 when he was tested, blind from nerve atrophy since the age of fourteen.  (Thanks to Ed Yong of Not Exactly Rocket Science for getting a lot of the back story on the research.)  Bushway learned to echolocate from Kish and has since become quite accomplished.

But trying to study echolocation in a giant, clanging, donut-shaped magnet poses some challenges:

But there was a problem. MRI scanners are long claustrophobic tubes. Inside them, people have to wear ear protection and aren’t allowed to move their heads very much. They’re not very natural environments for sonar users. “I don’t like MRI’s, so that part was a drag,” said Kish.  (Ed Yong, Not Exactly Rocket Science)

The researchers had to create a virtual echolocation space inside the MRI tube, so they used recordings of the blind subjects’ own clicks after they found that both EB and LB could echolocate passively from inner-ear recordings (LB’s discernment dropped off slightly in the passive situation with playback of his own clicks).  They created what we might call ‘empty’ spaces with single objects in an anechoic chamber as well as ‘outdoor’ virtual acoustic spaces.

To study how the blind individuals could perceive objects, the subjects were placed in the Beltone Anechoic Chamber at the National Centre for Audiology in London, Ontario.  The chamber reflects virtually no sound, so when the subjects were asked to click, they would get virtually no information in the echoes except for a reflection of an object (either a cube or a safety helmet) placed in front of them.  The experimenters stood behind them (Daredevil would have perceived the experimenters for sure).  High-quality microphones were placed in each of the subjects’ ears so that the research team could get realistic recordings of the pulse-to-echo cycle to play back inside the MRI.  For outdoor settings, the experimenters created sonic ‘scenes’ in an enclosed courtyard: dual-ear recordings of 20 clicks in front of a car, a tree, and a lamppost.

Listening to echoes

Once the team had their recordings, and confirmation that the echolocators could in fact discern objects from inner-ear recordings, the team put the two subjects and two controls in the MRI.  I’ll let Ed Yong tell you what happened next:

As Kish and Bushway listened to a variety of different echoes, Thaler watched their brains in action, and compared them to two men who could see and couldn’t echolocate. When all four men listened to the recordings, their auditory cortex – the part of the brain responsible for hearing – lit up on the scans. That was expected.

But there was far more going on in Kish and Bushway’s brains. When he heard the sounds of click echoes, Bushway’s calcarine cortex – a part of the brain that normally deals with vision – lit up. Kish’s reacted even more strongly. And when they heard the sounds of echoes reflecting from moving targets, they showed activity in areas that deal with movement. Neither of the two sighted volunteers reacted in the same way – to them, the recordings were just noises.

Many studies have shown that the brains of blind people reorganise to adapt to their condition, and the areas used for vision take on new roles. But the calcarine cortex seemed to be specifically tuned to echoes, as opposed to other noises. It became far more active when Kish and Bushway heard the sounds of soft echoes than when they heard echo-less recordings, even though the auditory cortex reacted similarly to both sets of sounds. This suggests that both men have diverted a part of their brain, which would normally deal with sights, to handling the sound of echoes.

Both men responded in the same way, but Kish has thirty more years of experience with his sonar than Bushway, and started using it far earlier. It showed in the scans. Kish’s calcarine cortex was more strongly tuned to the sound of echoes, showing blazing activity compared to Bushway’s gentle simmer. It even showed the same handedness that the eyes of sighted people have for light. Echoes coming from the left triggered a response from his right calcarine cortex; those coming from the right triggered the left half. (Ed Yong, Not Exactly Rocket Science)

One of the most interesting wrinkles to me is what happened when the research team played recordings in which the echoes in the virtual sonic space had been scrubbed.  The brain imaging for the control subjects showed no difference, but, without the echoes, the echolocating subjects’ calcarine cortex showed no additional activity. The auditory cortical activity looked similar in all the subjects across the two testing conditions, but the echoes caused a change in brain activity for the expert echolocators. As Thaler and colleagues (2011: 6) explain, the heightened calcarine cortical activity ‘implies that the presence of the low-amplitude echoes activates ‘visual’ cortex in the blind participants (particularly in EB), without any detectable activation in auditory cortex.’

Yes, that V1. In the yellow...

The calcarine cortex isn’t just any old part of the visual cortex; located at the rear of the brain, the cortex on either side of the calcarine fissure is kind of like ground zero for the visual system, also referred to as V1 and as Brodmann’s area 17 (of course, the research shows why ‘visual cortex’ might be an awkward name if it’s also ‘echolocation cortex’ in some people).  The calcarine cortex in sighted subjects is spatially organized to match the retina, or ‘retinotopic.’

In other words, in the echolocators, echoes — and only echoes — were being processed in a part of the brain that is fundamentally visual, even structured retinotopically, in sighted individuals. A portion of the audible environment was shunted into the calcarine cortex for processing.  Echolocation was ‘located’ in the neural architecture not in auditory regions (which behaved normally), but in a separate part of the brain usually devoted to processing retinal information.

One implication of this neural response to echoes, for me, is to question whether echolocation is really ‘hearing’ in a neurological sense.  Certainly, the sense makes use of sound, but the echolocating calcarine network only responds to specific sorts of sound, does not appear to affect normal hearing, and uses a different neurological basis than normal auditory perception.  Although it’s not Daredevil’s radar sense, what Kish calls ‘flash sonar’ is not really ‘hearing’ from a neurological perspective; it is a peculiar sense all its own: trainable, surprisingly acute in experts, and capable of enabling skilled echolocators to do many tasks that they couldn’t do without it.

Why does echolocation use the ‘visual’ cortex?

The fact that individuals deprived of one sense experience cortical reorganization is well known.  For example, Merabet and Pascual-Leone (2010: 45f.) review a range of studies showing that blind individuals who read Braille adapt by devoting greater cortical area to sensory input from the fingertips they use to read, gain greater acuity of pitch perception (which correlates with reorganization of auditory cortex), and have better-than-normal spatial memory and navigational ability from memory, reflected in greater-than-normal hippocampal volume.

In blind individuals, ‘cross-modal’ adaptation in the visual cortex can also take place; areas that are linked to sight in others can instead be most active when identifying objects from touch, reading in Braille, or navigating with a ‘sensory substitution device’ (a portable sonar that acts like an ultrasonic cane by converting sonar images to touch information).  In deaf individuals, as well, cortical areas normally associated with hearing appear to be reallocated cross-modally for other sensory tasks (ibid: 46).

So one theory of cortical reallocation might simply be a sort of ‘use-it-or-lose-it’ principle; if your vision doesn’t use your visual cortex (because you’re blind), the region is simply taken over by another sense competing for cortical real estate.   To put it another way, parts of the brain deafferented by blindness or deafness, especially the sensory cortex, are available for recruitment to other neurological purposes.

Another possibility is that the reorganization is simply a skill-related change in the brain areas devoted to tasks at which some individuals are exceptionally well trained.  In this view, echolocators are just like jugglers or musicians or London taxi drivers who exercise a particular brain region so much that it grows, encroaching on its neighbours.

Alvaro Pascual-Leone and colleagues, however, have been proposing a different explanation for cortical reorganization that I think coincides neatly with the data on echolocation, offering a tidy account of how the brain learns to use echoes to perceive space.  Pascual-Leone has suggested that the brain’s organization is ‘metamodal,’ not so much linked to a particular sensory modality (vision, hearing, touch) but rather to the kind of information that these senses may convey. As Pascual-Leone and Hamilton (2001) explain it:

the brain might actually represent a metamodal structure organized as operators that execute a given function or computation regardless of sensory input modality. Such operators might have a predilection for a given sensory input based on its relative suitability for the assigned computation. Such predilection might lead to operator-specific selective reinforcement of certain sensory inputs, eventually generating the impression of a brain structured in parallel, segregated systems processing different sensory signals. In this view, the ‘visual cortex’ is only ‘visual’ because we have sight and because the assigned computation of the striate cortex is best accomplished using retinal, visual information. Similarly, the ‘auditory cortex’ is only auditory in hearing individuals and only because the computation performed by the temporal, perisylvian cortex is best implemented on cochlear, auditory signals. However, in the face of visual deprivation, the ‘striate cortex operator’ will unmask its tactile and auditory inputs to implement its assigned computation using the available sensory information.  (pp. 1-2 in the off-print available online)

Pascual-Leone and Hamilton concede that some sensory experiences may be unimodal, but many ‘operators’ do appear to have this multiple-input, metamodal structure, which we may perceive as sensory confounds or supplementation across different senses.

In the case of echolocation, the metamodal perspective would say that the brain areas that seek to perceive space will use whatever information they can get reliably, perhaps preferring and being biased toward retinal information but taking in audio information as well (even if you’re not aware of it).  When sight is taken away completely, the ability of this ‘spatial’ metamodal system to get its job done using acoustic information can be unmasked, and it may achieve levels of facility that appear at first glance superhuman (but turn out to be trainable given the right conditions).

(I wrote an early version of this argument in my piece on equilibrium, suggesting that Bach-y-Rita’s development of a ‘vestibular prosthesis’ that used an interface placed under the tongue was a remarkable example of how the brain areas responsible for maintaining balance could use any reliable stream of information, even if from an utterly unprecedented channel such as electrical stimulation on the underside of the tongue.  That earlier post can be accessed here, and I’m working on a couple of different versions for publication.)

In fact, judging from some of what echolocators can do, sound might be better than light at providing certain kinds of information about space (though certainly not all): sound travels around barriers, works in the dark, and may even allow experts better judgment of distance to targets.
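Since distance judgments come up repeatedly below, it may help to see how little arithmetic is involved: an echo’s round-trip delay translates directly into distance. A minimal sketch of my own (the 4 m figure is Schenkman and Nilsson’s disk distance; nothing here comes from the studies’ methods themselves):

```python
# Illustrative sketch: distance to a reflector from the pulse-to-echo delay.
# The sound travels out and back, so distance = speed * delay / 2.

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def distance_from_echo_delay(delay_s: float) -> float:
    """Distance in metres to a reflecting object, given the echo delay in seconds."""
    return SPEED_OF_SOUND * delay_s / 2

def echo_delay_from_distance(distance_m: float) -> float:
    """Round-trip echo delay in seconds for an object at the given distance."""
    return 2 * distance_m / SPEED_OF_SOUND

# A 0.3 ms delay corresponds to a reflector only about 5 cm away...
print(f"{distance_from_echo_delay(0.0003):.3f} m")       # 0.051 m
# ...while a disk at 4 m returns its echo after roughly 23 ms.
print(f"{echo_delay_from_distance(4.0) * 1000:.1f} ms")  # 23.3 ms
```

The striking thing is how short these delays are: the perceptual skill lies in extracting them, not in the physics.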

Echolocation isn’t just in your head

One of the points that we frequently have to make at Neuroanthropology is that the brain and nervous system do not exist in a vacuum; capacities that are ‘neurological’ also generally need the correct setting and social interactions in order to manifest, even very basic abilities like language (for example, see my earlier post, Life without Language).

Echolocation is an ideal example of the convergence required to realize a neurological function precisely because the majority of blind individuals do not develop the capacities that people like Daniel Kish and Ben Underwood do.  Simply depriving the calcarine cortex of visual stimuli does not automatically cause this part of the cortex to convert into a fully realized ‘echolocation cortex.’ In their research, Schenkman and Nilsson (2010) found that two of their ten blind subjects had exceptional echolocating ability, reliably perceiving a 50 cm aluminum disk at 4 m.  After reviewing their results, Schenkman and Nilsson concluded:

Some blind people may have developed remarkable abilities for the detection of repetition pitch as well as loudness discrimination, which may be the case for the two high-performing blind persons. They are both successful in their respective professions, and our impression is that they are active and mobile. They do not use a guide dog, and appear to be very attentive to acoustic information in the environment, including proficiency in object detection.  (2010: 496)

The advocacy of people like Kish and Underwood assumes that blind individuals and society alike can change so that echolocation becomes more pervasive; I daresay that they are successfully shifting the neuro-cultural ‘ecology,’ making it more likely that individuals will realize the potential to echolocate.

First, the ability requires active realization and exploration. You’ve got to start clicking and keep doing it so that you can start to hear how the sound behaves in relation to space.  Remember, these children are blind when they’re exploring this sense, so they have to find other ways to get feedback; ‘click – (echo), click – (echo),’ then walk over and explore the space.  The novice echolocators can’t confirm what they are ‘sensing’ through sight, so they’ve got to find another way to bootstrap echolocation.

The whole process can be derailed if the exploration is cut off.  Kish told Michael Finkel about growing up clicking:

Kish can hardly remember a time when he didn’t click. He came to it on his own, intuitively, at age two, about a year after his second eye was removed. Many blind children make noises in order to get feedback — foot stomping, finger snapping, hand clapping, tongue clicking. These behaviors are the beginnings of echolocation, but they’re almost invariably deemed asocial by parents or caretakers and swiftly extinguished. Kish was fortunate that his mother never tried to dissuade him from clicking. “That tongue click was everything to me,” he says.  (from Men’s Journal by Michael Finkel)

Daniel Kish teaching echolocation

The exploratory process requires, or is at least greatly facilitated when, the people around the blind individual allow, even encourage, these sorts of exploratory actions. The video clip about the late Ben Underwood makes a very similar point, highlighting the importance of his mother, Aquanetta, in encouraging his independence and willingness to take risks in order to develop his extraordinary skill.

Kish and other activists, then, must not only teach young people to echolocate, but also work very hard at getting parents, families, and even the blind community to open up to this process so that young people can gain expertise.  The echolocation is part of a larger model for socializing blind children.  As Finkel writes:

Echolocation is an essential element of what Kish terms “a holistic approach” that also includes lessons on comfortable social interactions, confident self-image, and nonvisual conversational cues (a head turn can be noted by the sound of hair swishing; arm gestures by the whisper of skin brushing against clothing; the shift of someone’s body by the creaking of furniture).

The story in Men’s Journal highlights how misplaced sympathy, as far as Daniel Kish is concerned, is part of a social system that may conspire to debilitate blind people, too often promoting a kind of solicitude toward ‘disabled’ people that denies their potential to become highly skilled.  Not known for his tact, Kish once wrote back to a middle school that had told him about a project in which students would wear blindfolds and lead each other around: ‘I have felt beaten and pummeled by many things… misplaced kindness foremost among them.’

Young people, says Kish, are especially hard-hit. “Most blind kids hear a lot of negative talk. ‘Don’t do this, don’t do that, don’t move. No, here, let me help you.’ The message you get, if you’re blind, is you’re intellectually deficient, you’re emotionally deficient, you’re in all ways deficient.” A few sighted people have commented to Kish that they’d rather be dead than blind. (from Men’s Journal by Michael Finkel)

In contrast to the scientific research, the popular press articles and news videos on Underwood and Kish help us to see clearly the social world that supports the production of skilled echolocation (although this is not to take anything away from the determination or resourcefulness of the blind individuals themselves).  For example, Kish’s holistic approach requires a high tolerance for risk in a blind individual’s support group and the community in which they’re embedded: ‘“Running into a pole is a drag, but never being allowed to run into a pole is a disaster,” [Kish] writes. “Pain is part of the price of freedom.” This attitude is not wildly popular, especially in a safety-first nation like the United States’ (Finkel).  In a society that sees bicycle riding as inherently risky, requiring safety equipment and careful parental monitoring, it’s not clear to me that young blind people will normally get the opportunity to race around on bicycles as their echolocation improves.

Moreover, some of the aids that blind individuals normally use may even make them less likely to develop echolocation.  Although I love dogs, it hardly seems a coincidence that Kish, Underwood, and the two most talented echolocators tested by Schenkman and Nilsson (2010: 496) all got around without the aid of a helper animal.

In summary, echolocation isn’t just the conjunction of a human brain, mouth, ears and objects to reflect back sound; it’s also the product of a social group and society that has its own attitudes and approaches to dealing with blindness.  At the same time that people like Kish are helping to spread techniques like echolocation to an unprecedented number of individuals, we can see that other social forces might decrease the possibility of achieving this perceptual skill.

So why can’t the rest of us echolocate?

Sure, we all do it at some level, but if echolocation is so great, why don’t all human beings do it more?  The research by Thaler and colleagues (2011) reported in PLoS ONE suggests several reasons why echolocation may not be rampant in the rest of the population, including people who are not blind.

First, perhaps blind echolocators have compensated by developing an especially acute ability to hear.  Although this option may seem obvious, I would argue it’s a misunderstanding of what echolocators are doing; on standard tests of hearing, neither Kish nor Bushway nor other echolocating blind individuals have super-acute hearing, although they often attend better and more systematically to the sounds they hear. Of course, they are accomplishing echolocation through hearing, but the ability seems to sit alongside normal forms of hearing, not to be constructed atop an extraordinarily well-developed sense of hearing.

Another possibility is that the blind just practice echolocation more than normally sighted individuals.  Again, this explanation contains a grain of the bleeding obvious: yes, echolocating individuals are highly skilled with heaps of experience.  But the fact that the visual cortex gets involved suggests that a more fundamental neural re-purposing has occurred, not simply a quantitative increase in the strength of familiar mechanisms for hearing.

The problem might be motivational: in the absence of actual blindness, most of us don’t try to echolocate enough, or don’t start before a neurological window of opportunity passes us by.  Although the neuroplasticity necessary for some forms of echolocation may in fact be age sensitive, the case of Brian Bushway (‘LB’) suggests that even people who lose their sight later in life can learn to echolocate, if sufficiently motivated.

Or, as Ed Yong discusses in his blog, sight and expert echolocation may compete for the same neural resources, the calcarine cortex, and perhaps ‘both senses cannot coexist easily with one another.’ This explanation is the neurological equivalent of a ‘first-come-first-served’ principle: neurological resources get co-opted by particular functions, and these functions don’t share well with others.  Again, I think this explanation captures a partial truth, but some additional research on the blindfolded rather than the blind suggests that the co-opting is neither permanent nor exclusive.

In fact, a number of studies have found that sighted people can be trained relatively quickly to echolocate (see, for example, Teng and Whitney 2011).  But the more obvious point is that most of us do not echolocate in large part because we keep looking at things, and to the degree that we do echolocate, we’re not even aware that we’re doing it. In this scenario, the calcarine cortex is not so profoundly wedded to visual processing; rather, visual stimulation keeps out-competing the hearing of echoes for the cortex’s attention.  Some of the most interesting evidence for this last theory comes not from studies of the blind, but from research on the blindfolded.

Unmasking our other senses by blindfolding

Pascual-Leone and Hamilton (2001) reported on precisely the sort of experience that might unmask our underlying ability to echolocate (see also Pascual-Leone et al. 2005 for a review article).  As part of their training in Spain, instructors for the blind at the time underwent a week living blindfolded at a boarding school. Anecdotal evidence from these future instructors suggested that, while blindfolded, they began to compensate in various ways for being temporarily sightless:

Most of these trainees report noticing improved abilities to orient to sounds and judge distance by sound by the end of the blindfolded week. For example, several describe becoming able to identify people quickly and accurately as they started talking or even as they simply walked by due to the cadence of their steps. Several learned to differentiate cars by the sounds of their motors, and one described the “joy of telling motorcycles apart by their sound.” A few felt that they had become able to detect objects or furniture in their paths by the “echos of sounds”. Similarly, most of them noted an improved ability to differentiate surfaces and identify objects by touch during the 7 blindfolded days.  (Pascual-Leone and Hamilton 2001: 11)

Pascual-Leone’s research team decided to recreate the situation with volunteer subjects in Boston but, realizing that many likely peeked, affixed photographic paper to the backs of the blindfolds to make sure that the subjects were really remaining sightless for the week.  That’s when things started to get interesting.

As the team reports, in a lovely tone, ‘Behaviorally, subjects have been tolerating the blindfolded period well, though all of them have developed visual hallucinations that generally met criteria for Charles–Bonnet syndrome.’

Charles-Bonnet syndrome occurs in individuals with significant vision loss; some estimate that as many as one-fifth of all people over 65 who have vision loss have Charles-Bonnet hallucinations (that’s from Wikipedia, so I’m not going to cite the source).  What makes Charles-Bonnet illusions distinctive is that they are purely visual, are seldom persuasive, and tend to be ‘lilliput hallucinations’ or hallucinations of objects or people that are smaller than usual (I know, sounds fun for the weekend, eh?).

Pascual-Leone’s subjects started hallucinating a few hours after putting on the blindfolds, and for some, the hallucinations got more and more elaborate as the week went on. For one young man, the hallucinations became a constant companion and ‘by the end of the study week developed into ornate buildings of whitish green marble and cartoon-like figures’ (ibid.: 12).  For another young woman, a dream-like cascade of images occurred when she was blindfolded: ‘a butterfly becomes a sunset, that becomes an otter, that becomes a flower, that becomes several different types of animals, and it all is kind of this big stream.’  (But I digress…)

The subjects’ visual cortexes weren’t just busy producing otters and marble buildings and faerie floss sunrises out of the darkness.  Over the week they were blindfolded, subjects were trained intensively in tactile and auditory spatial discrimination tasks (reading Braille characters and matching tones).  Serial fMRI studies conducted during these tasks revealed that ‘these subjects showed activation of the visual cortex during tactile and auditory stimulation of the fingers,’ and increasingly so as the week wore on (ibid.).  Starting on the second day, the discrimination tasks seemed to stimulate the somatosensory and auditory cortices less and the ‘visual’ cortex more and more (ibid.: 13).  The changes occurred too quickly to be explained by the growth of new neural connections.

Transcranial magnetic stimulation (TMS) to the occipital cortex, which normally causes visual anomalies, disrupted these new Braille- and tone-matching skills instead (just as TMS does in blind subjects).

All of these effects disappeared soon after the subjects removed their blindfolds.  As Pascual-Leone and Hamilton (ibid.: 14) explain:

These results suggest that the occipital cortex had become recruited for processing tactile and auditory information when it was deafferented of visual input. However, as soon as visual input became available, even transiently, the induced changes rapidly reversed to baseline. The speed of these changes is such that establishment of new connections is not possible. Therefore, it must be assumed that tactile and auditory input into ‘visual cortex’ is present in all of us and can be unmasked if behaviorally desirable.

These results, along with single-neuron microelectrode studies in animals, lead Pascual-Leone and Hamilton to argue that the ‘visual cortex’ is more accurately described as specialized to deal with spatial discrimination.  This occipital region of the brain may ‘prefer’ visual input, but it receives auditory and haptic information as well to subserve its spatial function.  In the presence of visual information flow, the other channels are largely masked, but once vision is stopped, even through relatively short-term blindfolding, the metamodal capacities of the ‘visual’ cortex to handle other sorts of spatial information are revealed.

As Pascual-Leone and Hamilton (2001: 15) describe, the senses compete for the attention of the occipital cortex, but it’s hardly a ‘fair fight’ in a normal sighted individual, so the region winds up being primarily ‘visual.’  Echolocation in the blind reveals that our brain’s biased preference for certain sorts of information does not erase other possibilities.  (For a much more complete discussion of forms of cortical plasticity, including the specific examples of the blind and the blindfolded, see Merabet and Pascual-Leone 2010; and Pascual-Leone et al. 2005.)

The short answer to the question, ‘Why can’t more of us echolocate?’ then is, in many cases, ‘Because your eyes keep winning the battle to inform your sense of space.’

The next step: super-human echolocation

The limits of echolocation in humans, aside from the social and practical ones we’ve discussed, are not so much in the brain as in the mouth and ears. If we could produce and hear more information-rich noises, we could echolocate like the virtuosi of the skill: bats.  As Michael Finkel writes:

Some can fly in complete darkness, navigating around thousands of other bats while nabbing insects one millimeter wide. Bats have evolved, over millions of years, to possess the ideal mouth shape and the perfect ear rotation for echolocation. They can perceive high-frequency sound waves, beyond the range of human hearing — waves that are densely packed together, whose echoes give precise detail.

There is evidence that humans could be that good. Bats have tiny brains. Just the auditory cortex of a human brain is many times larger than the entire brain of a bat. This means that humans can likely process more complex auditory information than bats. What we’ll require, to make up for bats’ evolutionary head start, is a little artificial boost.

Actually, two boosts. We need a way to create bat-like sound waves, and we need to be able to hear those waves….

Dolphins that echolocate use special structures in their noses that can produce 200 sonic pulses per second.  Bats can produce 10 to 15 short, loud shouts each second, some as loud as 110 decibels (and they have muscles to close their own ears for protection when they do).
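A hedged aside on why pulse rate matters (my own sketch using the standard pulsed-sonar relation, not a claim from Finkel’s article): each echo has to return before the next pulse goes out, or echoes from successive pulses overlap ambiguously, so faster clicking trades away maximum range.

```python
# Illustrative sketch: the maximum unambiguous range of a pulsed sonar.
# An echo must travel out and back before the next pulse is emitted:
#   2 * range / speed <= 1 / pulse_rate   =>   range <= speed / (2 * pulse_rate)

SPEED_OF_SOUND = 343.0  # m/s in air (dolphins, of course, work in water, ~1500 m/s)

def max_unambiguous_range(pulses_per_second: float) -> float:
    """Farthest distance (m) whose echo returns before the next pulse goes out."""
    return SPEED_OF_SOUND / (2 * pulses_per_second)

# At a bat-like 15 shouts per second, echoes stay unambiguous past 11 m...
print(f"{max_unambiguous_range(15):.1f} m")   # 11.4 m
# ...while 200 pulses per second (in air) would limit clean echoes to under a metre.
print(f"{max_unambiguous_range(200)} m")
```

Real echolocators handle overlap more cleverly than this simple bound suggests, but it shows why a very high pulse rate suits close-range hunting.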

The human mouth simply can’t generate that much high-quality sound, and our sound quality tends to drop off if we keep clicking.  Studies of echolocating humans like Daniel Kish show that they produce excellent clicks for these purposes, but we just don’t come with the sort of noise-producing equipment that other echolocating species possess.

Sensory substitution devices, or SSDs, try to get around this by providing an artificial way of generating sound; but being able to control precisely the volume and nature of a click gives the echolocator important comparative information alongside the echoes.

In addition, the sound waves at the centre of our hearing range are not necessarily the best for conveying detailed information about objects and the environment. Since humans are not sensitive to sound frequencies over about 20 kHz, the theoretical threshold for an object that we could detect with sound is about 2 cm; the wavelength at 20 kHz sets a lower limit on the size of detectable objects.  In contrast, because of their higher-pitched hearing range (up to as high as 200 kHz), bats can perceive tiny insects using echolocation (see Stoffregen and Pittenger 1995: 186, fn. 3; and Rice 1967).
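The wavelength arithmetic behind those thresholds is easy to reproduce (a sketch of the standard relation wavelength = speed / frequency; the 20 kHz and 200 kHz figures come from the paragraph above):

```python
# Illustrative sketch: wavelength sets a rough lower bound on the size of
# object an echo can resolve (wavelength = speed of sound / frequency).

SPEED_OF_SOUND = 343.0  # m/s in air

def wavelength_m(frequency_hz: float) -> float:
    """Wavelength in metres of a sound wave at the given frequency, in air."""
    return SPEED_OF_SOUND / frequency_hz

# At the ~20 kHz ceiling of human hearing, the wavelength is about 1.7 cm,
# in line with the ~2 cm detection threshold cited above.
print(f"{wavelength_m(20_000) * 100:.1f} cm")    # 1.7 cm
# At a bat's ~200 kHz, it drops to insect scale.
print(f"{wavelength_m(200_000) * 1000:.1f} mm")  # 1.7 mm
```

Ten times the frequency means one-tenth the wavelength, which is the whole of the bats’ high-frequency advantage in one line.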

As Finkel continues, if we put as much money and research into the extraordinary potential of echolocation as we do into equipment like cochlear implants – technology for remedial sensory normalization – we might see super-human abilities:

If money were no object, Kish believes that blind people could essentially mimic bats within five years. A next generation of K-Sonar [a small cane-attached SSD or personal sonar device], using the input from a global consortium of scientists that Kish has been corresponding with, should have a nearly limitless range. Our hearing, Kish says, can be increased tenfold through surgical augmentation — basically, inner-ear microphone implants. Combine the two and it’s possible that the blind will be able to take up tennis. Kish figures it would require $15 million to prove whether or not his idea is feasible. He fears he’ll never get the opportunity.

If we could implant inner-ear microphones capable of hearing high-frequency sound, if we could produce a reliable, portable sonar, and if we could even implant a device that would allow humans to produce the kind of dense, high-frequency sound that bats and dolphins use, it’s quite likely that the brain could learn to use the information.

But one thing that Thaler and colleagues’ (2011) research reveals, and that Pascual-Leone’s work also suggests, is that Finkel is wrong about one crucial detail: our large auditory cortex wouldn’t be doing the interpreting of this superhuman echolocation.  If blind people were given this surgically enhanced, super-normal sonar, their even larger ‘visual’ cortex would be doing the echolocating.

I’m sure I’m not the only one who’d love to see it…

Acknowledgments

h/t: Daniel Lende for first pointing this one out to me.

Graphic of the visual cortex originally from Logothetis, N., November 1999. Vision: A window on consciousness. Scientific American, downloaded from fMRI 4 Newbies, Department of Psychology, University of Western Ontario.


References:

Camhi, J. M. (1984). Neuroethology. Sunderland, MA: Sinauer.

Cotzin, M., & Dallenbach, K. M. (1950). “Facial vision:” The role of pitch and loudness in the perception of obstacles by the blind. The American Journal of Psychology, 63(4), 485-515. PMID: 14790019

Downey, G. (2007) Seeing without Knowing, Learning with the Eyes: Visuomotor ‘Knowing’ and the Plasticity of Perception. In Ways of Knowing: New Approaches in the Anthropology of Knowledge and Learning. Mark Harris, ed. Pp. 222-241. New York and Oxford: Berghahn Books.

Kellogg, W. (1962). Sonar System of the Blind: New research measures their accuracy in detecting the texture, size, and distance of objects “by ear.” Science, 137 (3528), 399-404 DOI: 10.1126/science.137.3528.399

Merabet, L., & Pascual-Leone, A. (2010). Neural reorganization following sensory loss: The opportunity of change. Nature Reviews Neuroscience, 11(1), 44-52. DOI: 10.1038/nrn2758

Nagel, T. (1974). What Is It Like to Be a Bat? The Philosophical Review, 83 (4) DOI: 10.2307/2183914

Neuhoff, John G. 2004. Auditory motion and localization. In J.G. Neuhoff, ed. Ecological Psychoacoustics, pp. 87-106. New York: Academic Press.

Pascual-Leone, A., Amedi, A., Fregni, F., & Merabet, L. (2005). The plastic human brain cortex. Annual Review of Neuroscience, 28(1), 377-401. DOI: 10.1146/annurev.neuro.27.070203.144216

Pascual-Leone, A., & Hamilton, R. (2001). The metamodal organization of the brain. Progress in Brain Research, 134, 427-445. PMID: 11702559

Rosenblum, L., Gordon, M., & Jarquin, L. (2000). Echolocating Distance by Moving and Stationary Listeners Ecological Psychology, 12 (3), 181-206 DOI: 10.1207/S15326969ECO1203_1

Schenkman, B., & Nilsson, M. (2010). Human echolocation: Blind and sighted persons’ ability to detect sounds recorded in the presence of a reflecting object Perception, 39 (4), 483-501 DOI: 10.1068/p6473

Schwitzgebel, E., & Gordon, M. S. (2000). How well do we know our own consciousness? The case of human echolocation. Philosophical Topics, 28, 235-246.

Stoffregen, T., & Pittenger, J. (1995). Human echolocation as a basic form of perception and action. Ecological Psychology, 7(3), 181-216. DOI: 10.1207/s15326969eco0703_2

Supa, Michael, Milton Cotzin, and Karl M. Dallenbach. (1944) “Facial vision”: The perception of obstacles by the blind. American Journal of Psychology 57(2): 133-183.

Teng, S., & Whitney, D. (2011). The acuity of echolocation: Spatial resolution in sighted persons compared to the performance of an expert who is blind. Journal of Visual Impairment and Blindness, 105(1), 20-32.

Thaler, L., Arnott, S., & Goodale, M. (2011). Neural Correlates of Natural Human Echolocation in Early and Late Blind Echolocation Experts PLoS ONE, 6 (5) DOI: 10.1371/journal.pone.0020162

Worchel, P., & Dallenbach, K. M. (1947). Facial vision: Perception of obstacles by the deaf-blind. The American Journal of Psychology, 60(4), 502-553. PMID: 20273385



Creative Commons License
Getting around by sound: Human echolocation by Neuroanthropology, unless otherwise expressly stated, is licensed under a Creative Commons Attribution 3.0 Unported License.


30 Responses to Getting around by sound: Human echolocation

  1. Peter B says:

    Excellent article. Very refreshing to read a full, and properly researched piece. I had read about Kish before, but his ideas about the superhuman hearing are interesting. Pocket change for billionaires, but no money to be made so sadly I don’t see it becoming a reality.

  2. Amazingly thorough write-up! I’m impressed. Also, apparently we’re on the same intellectual kick, you and I! A few weeks ago I wrote a post that got picked up by BoingBoing about the limits of our sensorium:
    http://blog.ketyov.com/2011/05/we-are-all-inattentive-superheroes.html

    Which I then followed up with a post on the Scientific American blogs about the history of echolocation:
    http://blog.ketyov.com/2011/06/my-scientific-american-post-what-bats.html

    Pretty amusing overlap there. :)

    • gregdowney says:

      Thanks Bradley!
      I’m seldom the first to get to a topic, but I try to be thorough. I’ve added links to both your pieces. I especially dug the Scientific American column as I had no idea about the background on early research on echolocation in bats. Very cool.
      Thanks for bringing these pieces to my attention.
      Greg

  3. Janis says:

    I often think that the area of the brain that is associated with visual processing — the back part — is actually the area of the brain that’s associated with nonlinear and spatial model-building, and vision just happens to be how sighted people do that. Braille readers figure out braille there, echolocators process echos and parallax there.

    Basically, if it involve triangulating of any sort, the back end of the brain does it. Only for most of us, that means eyes. When the eyes don’t work, the fingers and the ears start to do it, but the same part of the brain handles the processing.

  4. Zen Faulkes says:

    Stunning post!

  5. Janis says:

    Hm, this also reminds me of when a deaf woman I used to know told me that hearing people could lipread better. For deaf people, it’s just a mouth movement, but we hearing people could link it up with another stimulation … although we connect it so powerfully with the other stimulation that we think that’s all it is. We use lips to understand people WAY more than we think, but we just think we’re hearing them. We think we’re only getting one information stream when we’re really processing two.

    I only got what she was saying when I realized how hard it was for me to understand French on the phone or radio, but how easy it was in person or on TV. My facility with French is on the fine edge, such that the lack of visual lipreading cues just tips me over into “having a hard time” territory.

  6. Pingback: ResearchBlogging.org News » Blog Archive » Editor’s Selections: A Pile of Human Excrement, Echolocation, and Radiated Testes

  7. Pingback: Open Laboratory 2011 – submissions so far | A Blog Around The Clock

  8. When I first saw this in the 90s, I went to the local park and tried it out. I was amazed how well it worked and how quickly I picked it up.

  9. bradley foster says:

    Wonderful article! I find anything on brain plasticity absolutely fascinating. This may put whole new spin on the phrase “blind as a bat.”

  10. Stephen says:

    Regarding this article’s third paragraph, the word ‘deaf’ seems to be a mistake.

    “Although the Man without Fear may seem implausible, in fact, researchers have examined a number of deaf individuals who seem to develop extraordinarily acute echolocation…”

    Aside from that little detail, this is a terrific article!

    • gregdowney says:

      Thanks for the correction. I’ve gone ahead and made the change.

      Thanks, too, Stephen for the feedback!

  11. Rosa says:

    “Although the Man without Fear may seem implausible, in fact, researchers have examined a number of deaf individuals who seem to develop extraordinarily acute echolocation, a kind of active sonar that they use by clicking to produce echoes from their surroundings. ”

    I think you mean blind, rather than deaf, here.

  12. Tom says:

    Thanks for this wonderful post – for me it was a real journey of exploration, and the final paragraph reminded me that real life can be as strange as any of the science fiction I like to read. I love the description of the advantages of echolocation, too – to be able to tell the texture of something at a distance, and the idea that to an echolocator it might feel like you have a hundred arms reaching out to touch everything around you.

  13. Mark Riggle says:

    Comment on the “[humans are] sensitive to pulse-to-echo delays of around .0003 seconds (that’s .3 millisecond)”. Since the subjects were able to determine near-field distance to an accuracy of 10cm, the assumption underlying the timing sensitivity is they do that by the pulse-to-echo delay time. What would seem more likely is the spectral differences to each ear when the object is close. The head-related-transform (HRTF), which determines the spectral shape, is a function of X,Y and distance. Beyond about 5 feet, distance does not affect it much. This HRTF binaurally does have enough information to accurately locate objects.

    The other part for the bat’s pulse-to-echo resolution time of 10 nanoseconds is a great example of how bad analysis lives on and on. The bat’s timing resolution was determined by changing the delay time electronically and finding the smallest time difference that the bats could detect. They used just different length wire coax cables to adjust the timing, and indeed, by that method they found the bats detecting the difference of 2 cables that caused about 10ns difference. However, not being electrical engineers (or getting bad advice from one), they failed to do proper impedance matching of the cables to the equipment. That mismatched impedance causes a reflection wave to occur (back and forth actually because both ends were mismatched) and the reflection wave will interact with the signal and change its characteristics, especially its spectrum. [The reflection's wave phase to the signal will vary by the cable length.] Since the bats should be very sensitive to spectral cues (to echo-locate in azimuth and elevation), they would have detected the spectral change due to the cables’ length differences and not due to the 10ns difference.
    Sorry for the rather technical talk but really, if 10ns resolution in neurons was taken seriously, then bats would have had something absolutely new and fantastic. They don’t.

  14. Sen says:

    great post! just one correction, unless i’m reading it wrong:

    “when a recording had echoes, parts of the brain associated with visual perception in sighted individuals became extremely active, as the echolocators were able to extract information from the echoes that was seemingly not accessible to the control subjects who were sighted.”

    I think you mean “…visual perception in blind individuals”.

  15. Myron Zarry says:

    I have written a short (10 page) paper on my thoughts about the possibilities of incorporating recent technologies in the construction of an echolocation device for the blind. Some of the concepts for distinguishing various objects, at different distances and locations in the forward view, through audio manipulation might be of interest. An electronic “time stretch”, analogous to slow-motion filming and screening, gets around the limits imposed by sound memory duration (0.10 sec.) that make closely spaced objects audibly indiscernible by natural sonar in humans. Other algorithms would provide audible warnings of rapidly approaching objects. The electronics and processors required for this project seem to be ubiquitous in the form of pads, phones, cameras, etc. I don’t think that technology is the impediment to creating a device, and am heartened to see so many thinking on the subject.
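    The core of the “time stretch” idea can be sketched in a few lines. This is purely illustrative: the sample rates and the 20x factor are hypothetical choices, not figures from any actual device.

```python
# Capture the echo train at a high sample rate, replay it at a lower one:
# every gap between echoes is stretched by record_rate / playback_rate,
# like slow-motion film.
RECORD_RATE = 192_000  # Hz, capture rate (hypothetical)
PLAYBACK_RATE = 9_600  # Hz, playback rate -> 20x slow motion

def perceived_gap(real_gap_s: float) -> float:
    """Gap between two echoes as heard on the slowed playback."""
    return real_gap_s * RECORD_RATE / PLAYBACK_RATE

# Objects about 1 m apart in range return echoes ~5.8 ms apart
# (2 m round-trip difference / 343 m/s) -- too close together to resolve
# in real time; stretched 20x they arrive ~0.12 s apart, beyond the
# ~0.10 s sound-memory limit mentioned above.
print(f"{perceived_gap(0.0058):.3f} s")
```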
    I’m not sure how to post a .pdf to this site, and I sure don’t want to retype it.
    Can you send me an email address that I can send to, or brief instructions?
    Thanks

  16. John Hawkins says:

    Cool. As a child I echolocated my way around the family home at night when it was fully dark on cloudy nights (rural, so no street lamps shining in the windows). I made a variety of sounds using my fingers and fingernails, and could follow walls, identify furniture, and so on. Nothing as advanced as the individuals described here, but I did find that moving my hands around while echolocating gave me more information about shapes.

    I’ve lost much of my ability now; I guess the inevitable age-related loss of hearing in the high-frequency range has done it.

  17. Pingback: Human beings have more than five senses | MrReid.org

  20. Charlotte says:

    Wonderful article… I stumbled on it via a Google search. A couple of nights ago, my 11-year-old son and I were discussing echolocation in bats and then in blind people, and we made the logical leap to echolocation being used in hearing aids to help the blind see. It seemed logical to us that people could use echolocation to determine distance and size, and we wanted to see what type of research had been done thus far in that area. I am a layperson (though I have an undergraduate subspecialty in biology), and my son is a very curious person who enjoys science. We thank you for this well-written article and all of the great resource links.

  21. Stephanie Mackey says:

    Thank you, Mr Downey, for a fascinating and accessible article. Very useful to me in my role as an Independence & Mobility Teacher.

  22. Pingback: Todos llevamos un Matt Murdock dentro, pero hay que estar ciegos… | Historias Cienciacionales

  23. Lee Nelson says:

    This is a subject I’ve been fascinated with for many years, reading everything I could find.

    While I was studying art in the early 1970s, a professor turned me on to the psychology of vision. The following facts relate to human echolocation:

    1) We see with our brains not our eyes.
    2) Limited sight is possible with input through the tongue:
    http://www.scientificamerican.com/article.cfm?id=device-lets-blind-see-with-tongues
    http://www.nei.nih.gov/news/briefs/weihenmayer.asp

    I expect that MRI testing of users of these tongue devices would give results similar to those achieved testing people who use echolocation.

    I have no doubt that blind people using echolocation see and that equipment designed to improve ‘click’ production and perception would improve their vision.

  24. Pingback: Getting around by sound: Human echolocation - N...

  25. Pingback: The Eyeborg: Hearing Colors and Our Cyborg Future - Neuroanthropology

  26. Pingback: Hearing Color? | What can you do with anthropology?

  27. Pingback: Reporte Ciencia UANL » The Eyeborg: Hearing Colors and Our Cyborg Future
