In my previous post, I talked about what made me want to be an astronaut. Now I’ll talk about how it’s possible to get there.
I have been deaf from birth. A recent genetic test showed it to be congenital deafness, caused by a deficiency of a protein that’s crucial to converting sound waves into nerve signals. I can only hear sounds above about 85 decibels (a normal human conversation is around 60 decibels). Most of the sounds a hearing person perceives are closed to me; what gets through is limited to dogs barking, rock concerts, bad mufflers, gunshots, explosions, rocket launches, and jet engines. Because it’s had so little training, my brain cannot process auditory input very well. That means I have little ability to differentiate between sounds, much less understand spoken language. And, no, you cannot shout into my ear and expect me to understand what you’re saying. I can’t.
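For readers unfamiliar with decibels: the scale is logarithmic, so the gap between my 85 dB threshold and a 60 dB conversation is much bigger than it looks. A quick back-of-the-envelope check (ordinary Python, nothing to do with the iSign project):

```python
# Decibels measure sound intensity logarithmically:
# every +10 dB is a 10x increase in intensity.
threshold_db = 85   # roughly the quietest sound I can detect
speech_db = 60      # typical conversational speech level

# Intensity ratio between my hearing threshold and normal speech
ratio = 10 ** ((threshold_db - speech_db) / 10)
print(f"Speech would need to be about {ratio:.0f}x more intense for me to hear it.")
# Speech would need to be about 316x more intense for me to hear it.
```

In other words, ordinary speech isn’t just a little too quiet for me; it’s hundreds of times below what my ears register.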
My primary method of communication is sign language. Despite what some (ignorant) people think, it isn’t simply a collection of gestures. It’s a fully realized language, just as capable of expressing highly complex ideas as English is, probably even more so in the hands of a skilled signer who understands the subject. My interpreters at school had no problem relaying the details of special relativity, electromagnetism, solid mechanics, and quantum physics to me, even though they didn’t know anything about those subjects themselves. It really depends on the interpreter, though. I’ve had many good ones and just as many bad ones.
With crew slots at a premium in space, a sign-language interpreter is not an option. There are two existing technological solutions that allow a deaf person to hear.
The first is a hearing aid. It’s small and unobtrusive and works well for many deaf people. I had a pair until I was ten. Unfortunately, hearing aids don’t restore full hearing the way a working ear provides it – they only amplify sound. The result can be very irritating. I suffered from tinnitus (constant ringing in the ears) every time I took my aids off in the evening; it got so bad that I’d often think the fire alarm had gone off. So I ditched them as soon as my parents allowed me to. I’m told the newer models don’t have that problem, but I have a hard time discriminating between sounds anyway.
The second is a cochlear implant. It requires expensive surgery: the surgeon opens the middle ear and threads an electrode array into the cochlea, where it stimulates the auditory nerve directly, bypassing the hair cells (cilia) that normally convert sound waves into electrical signals. This surgery isn’t for everyone. It leaves scar tissue, which can be a problem underwater or during a rapid pressure change. Many of my deaf friends have one. It’s certainly helped many of them understand spoken words, but they received the surgery when they were very young, so they’ve had plenty of time to develop their auditory processing abilities. And sometimes the surgery goes wrong and results in chronic pain. A CI would be wasted on me, because my cochleas are working just fine (see the genetic testing above).
So is there a better way?
Yes, and I’m part of a team that is working on it.
Enter the iSign system.
Picture a pair of augmented-reality glasses, a wearable speaker, a pair of MYO muscle-reading armbands, and a central processing unit, and you’d be very close. I’m under a non-disclosure agreement with Juxtopia, so I can’t reveal too much about it. However, the MYO device is from a separate company and will probably be the key part of the iSign system. You can see it at www.thalmic.com.
To realize this system, the biggest hurdle to overcome is voice recognition software. Some packages, such as Dragon NaturallySpeaking, can recognize a single person’s voice with enough training, but it takes considerably more computing power to recognize the voices of multiple strangers – what’s called speaker-independent recognition. We are looking into ways to solve this. The best approach may be a combination of hardware and software. I’m especially looking forward to optical computing, which promises much faster processing without the heating problems that plague electronics.
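To give a feel for why speaker variation is hard, here’s a toy sketch – not iSign code, and far simpler than any real recognizer – of dynamic time warping, a classic template-matching technique from early speech recognition. It can align two feature sequences that differ only in timing, but it can’t absorb the differences between two different voices:

```python
def dtw_distance(a, b):
    """Dynamic-time-warping distance between two 1-D feature sequences.

    A small distance means the sequences match after being stretched or
    compressed in time.
    """
    n, m = len(a), len(b)
    inf = float("inf")
    # d[i][j] = best cost of aligning a[:i] with b[:j]
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # stretch b
                                 d[i][j - 1],      # stretch a
                                 d[i - 1][j - 1])  # match frames one-to-one
    return d[n][m]

# The same "word" spoken more slowly still matches its template perfectly...
print(dtw_distance([1, 3, 5, 3], [1, 3, 3, 5, 5, 3]))   # 0.0
# ...but a different speaker's rendition of the word does not.
print(dtw_distance([1, 3, 5, 3], [2, 4, 7, 4]))
```

Template matching like this is why single-user software only needs training samples from you, while recognizing arbitrary strangers needs far heavier statistical machinery.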
But what about spacesuits? Aren’t they too rigid to allow signing?
That much is correct. Today’s suits are essentially pressurized gas bags: the outward pressure inside the suit stiffens the fabric, so an astronaut must condition their upper body and fingers just to bend and flex it. However, this kind of suit may be obsolete within a decade. There’s a new spacesuit concept under development at MIT and some universities in Australia, called the mechanical counter-pressure (MCP) suit, shown at left in the photo below. Compare it with the conventional suit next to it.
Instead of surrounding your body with pressurized air, the suit’s fabric presses down on every square inch of your skin, preventing it from expanding in the vacuum of space. It thus acts as a “second skin”. Only the helmet needs to be pressurized. Such a suit would not only enable me to do an EVA (extravehicular activity), it would also make life much easier for astronauts who would otherwise have to suffer inside a hot, restrictive pressurized suit.
There are some issues that need to be solved, such as how to account for every curve on the human body, and how to don/doff the suit without it losing its tightness. But these are just engineering problems. With enough time, money, and brainpower thrown at them, they will be solved.
What about the roar of launch? Wouldn’t that drown out your iSign system?
A rocket launch is pretty loud from the outside, so you’d think the sound inside the crew capsule would be unbearable, right? Wrong. According to conversations I’ve had with real astronauts, the crew capsule is heavily soundproofed. That’s necessary so the crew can hear each other and the radio. There will be G-forces and intense vibration, but hearing will not be a problem.
I think I’ve answered the main objections to the idea of a deaf astronaut. Does it seem more plausible now? Feel free to comment!