Z-Learning: Learning In Your Sleep
For Stephen LaBerge of the Lucidity Institute, virtual reality has nothing on our dreams. The Palo Alto, Calif.-based company is exploring the phenomenon known as lucid dreaming and how it might be controlled. Have you ever had a dream where, say, you soared through the treetops so vividly that you jerked awake from the intensity of the feeling? Chances are you were lucid. By learning how to control that experience, dream researchers maintain, you can consciously guide yourself in your dreams, doing anything and going anywhere you wish—unbound by the laws of physics.
Research at Stanford University has demonstrated that physical activity in a dream produces the same neural impulses in the brain as the corresponding activity performed while awake. Your muscles, however, remain temporarily paralyzed during the REM cycle. The Lucidity Institute has invented a device called the NovaDreamer that professes to help you become lucid by alerting your brain when you're dreaming. The NovaDreamer detects when you've entered REM sleep, then cues you with flashing lights and sounds to remind you to recognize that you're dreaming.
"Research on how to cultivate peak performance suggests that lucid dreaming may prove to be an ideal training ground, not only for athletics, but also for any area in which skill can be developed," LaBerge writes in Exploring the World of Lucid Dreaming (Ballantine, 1990).
In other words, no matter which skill you wish to build, a lucid dream offers a vivid training environment in which to rehearse and prepare for real-life experiences.
LaBerge predicts that with improved biofeedback techniques that isolate which parts of the brain are activated in the dream state, there will be vast improvements in devices like the NovaDreamer in about 10 years.
"Right now, technology is getting better and better, but I don't think we're going to get better than the computer in our heads," LaBerge says. "Virtual reality is limited in many ways and isn't totally realistic. The dream state is entirely convincing."
In about three decades, trainers may even be able to conduct training sessions in their sleep—to a slumbering class of students—with "dream link technology" computer sensing devices that link an entire class over a "dream network," predicts Ian Pearson, a futurologist at British Telecommunications, Ipswich, England. "With this technology," he says, "we'd pick up what you're dreaming ... and send that information across the network to somebody else's dream." A case of z-learning, perhaps?
If lucid dreaming has the possibility of injecting reality into our dreams, then tele-immersion might just enable us to bring our dreams to reality. A consortium of research centers and universities is making remarkable progress with tele-immersion, a technology that would provide high-fidelity, life-sized, holographic projections in which you'd see and hear remote collaborators as plainly as if they were there in the office. In other words, your cubicle wall would double as a virtual doorway where colleagues, no matter their location, could share and explore the same space. Conceivably, you could project your exhibit at a trade show or give a training demonstration without leaving your home base, taking virtual conferencing to new levels.
A landmark moment in human-computer interaction occurred on May 9, 2000, when the virtual images of a researcher in Armonk, N.Y., and a postdoctoral fellow at the University of Pennsylvania appeared in a telecubicle set up at the University of North Carolina at Chapel Hill. Looking through a pair of polarizing glasses and wearing head-mounted tracking devices, researchers for the Office of the Future project, founded by UNC in 1998, watched the cubicle walls dissolve into images of their remote colleagues that appeared not as two-dimensional video feeds, but as a virtual blending of space and time.
The CAVE (Cave Automatic Virtual Environment) project at the University of Illinois at Chicago is another example of the virtually expanding training room. CAVE workers are fine-tuning enclosed, 10-foot virtual rooms where mounted projectors splash detailed images onto the walls and floors. CAVE dwellers wear lightweight stereo glasses and walk through the CAVE as they interact with virtual objects.
Using the CAVE's tele-immersion technology, it will be possible to change or create environments instantly. If you're an introvert enlisted to give a presentation, you could enter the CAVE and rehearse your speech in front of a virtual audience. Transferred to the office, imagine outfitting your workspace with a CAVE or tele-immersive setup and engaging in a training seminar that projects a remote classroom or an off-site trainer into your office.
Herman Towles, a senior research associate with the Office of the Future project, foresees tele-immersive systems changing educational, scientific and manufacturing paradigms. "Any surface of your office could be a display surface," says Towles. "We envision an array of ceiling-mounted projectors and cameras that are used for real-time extraction of an environment that can even sense and catalog documents that are lying on a remote desk."
Taking tele-immersion even further, haptic sensors are being developed that would allow you to reach out and feel the sensation of a handshake with your remote collaborator. The UNC team is exploring the possibilities of haptic sensors in remote surgical procedures and trauma surgery training, among other disciplines.
Along with the Office of the Future project, UNC also teams with the National Tele-Immersion Initiative, which includes key partnerships involving Brown University, the University of Pennsylvania, and Advanced Network and Services, Armonk, N.Y. Both entities have targeted tele-immersion as an ideal flagship application of Internet2.
Launched in October 1996, I2 was created as a test-bed environment for developing more capable alternatives to the current Internet infrastructure. The consortium has thus far enlisted about 180 universities and corporations such as IBM and Cisco to erect high-speed networks through pipelines that would outperform today's broadband access speeds by more than 1,000 times. An analogous Internet test bed, the Next Generation Internet, is a multiagency federal research and development effort that will also make revolutionary applications and advanced networking like tele-immersion and CAVE systems possible.
Training Room Optional
Fashion police may shudder, but soon enough, wearable computers may become more than a fringe accessory. The Human Interface Technology (HIT) Laboratory at the University of Washington is developing wearable support systems such as retinal scanning devices that project the images on a computer screen directly onto your retinas, placing them in your field of vision. Now you're talking training that's delivered when and where you need it, without the need for cumbersome monitors.
Who needs an office or desk—or a laptop for that matter—when you can slip into wearable computer devices that enable you to see into your computer's hard drive? Add remote collaboration with colleagues and you take on-site, instructor-led training to new levels.
By wearing a combination of a see-through, wraparound eye display, a voice link, a wireless Internet connection, a compact keyboard and batteries strapped to the waist like a high-tech fanny pack, you become a walking, talking workstation. Virtual Retinal Display technology takes what you see on a monitor and projects it through special glasses directly onto the retina as a full-color, high-resolution, wide-field-of-view image. The image looks like it's floating directly in front of you at about arm's length and can be viewed in any lighting, or in none at all.
First developed by HIT in 1991, the eye devices have matured greatly in the last decade. The first prototypes of the VRD were about the size of two kitchen tables, according to Matt Nichols, director of communications for Microvision, the commercial developer of the VRD. The Bothell, Wash., company's latest display, the Nomad, weighs about 1 pound.
For mobile workers or anyone who requires performance support, wearables that include the VRD will allow for unprecedented real-time, on-site assistance in fields such as aerospace, defense, medicine, and industrial and consumer electronics. Surgeons could perform "image-guided surgery" during delicate operations. Manufacturing and maintenance professionals could view digitized repair and procedure manuals and share and coordinate blueprints as they worked on them.
A clinical trial conducted by Microvision with novice automotive mechanics demonstrated an increase in training efficiency of between 60 and 80 percent when traditional paper-based training was uploaded onto VRD devices. "There, we were moving untrained mechanics quickly along in their discipline the first time they put on the displays," says Nichols. "It redefines on-the-job training, and, unlike virtual reality, the devices don't shut out the real world."
But even office-bound workers will find uses for the VRD. Microvision currently is working with a client at a large call center that wishes to enhance supervisors' ability to monitor workstations. Software developed for the VRD displays would free supervisors from their control desks and allow them to roam the floor and call up screens from any terminal.
"As a professional speaker, I applaud that," says futurist Barry Minkin. "I can't wait to give a speech and have my notes before my eyes."
With increasingly sophisticated software run on advanced, virtual systems like tele-immersion, CAVE and wearable computing devices, Minkin predicts that training will increasingly be adapted to individual learning styles. Some people learn better visually, for example, others through oral or written instruction. Human/computer interfaces will create ultra-accessible "intelligent tutoring systems" that understand your learning patterns and offer up content designed and delivered accordingly.
Immediate and detailed feedback will come from artificial intelligence (AI) aides that will judge "how to teach" and "what to teach" for each individual. In addition, Minkin says, superstar teachers and industry leading trainers will become holographically accessible in and outside the training room.
And if the "real thing" is beyond the budget, how about a digital droid? Using Macromedia's Flash as a development tool, Nunica, Mass.-based Media 1 has already developed virtual teachers for online instruction. Using its "Flash Avatar" application, corporate trainers can fashion detailed, animated versions of themselves to facilitate sessions over low-bandwidth connections. The animated avatars are graced with an instructor's actual voice and mannerisms—and, unlike the human version, avatar-led classes and help sessions would be available around the clock.
For Chris Willis, CEO of Media 1, the Flash avatar application is the logical step beyond video training. "With Flash, you can take something the size of a room and make it fit on the head of a pin," she says. "And with the Web, the training is right here, right now and recyclable."
BT's Pearson predicts that in about five years programs will allow us to take the avatar concept to the next level and endow a virtual instructor with the artificial personality of our choice. We could even make one appear as a movie star or a deceased genius like Albert Einstein, opening up the potential for a whole new business of personality licensing.
Such a tool would allow us to "tinker around" with the synthetic personality that we respond to best. "The concept here is that these tools become virtual teachers packaged into a system that would know the domain and content that the student needs," Minkin says. "It's combining an expert system and an instructional model based on AI."
Another tool to evolve from these technologies is Sensory Assist. The term, coined by Minkin, describes a compelling set of marketing tools born of immersive virtual reality and the social-science techniques used by those who study people—anthropologists, ethnologists and behavioral psychologists.
Sophisticated SA tools will go beyond demographics and psychographics and make for ideal market research and customer service training. Say your company is launching new foodstuffs in Japan and you want to understand the habits of Japanese shoppers. "Audio wearables" strapped onto select consumers and 3-D glasses would allow you to see and hear their shopping practices and preferences.
"By understanding how different ethnic groups react, you get true market immersion, and you'll see improved observational behavior paradigms," says Minkin.
How and when the business world embraces and absorbs all these nascent goodies currently revolving inside the crystal ball are compelling questions. But the most salient issue may be how much it will all cost. The embattled tech industry continues to pockmark the economy and sap venture capital. Bankrolling such fringe technologies will not come easy.
"The bells, whistles, smoke and mirrors are things that do excite, but there'll be less and less money to do it for a while," says Minkin. "We're going down a road that's tech-push and lack-of-market-pull, and there's very little sense of the customer."
Give tele-immersion another decade of tweaking. The first test run produced very low frame rates with twitchy delays. The projected images were peppered with snow flurry-like confetti that was nearly eliminated in a later test, but the latency issues remained.
With Internet2 and the Next Generation Internet erecting supersonic pipelines through cyberspace, those latency issues will eventually be solved. The price, however, may take a while to reach commercially viable levels. Virtual reality pioneer Jaron Lanier says tele-immersion is about 100 times too expensive to compete with current technologies. He guesses that it will be cheap enough for limited production in five years, with widespread use in 10 years.
Another dilemma with tele-immersion is the awkward equipment the participants must wear. To be effective in a professional conference, for example, you need to be able to read people's eyes and body language and see if there's sweat on their foreheads. That's hard to do if you're wearing bulky headgear replete with tracking gadgets. But autostereoscopic equipment, Towles says, holds the promise of eliminating the need to wear any body equipment in about 10 years. "Nobody wants to get suited up to make a phone call," says Towles.
The current price tag for the Nomad retinal scanning device from Microvision is about $10,000. That could buy a lot of laptops. As Nichols explains, market demand and mass distribution are limited by a lack of application developers. "There's so many integration opportunities for people to create, but there's not many out there yet," says Nichols. "There's a whole new cottage industry that's waiting to happen on the software side."
Aside from cost and development issues, another perhaps more pressing matter remains: Is it really worth the effort? "We're going to see an ongoing debate that won't be settled by any technology," says future trends expert Michael Zey, author of The Future Factor (McGraw Hill, 2000). "We'll have to see whether these things will be important to people. I know through training that I've run that the symbolic aspects of somebody actually being there was more important than the information they imparted. That cannot be replicated."
Considering the speed of change, however, the "Virtual Generation" may come sooner than we think. Who knows, perhaps soon we'll be zooming to work in those metallic hovercraft cars many people imagined we'd have by now. Until then, we can only dream.
jeff barbian is associate editor of Training. firstname.lastname@example.org