Future generations of the tiny mechanical devices described by the terms “MEMS” and “NEMS” are full of amazing possibilities.
Today these microelectromechanical systems (MEMS) and nanoelectromechanical systems (NEMS) are at the heart of advanced inertial measurement units (IMUs) that trigger automobile air bags during collisions.
In the relatively near term, these devices may offer precise switching for RF applications such as military satellite communications and cell phones. They also may provide tiny moving mirrors to steer free-space and optical fiber lasers, as well as provide the foundation for small mass-storage devices for data.
Further into the future, speculation is the stuff of science fiction - smart devices able to attack machines and living organisms at the molecular level, microscopic surveillance devices that could be scattered like dust, and virtually undetectable sensors for chemical, nuclear, and biological agents.
Some futurists of the alarmist stamp also warn of a dark side to nanotechnology - in which I’ll include MEMS until a better all-encompassing term comes along. They warn that the nanoscale devices of the future will be able to mimic destructive viruses and other microorganisms, and may quickly spin out of control once released into the environment.
Despite the dark predictions, however, the promises of nanotechnology sound almost too good to be true. I scarcely need to mention that when something sounds too good to be true, it usually is.
The more I hear about MEMS, the more I start to remember a previously hot technology of two decades ago for which the promises also sounded too good to be true. This technology, which burst on the scene in the mid 1980s, was called artificial intelligence, better known as AI.
Artificial intelligence described application-specific computers and specialized software that its practitioners intended would emulate human thought, reasoning, and even learning from experience.
Many terms emerged from under the AI umbrella, including artificial neural networks, fuzzy logic, and knowledge-based expert systems. No matter what the trendy name of the moment was, however, the intent was the same: to build machines that were the equal of - and ultimately superior to - the human brain.
The promised military applications of AI were staggering. We were told of the imminent design of computers that could predict and counter the strategies of enemy generals even before those generals finished their planning. Smart munitions - still in their infancy 20 years ago - would be able to stalk and kill enemy targets like an animal predator hunts its prey. Unmanned aerial vehicles could learn from experience how best to avoid enemy air defenses.
Artificial intelligence, of course, also had its dark futuristic vision. Most of us remember the HAL computer from the movie 2001: A Space Odyssey - a machine with human intelligence that gets out of control and wreaks havoc.
Make no mistake; artificial intelligence wasn’t all empty hype. Today we have smart munitions, unmanned vehicles that can fly missions on their own, and battle-management systems that suggest courses of action.
I remember the Pilot’s Associate program of the Defense Advanced Research Projects Agency, for example, that sought to blend information from many different sensors into a coherent display of “situational awareness” maps and icons on head-up displays or on helmet-mounted displays.
Today much of that Pilot’s Associate technology is standard fare on advanced aircraft such as the F-22 and F-35 jet fighters. The difference is that we don’t call it artificial intelligence anymore; today it’s simply conventional programming that broadens capability.
It was overexpectation, not outright failure, that hurt the concept of artificial intelligence. Practitioners and supporters of this computer discipline made the fatal mistake of promising the moon. When the reality fell far short of the built-up expectations, the skeptics and the disillusioned alike dismissed it all as nonsense - even though some of the promises eventually came true.
As the 1980s wore on, I can recall many exasperated sighs when the subject of artificial intelligence came up. “Not AI again! Haven’t we been this way before?”
The backlash against the overhype of artificial intelligence - what we called the “AI Winter” at the time - dried up research efforts, enthusiasm, and venture capital, and set back the technology’s development and eventual benefits by years.
The parallels I see between AI and nanotechnology initiatives, quite frankly, make me nervous.
Even though scientists are conducting much meaningful research into MEMS and NEMS these days, I am seeing many of the same criticisms leveled at nanotechnology today that I heard leveled against AI back in the ’80s. MEMS probably won’t be deployed in large numbers for another five to 15 years, which suggests that nanotechnology may be something that is always in the future and never here.
Other complaints I hear about nanotechnology today, I heard about AI back then: it’s too expensive, it’s too complicated, it’s just a laboratory curiosity, and there have been too many disappointments in research projects.
Expectations today are tremendously high for nanotechnology. I’d hate to see further disappointments turn researchers against MEMS and NEMS and channel their energies elsewhere, but I’m starting to think that’s what we’re up against.
Nanotechnology, like AI before it, has had notable research successes. Raytheon Co., for example, has demonstrated radio-frequency MEMS switches, which show great potential for microwave applications but which still have reliability issues today.
Honeywell Inc. is placing its expertise and research dollars into MEMS for small inertial measurement units. Northrop Grumman is investigating MEMS for advanced astronomical optics, and Hewlett-Packard is looking into MEMS for data storage.
The trick is to stop overhyping nanotechnology and look at it from a realistic perspective. First, nanotechnology won’t be deployed in a big way for some time to come; second, nanotechnology - for all its promise - will not be a technological panacea. Nanotechnology will come in its own good time.