By J.R. Wilson
Automatic software code generation has gained limited support in aerospace applications in recent years, yet remains outside the norm for many influential software engineers. There has been good reason for caution. Despite substantial advances in automated software-engineering technology, code writers rarely get it right the first time around, which makes expensive software fixes after delivery routine.
For many responsible for the complex code necessary for modern space and aviation systems, the argument most often is not so much about how code is generated as about the process that precedes it. Without strict adherence to proven standard software-engineering processes, experts say, code designers will continue to face the same problems that have dogged them for years.
"Require-delay-surprise is the current paradigm," admits Stephen Cross, CEO of the Software Engineering Institute at Carnegie Mellon University in Pittsburgh. "That's perhaps a flippant description, but 60 to 80 percent of the cost of software occurs after the software is delivered, either because of defects in the coding process or because the user requirements really weren't understood until they got a working copy of the system. So a lot of our effort at SEI isn't in the generation of code, per se, but in improving design practices before the coding begins. The cost of repairing defects or bad design decisions are much less before the code is built."
Another issue, especially in military applications, is the huge base of old — or "legacy" — software deployed with fighting forces, says John Carbone, marketing vice president for Green Hills Software in Santa Barbara, Calif.
"In aerospace and defense, we have not seen a great deal of automatically generated code," Carbone says. "The components of applications our customers are building are from legacy codes, where they make an effort to reuse codes developed from previous programs or at least use them as a starting point. As for new development work that is primarily done in C, previously in Ada, much of that is being used as legacy code to handle devices that have the same requirements they've always had.
"We haven't seen examples of a lot of design tools used to any great extent to automatically generate code," Carbone continues. "There is an effort to shorten the development cycle by not writing exclusive code, but I don't think we've seen any product or in-house capability really take off."
Experts often describe automated code generation as drawing a picture — or model — of what the systems integrator wants, then letting the automated tool translate the picture into code. The standard language for specifying, visualizing, constructing, and documenting the artifacts of software systems is the Unified Modeling Language (UML), developed by engineers at Rational Software Corp. in Cupertino, Calif. Basically, UML creates a "blueprint" for constructing complex software.
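The picture-to-code idea can be sketched in miniature: a declarative description of a class, standing in for the structural information a UML class diagram captures, is translated mechanically into source text. The model format and generator below are invented for illustration and are not any UML tool's actual input.

```python
# A toy "model": the kind of structural information a UML class
# diagram captures, written here as a plain dictionary (illustrative only).
model = {
    "class": "Altimeter",
    "fields": [("altitude_ft", 0), ("valid", False)],
}

def generate_class(model):
    """Translate the model into Python source text, the way a
    code generator emits a class skeleton from a diagram."""
    lines = [f"class {model['class']}:"]
    lines.append("    def __init__(self):")
    for name, default in model["fields"]:
        lines.append(f"        self.{name} = {default!r}")
    return "\n".join(lines)

source = generate_class(model)
print(source)

# The generated text is real, runnable code:
namespace = {}
exec(source, namespace)
altimeter = namespace["Altimeter"]()
```

Note that the generator produces only the skeleton; any behavior beyond default field values would still have to be written by hand, which is the point Kolawa raises below.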
"In the more highly automated approach, you say what you want rather than how you want it to be coded," says Cordell Green, president of the Kestrel Institute in Palo Alto, Calif. "That's called the what-to-how spectrum. In the what area, you typically have high-level specification- or problem-modeling languages." Examples of this, he says, are Specware in the United States, Zed in the United Kingdom, and B in France. "The automation reduces that to a standard programming language, such as C, C++, Java, and Ada."
Hand coding, Green cautions, often cannot meet the most difficult software engineering challenges. "Without an automated system for optimizing the code, modern applications are so complex that human programmers have trouble keeping track of all the dependencies," he says. "They hit a limit on the amount of optimization they can do. That manifests in a couple of different ways. One is the programmer can't make it any faster because he has to take into account more things than he can think about. The other is errors."
Part of the argument over whether to use automated coding stems from how to reverse the process, or make a change in the generated code that can then feed back to the original "picture."
"No picture is truly worth a thousand lines of code," says Dr. Adam Kolawa, chief executive officer of Parasoft Corp. in Monrovia, Calif. "What does it mean to automatically generate code? You can't just put electrodes on the brain of the developer and have the code come out. The important element is what language do you use to generate code — pictures or sentences? Some people believe you can write better code with pictures; I don't believe that.
Writing code can be far more complicated than meets the eye, he says. "Writing code is like writing an article — the code communicates information in logical sequences, just as an article communicates information in logical sentences," Kolawa says. "Is it easier to talk with your hands than in writing? Or you could create an outline for an article and have a program generate that, but someone still has to fill in the details — that's the glue code."
Kolawa says he believes too much hype surrounds the notion of automatic code generation; UML does not actually create useful code but merely creates a framework, he says.
"The code that creates the logic and really represents what the application is supposed to do has to be done by hand anyway," Kolawa says. "People come in and write glue code by hand, which is the logic of the entire system, everything that is real in the code to control the engine or whatever," he says.
"When they want to take this [automatically generated] application code and put it back into the visualization engine, which is UML, this round-trip from hand-written code to UML doesn't work," he says. "It's very difficult to do, to get back to that original model. So developers tend to just forget about the original model. So why not just write the code by hand in the first place?"
Kolawa does concede some approaches to automatic code generation work, but argues it is a very limited arena.
Kolawa says there is one form of automatic code generation that does work: C++ templates, a feature of the C++ language itself that lets the compiler generate specialized code. "It does work, and people do use it," Kolawa says. "But while it allows people to generalize code and get compilers to write it, we're not talking about a high level of automation."
The value of debuggers
Green Hills's Carbone, on the other hand, argues on behalf of debugging tools, including the Green Hills MULTI 2000 Integrated Development Environment, which he says can help programmers adjust the "picture" at both ends of the process.
"You construct a series of block diagrams representing the processing flow you want to perform and press a button to generate the code, then compile it, link it, run it, and then you have to debug it," Carbone says. "Presumably the generated code is consistent with your diagram, but may not be exactly what you want, so you have to debug to resolve the discrepancies. On one hand you have the block diagram, on the other compiled code executing on the machine. How do you debug that," he says.
Green Hills engineers have integrated their tools with Rhapsody from I-Logix Corp. of Andover, Mass., as well as with Rational's Rose tool, Carbone explains. "As a result of that integration the user is able to run the test code, use the debugger to zero in on the portion of code that is a problem and perhaps find a variable that needs to be twice as large as written," Carbone says. "So they make a change in the C code to correct that. What MULTI does is send that information back up to the UML tool that then remembers the change, so the next time it generates code for that particular block, it will generate the corrected code rather than the original code."
If the user determines that software programmers set up an entire algorithm incorrectly, such as adding two values instead of multiplying them together, the required change can be made at the UML tool level and new code generated.
"So the user can make changes at the high level, dealing with the block diagram, or at the C code or compiler level," Carbone says. "Either way, the fixes are integrated, which means you only have to make the correction once; otherwise, you run the risk of getting out of sync."
That same system can be used on hand-generated code, which is where Kolawa says he believes it to be most useful — not in finding errors after the code is completed, but in preventing them in the first place. Even there, he says he believes the key lies more with human beings than machines or software.
"The most important part is the culture of the development group," Carbone explains. "It takes many years to develop a feeling of responsibility for the code they are building, understanding that development is responsible for making sure the software actually works, and trying to put processes inside the development process that prevent them from making mistakes, which is a fundamental part of error prevention. Then you can automate code verification with different tools and processes."
Andy Lynn, systems architect at FGM Inc. in Dulles, Va., takes somewhat of a middle ground on what he calls developing a "model-driven architecture" (MDA), saying the entire environment is continuing to change, especially with the growing use of Java.
"We are a long way from achieving an A-to-Z MDA capability, where you sit down with a pure design tool element like UML or some other architectural definition tool and push a button to get large amounts of infrastructure code generated. There are folks in some niche environments who are achieving that, but most are not," he says.
"In the short run, we've become less interested in code generation and more interested in XML control file generation," Lynn says. "When we choose to do certain kinds of binding and how loosely or tightly we couple various information operations, we have to support multiple data sources, legacy and other, so various rules apply and we have to manage all this across various versions of the systems we deploy."
FGM operations vice president Ellen Minderman says the ultimate goal is to create an environment in which a quasi-non-programmer can sit down, whip up an interface, and apply it to a variety of platforms.
This capability is especially important in projects involving the U.S. Department of Defense (DOD), she points out. "When you deploy a new version of code, it goes through a rigorous testing process that often takes a long time to get into the field. And each site has a different configuration and wants to adapt the software to its changing environment without getting a whole new set of code. That is one reason we went to looser coupling and fairly generic code driven by specific control files," she says.
"In other words, the code will know it is supposed to connect to whatever source the file tells it to, in the location it defines, in the language it sets, and will pass it to the location it dictates. Some don't use a lot of code-generating capability because they want all that in the code and once you do that you are very tightly coupled."
Lynn says the Java environment effectively provides his engineers with code-generating software. He cites Remote Method Invocation (RMI), which will generate client and server side stubs or software hooks. At the same time, Java is part of the movement against automation.
"From one point of view, code generation is on the way out because we're all tired of running our Java code through several compilers to get all these stubs when that can be done automatically through Java introspection," he says. Java introspection is a technique that enables software engineers to look at the compiled Java class file and determine what operations are performing certain functions. "For example, using RMI to analyze the class file and figure out how to hook up RMI without running it through a separate compiler. One piece of code can inspect what is going on with another. We're trying to minimalize some aspects of code generation and make it more automatic, while at the high end we're trying to work with more graphical, high-level decisions."
Lack of experience
One argument that plays to both sides is the growing lack of highly experienced programmers. Some say that is cause enough to rely increasingly on automatic code generation; others say automatic coding only compounds the problem.
"Even though automatic coding does a lot of the basic functionality for you at the push of a button, it often gets in the way when you are trying to answer your specific requirement. It provides 90 percent easily, but makes that last 10 percent very difficult to do," says FGM's Minderman. "A lot of engineers prefer to retain control throughout, because the 90 percent the code-generator writes is well understood and easy to do for an experienced software engineer. It's not easy for a junior person without a wealth of experience, so code-generation is a real benefit there."
Kestrel's Green says experts at the U.S. National Security Agency ran an experiment in which one team built code the old-fashioned way — writing it in C, following best practices, and so on. Another team wrote a specification and then generated the code from it. Neither team knew it was in an experiment or that the other team existed.
Scientists from the University of Maryland at College Park, Md., audited the results, and then an outside agency audited the university's results. The conclusion: the automated code had only half as many errors as the handwritten code. There also were indications the error rate would have been lower still had the specifications for the automated code been more clearly written; an ambiguity in one specification resulted in a misinterpretation that spread throughout the resulting C code.
"In the automated approach, you have a series of transformations that move the high level spec down to code and add the detail needed to make it work in that language. You can prove the correctness of these transformations by hand or the proof can be automated for a high-assurance system, such as navigation. There also is a verification approach, where you write the low-level code, then try to prove it has the right properties," Green says.
"By the time you're down to the low level code, however, the proofs are much more complex, so it is much easier to prove the high level specifications and transformations have the desired properties. That is a new technique, still on the research frontier, but I expect it eventually to displace the code verification approach."
Another possible roadblock in the adoption of automatic code generation tools is the difficulty of reading automated code, compared to code written by a human.
"A lot of automatic code uses cryptic symbols or numbers to identify variables, where a human might actually write out a descriptive name. As you read through the code at the C level debugging, it is helpful to have variables and comments in the code you can read," Carbone says. "The automatic program hasn't a clue what to call these things, as a human would. If you tried to tell the program to use various descriptions for various functions, you find yourself coming closer and closer to just writing the code by hand in the first place."
Minderman says that issue becomes even more important with the passage of time — an element that always has to be taken into account in military systems.
"What we write now will, in 10 years, be legacy. So if your whole mode of operations is to update the model, push the button and out comes your code, in 10 years you better be sure you still have that software generator. If you no longer have the license or the program is no longer supported, you're cooked," she says. "That is particularly true with DOD contracts — you have to make sure the code generation will stay there and hasn't changed formats and will continue to work with your model for years to come. If a human wrote it, chances are better another human will understand it than if a machine wrote it."
Lynn says it comes down to understanding and accepting trade-offs.
"If you code by hand, you have to document and one of the fall-outs from taking the MDA approach is you get the documentation and design notes — called artifacts — as part of the process," he says. "And, if you make changes to it, the documentation is automatically updated, where if you do it manually, the docs often get out of sync with the software. On the other hand, depending on the code-generating environment, diagnosing problems can be a nightmare because automated systems sometimes use very inscrutable code."
Carbone says even those who are most enthusiastic about automated tools cannot develop a complete application without writing at least some code by hand.
Hand-written code still necessary
"It seems the more complex the job, the more difficult it is to apply a tool to it; the most complex aspects almost always require human intervention," he says. "A high-volume of code in a short period of time pushes more toward using tools. The development cycles in the aerospace industry may be long partly because of the time required for the software, but it also may be that no matter how quickly the software is developed, other factors would keep that cycle long, where in a commercial application, speed of software development may be key to overall development speed.
"The military is trying to promote the reuse of code to shorten the development cycle — and some of these programs help that by encapsulating some elements so they can be reused. So the benefits of reusing existing code are very strong. But there may be other factors that differ from commercial applications. Automotive design compared to aircraft design is quite different and not subject to the same regulations. There is a lot of testing in aviation that has to be performed before a plane can go into production, far more than an automobile."
He also sees another possibility — a greater difficulty in adopting a new procedure in the military.
"There is a tremendous capability that exists today in some of the available tools and the principle roadblock appears to be changing the programming paradigm from a familiar, comfortable way of developing code," Carbone says. "It is a little scary for the programmers to do it hands-off, just creating diagrams and expecting the right code to come out. There is another perception — sometimes true and sometimes false — that the resulting code is not as efficient as what can be developed by hand. So the two areas that have to be overcome are familiarity and the perception of efficiency."
Cross, however, sees the slower military approach in a more positive light: "In the military, we have an advantage in building software-intensive systems that the commercial marketplace doesn't have, because it is not driven by first-to-market. The systems — certainly mission-critical systems — are tested more thoroughly, so defects are more likely to be found. That also makes them more expensive, of course."
But Cross says there is another source of expense that is purely related to humans, especially when software developers make an attempt to cut corners by grafting a commercial product onto other code, no matter how the software packages were generated.
"A factor we're increasingly concerned about, especially in military systems, is the CERT effort [computer emergency response team]. Through the end of June this year, we'd had 43,000 unique incidents reported, most on commercial systems," Cross reports. "The important thing for software engineering is 95 percent of these incidents are related to a defect in a commercial product that was introduced via imprecise or lazy software practices."
For SEI, the solution begins long before a single line of code is written, by whatever means. Cross says SEI experts have developed three major pre-coding rules to improve software engineering practices: Think strategically about the system, build as little new software as possible, and leverage experience within the organization.
"We think really hard about the architecture, much as you would about the blueprint of a house, taken from different views, such as usability, security, functionality, and sustainability. Then we reuse everything. Software has become so complex and massive, if you have some code that works, don't try to reinvent it," he says.
"Another part of this is called product line practice, which has been applied by the military to avionics for some years but is fairly new in application to software. This means only building new software for new functionality within a common architecture across a family of systems," Cross says. "As a result, software has become more of an assembly practice than a coding practice. That's where a lot of the software industry is going anyway, to components, but it is a new issue within the military, where a lot of non-technology issues have to be resolved before they can be applied."
The final element of experience boils down to never making the same mistake twice.
"So much of the software cost is related to the experience of the people who build it. So we try to help organizations capture and retain as much of that experience as possible. The organization continually improves when you can capture lessons learned," Cross says.
Human vs. machine intelligence
Kolawa says a basic rule to remember is that coding is the process of converting human intelligence into computer intelligence. People can communicate complicated things to each other with the languages we have now; so can computers.
"The issue is not related to automatic code generation but to automatic error prevention," he says. Complex systems don't fail to work because they are complex but because they aren't built properly," he says. "Things are getting more and more complicated and we are getting better and better processes to build them. The reasons planes aren't dropping from the sky are because we have mastered the process of building them and are able to prevent errors in the process.
"The parts of an airplane are made with the help of machines, so that is a degree of automation," Kolawa continues. "But if we just had machines producing them and didn't have other machines verifying the parts produced were correct, we wouldn't have planes that fly. So automation of production is not what really makes the difference — the difference is the control processes added to production lines to make sure the quality of the parts was correct."
Just as applications will always grow to match the computer speed and memory available to them, increasingly complex systems will bring on increasingly complex tools — and vice versa.
"I'm more and more encouraged, not discouraged, by what I see happening in this area," Lynn says. "I don't feel we've lost control of our tools or processes, as some fear, but that it is a very exciting time to be a developer. And I personally reward those companies and endeavors that give me more choice, not less."