LDRA debuts point tool to automate test generation, unit test management

Sept. 26, 2013
WIRRAL, U.K., 26 Sept. 2013. LDRA, a provider of standards compliance, automated software verification, source code analysis, and test tools, introduces LDRAunit, an integrated framework for automating the generation and management of unit tests.

By separating unit testing capabilities from the rest of the LDRA tool suite, LDRA delivers a focused test management tool that addresses the need for software unit testing without requiring investment in a complete tool chain. LDRAunit is a flexible solution well suited to companies that are committed to software quality but not required to certify to a specific standard.

LDRAunit follows typical unit testing methodology: it takes the smallest piece of testable software in an application, isolates it from the remainder of the code, and determines whether it behaves as expected. Testing code units separately, before integrating them into modules and then systems, simplifies identifying which part of the code is failing to deliver expected results.
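For illustration, here is a minimal hand-rolled C sketch of that approach, using a hypothetical sat_add function and a simple pass/fail driver; the names are invented for this example, and LDRAunit generates the equivalent harness automatically rather than requiring it to be written by hand.

    #include <stdio.h>

    /* Hypothetical unit under test: saturating addition on 8-bit values. */
    static unsigned char sat_add(unsigned char a, unsigned char b)
    {
        unsigned int sum = (unsigned int)a + (unsigned int)b;
        return (sum > 255u) ? 255u : (unsigned char)sum;
    }

    /* Hand-rolled driver; a generated harness exercises the unit in
       isolation the same way. */
    static int check(const char *name, unsigned char got, unsigned char want)
    {
        if (got != want) {
            printf("FAIL %s: got %d, want %d\n", name, got, want);
            return 1;
        }
        printf("PASS %s\n", name);
        return 0;
    }

    int main(void)
    {
        int failures = 0;
        failures += check("nominal",  sat_add(1, 2),     3);
        failures += check("boundary", sat_add(255, 0),   255);
        failures += check("saturate", sat_add(200, 100), 255);
        return failures ? 1 : 0;
    }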

Using control flow and data flow analysis techniques, LDRAunit automatically generates tests in the application language (C, C++, Ada, or Java) and makes them executable on either the host or the target. LDRAunit also automates stub generation for artifacts such as methods, constructors, system calls, and packages, all managed within a user interface.
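To make the role of such a stub concrete, the hand-written C sketch below replaces an external dependency with a scriptable stand-in; the names (read_sensor, average_channels) are invented for illustration, and LDRAunit would generate and manage the stub itself.

    #include <stdio.h>

    /* External dependency of the unit under test; the real implementation
       lives in another module and is replaced here by a stub. */
    int read_sensor(int channel);

    /* Unit under test: averages two sensor channels. */
    int average_channels(void)
    {
        return (read_sensor(0) + read_sensor(1)) / 2;
    }

    /* Stub standing in for the dependency, with scriptable return values
       and a call counter so the test can verify interactions. */
    static int stub_return[2];
    static int stub_calls;

    int read_sensor(int channel)
    {
        ++stub_calls;
        return (channel == 0 || channel == 1) ? stub_return[channel] : 0;
    }

    int main(void)
    {
        stub_return[0] = 10;
        stub_return[1] = 20;
        printf("average=%d, dependency called %d times\n",
               average_channels(), stub_calls);
        return 0;
    }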

Through its eXtreme Testing capabilities, LDRAunit applies a range of return and global parameter values to the managed stubs to exercise stub behavior, and offers configurable exception handling to ensure that all code can be tested, minimizing the need for manual intervention.
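A rough sketch of that value-sweeping idea follows, with a hypothetical get_reading stub and a clamping unit under test; neither name comes from LDRA, and the sweep here is hand-coded where the tool applies the value ranges automatically.

    #include <stdio.h>
    #include <limits.h>

    /* Scriptable stub mimicking a managed stub's configurable return value. */
    static int stub_value;
    static int get_reading(void) { return stub_value; }

    /* Unit under test: clamps a raw reading into the range [0, 100]. */
    static int clamped_reading(void)
    {
        int r = get_reading();
        if (r < 0)   return 0;
        if (r > 100) return 100;
        return r;
    }

    int main(void)
    {
        /* Sweep nominal and extreme stub return values, the way a range of
           values is applied to managed stubs. */
        const int sweep[] = { INT_MIN, -1, 0, 50, 100, 101, INT_MAX };
        int failures = 0;

        for (int i = 0; i < (int)(sizeof sweep / sizeof sweep[0]); ++i) {
            stub_value = sweep[i];
            int out = clamped_reading();
            if (out < 0 || out > 100) {
                printf("FAIL: input %d escaped the clamp (got %d)\n",
                       sweep[i], out);
                ++failures;
            }
        }
        if (failures)
            printf("%d failure(s)\n", failures);
        else
            printf("all swept values stayed within range\n");
        return failures ? 1 : 0;
    }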

“Traditionally, creating software tests that exercise the application has required at least as much time as creating the application itself,” says LDRA Operations Director Ian Hennell. “LDRAunit changes that. The tool itself takes over the tedious, error-prone process of manually developing a test harness simply by analyzing the code, generating tests, and applying a range of parameters that ensure against conditions that can cause unexpected results. Automating such comprehensive testing in a stand-alone product significantly improves a development team’s flexibility while minimizing costs.”

By storing groups of tests as sequences, LDRAunit retains the information required to rerun test cases and store the results for regression verification and requirements-based testing. LDRAunit can measure and report structural coverage metrics, including procedure call, statement, branch/decision, modified condition/decision coverage (MC/DC), and linear code sequence and jump (LCSAJ) coverage. Coverage data can be presented through a combination of built-in reports, custom reports using a results application programming interface (API), and flow and call graph displays.
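As a worked example of the strictest of those metrics, MC/DC requires showing that each condition in a decision independently affects the outcome. The C sketch below, with invented condition names rather than anything from LDRA, runs a minimal four-case MC/DC set for a three-condition decision:

    #include <stdio.h>

    /* Decision under test: deploy when armed AND (altitude low OR manual
       override). All names are illustrative. */
    static int should_deploy(int armed, int low_alt, int override)
    {
        return armed && (low_alt || override);
    }

    int main(void)
    {
        /* Minimal MC/DC set: each non-baseline row flips exactly one
           condition relative to another row and changes the decision
           outcome, proving each condition's independent effect. */
        struct { int armed, low_alt, override, expect; } tc[] = {
            { 1, 1, 0, 1 },  /* baseline: decision true                */
            { 0, 1, 0, 0 },  /* flip armed    vs. baseline -> changes  */
            { 1, 0, 0, 0 },  /* flip low_alt  vs. baseline -> changes  */
            { 1, 0, 1, 1 },  /* flip override vs. row 3    -> changes  */
        };
        int failures = 0;

        for (int i = 0; i < 4; ++i) {
            int got = should_deploy(tc[i].armed, tc[i].low_alt, tc[i].override);
            if (got != tc[i].expect) {
                printf("FAIL case %d\n", i);
                ++failures;
            }
        }
        printf("%s\n", failures ? "coverage run failed" : "MC/DC set passed");
        return failures ? 1 : 0;
    }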

Developers can use results to populate compliance reports that give overall pass/fail metrics for industry standards, such as DO-178B/C, with line-by-line views that detail specific statements, branches, and conditions executed by individual tests and combinations of tests.

About the Author

Courtney E. Howard | Chief Editor, Intelligent Aerospace

Courtney enjoys writing about all things high-tech in PennWell’s burgeoning Aerospace and Defense Group, which encompasses Intelligent Aerospace and Military & Aerospace Electronics. She’s also a self-proclaimed social-media maven, mil-aero nerd, and avid avionics and space geek. Connect with Courtney at [email protected], @coho on Twitter, on LinkedIn, and on Google+.
