With embedded control systems, the risk of software defects becomes ever more critical, and qualification of test and development tools is a useful approach for designers, writes Jeremy Twaits
Oil and water, fire and ice, complexity and reliability: these are combinations that don't typically mix well.
Yet in an increasingly software-defined world, engineers are expected to create products faster with fewer defects, conform to more stringent standards and do so at lower test cost. This, as you may imagine, is not a straightforward task.
There can be little doubt that software is becoming an increasingly influential aspect of our lives.
The car is a very good example. A system that was once mechanical and electrical in nature is now increasingly defined by the software running in its electronic control units, which contain up to 10 million lines of code in a modern vehicle.
And taking that one step further, Samsung claim that you can "upgrade your life with a Wi-Fi enabled refrigerator". It seems that nothing is safe from the software revolution.
But how does this affect you? For years, standards bodies in regulated industries such as automotive, aerospace and medical have governed the rigorous certifications that products must achieve before being deployed.
Now, as devices become increasingly reliant on embedded control systems, the potential risk posed by software defects has become more critical. In a move to counter this, standards bodies are placing increased scrutiny not only on development tools, but also on the quality and accuracy of test tools.
The automotive industry, for example, is governed by the ISO 26262 standard, which enumerates specific requirements for test tool qualification. These requirements include creating a risk mitigation plan that assesses the impact and criticality of test tools and documenting the steps and processes used to address high-risk areas.
The goal is to ensure that a testing tool can be used to accurately validate embedded software without introducing defects.
The aerospace industry takes a similar tack, with standards such as DO-254 and revision C of DO-178 having been developed in recent years. DO-178C includes a new section entitled "Software Tools Qualification Considerations", which examines the tool development life cycle and its documentation artefacts.
Sean Donner, senior software engineer at Lockheed Martin, highlighted the effects of this, saying: "As the complexity of the systems we deliver increases, so does our test equipment; thus, a tight focus on ensuring quality and safety across all phases of development is paramount to both Lockheed Martin and our customers."
"As a result, many of the latest regulatory standards for the defence and aerospace industry place a greater emphasis on employing rigorous and strict software development practices to ensure reliable and accurate test results," pointed out Donner.
Following these strict procedures means producing very specific documentation, which makes manual validation a time-consuming process. To counter this, National Instruments has worked with a partner, CertTech LLC, to create a qualification kit for its TestStand test automation software.
The aim is a commercial off-the-shelf product that removes the need for test organisations in regulated industries to perform their own formal tool qualification.
Whilst these standards will have the most immediate impact in regulated industries, their reach will eventually extend to businesses outside these areas.
For example, when you hit the brake pedal, your car has to stop. When the aircraft you're sitting in is descending, the landing gear has to deploy.
If your smartphone, on the other hand, were to glitch and restart whilst you were trying to check the weather forecast, yes, it's annoying, but it's probably not going to endanger your wellbeing.
It may, however, impact the wellbeing of the manufacturer, whose reputation could be damaged by low product quality and even recalls if the software defects are debilitating enough.
As companies realise the financial benefits of catching defects earlier, they will naturally enforce their own standards, without it necessarily being imposed by a regulatory body.
To ensure test code quality, test engineers must follow the same best practices as embedded code developers. The engineering V-model will be familiar to many of you, illustrating the progression from high-level requirements to a deployed solution, as well as the corresponding test and review phases that verify that those requirements have been satisfied. It has typically been the bread and butter of design and systems engineers, and now test engineers are increasingly making use of it.
As the engineer progresses "through the V", there are guidelines and processes to be followed at each stage, such as requirements gathering, software documentation, traceability, unit testing and code reviews.
Software specifications define the expected system behaviour under the conditions identified in the original risk assessment, and these specifications in turn drive unit tests that are integrated into continuous testing cycles for regression testing.
Engineers writing code modules in LabVIEW, for example, can use the Unit Test Framework Toolkit, and similar frameworks are available for most other programming languages.
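To make the idea concrete, here is a minimal sketch of the kind of unit test that might sit in such a regression cycle. It uses Python's built-in unittest module rather than LabVIEW, which is graphical, and the saturate() function and its limits are hypothetical placeholders standing in for a specified behaviour rather than code from any particular product.

import unittest

def saturate(value, lower=-100.0, upper=100.0):
    """Clamp a reading to the range defined in the specification.

    The limits here are illustrative placeholders, not values from
    any real requirement document.
    """
    return max(lower, min(upper, value))

class TestSaturate(unittest.TestCase):
    """Unit tests derived from the (hypothetical) specification."""

    def test_value_within_limits_is_unchanged(self):
        self.assertEqual(saturate(42.0), 42.0)

    def test_value_above_upper_limit_is_clamped(self):
        self.assertEqual(saturate(250.0), 100.0)

    def test_value_below_lower_limit_is_clamped(self):
        self.assertEqual(saturate(-250.0), -100.0)

if __name__ == "__main__":
    unittest.main()

Run automatically on every code change, tests like these give an immediate pass/fail record that can be traced back to the requirement they cover, which is precisely what regression testing within a continuous cycle is meant to provide.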
Structured environments may go so far as to mandate that more than one reviewer examines and approves any new code incorporated into a project. These structured approaches help to manage and mitigate the risks associated with introducing changes into a system. The overall goal is to identify and eliminate software defects as early as possible.
Finding bugs earlier means less development time, and less money spent. Whether or not you work in a regulated industry, financial incentives are proving a compelling reason for companies to drive a greater focus on test software quality.
Following these guidelines, complexity and reliability needn't be like oil and water. On the face of it they shouldn't mix, but, rather like chilli and chocolate or honey and mustard, in reality they just work.
Jeremy Twaits is automated test product manager with National Instruments UK & Ireland