Unit Testing of Embedded Firmware – Part 1 – Software Confucius
This article is the first of a five-part series covering how to set up a unit test harness on an embedded software project.
For the purposes of example, I’ll use the CppUTest harness, building within Silicon Labs’ Simplicity Studio (a YACE – Yet Another Customized Eclipse). This setup will be used to unit test components for Silicon Labs’ Thunderboard Blue Gecko SoC (ARM) projects. The unit tests are executed on-host, not on-target.
The steps and process are readily adaptable to alternative tools and targets: CppUnit, Unity, Google Test, Atollic TrueSTUDIO, CodeWarrior, 8051, ATmega, Travis CI, Bitbucket Pipelines, and more.
The five parts in the series are:
- Software Confucius: The case for unit testing in embedded software development.
- x86 Unit Test Build: Creating a GCC x86 build of the CppUTest harness and tests in Simplicity Studio.
- Running & Debugging: Running and debugging the x86 build within Simplicity Studio.
- Code Coverage: Coverage measurement using LCOV & Gcov.
- Continuous Integration: Building and running CppUTest unit tests in CircleCI.
I won’t go into detail on how to write unit test cases. There are plenty of great resources for that. Instead, I’m interested in getting people over the setup hurdle. Once you have a harness running, you’re all out of excuses for not writing tests.
Software Confucius
Embedded Recalcitrance
Embedded software is late to every party. Agile, Scrum, continuous integration, unit testing, Test Driven Development (TDD), everything.
In my experience, these practices are frequently absent in embedded software. Embedded engineers aren’t “normal” software engineers. We like soldering irons, we like hardware, we like physical things. It’s no mean feat to convince an embedded engineer that testing can have value when it’s not physical.
Setting up a unit test harness is, pound for pound, a little harder for embedded software projects than for other software. Most embedded IDEs don’t support it out of the box, as though they’ve never even heard of it. There’s a non-trivial effort required to get started, and that effort gets used as an excuse to never start.
On-Target Or Off-Target
Even when I stumble upon an embedded engineer who believes in unit testing, it’s very often the case that they want to unit test on-target. Apart from letting you use one toolchain for everything, this makes hardly any sense to me. I think there are many advantages to off-target (on-host) unit testing (a minimal example follows the list):
- a faster development micro-cycle that minimizes the number of times code must be deployed to target.
- the ability to make significant progress even when target hardware is unavailable.
- tests that can be developed, executed and debugged using more powerful tools.
- tests that can run on a continuous integration server, and don’t require any other hardware.
- code that, by definition, is more portable, i.e. built by at least two toolchains and executed on at least two systems.
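To make the off-target idea concrete, here’s a minimal sketch of what an on-host CppUTest case can look like. The CppUTest macros (TEST_GROUP, TEST, CHECK_TRUE, LONGS_EQUAL) and the CommandLineTestRunner entry point are the real thing; the ring_buffer module is a hypothetical stand-in for any hardware-independent piece of your firmware.

```cpp
// test_ring_buffer.cpp -- compiled with host GCC, not the ARM toolchain.
// ring_buffer.h is a hypothetical, hardware-independent C module under test.
#include "CppUTest/TestHarness.h"

extern "C" {
#include "ring_buffer.h"
}

TEST_GROUP(RingBuffer)
{
    ring_buffer_t rb;

    void setup()    { ring_buffer_init(&rb); }
    void teardown() { }
};

TEST(RingBuffer, StartsEmpty)
{
    CHECK_TRUE(ring_buffer_is_empty(&rb));
}

TEST(RingBuffer, ReadsBackWhatWasWritten)
{
    ring_buffer_put(&rb, 0x42);
    LONGS_EQUAL(0x42, ring_buffer_get(&rb));
}

// main_AllTests.cpp -- the standard CppUTest entry point for the host binary.
#include "CppUTest/CommandLineTestRunner.h"

int main(int argc, char** argv)
{
    return CommandLineTestRunner::RunAllTests(argc, argv);
}
```

Tests like these run in milliseconds on your workstation, with no target hardware in sight. Part 2 covers getting exactly this kind of x86 build going inside Simplicity Studio.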
I Don’t Like That Type Of Testing
Embedded guys tell me that. They also say they don’t have time.
Here’s why you should like this kind of testing:
- You can test scenarios that you can’t realistically create on-target. Especially panic/assert/insane scenarios.
- It’s easy to create many more input vectors than you can create by testing manually and physically.
- You can take control of the timebase to speed up test execution dramatically, i.e. fast-forward, rewind, pause, and jump around.
- You can easily test boundary conditions, including timeouts, at (X-epsilon), X, and (X+epsilon), something that is often not achievable, and certainly not cost-effective, to test manually and physically (see the sketch after this list).
- You will have fewer errors reach the target system.
- We’ve all inherited code from someone else. And we’ve all experienced the pain of refactoring, maintaining and extending it. Can you imagine how easy it would be if that code came with a comprehensive unit test suite that characterized its behaviour? You should create such a suite for your successor. And your employer should demand it if they care about risk management.
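To illustrate taking control of the timebase and the (X-epsilon), X, (X+epsilon) boundary points, here’s a hedged sketch. The comms_timeout module, its init signature, and the 100-tick limit are all invented for the example; the pattern is simply injecting a get-ticks callback so the test owns the clock.

```cpp
// Hypothetical module: comms_timeout reads time via an injected callback,
// so the test owns the clock -- no sleeping, no waiting, no hardware timer.
#include <stdint.h>
#include "CppUTest/TestHarness.h"

extern "C" {
#include "comms_timeout.h"
}

static uint32_t fake_ticks;
static uint32_t fake_get_ticks(void) { return fake_ticks; }

TEST_GROUP(CommsTimeout)
{
    void setup()
    {
        fake_ticks = 0;
        comms_timeout_init(fake_get_ticks, 100);  // X = 100 ticks
    }
};

TEST(CommsTimeout, NotExpiredJustBeforeBoundary)
{
    fake_ticks = 99;                              // X - epsilon
    CHECK_FALSE(comms_timeout_expired());
}

TEST(CommsTimeout, ExpiredAtBoundary)
{
    fake_ticks = 100;                             // X
    CHECK_TRUE(comms_timeout_expired());
}

TEST(CommsTimeout, ExpiredJustAfterBoundary)
{
    fake_ticks = 101;                             // X + epsilon
    CHECK_TRUE(comms_timeout_expired());
}
```

All three cases run in microseconds. Try producing the 99-tick-versus-100-tick distinction on a bench with a logic analyzer and a stopwatch.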
My Own Heresy
As much as I think the value of unit testing is underestimated by many embedded engineers, I personally think the value of Test Driven Development (TDD) is overestimated. At least the very dogmatic tests-must-be-written-before-production-code strain of TDD. To me, it doesn’t matter whether unit test cases are developed minutes or hours before or after the production code. What matters is to achieve good depth and quality of testing before the code is mainlined, and to have cheap regression testing going forward.
James Grenning
Not convinced? I wasn’t either. Maybe do what I did, and try James Grenning: https://wingman-sw.com/renaissance
He was one of the authors of the Agile Manifesto and one of the few who have tried to convert the embedded recalcitrants. He’s also a CppUTest maintainer. Here’s his book: https://www.amazon.com/Driven-Development-Embedded-Pragmatic-Programmers/dp/193435662X
The book is good, and does the job of getting you started. In the end though, the only thing that will convince you is sucking it and seeing: writing plenty of unit test cases.
Very interesting series and blog. Thanks for sharing.
For smaller companies and teams it is quite difficult to step up their game and introduce UT (unit testing), for a few reasons:
– devs in those companies have mostly been working there for many years and are not familiar with these tools
– even worse, new team members that may have previous experience with UT are not supported by the seniors and management
– there are cases where management has no technical background and resists such changes, because “everything was working so far”
– they feel that their product is not critical enough to warrant UT, and that UT is only meant for automotive or life-critical projects.
– most firmware engineers (especially seniors) come from electronic engineering rather than software engineering, which makes using “complex” tools difficult.
Finally, I’m also not a fan of TDD for embedded, as it doesn’t make much sense as a general concept. It makes sense for some modules and functionality, but not for the whole project. The same goes for mocking and code coverage. Devs need enough experience to use these tools in moderation, and not be driven by them. Tools need to adapt to the use case, not the opposite. There is a lot of hype around testing coming from the internet and developer groups, but much of that hype doesn’t make sense even in their case. Testing needs to be pragmatic, not dogmatic.
Thanks for the great comments dimtass.
I concur with pretty much everything you’ve written. I think we seem particularly aligned on TDD dogma!
Certainly devs should not be chasing 100% code coverage. Thinking about a recent project I worked on, we had a CLI with a tonne of commands. There was little value in building unit tests around those commands, because we were interacting with them manually on the command line all the time. Conversely, unit tests around (subcutaneous) state machines, around boundary conditions and timings, and around scenarios that are rarely produced (or hard to produce) in normal operation, are high-value stuff IMO.
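For what it’s worth, a subcutaneous state machine test can look something like the sketch below. The conn_sm module, its events and its states are all hypothetical; the point is that the test drives the state machine directly through its event API, underneath the CLI layer.

```cpp
// Hypothetical sketch: drive a connection state machine directly through
// its event API ("subcutaneously"), bypassing the CLI sitting on top of it.
#include "CppUTest/TestHarness.h"

extern "C" {
#include "conn_sm.h"
}

TEST_GROUP(ConnStateMachine)
{
    conn_sm_t sm;

    void setup() { conn_sm_init(&sm); }
};

TEST(ConnStateMachine, LinkLossWhileConnectingReturnsToIdle)
{
    // A sequence that's awkward to reproduce from the CLI or on-target.
    conn_sm_on_event(&sm, CONN_EVT_CONNECT_REQUEST);
    conn_sm_on_event(&sm, CONN_EVT_LINK_LOST);
    LONGS_EQUAL(CONN_STATE_IDLE, conn_sm_state(&sm));
}
```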
“there are cases where management has no technical background and resists such changes” – I think the answer to that one is to not tell management. You would only tell management if you think unit testing is a separate activity that costs more time. In my experience, it isn’t and it doesn’t. It’s just part of the detailed activities performed to deliver functionality. A system of work. One that delivers free regression testing down the track.
Thanks for the article. We are setting up a new software house and want to take a modern and robust approach to development and testing. It would be good to hear some thoughts about tools like VectorCAST for managing testing.