Have you ever met a software developer who has never written a bug? Of course not. Applications are big, complicated, and interconnected systems with many potential points of failure. Every developer eventually introduces logic issues and false assumptions into code, causing it to misbehave, use too many resources, or even crash. Even when everything goes according to plan, issues emerge from application component interactions, increased users, or unexpressed expectations. These issues can be costly for your brand image, for your customers’ business, and for your sales team as they attempt to secure new contracts and leads.
To catch bugs, we test. When writing a feature, we test it against our requirements to ensure it behaves correctly, and we may hand the feature off to a quality assurance (QA) team for additional acceptance testing and validation. If we are lucky, we have automated regression tests that test the product as a whole to ensure that new features do not break other parts of the product.
Unfortunately, even in the best situations these testing layers tend to be time-consuming and costly. Automated regression tests usually take hours to run, sometimes even days or weeks. And if you hand off testing to manual testers, you could be looking at a much longer turnaround time only to discover, perhaps, that the work you recently completed breaks some aspect of the product. If you miss something obvious in developer testing, your feature may go through the entire process of being built and distributed to QA only to have it bounce back relatively untested. Even with a less obvious bug, you may have days or weeks of lag time before you’re able to revisit the issue and resolve it. Then the process starts over again.
This QA cycle can cause organizations a lot of pain. It leads to a sort of source code bottleneck, causing organizations to forego potentially valuable feature development (when it matters most) just to minimize the painful QA process. How many times have you heard yourself, other developers, or development managers citing testing and QA overhead when shutting down feature requests? “That feature does seem really valuable to our customers,” the team agrees, “but we need to ship a product with a focus on quality,” which means several more months of cycling code fixes back and forth before the team can consider adding the feature.
Wouldn’t it be great to get results in minutes, instead of hours or days, and to have developers run tests on their own machines before committing code and sending off a build to QA? Unit tests make this possible. With unit tests, developers shorten the regression and bug-finding feedback loop to near-instant levels. So although developers may write several bugs a week, they find and fix most of those bugs so quickly that they rarely make it to the QA team or out to production.
What is unit testing?
At its core, unit testing is about testing the smallest pieces, subroutines, or units of your application at the lowest level to ensure correct inputs, outputs, and operation. If we completely understand what each small part of the program is supposed to do, what its limits are, and what its failures look like, we can better guarantee that the application will work correctly as a whole.
While unit testing does not guarantee that all parts of an application will interface correctly, the idea is that thorough testing of boundary conditions for discrete units of code helps ensure that external consumers get the correct functionality without encountering bugs. If a bug does crop up, it should be easy enough to isolate the conditions that were not tested sufficiently and add a regression test.
This is not a free process, however. A good unit test requires a concentrated developer effort to evaluate all the code paths for a routine and figure out which sets of inputs cause each line in the routine to be executed. This usually means reading, re-reading, and testing the logic several times to ensure good code coverage. Consider the following:
subroutine repeat
    in  arg1, a
    in  arg2, i
    out arg3, a
endparams
record
    i, i4
proc
    if (arg1 == "")
        throw new InvalidArgumentException()
    if (arg1 == "test")
        throw new ApplicationException("Test")
    for i from 1 thru arg2
    begin
        arg3 = %atrim(arg3) + %atrim(arg1)
    end
    xreturn
endsubroutine
When writing a unit test for this code, a developer might ask the following basic questions:
- For the routine arguments:
  - What does a standard call look like?
  - What are the limits (min and max) of the arguments?
  - What happens when I exceed the boundaries of the arguments?
  - What happens when I do not pass an argument (if possible)?
- For each conditional block or branching path:
  - Which argument can I pass to reach this path?
  - How can I verify that the code in this block executed (output, logging, exceptions, etc.)?
- For each output:
  - Did the routine return an expected result?
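To make those questions concrete, here is a minimal sketch of how they might translate into test cases. Because the sample above is Synergy DBL, this is an illustrative Python translation of the repeat routine; the function name, exception types (ValueError and RuntimeError standing in for the DBL exceptions), and test names are assumptions for demonstration only, not the Synergy unit testing API.

```python
import unittest

def repeat(arg1: str, arg2: int) -> str:
    """Illustrative Python translation of the DBL `repeat` subroutine."""
    if arg1 == "":
        raise ValueError("arg1 must not be empty")
    if arg1 == "test":
        raise RuntimeError("Test")
    result = ""
    for _ in range(arg2):
        result += arg1
    return result

class RepeatTests(unittest.TestCase):
    # What does a standard call look like?
    def test_standard_call(self):
        self.assertEqual(repeat("ab", 3), "ababab")

    # Boundary condition: repeating zero times yields an empty result.
    def test_zero_repetitions(self):
        self.assertEqual(repeat("ab", 0), "")

    # For each branching path: which input reaches it, and how do we
    # verify it executed? Here, by asserting on the raised exception.
    def test_empty_input_raises(self):
        with self.assertRaises(ValueError):
            repeat("", 3)

    def test_magic_value_raises(self):
        with self.assertRaises(RuntimeError):
            repeat("test", 3)
```

Running this suite (for example, with `python -m unittest`) exercises every line of the routine, so each branch is verified on every run.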
At first, as developers get accustomed to checking all the boundary conditions, this process may take as much time as it takes to develop code for the feature itself. Analyzing code and investigating its failure points is an extra effort that requires time and diligence. Writing unit test cases and structuring tests in useful ways is yet another challenge, and test cases get more complicated when there are external dependencies. For example, you may need to take steps to control the flow of data that comes in from a database, external file, or connected service. You may even need to develop a library of code, for instance, for mock services and mock databases. The library may be for unit testing purposes only and may add no direct value to your shipping application. But…
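As a sketch of what controlling an external dependency can look like, here is a small Python example that isolates business logic from a database by substituting a mock. The CustomerStore class and greeting_for function are hypothetical stand-ins invented for this illustration; the mocking technique itself is standard (Python's unittest.mock).

```python
import unittest
from unittest import mock

# Hypothetical data-access layer: in a real application this would
# query a database, so calling it directly from a unit test is slow
# and fragile. Here it exists only to be mocked.
class CustomerStore:
    def lookup_name(self, customer_id: int) -> str:
        raise NotImplementedError("talks to the real database")

def greeting_for(store: CustomerStore, customer_id: int) -> str:
    """Business logic under test: formats a greeting for a customer."""
    name = store.lookup_name(customer_id)
    return f"Hello, {name}!"

class GreetingTests(unittest.TestCase):
    def test_greeting_uses_store(self):
        # Replace the real store with a mock so the test controls
        # exactly what data "comes back from the database".
        store = mock.create_autospec(CustomerStore, instance=True)
        store.lookup_name.return_value = "Ada"
        self.assertEqual(greeting_for(store, 42), "Hello, Ada!")
        store.lookup_name.assert_called_once_with(42)
```

Because the mock replaces the database entirely, the test runs in milliseconds and never depends on external state.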
The overhead is worth it
The effort developers put into writing unit tests leads to good design in general and reduces long-term technical debt. The questions that developers ask when writing these tests tease out hidden requirements and design gaps that may have otherwise been overlooked. Every correctly controlled routine argument and data access improves overall software quality and stability, and it deepens developers' understanding of their own software.
If every input and code path is tested, there is far less to worry about when code interfaces with new services and data sources down the road. For instance, if one of your functions is suddenly exposed by a new HarmonyCore API that opens it up to several new calling scenarios you’ve never considered, you can be reasonably confident that the code will do the right thing because it’s thoroughly tested.
Gradually, the unit test creation process will become a habit for developers and will require less of the overall development time. Furthermore, the more painful types of application patterns that take a lot of analysis will naturally show up less frequently in code. For example, it is very difficult to write tests for software with high cyclomatic complexity or large numbers of branching paths. Each path represents additional testing overhead for the developer, and complex paths are hard to analyze. So developers naturally start breaking code down into smaller, reusable units that can be tested individually. Developers soon learn to avoid writing routines with hundreds or thousands of lines, and by avoiding this they make applications more maintainable. Even if a few large routines remain, the unit tests document these routines, making them easier to understand.
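Here is a small Python sketch of that decomposition, using a hypothetical order-pricing routine invented for illustration. The branchy version must be tested through every combination of its conditions at once; after the split, each small unit can be tested in isolation with a couple of assertions.

```python
# Hard to test as one unit: three independent branches multiply
# into many combinations of inputs that must all be exercised.
def process_order_monolith(qty, unit_price, is_member, coupon):
    if qty <= 0:
        raise ValueError("quantity must be positive")
    total = qty * unit_price
    if is_member:
        total *= 0.90          # 10% member discount
    if coupon == "SAVE5":
        total -= 5             # flat coupon
    return max(total, 0)

# The same logic split into small, individually testable units.
def validate_quantity(qty):
    if qty <= 0:
        raise ValueError("quantity must be positive")

def member_discount(total, is_member):
    return total * 0.90 if is_member else total

def apply_coupon(total, coupon):
    return total - 5 if coupon == "SAVE5" else total

def process_order(qty, unit_price, is_member, coupon):
    validate_quantity(qty)
    total = member_discount(qty * unit_price, is_member)
    return max(apply_coupon(total, coupon), 0)
```

Each helper now needs only two or three tests for full coverage, and the top-level routine reads as documentation of the pricing rules.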
Additionally, when you hire or introduce new developers to a unit-tested source base, the hard questions they’ll ask about how routines and core business logic work should be much easier to answer because the unit tests document the code. Someone more experienced with the code has already had to think about the more complex logic, devise tests that prove its functionality, and verify that it behaves correctly, and this is reflected in unit tests.
All about speed
There are many advantages to unit testing, but what makes unit tests so effective is their speed and how they shorten the feedback loop. Unit tests are written by developers for developers. They can run on developer machines as frequently as needed—for example, whenever a developer has code to test. And because the tests are small and run against small parts of code, they are fast. On average, each unit test takes between 0 and 12 milliseconds to run. That’s milliseconds, not minutes, hours, or days.
Let’s say, for instance, that you have an application with 5,000 methods and unit tests for those methods. And let’s assume that on average each method has 10 tests that each take 10ms to run. Testing the whole application on a developer machine would take only 500,000ms (5000 x 10 x 10ms = 500,000ms), which is 500 seconds or 8.3 minutes. That doesn’t mean you are going to sit around for something like 8.3 minutes every time you make a change. You will most likely test a subsection that takes less than a minute, look for bugs, and iterate before moving on to a larger chunk. When you finally need to run all your unit tests, results are only a coffee break away.
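The back-of-the-envelope arithmetic above can be checked in a few lines. The figures are the article's illustrative assumptions, not measurements:

```python
methods = 5_000          # methods in the application
tests_per_method = 10    # average unit tests per method
ms_per_test = 10         # average runtime per test, in milliseconds

total_ms = methods * tests_per_method * ms_per_test
print(total_ms)                      # 500000 ms
print(total_ms / 1_000)              # 500.0 seconds
print(round(total_ms / 60_000, 1))   # 8.3 minutes
```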
By the time a feature makes its way to QA for validation, it will already have been through hundreds or thousands of unit test cycles, freeing up QA to focus on usability, completeness, and user experience, rather than on whether or not the feature works at all.
More to come
In this article we’ve explored how unit tests help us eliminate bugs, better understand code, write better code, and mitigate QA cycles that often bog down development—in other words, how they can improve quality while speeding up the development process. Keep your eye on Synergy-e-News and the Synergex blog for more information and learning opportunities related to unit testing, including the new traditional Synergy unit testing feature, which has already been discussed on the Synergex blog and in the Synergy/DE documentation: