Posted by Anders Lyman on May 22, 2013 in C#, Testing

I’ll be the first to say that testing and code metrics can improve software quality and increase productivity, but an overzealous application of either could incur a heavy cost.

Tests are code, code is overhead, and while some overhead is necessary and even advisable, overhead is debt and should be minimized whenever possible.

There is no perfect product, bugs will be deployed to live systems (no matter how many tests and quality checks are in place), and for the most part, customers not only tolerate this, they expect it to one degree or another.  What will really win over clients is how fast bug fixes are delivered.

Time is often better spent refactoring existing code, and improving feedback and logging systems. Making code testable will make it far easier to diagnose problems and expedite repairs.  Tests should be put in place to prevent the same bug from surfacing more than once, but more time should be spent improving existing code quality than extending a safety net of tests to compensate for poor code quality.
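To make the "prevent the same bug twice" idea concrete, here is a minimal sketch of a regression test. The `InvoiceParser` class and the bug it guards against are invented for illustration; in a real suite this would be an NUnit `[Test]` like the one later in this post, but plain assertions keep the sketch self-contained.

```csharp
using System;
using System.Globalization;

public class InvoiceParser
{
    // Illustrative bug: empty input used to throw a NullReferenceException.
    // The fix treats blank input as a zero total.
    public static decimal ParseTotal(string input)
    {
        if (string.IsNullOrWhiteSpace(input))
            return 0m;
        return decimal.Parse(input.Trim(), CultureInfo.InvariantCulture);
    }
}

public static class InvoiceParserRegressionTests
{
    // Named after the failure it guards against, so a future red test
    // points straight back to the original bug report.
    public static void ParseTotal_EmptyInput_ReturnsZero()
    {
        if (InvoiceParser.ParseTotal("") != 0m)
            throw new Exception("Regression: empty input should return 0");
    }
}
```

A test like this is cheap because it targets a known, previously observed failure rather than trying to blanket the whole class.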

I hear the term “code coverage” tossed around as if it were a panacea. The thinking seems to be, “if we improve our code coverage, it will solve all our quality problems.”  Code coverage is a useful tool, but certainly not the most important measure of quality. Depending on the code, 30% code coverage could be plenty, while in another case 70% might not be enough; and keep in mind that even if a particular piece of code is 100% covered, it may still need more testing. Code coverage is only one metric, and at best it’s an incomplete indicator of test coverage. All too often, a rule that requires 80% code coverage encourages testing of things like:

class Foo
{
    public string Bar { get; set; }

    public Foo(string bar)
    {
        Bar = bar;
    }
}


[Test]
public void TestFoo()
{
    var foo = new Foo("bar");

    Assert.AreEqual("bar", foo.Bar);
}


This does not test any business logic in the “Foo” class, and while it does increase code coverage, ultimately all that’s being exercised is the C# language itself (here, an auto-implemented property). This is not a good use of time, since you can trust the language to do its job correctly.
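By contrast, a test is worth writing when it exercises a decision the code actually makes. Here is a hedged sketch; the `Pricing` class and its discount rule are invented for illustration, not taken from any real codebase.

```csharp
using System;

public static class Pricing
{
    // Invented rule for illustration: orders of 10 or more units
    // receive a 10% discount; negative quantities are rejected.
    public static decimal Total(int units, decimal unitPrice)
    {
        if (units < 0)
            throw new ArgumentOutOfRangeException(nameof(units));
        decimal subtotal = units * unitPrice;
        return units >= 10 ? subtotal * 0.9m : subtotal;
    }
}
```

A test asserting behavior on either side of the 10-unit boundary checks something the compiler cannot verify for you, unlike the property round-trip above.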

Testing, code coverage, static code analysis, and related tools and practices, while important, have lower value than improving code quality and testability. Ultimately, no amount of success in testing can compensate for failure in code quality.

About Anders

Anders Lyman is a Software Engineer at Ancestry.com in Story Engineering. He and his team are innovators, creating tools that are often adopted and used by other teams. He enjoys reading, cooking, spending time with his beautiful wife and playing with his 9-month-old daughter.

2 Comments

Bressain Dinkelman 

I’m not really sure I agree with the conclusion being made.

“Tests should be put in place to prevent the same bug from surfacing more than once, but more time should be spent improving existing code quality than extending a safety net of tests to compensate for poor code quality.”

I’ve always found that adding tests actually IMPROVES the existing code quality and discourages poor design.

“Code coverage is only one metric, and at best it’s an incomplete indicator of test coverage.”

This is true. You can get to 100% coverage without writing one useful test. But you can also get to (or at least close to) 100% coverage with useful tests. Tests are code and a crappy test is crappy code; you reap what you sow.

“Testing, code coverage, static code analysis, and related tools and practices, while important, have lower value than improving code quality and testability. Ultimately, no amount of success in testing can compensate for failure in code quality.”

Here’s where I disagree. Testing, code coverage and static code analysis are important BECAUSE they improve code quality and testability. And I would argue that if you’re succeeding in testing (and I define succeeding as having quality tests), your code quality is improving as well.

May 30, 2013 at 9:33 pm
DouglasManee 

Testing helps in verifying and validating that the software is working as it is intended to work. This involves using static and dynamic methodologies to test the application. Dynamic testing is an assessment conducted while the program is executed; static testing, on the other hand, is an examination of the program’s code and associated documentation. Dynamic and static methods are often used together.

Douglas

December 19, 2013 at 5:57 am
