The most obvious problem with failed test automation is that teams spend more time keeping tests up and running than on developing new features. That’s not where you want to be. Remember Lehman’s law of continuing change? Without new features, your software becomes less useful.
Another, less obvious cost is psychological. In an ideal system, a broken test is a loud cry to pause other tasks and focus on finding the bug that made the test fail.
But when we start to accept failing tests as the normal state of affairs, we’ve lost. A failing test is no longer a warning signal but a potential false positive. Over time, you lose faith in the tests and blame them rather than the code they exercise. A failed test-automation project therefore costs more than just the time spent maintaining test environments and scripts.
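One way to keep that warning signal trustworthy is to make the feedback loop unforgiving: any red test stops the pipeline. Here’s a minimal sketch of such a gate, assuming a pytest-based suite; the script, flags, and messages are illustrative rather than prescriptive:

```python
#!/usr/bin/env python3
"""Minimal CI gate: run the test suite and halt the pipeline on any failure.

Illustrative sketch assuming a pytest-based project; the point is that a
red test stops everything, so failures remain genuine warning signals.
"""
import subprocess
import sys


def main() -> int:
    # pytest exits with 0 only when every collected test passes;
    # --maxfail=1 aborts the run at the first failure for fast feedback.
    result = subprocess.run([sys.executable, "-m", "pytest", "--maxfail=1", "-q"])
    if result.returncode != 0:
        print("Test failure: stop the line and investigate before anything else.")
    return result.returncode


if __name__ == "__main__":
    sys.exit(main())
```

Wired into a pipeline as a mandatory step, a gate like this makes it structurally harder to treat a red build as business as usual.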
Costs like these are why you learned to set up a safety net to catch such problems early. The next natural step is to generalize that safety net, so let’s look at how to apply it across different software architectures.