We use heuristics all the time in everyday life to guide our decisions and judgments. Real-life choices are a bit like modifying legacy code: we have to decide based on incomplete information and uncertain outcomes. In both situations, we aim for solutions that we believe have a high probability of leading to desirable outcomes.
Heuristics are, by definition, imprecise. A common source of error is to substitute a simpler question for the difficult one we actually face. Because these mental processes are unconscious, we’re not even aware that we answered the wrong question.
One example is availability bias: we base decisions on how easily examples come to mind. In a classic study by Paul Slovic in Decision Making: Descriptive, Normative, and Prescriptive Interactions [SFL88], researchers asked people about the most likely causes of death. The participants could choose between pairs such as botulism versus lightning, or murder versus suicide. Respondents misjudged the probabilities in favor of the more dramatic and violent alternative, choosing murder over suicide and lightning over botulism, although statistics show that the reverse is much more likely.
We’re not immune to these biases during software development, either. Suppose you recently read a blog post describing a problematic data access implementation. If you were then asked where the problems are in your own system, the availability bias might well kick in, and you’d be predisposed to answer “data access,” even if you didn’t consciously recall reading that blog post.
Our constant reliance on heuristics is one reason why we need techniques like the ones in this book. These techniques support our decision-making and let us verify our assumptions. We humans are anything but rational.
When you started this chapter, you’d already identified some hotspots. Now you’ve learned about simple ways to classify them. By using the name of the potential offender, you can sort out true problems from false positives.
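As a minimal sketch of this kind of name-based triage, the snippet below flags hotspot file names that hint at vague, catch-all responsibilities. Both the hotspot list and the "suspicious" name fragments are hypothetical examples chosen for illustration, not a fixed rule:

```python
# Name fragments that often signal vague responsibilities (an assumption,
# not an exhaustive or authoritative list).
SUSPICIOUS_FRAGMENTS = ["Impl", "Manager", "Util", "Helper", "Processor"]

def looks_suspicious(file_name: str) -> bool:
    """Flag names that hint at broad, catch-all responsibilities."""
    return any(fragment in file_name for fragment in SUSPICIOUS_FRAGMENTS)

# Hypothetical hotspot candidates from an earlier analysis step.
hotspots = ["SessionImpl.java", "Order.java", "StringUtil.java"]

flagged = [name for name in hotspots if looks_suspicious(name)]
print(flagged)  # ['SessionImpl.java', 'StringUtil.java']
```

A name check like this only prioritizes which hotspots to inspect first; it never replaces reading the code itself.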
Heuristics are mental shortcuts. When we rely on them, we trade precision for simplicity, so there’s always a risk of drawing incorrect conclusions. Remember how we saw a warning signal as we categorized Configuration.java in Check Your Assumptions with Complexity? That’s just a risk we have to take.
Hotspots such as SessionImpl.java and SessionFactoryImpl.java are files we want to refactor. Such large-scale refactorings are challenging and require more discipline than local changes: it’s way too easy to code yourself into a corner. To support them, have a look at Appendix 1, Refactoring Hotspots, which uses names as a guide during the initial refactoring effort once the offending code is found.
We also want to consider whether the hotspot code is deteriorating further or improving over time. Many teams actively refactor code, so perhaps the area flagged in the hotspot is actually in better shape now than it was a year ago. In that case, the code may be heading in the right direction. One way to investigate that is by looking at the complexity trends over time. In the next chapter, we’ll investigate a fast, lightweight metric that lets us calculate and understand trends with a minimum of overhead.
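To make the idea of a trend concrete, here's a minimal sketch of one possible lightweight proxy: approximating a file's complexity by its total logical indentation, then comparing that number across revisions. The helper function, the spaces-per-level setting, and the sample revisions are all assumptions for illustration, not the specific metric discussed later:

```python
def indentation_complexity(source: str, spaces_per_level: int = 4) -> int:
    """Approximate complexity as the total logical indentation of all lines.

    Deeper nesting usually means more conditional logic, so summing
    indentation levels gives a rough, language-agnostic proxy. The
    spaces_per_level value is an assumption about the code style.
    """
    total = 0
    for line in source.splitlines():
        stripped = line.lstrip(" \t")
        if not stripped:
            continue  # blank lines carry no complexity
        leading = line[: len(line) - len(stripped)]
        # Count a tab as one level and every four spaces as one level.
        total += leading.count("\t") + leading.count(" ") // spaces_per_level
    return total

# Hypothetical snapshots of the same file at two points in time:
revisions = [
    "def f():\n    return 1\n",
    "def f(x):\n    if x:\n        return 1\n    return 0\n",
]
print([indentation_complexity(rev) for rev in revisions])  # [1, 4]
```

A rising sequence suggests the hotspot is deteriorating; a falling one suggests the team's refactoring efforts are paying off.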