Software Quality Metrics

How can one measure the overall performance of a software engineering team and track it over time to ensure continuous improvement?

It boils down to defining a set of metrics and tracking them on a regular basis. Here are a few I have found useful:

  1. Number of new issues opened in a week. These could be bugs, confusing user interface behavior, missing features, or simply a customer’s dislike of some behavior in the software. These issues should be classified into different buckets and tracked separately, in addition to tracking the aggregate issue count.
  2. Number of issues closed in a week. How many open issues the team closes in a week shows how its time is split between addressing open issues and implementing new features dictated by the product road map and/or other client commitments not classified as quality issues.
  3. Daily count of currently open issues. (A sketch showing how metrics 1 through 3 can be computed from an issue tracker export follows this list.)
  4. Software errors not noticed by customers but logged by the software to a log file or database. A weekly review of issues that were detected by the software’s internal exception handling rather than reported by customers gives good insight into how well the software is tested before deployment. Since many applications are data driven and it is practically impossible to test every data scenario for every feature, it is often the customer’s data that sends the software down a path ending in an internal error, which may or may not be visible to the customer. (A logging sketch for capturing such errors also follows this list.)
  5. Classification of issues based on where and by whom each error was reported. This is useful primarily for measuring the effectiveness of the internal test and QA teams: if most issues are discovered in the development and staging environments, internal test and QA are doing their job. Over time, the percentage of issues reported by customers in the production environment should trend downward.
  6. Feature-wise error count over time. Since software products evolve continuously, it is not fair to call the software too buggy if most of the bugs are concentrated in a few newly added features. One should treat the product as a collection of mini-products and track the above metrics on a per-feature basis; the issue count for a feature should decline as the feature ages. (The grouping sketch after this list covers this metric and the previous one.)
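Here is a minimal sketch of how metrics 1 through 3 could be computed from an issue tracker export. The file name issues.csv and the opened_at/closed_at columns are assumptions for illustration; any tracker that can export open and close timestamps will do.

```python
# Sketch: weekly opened/closed counts and a daily open-issue count.
# Assumes a hypothetical issues.csv with opened_at/closed_at columns
# (closed_at empty for still-open issues); adapt to your tracker.
import pandas as pd

issues = pd.read_csv("issues.csv", parse_dates=["opened_at", "closed_at"])

# Metric 1: new issues opened per week (aggregate count).
opened_per_week = issues.resample("W", on="opened_at").size()

# Metric 2: issues closed per week.
closed = issues.dropna(subset=["closed_at"])
closed_per_week = closed.resample("W", on="closed_at").size()

# Metric 3: issues open on each day -- opened on or before the day
# and not yet closed by it.
days = pd.date_range(issues["opened_at"].min(), pd.Timestamp.today(), freq="D")
open_per_day = pd.Series(
    [((issues["opened_at"] <= day) &
      (issues["closed_at"].isna() | (issues["closed_at"] > day))).sum()
     for day in days],
    index=days,
)

print(opened_per_week.tail(), closed_per_week.tail(), open_per_day.tail(), sep="\n")
```

The same frame can be filtered by an issue-type column to produce the per-bucket counts mentioned in metric 1.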
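Metric 4 requires the application to record internally handled errors somewhere reviewable. Below is a minimal sketch using Python’s standard logging module; the process_order function and its order argument are hypothetical stand-ins for any guarded business operation.

```python
# Sketch: route internally handled errors to a dedicated log file that
# feeds the weekly review. process_order is a hypothetical operation.
import logging

error_log = logging.getLogger("internal_errors")
handler = logging.FileHandler("internal_errors.log")
handler.setFormatter(logging.Formatter("%(asctime)s %(name)s %(message)s"))
error_log.addHandler(handler)

def handle_request(order):
    try:
        process_order(order)  # hypothetical business operation
    except Exception:
        # Full traceback goes to the log, nothing is shown to the
        # customer; these entries are the "errors not noticed by
        # customers" reviewed each week.
        error_log.exception("process_order failed for order %s", order.id)
```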
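Metrics 5 and 6 are group-by views of the same issue data. A sketch, continuing with the hypothetical issues.csv and assuming it also carries reported_in (environment) and feature columns:

```python
# Sketch: classification by reporting environment (metric 5) and
# weekly issue counts per feature (metric 6). The reported_in and
# feature columns are assumed, not from any particular tracker.
import pandas as pd

issues = pd.read_csv("issues.csv", parse_dates=["opened_at"])

# Metric 5: percentage of issues found in each environment; the
# production share should trend downward.
by_env = issues["reported_in"].value_counts(normalize=True) * 100

# Metric 6: weekly issue counts per feature; expect a feature's count
# to decline as the feature ages.
per_feature = (issues.set_index("opened_at")
                     .groupby("feature")
                     .resample("W")
                     .size())

print(by_env, per_feature, sep="\n")
```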

Sometimes it may be necessary to track the above metrics on a per-customer basis. This is true when the software is used by a few high-value customers and includes customer-specific features.
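Under the same assumptions as the sketches above, per-customer tracking is just one more grouping key, e.g. a hypothetical customer column in the export:

```python
# Sketch: per-feature weekly counts, further split by customer.
per_customer = (issues.set_index("opened_at")
                      .groupby(["customer", "feature"])
                      .resample("W")
                      .size())
```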
