A question teams often ask me is how to get started with metrics that can provide insights from code, expose pain areas, and help improve quality.
I ask them to focus on metrics from the following three areas. Some of these metrics are based on Adam Tornhill's excellent book Your Code as a Crime Scene.
1. Metrics from source code, collected/enforced via the build process
The following four metrics are quite standard and provide good insights:
- Maintainability Index
- Cyclomatic complexity
- Class Coupling
- Lines of Code
In my experience, these metrics should be treated as guidelines rather than rules. Teams should resist the temptation to modify code just to reduce cyclomatic complexity from, say, 21 to 19, unless it genuinely improves code quality. Tools like NDepend and CodeCity offer many more ways to explore and visualize code metrics.
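To make the metrics above concrete, here is a minimal sketch of what cyclomatic complexity measures, using Python's `ast` module. It counts decision points (branches and boolean operators) per function. This is an illustrative approximation, not how tools like NDepend compute the metric; their rules are more precise.

```python
import ast

def cyclomatic_complexity(source: str) -> dict:
    """Approximate cyclomatic complexity per function: 1 + decision points."""
    tree = ast.parse(source)
    results = {}
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            complexity = 1
            for child in ast.walk(node):
                # Each branch point adds one path through the function.
                if isinstance(child, (ast.If, ast.For, ast.While,
                                      ast.ExceptHandler, ast.IfExp)):
                    complexity += 1
                elif isinstance(child, ast.BoolOp):
                    # 'a and b and c' adds two extra decision points.
                    complexity += len(child.values) - 1
            results[node.name] = complexity
    return results

sample = """
def grade(score):
    if score >= 90:
        return "A"
    elif score >= 80:
        return "B"
    return "C"
"""
print(cyclomatic_complexity(sample))  # {'grade': 3}
```

A function scoring 3 here has three independent paths; the usual advice is to worry only when the number climbs well past 10-15, and even then to treat it as a prompt to look, not an order to refactor.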
2. Metrics from source control change logs, collected offline
- Hot spot detection: find files with high churn to identify possible issues and candidates for refactoring.
- Temporal coupling: find files that get updated together, revealing coupling that static analysis tools often cannot discover.
To get realistic data from this analysis, the team should avoid clubbing unrelated changes into a single change list. It is a best practice to keep change lists as atomic as possible; this also makes them easier to roll back and to merge selectively into other branches.
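Both analyses above boil down to counting over commit history. The sketch below assumes the commit data has already been exported (for example from `git log --name-only`) into a list of per-commit file lists; the parsing step and file names are illustrative.

```python
from collections import Counter
from itertools import combinations

def analyze_commits(commits):
    """commits: list of lists, each inner list = files changed in one commit."""
    churn = Counter()     # change frequency per file -> hot spot candidates
    coupling = Counter()  # co-change frequency per file pair -> temporal coupling
    for files in commits:
        churn.update(files)
        # Every pair of files touched in the same commit is coupled once.
        for pair in combinations(sorted(set(files)), 2):
            coupling[pair] += 1
    return churn, coupling

# Hypothetical history: billing.py and invoice.py keep changing together.
commits = [
    ["billing.py", "invoice.py"],
    ["billing.py", "invoice.py", "utils.py"],
    ["billing.py"],
]
churn, coupling = analyze_commits(commits)
print(churn.most_common(1))     # [('billing.py', 3)]
print(coupling.most_common(1))  # [(('billing.py', 'invoice.py'), 2)]
```

This is also why atomic change lists matter: if unrelated edits ride along in the same commit, the coupling counter reports pairs that have no real relationship.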
3. Metrics from Code Review systems, collected offline
- Number of iterations per change list: this should be a live dashboard that helps the leadership team take appropriate action if a change list is going through multiple review iterations. This is especially important when reviews happen across multiple teams.
- Number of comments per iteration, or per change list: it is normal to get a lot of comments during the early stages of development, while the team is getting used to the project structure, guidelines, and overall design. The number of comments should typically decrease over time, and later iterations of a change list should attract fewer comments.
- Nature of comments: reviewers need to categorize their comments (such as "Security Issues", "Naming Style", "Redundant Code") and assign a severity. A retrospective over these comments can help identify recurring issues and drive corrective actions.
This set of metrics is especially useful for large, diverse product teams spread across multiple locations, where multiple cross-functional teams are involved in reviews.
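As a sketch of how these review metrics could be aggregated offline, assume the comments have been exported with change list, category, and severity fields (the field names and values here are hypothetical, not from any particular review tool):

```python
from collections import Counter

def review_summary(comments):
    """comments: list of dicts with 'change_list', 'category', 'severity' keys."""
    per_change_list = Counter(c["change_list"] for c in comments)
    by_category = Counter(c["category"] for c in comments)
    high_severity = [c for c in comments if c["severity"] == "high"]
    return per_change_list, by_category, high_severity

comments = [
    {"change_list": "CL-101", "category": "Naming Style",    "severity": "low"},
    {"change_list": "CL-101", "category": "Security Issues", "severity": "high"},
    {"change_list": "CL-102", "category": "Redundant Code",  "severity": "medium"},
]
per_cl, by_cat, high = review_summary(comments)
print(per_cl["CL-101"])        # 2 comments on this change list
print(by_cat.most_common())    # recurring categories for the retrospective
print(len(high))               # 1 high-severity finding to follow up on
```

Plotting `by_category` over successive sprints is what surfaces the recurring issues the retrospective should act on.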
A word of caution
- The metrics should help identify trends, spot areas of improvement, and enable teams to take proactive action.
- They should NOT be treated as compliance parameters against which teams are measured. It is the leadership team's responsibility to build and grow the required culture.
- Not all metrics and thresholds should be treated as rules; most are guiding principles and a way to track trends.
- Metrics that qualify as rules (based on team consensus) must be part of the gated commit process from the beginning of the development phase. When enabled at a later stage, such rules can be added as warnings first and promoted to errors, project by project, as the team fixes violations across the codebase.
- Avoid subjective measures that cannot be collected via an automated process.
Further reading
- Code metrics values
- NDepend, a static analysis tool
- CodeCity, a code visualization tool
- Your Code as a Crime Scene by Adam Tornhill, for many more ideas and tooling
Thanks for reading; I hope you found it useful.