The Software Maintainability Index Revisited

The Maintainability Index was introduced at the International Conference on Software Maintenance in 1992. To date, it is included in Visual Studio (since 2007), in recent (2012) metrics reporters for JavaScript and Python, and in older metrics tool suites.

The Maintainability Index was introduced in 1992 by Paul Oman and Jack Hagemeister, originally presented at the International Conference on Software Maintenance and later refined in a paper that appeared in IEEE Computer. It is a blend of several metrics, including Halstead Volume (HV), Cyclomatic Complexity (CC), Lines of Code (LOC), and percentage of comments (COM). For these metrics, the average per module is taken and combined into a single formula:

MI = 171 – 5.2 * ln(aveHV) – 0.23 * aveCC – 16.2 * ln(aveLOC) + 50 * sin(√(2.4 * perCOM))
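A minimal Python sketch of this formula (the function and parameter names are mine, and the per-module averages are assumed to be precomputed):

```python
import math

def maintainability_index(ave_hv, ave_cc, ave_loc, per_com):
    """Four-metric Maintainability Index.

    ave_hv  -- average Halstead Volume per module
    ave_cc  -- average cyclomatic complexity per module
    ave_loc -- average lines of code per module
    per_com -- percentage of comment lines (0-100)
    """
    return (171
            - 5.2 * math.log(ave_hv)
            - 0.23 * ave_cc
            - 16.2 * math.log(ave_loc)
            + 50 * math.sin(math.sqrt(2.4 * per_com)))
```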

The Maintainability Index attracted quite some attention, also because the Software Engineering Institute (SEI) promoted it, for example in a 1997 report. This report describes the Maintainability Index as “good and sufficient predictors of maintainability”, and as “potentially very useful for operational Department of Defense systems”. Furthermore, they suggest that “it is advisable to test the coefficients for proper fit with each major system to which the MI is applied.”

Visual Studio Code Metrics were announced in February 2007. A November 2007 blog post clarifies the specifics of the maintainability index included in it.
The formula Visual Studio uses, based on the 1994 version, is slightly different:

Maintainability Index = MAX(0, (171 – 5.2 * ln(Halstead Volume) – 0.23 * Cyclomatic Complexity – 16.2 * ln(Lines of Code)) * 100 / 171)

The resulting value is mapped onto three maintainability levels:

MI >= 20: High Maintainability
10 <= MI < 20: Moderate Maintainability
MI < 10: Low Maintainability

I have not been able to find a justification for these thresholds. The original paper used 85 and 65 (instead of 20 and 10) as thresholds, describing them as a good “rule of thumb”. The metrics are available within Visual Studio, and are part of the code metrics power tool, which can also be used in a continuous integration server.
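As a hedged sketch (the function names are mine, not Visual Studio’s API), the variant above with its thresholds comes down to:

```python
import math

def vs_maintainability_index(halstead_volume, cyclomatic_complexity, loc):
    """Visual Studio's variant: the 1994 formula without the comment
    term, rescaled to 0-100 and clamped at 0."""
    mi = (171
          - 5.2 * math.log(halstead_volume)
          - 0.23 * cyclomatic_complexity
          - 16.2 * math.log(loc)) * 100 / 171
    return max(0, mi)

def rating(mi):
    """Map an MI value onto the three maintainability levels."""
    if mi >= 20:
        return "High"
    if mi >= 10:
        return "Moderate"
    return "Low"

# Invented example: 100 lines, Halstead Volume 1000, complexity 10.
mi = vs_maintainability_index(1000, 10, 100)
print(round(mi, 1), rating(mi))  # 34.0 High
```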

I encountered the Maintainability Index myself in 2003, when working in collaboration with the Software Improvement Group (SIG). Later, researchers from SIG published a thorough analysis of the Maintainability Index (first when introducing their practical model for measuring maintainability, and later as section 6.1 of their paper on technical quality and issue resolution).

The Maintainability Index is based on per-file averages of source code metrics such as cyclomatic complexity. However, as these researchers emphasize, such metrics tend to follow a power law distribution, and taking the average tends to mask the presence of high-risk parts.
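A toy illustration of this masking effect, with invented numbers: a file with many trivial functions and one very complex one still averages out to an innocuous value.

```python
# Cyclomatic complexity per function in one file: 49 trivial
# functions plus a single 100-branch monster (invented numbers).
complexities = [1] * 49 + [100]

print(sum(complexities) / len(complexities))  # 2.98 -- looks healthy
print(max(complexities))                      # 100  -- the real risk
```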
These concerns are consistent with a recent (2012) comparative case study, in which one application was independently built by four different companies. The researchers used these systems to compare maintainability against several metrics, including the Maintainability Index. Their findings include that size as a measure of maintainability has been underrated, and that the “sophisticated” maintenance metrics are overrated.

If you are a tool smith or tool vendor, there is not much point in offering several metrics that all measure essentially the same thing. Check correlations between the metrics you offer, and if any of them are strongly correlated, pick the one with the clearest and simplest explanation.
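A minimal sketch of such a correlation check, assuming you have per-file measurements for each metric (the data below is invented):

```python
import numpy as np

# Rows: files. Columns: LOC, cyclomatic complexity, Halstead Volume.
metrics = np.array([
    [120, 14, 2100],
    [ 35,  3,  400],
    [260, 31, 5300],
    [ 80,  9, 1500],
    [410, 48, 9000],
])

# Pairwise Pearson correlations between the metric columns; values
# close to 1 suggest the metrics are largely redundant.
print(np.corrcoef(metrics, rowvar=False))
```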

Paul Oman and Jack Hagemeister. “Metrics for assessing a software system’s maintainability”. Proceedings International Conference on Software Maintenance (ICSM), 1992, pp. 337-344.
Paul W. Oman, Jack R. Hagemeister. Construction and testing of polynomials predicting software maintainability. Journal of Systems and Software 24(3), 1994, pp. 251-266.
Don M. Coleman, Dan Ash, Bruce Lowther, Paul W. Oman. Using Metrics to Evaluate Software System Maintainability. IEEE Computer 27(8), 1994, pp. 44-49.
Kurt Welker. The Software Maintainability Index Revisited. CrossTalk, August 2001, pp. 18-21.
Code Analysis Team Blog, blogs.msdn, 20 November 2007.
Ilja Heitlager, Tobias Kuipers, Joost Visser. A practical model for measuring maintainability. Proceedings 6th International Conference on the Quality of Information and Communications Technology (QUATIC 2007), 2007.
Dennis Bijlsma, Miguel Alexandre Ferreira, Bart Luijten, Joost Visser. Faster Issue Resolution with Higher Technical Quality of Software. Software Quality Journal 20(2), 2012, pp. 265-285. (Page 14 addresses the Maintainability Index.)
Khaled El Emam, Saida Benlarbi, Nishith Goel, Shesh N. Rai. The Confounding Effect of Class Size on the Validity of Object-Oriented Metrics. IEEE Transactions on Software Engineering 27(7), 2001, pp. 630-650.
Dag Sjøberg, Bente Anda, Audris Mockus. Questioning software maintenance metrics: a comparative case study. Proceedings ACM-IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM), 2012, pp. 107-110.
Update: included discussion of Sjøberg’s study, the thresholds in Visual Studio, and the problems following from averaging over a power law distribution.