Here are some Complexity Metrics from stan4j, an Eclipse class-structure analysis tool.
I like this tool and these metrics. I treat the metrics as statistics, indicators, warning signs. Sometimes a method or a class is complex because it really does contain complicated logic; in that case what should be done is to keep an eye on it,
and review it to see whether it needs refactoring, or at least review it carefully, because such code is usually error prone. I also use the tool to study source code, since I like to learn from the complex parts down to the simple ones. stan4j actually includes other metric groups as well, such as the Robert C. Martin metrics, the Chidamber & Kemerer metrics, and count metrics,
but I like this one best.
Complexity Metrics
Cyclomatic Complexity Metrics
Cyclomatic Complexity (CC)
The cyclomatic complexity of a method is the number of decision points in the method's control flow graph incremented by one. Decision points occur at if/for/while statements, case/catch clauses and similar source code elements, where the control flow is not just linear. The number of (byte code) decision points introduced by a single (source code) statement may vary, depending e.g. on the complexity of boolean expressions. The higher the cyclomatic complexity value of a method is, the more test cases are required to test all the branches of the method's control flow graph.
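To make the counting concrete, here is a small hand-annotated Java method of my own (a hypothetical example, not taken from the stan4j documentation):

public class CcExample {

    // Decision points: if (1) + for (1) + if (1) + catch (1) = 4,
    // so cyclomatic complexity = 4 + 1 = 5, and at least 5 test cases
    // are needed to cover every branch of the control flow graph.
    static int parseAndSum(String[] tokens) {
        if (tokens == null) {                    // decision point 1
            return 0;
        }
        int sum = 0;
        for (String token : tokens) {            // decision point 2
            if (token.isEmpty()) {               // decision point 3
                continue;
            }
            try {
                sum += Integer.parseInt(token);
            } catch (NumberFormatException e) {  // decision point 4
                // skip tokens that are not valid integers
            }
        }
        return sum;
    }
}

A compound condition such as token == null || token.isEmpty() would typically add one more bytecode decision point, which is why a single source statement can contribute more than one.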
Average Cyclomatic Complexity
Average value of the Cyclomatic Complexity metric over all methods of an application, library, package tree or package.
Fat Metrics
The Fat metric of an artifact is the number of edges in an appropriate dependency graph of the artifact. The dependency graph type depends on the metric variant and the chosen artifact:
Fat
The Fat metric of an application, library or package tree is the edge count of its subtree dependency graph. This graph contains all the artifact's children in the package tree hierarchy, thereby also including leaf packages. (To see the appropriate graph in the Composition View, the Structure Explorer's Flat Packages toggle has to be disabled. The Show Libraries toggle has to be enabled if the chosen artifact is a library, otherwise it has to be disabled.)
The Fat metric of a package is the edge count of its unit dependency graph. This graph contains all top level classes of the package.
The Fat metric of a class is the edge count of its member graph. This graph contains all fields, methods and member classes of the class. (This graph and the Fat value are only available if the code analysis was performed with Level of Detail Member, not Class.)
Fat for Library Dependencies (Fat - Libraries)
The Fat for Library Dependencies metric of an application is the edge count of its library dependency graph. This graph contains all libraries of the application. (To see the appropriate graph in the Composition View, the Structure Explorer's Show Libraries toggle has to be enabled.)
Fat for Flat Package Dependencies (Fat - Packages)
The Fat for Flat Package Dependencies metric of an application is the edge count of its flat package dependency graph. This graph contains all packages of the application. (To see the appropriate graph in the Composition View, the Structure Explorer's Flat Packages toggle has to be enabled and the Show Libraries toggle has to be disabled.)
The Fat for Flat Package Dependencies metric of a library is the edge count of its flat package dependency graph. This graph contains all packages of the library. (To see the appropriate graph in the Composition View, the Structure Explorer's Flat Packages and Show Libraries toggles have to be enabled.)
Fat for Top Level Class Dependencies (Fat - Units)
The Fat for Top Level Class Dependencies metric of an application or library is the edge count of its unit dependency graph. This graph contains all the top level classes of the application or library. (For reasonable applications it is too large to be visualized and thus can not be displayed in the Composition View. Unit dependency graphs may only be displayed for packages.)
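Since every Fat variant above boils down to counting the edges of some dependency graph, here is a minimal sketch of my own (not stan4j code) that computes such an edge count from a dependency graph given as an adjacency map; the package names are made up for illustration:

import java.util.List;
import java.util.Map;

public class FatExample {

    // The Fat value of an artifact is simply the number of edges in the
    // chosen dependency graph: sum the out-degrees of all nodes.
    static int fat(Map<String, List<String>> dependencyGraph) {
        return dependencyGraph.values().stream()
                .mapToInt(List::size)
                .sum();
    }

    public static void main(String[] args) {
        // Hypothetical flat package dependency graph of a tiny application.
        Map<String, List<String>> packages = Map.of(
                "app.ui",          List.of("app.service", "app.model"),
                "app.service",     List.of("app.model", "app.persistence"),
                "app.persistence", List.of("app.model"),
                "app.model",       List.of());
        // 2 + 2 + 1 + 0 = 5 edges, so Fat - Packages would be 5 here.
        System.out.println("Fat = " + fat(packages));
    }
}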
From the comments:

"… the number of StyleCop warnings + 10 * the number of FxCop warnings + 2 to the power of the number of disabled warning types. Only after the value of that metric is as small as possible is it worth it for a human to start reviewing the code (in my opinion). In sum: sophisticated tools rather than simplistic formulas can help improve code quality. This is probably off-topic though." – Numinous

"I just don't see the point of them in isolation or as arbitrary standards of quality."

"Who would think of using metrics in isolation or as arbitrary standards of quality?" – Houseless
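For what it's worth, the scoring formula from that first comment is easy to compute; a quick sketch under my own naming (nothing here comes from StyleCop or FxCop themselves, the counts would have to be fed in from their reports):

public class WarningScore {

    // Score = StyleCop warnings + 10 * FxCop warnings + 2^(disabled warning types),
    // as described in the comment above. Names and example numbers are hypothetical.
    static long score(int styleCopWarnings, int fxCopWarnings, int disabledWarningTypes) {
        return styleCopWarnings
                + 10L * fxCopWarnings
                + (1L << disabledWarningTypes); // 2 to the power of disabled types
    }

    public static void main(String[] args) {
        // e.g. 7 StyleCop warnings, 3 FxCop warnings, 2 disabled warning types:
        // 7 + 30 + 4 = 41
        System.out.println(score(7, 3, 2));
    }
}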