Do you find cyclomatic complexity a useful measure? [closed]

15

57

I've been playing around with measuring the cyclomatic complexity of a big code base.

Cyclomatic complexity is the number of linearly independent paths through a program's source code, and there are lots of free tools for measuring it in your language of choice.
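
For example, in Python you can get per-method numbers with the radon package (a minimal sketch only; the choice of tool and the sample function are just for illustration):

    import textwrap
    from radon.complexity import cc_visit  # assumption: radon is installed (pip install radon)

    source = textwrap.dedent('''
        def classify(x):
            if x < 0:
                return "negative"
            elif x == 0:
                return "zero"
            elif x < 10:
                return "small"
            return "large"
    ''')

    for block in cc_visit(source):
        # three decision points (if/elif/elif) + 1 = a cyclomatic complexity of 4
        print(block.name, block.complexity)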

The results are interesting but not surprising. That is, the parts I knew to be the hairiest were in fact the most complex (with a rating of > 50). But what I'm finding useful is that each method gets a concrete "badness" number, something I can point to when deciding where to start refactoring.

Do you use cyclomatic complexity? What's the most complex bit of code you found?

Morissa answered 14/4, 2009 at 0:2 Comment(0)
40

We refactor mercilessly, and use Cyclomatic complexity as one of the metrics that gets code on our 'hit list'. 1-6 we don't flag for complexity (although it could get questioned for other reasons), 7-9 is questionable, and any method over 10 is assumed to be bad unless proven otherwise.
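
As a rough sketch of how those buckets work in practice (pure illustration: the method names and scores are hypothetical, and the thresholds are the ones above):

    def flag(cc):
        """Map a cyclomatic complexity score to our hit-list category."""
        if cc <= 6:
            return "ok"            # not flagged for complexity
        if cc <= 9:
            return "questionable"
        return "assumed bad until proven otherwise"  # treating 10 and up as over the line

    for name, cc in [("parse_order", 4), ("apply_discounts", 8), ("legacy_dispatch", 87)]:
        print(f"{name}: CC={cc} -> {flag(cc)}")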

The worst we've seen was 87 from a monstrous if-else-if chain in some legacy code we had to take over.

Aspidistra answered 14/4, 2009 at 0:13 Comment(4)
87? That's a very thorough implementation of the Arrow Anti-Pattern... Sincere condolences.Aboutship
So basically a highly sequential function containing 10 if statements in a row would fail the test?Harris
I just dug into CC tonight as I was trying to provide a valid plan of attack for code cleanup of a project. Worst offenders were 450 for a single method and 1,289 for a class (And no i didn't write any of it). Good game all. SIGH............Foothold
Just joined a company, and found one windows form has 1518Roentgenotherapy
19

Actually, cyclomatic complexity can be put to use beyond just method level thresholds. For starters, one big method with high complexity may be broken into several small methods with lower complexity. But has it really improved the codebase? Granted, you may get somewhat better readability by all those method names. But the total conditional logic hasn't changed. And the total conditional logic can often be reduced by replacing conditionals with polymorphism.
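
For illustration, a minimal sketch of that last point (the shape types are invented for the example):

    # Before: the branching lives in one function and shows up in its CC.
    def area(shape):
        if shape["kind"] == "circle":
            return 3.14159 * shape["radius"] ** 2
        elif shape["kind"] == "rect":
            return shape["width"] * shape["height"]
        raise ValueError("unknown shape")

    # After: each class carries its own logic, so the conditional disappears
    # from the caller instead of merely being split across smaller methods.
    class Circle:
        def __init__(self, radius):
            self.radius = radius
        def area(self):
            return 3.14159 * self.radius ** 2

    class Rect:
        def __init__(self, width, height):
            self.width = width
            self.height = height
        def area(self):
            return self.width * self.height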

We need a metric that doesn't turn green by mere method decomposition. I call this CC100.

CC100 = 100 * (Total cyclomatic complexity of codebase) / (Total lines of code)
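
In code form (the totals below are hypothetical; how you obtain them depends on your CC tool):

    def cc100(total_cyclomatic_complexity, total_lines_of_code):
        """CC100 = 100 * (total cyclomatic complexity of codebase) / (total lines of code)."""
        return 100.0 * total_cyclomatic_complexity / total_lines_of_code

    print(cc100(total_cyclomatic_complexity=4200, total_lines_of_code=85000))  # ~4.9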

Philippeville answered 28/6, 2009 at 7:0 Comment(8)
But testability has improved: separate methods can (in principle) be tested separately, even if the logic doesn't change. Of course this doesn't hold if the methods also depend on a lot of global state, but that's a problem in its own right.Veroniqueverras
+1 for the hyperlink to an interesting slide show. I recently spent some thoughts on exactly this issue and am happy to find more material on it.Recha
replacing conditionals with polymorphism may reduce cyclomatic complexity, but it also decreases the code's local comprehensibility.Erminiaerminie
@Erminiaerminie OO-code is meant to be comprehended more by its interface (encapsulation) than by implementation - at least at the point of usage (method calls).Philippeville
@Philippeville yes, seems I didn't really get your point - now it seems that you criticise the usage of classical CC metrics, stating that CC100 would help to detect over-complicated code?Erminiaerminie
@Veroniqueverras testability depends also on the quality of decompositionErminiaerminie
@Philippeville CC100 link is deadAffectional
@ScottStorch It was 15 years old. Fixed.Philippeville
14

It's useful to me in the same way that big-O is useful: I know what it is, and can use it to get a gut feeling for whether a method is good or bad, but I don't need to compute it for every function I've written.

I think simpler metrics, like LOC, are at least as good in most cases. If a function doesn't fit on one screen, it almost doesn't matter how simple it is. If a function takes 20 parameters and makes 40 local variables, it doesn't matter if its cyclomatic complexity is 1.
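
As a sketch of that kind of check using nothing but the standard library (the thresholds are arbitrary examples, not a recommendation; requires Python 3.8+ for end_lineno):

    import ast

    def flag_simple_metrics(source, max_lines=50, max_params=6):
        """Yield (name, loc, params) for functions that are too long or take too many parameters."""
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.FunctionDef):
                loc = node.end_lineno - node.lineno + 1
                params = len(node.args.args)
                if loc > max_lines or params > max_params:
                    yield node.name, loc, params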

Dismay answered 14/4, 2009 at 1:0 Comment(1)
I'd say all these parameters and local variables are there for logic flow, so they feed into CC too. Just thinking off the top of my head.Guarnerius
8

Until there is a tool that can work well with C++ templates and meta-programming techniques, it's not much help in my situation. Anyway, just remember that

"not all things that count can be measured, and not all things that can be measured count" Einstein

So remember to pass any information of this type through human filtering too.

Titanate answered 14/4, 2009 at 0:42 Comment(0)
7

We recently started to use it. We use NDepend to do some static code analysis, and it measures cyclomatic complexity. I agree, it's a decent way to identify methods for refactoring.

Sadly, we have seen numbers above 200 for some methods created by our offshore developers.

Slaby answered 14/4, 2009 at 0:6 Comment(3)
In an earlier life, I remember having seen more than 300.Eudoca
A colleague of mine has encountered cases of over a 1000.Veroniqueverras
ITS OVER 9000!!!!!! .... Sorry, couldn't help myself. Anything over 200 would be mind bogglingAnteroom
6

You'll know complexity when you see it. The main thing this kind of tool is useful for is flagging the parts of the code that were escaping your attention.

Daciadacie answered 14/4, 2009 at 1:5 Comment(1)
There's also a very interesting point: frequently changed code with high complexity is a breeding ground for bugs. So counting complexity automatically can be a good thing.Guarnerius
4

I frequently measure the cyclomatic complexity of my code. I've found it helps me spot areas of code that are doing too much. Having a tool point out the hot-spots in my code is much less time consuming than having to read through thousands of lines of code trying to figure out which methods are not following the SRP.

However, I've found that when I do a cyclomatic complexity analysis on other people's code it usually leads to feelings of frustration, angst, and general anger when I find code with cyclomatic complexity in the 100's. What compels people to write methods that have several thousand lines of code in them?!

Pohl answered 14/4, 2009 at 1:2 Comment(1)
I've seen some of those huge methods you're talking about, and it's usually about putting out fires. Once a fire is out, there's no reason to refactor (it works damnit!) and now that chunk of code is that much bigger, and has another fire in a few weeks/months.Decrescendo
3

It's great for help identifying candidates for refactoring, but it's important to keep your judgment around. I'd support kenj0418's ranges for pruning guides.

Decrescendo answered 14/4, 2009 at 1:12 Comment(0)
2

There's a Java metric called CRAP4J that empirically combines cyclomatic complexity and JUnit test coverage to come up with a single metric. Its author has been doing research to try to improve the empirical formula. I'm not sure how widespread it is.
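
For reference, the CRAP formula as it's usually quoted (a sketch only; the real tool computes complexity and coverage from your build):

    def crap(complexity, coverage_pct):
        """CRAP(m) = comp(m)**2 * (1 - cov(m)/100)**3 + comp(m), with coverage in percent."""
        return complexity ** 2 * (1 - coverage_pct / 100.0) ** 3 + complexity

    print(crap(complexity=10, coverage_pct=0))    # 110.0 -- complex and untested
    print(crap(complexity=10, coverage_pct=100))  # 10.0  -- fully covered, only the complexity remains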

Cloe answered 14/4, 2009 at 0:52 Comment(0)
1

I haven't used it in a while, but on a previous project it really helped identify potential trouble spots in someone else's code (wouldn't be mine, of course!).

Upon finding the areas to check out, I quickly found numerous problems (also lots of GOTOs, would you believe!) with the logic and some really strange WTF code.

Cyclomatic complexity is great for showing areas which are probably doing too much and therefore breaking the single responsibility principle. These should ideally be broken up into multiple functions.

Gleesome answered 14/4, 2009 at 0:7 Comment(0)
1

I'm afraid that for the language of the project for which I would most like metrics like this, LPC, there are not, in fact, lots of free tools available for producing it. So no, not so useful to me.

Muriel answered 14/4, 2009 at 0:15 Comment(1)
Heh. Somebody knows the story.Muriel
1

+1 for kenj0418's hit list values.

The worst I've seen was a 275. There were a couple of others over 200 that we were able to refactor down to much smaller CCs; they were still high, but it got them pushed further back in line. We didn't have much luck with the 275 beast -- it was (probably still is) a web of if- and switch-statements that was just way too complex. Its only real value is as a step-through when they decide to rebuild the system.

The exceptions to high CC that I was comfortable with were factories; IMO, they are expected to have a high CC, but only if they are doing simple object creation and returning.
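
For illustration (the widget names are invented), this is the kind of factory I mean -- each branch bumps the CC by one, but every branch stays a trivial construct-and-return:

    class Button: pass
    class Checkbox: pass
    class Slider: pass

    def make_widget(kind):
        # High CC if the product list is long, but each branch is trivial.
        if kind == "button":
            return Button()
        elif kind == "checkbox":
            return Checkbox()
        elif kind == "slider":
            return Slider()
        raise ValueError("unknown widget kind: " + kind)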

Hobbyhorse answered 26/5, 2009 at 18:19 Comment(0)
1

After understanding what it means, I have now started to use it on a "trial" basis. So far I have found it to be useful, because high CC usually goes hand in hand with the Arrow Anti-Pattern, which makes code harder to read and understand. I do not have a fixed number yet, but NDepend alerts on everything above 5, which looks like a good starting point for investigating methods.

Aboutship answered 9/6, 2009 at 9:39 Comment(0)
1

Yes, we use it, and I have found it useful too. We have a big legacy code base to tame, and we found alarmingly high cyclomatic complexity (387 in one method!). CC points you directly to areas that are worth refactoring. We use CCCC on C++ code.

Anarchist answered 9/9, 2009 at 10:51 Comment(0)
1

Cyclomatic Complexity is just one component of what could be called Fabricated Complexity. A while back, I wrote an article to summarize several dimensions of code complexity: Fighting Fabricated Complexity.

Tooling is needed to be efficient at handling code complexity. The tool NDepend for .NET code will let you analyze many dimensions of code complexity, including code metrics like Cyclomatic Complexity, Nesting Depth, Lack of Cohesion of Methods, and Coverage by Tests. It also includes dependency analysis and a language (Code Query Language) dedicated to asking what is complex in my code and to writing rules about it.

Amoebic answered 28/8, 2010 at 8:56 Comment(0)
