Cyclomatic complexity metric practices for Python [closed]

4

12

I have a relatively large Python project that I work on, and we don't have any cyclomatic complexity tools as a part of our automated test and deployment process.

How important are cyclomatic complexity tools in Python? Does your project use them, and do you find them effective? I'd like a nice before/after story if anyone has one, so we can take a bit of the subjectivity out of the answers (e.g. before we didn't have a cyclomatic complexity tool either, and after we introduced it, good thing A happened, bad thing B happened, etc.). There are plenty of general answers to this type of question, but I didn't find one for Python projects in particular.

I'm ultimately trying to decide whether it's worth adding to our processes, and which particular metric and tool/library is best for large Python projects. One of our major goals is long-term maintenance.

Dissimilar answered 13/7, 2016 at 14:33 Comment(3)
Have a look at sobolevn.me/2019/10/complexity-waterfall. It is a great article about code complexity.Dorchester
Ruff supports McCabe complexity.Encamp
@Encamp I just started using Ruff on a project at work. It's pretty wonderful overall. Thanks for the tip!Dissimilar
14

We used the Radon tool in one of our projects, which is related to test automation.

Radon

Depending on new features and requirements, we need to add, modify, update, or delete code in that project. Also, around 4-5 people were working on it. So, as part of the review process, we adopted Radon because we want our code to stay maintainable and readable.

Based on the Radon tool's output, we refactored our code several times, added more methods, and reworked the loops.
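For illustration, here is a minimal sketch (not from our project) using radon's documented programmatic API (cc_visit/cc_rank) to flag functions above a chosen threshold; the file name and the threshold of 10 are just placeholders:

    from radon.complexity import cc_visit, cc_rank

    THRESHOLD = 10  # arbitrary cut-off for this illustration

    with open("my_module.py") as handle:  # hypothetical file
        source = handle.read()

    # cc_visit() parses the source and returns the functions, methods and classes
    # it finds, each with .name, .lineno and a .complexity score
    for block in cc_visit(source):
        if block.complexity > THRESHOLD:
            print(f"{block.name} (line {block.lineno}): "
                  f"CC {block.complexity}, rank {cc_rank(block.complexity)}")

On the command line, radon cc -s -a <path> prints the same per-block scores plus an overall average (flags as of the radon versions I have used).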

Please let me know if this is useful to you.

Herod answered 14/7, 2016 at 5:48 Comment(1)
I have really started to like radon. It is harder to include in the automated build process, but the effort is worth it.Stripling
7

wemake-python-styleguide supports both the radon and mccabe implementations of cyclomatic complexity.

There are also other complexity metrics that cyclomatic complexity alone does not cover, including:

  • Number of function decorators; lower is better
  • Number of arguments; lower is better
  • Number of annotations; higher is better
  • Number of local variables; lower is better
  • Number of returns, yields, awaits; lower is better
  • Number of statements and expressions; lower is better

Read more about why it is important to keep these metrics in check: https://sobolevn.me/2019/10/complexity-waterfall

They are all covered by wemake-python-styleguide.
Repo: https://github.com/wemake-services/wemake-python-styleguide
Docs: https://wemake-python-stylegui.de
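If you want to wire these limits into CI, here is a sketch of what the flake8 configuration might look like once wemake-python-styleguide is installed; the limits are illustrative and the option names should be double-checked against the current docs:

    [flake8]
    # McCabe cyclomatic complexity limit (mccabe plugin, reported as C901)
    max-complexity = 10
    # wemake-python-styleguide limits (illustrative values)
    max-arguments = 5
    max-local-variables = 8
    max-returns = 5

Running flake8 with such a configuration makes the build fail whenever a function crosses any of the limits.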

Dorchester answered 14/10, 2019 at 14:28 Comment(0)
6

Python isn't special when it comes to cyclomatic complexity. CC measures how much branching logic is in a chunk of code.

Experience shows that when the branching is "high", that code is harder to understand and change reliably than code in which the branching is lower.

With metrics, it typically isn't absolute values that matter; it is relative values as experienced by your organization. What you should do is to measure various metrics (CC is one) and look for a knee in the curve that relates that metric to bugs-found-in-code. Once you know where the knee is, ask coders to write modules whose complexity is below the knee. This is the connection to long-term maintenance.

What you don't measure, you can't control.
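As a toy illustration of that kind of analysis (the CSV exports and column names are hypothetical), you can bucket modules by their worst cyclomatic complexity and look at how bug density changes from bucket to bucket; the knee is where bugs-per-module starts climbing sharply:

    import csv
    from collections import defaultdict

    complexity = {}  # module -> worst CC in that module, e.g. exported from a CC tool
    with open("complexity.csv") as handle:  # hypothetical export
        for row in csv.DictReader(handle):
            complexity[row["module"]] = int(row["cc"])

    bugs = defaultdict(int)  # module -> number of bugs traced to it
    with open("bugs.csv") as handle:  # hypothetical export from the bug tracker
        for row in csv.DictReader(handle):
            bugs[row["module"]] += 1

    buckets = defaultdict(lambda: [0, 0])  # CC band -> [module count, bug count]
    for module, cc in complexity.items():
        band = (cc // 5) * 5  # group complexities into bands of 5
        buckets[band][0] += 1
        buckets[band][1] += bugs[module]

    for band in sorted(buckets):
        modules, bug_count = buckets[band]
        print(f"CC {band}-{band + 4}: {bug_count / modules:.2f} bugs/module")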

Enchanter answered 13/7, 2016 at 16:56 Comment(2)
Do you use any specific tool in Python? Is radon good? How did you figure out your cyclomatic complexity threshold? Was it just by tracking issues in JIRA, then going back at the end of every release/sprint and manually figuring out what you had bugfixed/hotfixed?Dissimilar
We build metrics tools using unusual programming DSLs for which CC doesn't make as much sense. (We are actually interested in productivity which we get by building better internal DSLs). Most people trying to understand metrics usually combine information from multiple sources: the version control system (e.g., the source code), used to compute metrics on modules, and information from bug tracking (errors tracked to a module). You run all this data collection on some periodic basis to track trends. (For metrics tools, see my bio and follow links to my site).Enchanter
1

You can also use the mccabe library. It computes only McCabe (cyclomatic) complexity, and it can be integrated into your flake8 linter.
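For completeness, a small sketch of using mccabe programmatically, following the approach shown in its README (the file name is hypothetical); with flake8, the same check is normally enabled by setting max-complexity, and violations are reported as C901:

    import ast
    from mccabe import PathGraphingAstVisitor

    with open("my_module.py") as handle:  # hypothetical file
        tree = ast.parse(handle.read())

    visitor = PathGraphingAstVisitor()
    visitor.preorder(tree, visitor)

    # visitor.graphs maps each function/method to its control-flow graph
    for graph in visitor.graphs.values():
        print(f"{graph.entity} (line {graph.lineno}): complexity {graph.complexity()}")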

Frogmouth answered 30/4, 2019 at 2:9 Comment(0)
