(score is 7) function get_first_user(data) { first_user = data[0]; return first_user; }
Scores better than this:
(score is 8) function get_first_user(data: User[]): Result<User> { first_user = data[0]; return first_user; }
I mean, I know that the type annotations are what give the lower score, but I would argue that the latter has the lower cognitive complexity.
Not really. TypeScript introduces optional static type analysis, but how you configure TypeScript also has an impact on how your codebase is transpiled to JavaScript.
Nowadays there is absolutely no excuse to opt for JavaScript instead of TypeScript.
With source maps configured, debugging tends to work out of the box.
The only place where I personally saw this becoming an issue was with a non-nodejs project that used an obscure barreler, and it only posed a problem when debugging unit tests.
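For reference, enabling source map output is a single compiler option; a minimal sketch of a tsconfig for a standard `tsc` setup (file names and target are illustrative):

```json
{
  "compilerOptions": {
    "target": "ES2020",
    "outDir": "dist",
    "sourceMap": true
  },
  "include": ["src"]
}
```

With `"sourceMap": true`, the compiler emits `.js.map` files next to the transpiled output, which debuggers use to map breakpoints and stack traces back to the original `.ts` sources.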
> Just feels like an extra layer of complexity in the deployment process and debugging.
Your concern is focused on hypothetical tooling issues. Nowadays I think the practical pros greatly outnumber the hypothetical cons, to the point that you have to bend over backwards to even argue against adopting TypeScript.
No? first_user = data[0] assigns User | undefined to first_user, since the list isn't guaranteed to be non-empty. I expect Result to be implemented as type Result<T> = T | undefined, so Result<User> makes sense.
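A minimal sketch of what that typing could look like, assuming `Result` is the alias described above and a hypothetical `User` shape (note that `data[0]` is only typed `User | undefined` when `noUncheckedIndexedAccess` is enabled in tsconfig):

```typescript
// Hypothetical alias matching the description above.
type Result<T> = T | undefined;

// Hypothetical User shape for illustration.
interface User {
  name: string;
}

function get_first_user(data: User[]): Result<User> {
  // For an empty array this is undefined at runtime,
  // which Result<User> makes explicit to the caller.
  const first_user = data[0];
  return first_user;
}
```

The payoff is at the call site: the compiler forces callers to narrow the result (e.g. with an `if` or optional chaining) before using it, instead of letting an `undefined` slip through.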
I don't know about transpiling or performance, but cyclomatic complexity is associated with both cognitive complexity and code quality.
I mean, why would code quality not reflect cognitive load? What would be the point, then?
It may not be perfect in its outputs, but I like it for bringing attention to emerging (or long-standing) hotspots.
I've found that the output aligns well, at least at a high level, with my own sense of which files deserve work and which ones are fine. It's also given us measurable outcomes for code refactoring, which non-technical people appreciate as well.
I think you didn't bother to read the project's description. The quick start section is clear that the "score" is an arbitrary metric that "serves as a general, overall indication of the quality of a particular TypeScript file." It's also clear that the full metrics are available for each file. The Playground page showcases an obvious, informative, and detailed summary of how a component was evaluated.
> Maybe as a very senior TypeScript developer it could be obvious how to fix some things, but this isn't going to help anyone more junior on the team be able to make things better.
Anyone can look at the results of any analysis run. They seem to be extremely detailed and informative.
austin-cheney•7h ago
I prefer redundancy analysis checking for duplicate logic in the code base. It’s more challenging than it sounds.
motorest•5h ago
That's a failure to understand and interpret computational complexity in general, and cyclomatic complexity in particular. I'll explain why.
Complexity is inherent to a problem domain, which automatically means it's unrealistic to assume there's always a no-branching implementation. However, higher-complexity code is associated with higher likelihood of both having bugs and introducing bugs when introducing changes. Higher-complexity code is also harder to test.
Based on this alone, it's obvious that it is desirable to produce code with low complexity, and that there are advantages in refactoring code to lower its complexity.
How do you tell if code is complex, and what approaches have lower complexity? You need complexity metrics.
Cyclomatic complexity is a complexity metric designed to output a complexity score based on an objective and very precise set of rules: the number of branching operations and independent code paths in a component. The fewer code paths, the easier the code is to reason about and test, and the harder it is for bugs to hide.
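As a rough illustration, here are two equivalent functions with different scores; the counts follow McCabe's rule of one plus the number of decision points (the function names are made up for this example):

```typescript
// Cyclomatic complexity 1: a single code path, no branches.
function clamp01Linear(x: number): number {
  return Math.min(1, Math.max(0, x));
}

// Cyclomatic complexity 3: the two `if` statements each add
// an independent path through the function.
function clamp01Branchy(x: number): number {
  if (x < 0) {
    return 0;
  }
  if (x > 1) {
    return 1;
  }
  return x;
}
```

Both compute the same result, but the first needs a single test case to cover its only path, while the second needs at least three to exercise every path.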
You use cyclomatic complexity to figure out which components are more error-prone and harder to maintain. The higher the score, the higher the priority to test, refactor, and simplify. If you have two competing implementations, in general you are better off adopting the one with the lower complexity.
Indirectly, cyclomatic complexity also offers you guidelines on how to write code. Branching increases the likelihood of bugs and makes components harder to test and maintain. Therefore, you are better off favoring solutions that minimize branching.
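For example, a chain of conditionals can often be replaced with a data-driven lookup, trading branches for a table that stays flat as cases are added (a hypothetical sketch; the names are illustrative):

```typescript
// Branchy version: each added case adds another decision point.
function httpClassBranchy(status: number): string {
  if (status >= 500) return "server error";
  if (status >= 400) return "client error";
  if (status >= 300) return "redirect";
  if (status >= 200) return "success";
  return "informational";
}

// Table-driven version: thresholds live in data, ordered high to low,
// and the lookup logic never changes when a class is added.
const HTTP_CLASSES: Array<[number, string]> = [
  [500, "server error"],
  [400, "client error"],
  [300, "redirect"],
  [200, "success"],
];

function httpClassTable(status: number): string {
  const match = HTTP_CLASSES.find(([min]) => status >= min);
  return match ? match[1] : "informational";
}
```

The behavior is identical, but extending the table-driven version means editing data rather than adding control flow, so its complexity score stays constant.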
The goal is not to minimize cyclomatic complexity. The goal is to use cyclomatic complexity to raise awareness of code quality problems and to drive your development effort. It's something you can automate, too, so you can have it side by side with code coverage. You use the metric to inform your effort, but the metric is not the goal.
socalgal2•2h ago