
What were the first animals? The fierce sponge–jelly battle that just won't end

https://www.nature.com/articles/d41586-026-00238-z
1•beardyw•4m ago•0 comments

Sidestepping Evaluation Awareness and Anticipating Misalignment

https://alignment.openai.com/prod-evals/
1•taubek•4m ago•0 comments

OldMapsOnline

https://www.oldmapsonline.org/en
1•surprisetalk•7m ago•0 comments

What It's Like to Be a Worm

https://www.asimov.press/p/sentience
1•surprisetalk•7m ago•0 comments

Don't go to physics grad school and other cautionary tales

https://scottlocklin.wordpress.com/2025/12/19/dont-go-to-physics-grad-school-and-other-cautionary...
1•surprisetalk•7m ago•0 comments

Lawyer sets new standard for abuse of AI; judge tosses case

https://arstechnica.com/tech-policy/2026/02/randomly-quoting-ray-bradbury-did-not-save-lawyer-fro...
1•pseudolus•7m ago•0 comments

AI anxiety batters software execs, costing them combined $62B: report

https://nypost.com/2026/02/04/business/ai-anxiety-batters-software-execs-costing-them-62b-report/
1•1vuio0pswjnm7•8m ago•0 comments

Bogus Pipeline

https://en.wikipedia.org/wiki/Bogus_pipeline
1•doener•9m ago•0 comments

Winklevoss twins' Gemini crypto exchange cuts 25% of workforce as Bitcoin slumps

https://nypost.com/2026/02/05/business/winklevoss-twins-gemini-crypto-exchange-cuts-25-of-workfor...
1•1vuio0pswjnm7•9m ago•0 comments

How AI Is Reshaping Human Reasoning and the Rise of Cognitive Surrender

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6097646
2•obscurette•9m ago•0 comments

Cycling in France

https://www.sheldonbrown.com/org/france-sheldon.html
1•jackhalford•11m ago•0 comments

Ask HN: What breaks in cross-border healthcare coordination?

1•abhay1633•11m ago•0 comments

Show HN: Simple – a bytecode VM and language stack I built with AI

https://github.com/JJLDonley/Simple
1•tangjiehao•14m ago•0 comments

Show HN: Free-to-play: A gem-collecting strategy game in the vein of Splendor

https://caratria.com/
1•jonrosner•15m ago•1 comments

My Eighth Year as a Bootstrapped Founder

https://mtlynch.io/bootstrapped-founder-year-8/
1•mtlynch•15m ago•0 comments

Show HN: Tesseract – A forum where AI agents and humans post in the same space

https://tesseract-thread.vercel.app/
1•agliolioyyami•15m ago•0 comments

Show HN: Vibe Colors – Instantly visualize color palettes on UI layouts

https://vibecolors.life/
1•tusharnaik•16m ago•0 comments

OpenAI is Broke ... and so is everyone else [video][10M]

https://www.youtube.com/watch?v=Y3N9qlPZBc0
2•Bender•17m ago•0 comments

We interfaced single-threaded C++ with multi-threaded Rust

https://antithesis.com/blog/2026/rust_cpp/
1•lukastyrychtr•18m ago•0 comments

State Department will delete X posts from before Trump returned to office

https://text.npr.org/nx-s1-5704785
6•derriz•18m ago•1 comments

AI Skills Marketplace

https://skly.ai
1•briannezhad•18m ago•1 comments

Show HN: A fast TUI for managing Azure Key Vault secrets written in Rust

https://github.com/jkoessle/akv-tui-rs
1•jkoessle•19m ago•0 comments

eInk UI Components in CSS

https://eink-components.dev/
1•edent•19m ago•0 comments

Discuss – Do AI agents deserve all the hype they are getting?

2•MicroWagie•22m ago•0 comments

ChatGPT is changing how we ask stupid questions

https://www.washingtonpost.com/technology/2026/02/06/stupid-questions-ai/
1•edward•23m ago•1 comments

Zig Package Manager Enhancements

https://ziglang.org/devlog/2026/#2026-02-06
3•jackhalford•25m ago•1 comments

Neutron Scans Reveal Hidden Water in Martian Meteorite

https://www.universetoday.com/articles/neutron-scans-reveal-hidden-water-in-famous-martian-meteorite
1•geox•26m ago•0 comments

Deepfaking Orson Welles's Mangled Masterpiece

https://www.newyorker.com/magazine/2026/02/09/deepfaking-orson-welless-mangled-masterpiece
1•fortran77•27m ago•1 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
3•nar001•29m ago•2 comments

SpaceX Delays Mars Plans to Focus on Moon

https://www.wsj.com/science/space-astronomy/spacex-delays-mars-plans-to-focus-on-moon-66d5c542
1•BostonFern•30m ago•0 comments

Fast TypeScript (Code Complexity) Analyzer

https://ftaproject.dev/
58•hannofcart•3mo ago

Comments

austin-cheney•3mo ago
That looks cool. I have never been a fan of cyclomatic complexity analysis. At some point I suspect a perfect score would be branchless code, but that isn’t maintainable.

I prefer redundancy analysis checking for duplicate logic in the code base. It’s more challenging than it sounds.

motorest•3mo ago
> At some point I suspect a perfect score would be branchless code, but that isn’t maintainable.

That's a failure to understand and interpret computational complexity in general, and cyclomatic complexity in particular. I'll explain why.

Complexity is inherent to a problem domain, which automatically means it's unrealistic to assume there's always a no-branching implementation. However, higher-complexity code is associated with higher likelihood of both having bugs and introducing bugs when introducing changes. Higher-complexity code is also harder to test.

Based on this alone, it's obvious that it is desirable to produce code with low complexity, and that there are advantages in refactoring code to lower its complexity.

How do you tell if code is complex, and what approaches have lower complexity? You need complexity metrics.

Cyclomatic complexity is a complexity metric designed to output a complexity score based on an objective and very precise set of rules: the number of branching operations and independent code paths in a component. The fewer code paths, the easier the component is to reason about and test, and the harder it is for bugs to hide.
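
To make that counting rule concrete, here is a small sketch (function names invented for illustration) of the textbook McCabe rule of thumb: complexity is roughly the number of decision points plus one.

```typescript
// One path, no decision points: cyclomatic complexity 1.
function clamp01Linear(x: number): number {
  return Math.min(1, Math.max(0, x));
}

// Two decision points (two `if`s): cyclomatic complexity 3,
// meaning three independent paths that tests must cover.
function clamp01Branchy(x: number): number {
  if (x < 0) return 0;
  if (x > 1) return 1;
  return x;
}

console.log(clamp01Linear(1.5), clamp01Branchy(1.5)); // both clamp to 1
```

Both functions behave identically; the metric simply flags that the second has more paths to test.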

You use cyclomatic complexity to figure out which components are more error-prone and harder to maintain. The higher the score, the higher the priority to test, refactor, and simplify. If you have two competing implementations, in general you are better off adopting the one with the lower complexity.

Indirectly, cyclomatic complexity also offers you guidelines on how to write code. Branching increases the likelihood of bugs and makes components harder to test and maintain. Therefore, you are better off favoring solutions that minimize branching.

The goal is not to minimize cyclomatic complexity. The goal is to use cyclomatic complexity to raise awareness on code quality problems and drive your development effort. It's something you can automate, too, so you can have it side by side with code coverage. You use the metric to inform your effort, but the metric is not the goal.
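
As a hedged sketch of the "minimize branching" guideline (the type and names here are invented for illustration): an if/else chain can often be traded for a lookup table, dropping the branch count without changing behavior.

```typescript
type Plan = "free" | "pro" | "enterprise";

// Two decision points: cyclomatic complexity 3.
function quotaBranchy(plan: Plan): number {
  if (plan === "free") return 10;
  else if (plan === "pro") return 1000;
  else return 100000;
}

// Data instead of control flow: a single path, complexity 1.
const QUOTAS: Record<Plan, number> = {
  free: 10,
  pro: 1000,
  enterprise: 100000,
};

function quotaLookup(plan: Plan): number {
  return QUOTAS[plan];
}
```

The lookup version is also easier to extend: adding a plan touches the data, not the control flow.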

socalgal2•3mo ago
Sounds like coding to the metrics would lead to hard-to-read code, as you find creative and convoluted ways to multiply by one and zero so as to pretend you aren't branching.

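
A contrived sketch of exactly that kind of metric-gaming (both functions are invented for illustration):

```typescript
// Readable: one decision point.
function absBranchy(x: number): number {
  return x < 0 ? -x : x;
}

// "Branchless" to please the metric: select 1 or -1 arithmetically
// instead of branching. Same behavior, far harder to read.
function absGamed(x: number): number {
  const sign = Number(x >= 0) * 2 - 1; // 1 or -1, no if/ternary
  return x * sign;
}
```

The gamed version scores lower on branch-counting metrics while being strictly worse for the reader, which is the point of the comment above.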
nosefurhairdo•3mo ago
"When a measure becomes a target, it ceases to be a good measure."

You are free to interpret the score within the broader context of your own experience, the problem domain your code addresses, time constraints, etc.

strogonoff•3mo ago
Measuring complexity shouldn’t lead to finding creative ways to avoid complexity, but instead be used as a tool to encapsulate complexity well.

It could be misapplied, of course, like every other principle; DRY is a big one. Just like with DRY, there are cases where complexity is deserved. If nothing else, considering that no code used in a real-world context can ever be perfect, it is useful to have another measure that can hint at what to focus on in future iterations.

svieira•3mo ago
Oh, it does. That's what experience teaches you - that the measure is not the target.

devjab•3mo ago
Maybe I'm doing things wrong, but I assume this tool is meant to focus on cognitive complexity and not things like code quality, transpiling or performance. If that's true, then why does this (score is 7):

  function get_first_user(data) { first_user = data[0]; return first_user; }

score better than this (score is 8)?

  function get_first_user(data: User[]): Return<User> { first_user = data[0]; return first_user; }

I mean, I know that the type annotations are what give the lower score, but I would argue that the latter has the lower cognitive complexity.

k__•3mo ago
Maybe because the type can be inferred and it potentially adds effort for changes in the future.

zenmac•3mo ago
Then why use TypeScript at all? Just write JS and put a TS definition on top. TS is a linter anyway. Now that will make the code easier to read, and in the end it is the code that will be interpreted by the browser or whatever JS runtime.
motorest•3mo ago
> TS is a linter anyway.

Not really. TypeScript introduces optional static type analysis, but how you configure TypeScript also has an impact on how your codebase is transpiled to JavaScript.

Nowadays there is absolutely no excuse to opt for JavaScript instead of TypeScript.

zenmac•3mo ago
What about debugging. Or, with a proper source map, can the code on the client side be debugged with the right mapping back to the TS code? Just feels like an extra layer of complexity in the deployment process and debugging.
motorest•3mo ago
> What about debugging.

With source maps configured, debugging tends to work out of the box.

The only place where I personally saw this becoming an issue was with a non-nodejs project that used an obscure barreler, and it only posed a problem when debugging unit tests.

> Just feels like an extra layer of complexity in the deployment process and debugging.

Your concern is focused on hypothetical tooling issues. Nowadays I think the practical pros greatly outnumber the hypothetical cons, to the point you need to bend yourself out of shape to even argue against adopting TypeScript.
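
For reference, the source-map setup being discussed is typically a one-flag change. A minimal tsconfig.json sketch, showing only the relevant compiler option:

```json
{
  "compilerOptions": {
    "sourceMap": true
  }
}
```

With this enabled, tsc emits a `.js.map` file next to each `.js` file, and any debugger that understands source maps steps through the original `.ts` sources.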

devjab•3mo ago
I'm not sure how you can infer types on this, even if you input an array of users from a different function. How would we know that data[0] is a User and not undefined?

uallo•3mo ago
I get the same overall FTA score of 7 for both of your examples. When omitting the return type (which can be inferred), you get the exact same scores. Not just the same FTA score. Also note that `Return<User>` should be just `User` if you prefer to specify the return type explicitly. That change will improve several of the scores as well.
whilenot-dev•3mo ago
> Also note that `Return<User>` should be just `User` if you prefer to specify the return type explicitly.

No? first_user = data[0] assigns User | undefined to first_user, since the list isn't guaranteed to be non-empty. I expect Return to be implemented as type Return<T> = T | undefined, so Return<User> makes sense.

uallo•3mo ago
You are correct if `noUncheckedIndexedAccess` is enabled. It is off by default (which is a pity, really).

I assumed `Return<User>` was a mistake, not a custom type as you suggest. But your interpretation seems more likely anyway.
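
A small runnable sketch of the point made above, assuming the `type Return<T> = T | undefined` reading (the type and interface here are hypothetical). With `noUncheckedIndexedAccess` enabled, TypeScript itself types `data[0]` as `User | undefined`; without it, the compiler claims plain `User` even though the array may be empty.

```typescript
// Hypothetical wrapper type, as interpreted in the thread.
type Return<T> = T | undefined;

interface User {
  name: string;
}

// data[0] can be undefined at runtime regardless of what the
// default compiler settings say about its static type.
function get_first_user(data: User[]): Return<User> {
  return data[0];
}

console.log(get_first_user([])); // undefined
console.log(get_first_user([{ name: "a" }]));
```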

devjab•3mo ago
Both score 7 now though.

This scores 6: function a(b) { return b[0]; }

This scores 3: const a = (a) => a;

motorest•3mo ago
> (...) I assume this tool is meant to focus on cognetive complexity and not things like code quality, transpiling or performance (...)

I don't know about transpiling or performance, but cyclomatic complexity is associated with both cognitive complexity and code quality.

I mean, why would code quality not reflect cognitive load? What would be the point, then?

DeadlineDE•3mo ago
For a refactoring project I've built the tool's reports into the CI pipeline of our repository. On every PR it creates a fixed post with the current branch's complexity scores, comparing them to the target branch and reporting a trend.

It may not be perfect in its outputs but I like it for bringing attention to arising (or still existing) hotspots.

I've found that the output - at least at a high level - aligns well with my inner expectation of which files deserve work and which ones are fine. Additionally, it's given us measurable outcomes for code refactoring, which non-technical people like as well.

yboris•3mo ago
Mildly related: TypeScript Call Graph - a CLI to generate an interactive graph of functions and calls from your TypeScript files (my project).

https://github.com/whyboris/TypeScript-Call-Graph

paularmstrong•3mo ago
This is a bit underwhelming because it gives a score and says "Needs improvement", but has no real indication of what it considers problematic about a file. Maybe to a very senior TypeScript developer it would be obvious how to fix some things, but this isn't going to help more junior people on the team make things better.
motorest•3mo ago
> This is a bit underwhelming because it gives a score and says, "Needs improvement", but has no real indication of what it considers problematic about a file.

I don't think you paid attention to the project's description. The quick start section is clear that the "score" is an arbitrary metric that "serves as a general, overall indication of the quality of a particular TypeScript file", and that "[t]he full metrics [are] available for each file". The Playground page showcases a very obvious, informative, and detailed summary of how a component was evaluated.

> Maybe as a very senior TypeScript developer it could be obvious how to fix some things, but this isn't going to help anyone more junior on the team be able to make things better.

Anyone can look at the results of any analysis run. They seem to be extremely detailed and informative.

paularmstrong•3mo ago
I definitely did pay attention to the description and the playground. The "full metrics" give more information, but they're still just numbers and don't explain to someone _what_ they should do to make something “better”. Again, they're just numbers, not recommendations. Most people could probably just gamify the whole thing by making every file as small as possible. Single functions with as few lines as possible. That doesn't make code less complex, it just masks it.