
Show HN: MistSeeker – a map of what is safe to change in large codebases

2•Convia•1mo ago
MistSeeker started from a very simple observation.

When something goes wrong in a large codebase, the hardest part is usually not fixing it, but knowing what is safe to touch and what isn't.

In real development work, we usually know one of two things:

- Something has already broken, or
- Nothing has broken yet, but something feels fragile.

What we usually don't know is:

- Which parts of the system are structurally safe
- Which changes are likely to amplify risk

Reading all the code is unrealistic. So we started by making this judgment visible.

## What MistSeeker actually does

MistSeeker is not a bug-finding tool.

The question we're trying to answer is simpler:

Is this code structurally suitable for change, or is it likely to fail during modification?

To answer that, we evaluate code from three independent perspectives.

1) COI — Structural fitness

COI looks at how code is organized.

- How responsibilities are divided
- How deeply logic is nested
- How much structural duplication or entanglement exists

A high COI does not mean "perfect code." It means the structure is less likely to cause unexpected ripple effects when changed.

Low-COI code, on the other hand, often turns small edits into wide-reaching consequences.
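MistSeeker's actual COI computation isn't published, so as a rough illustration of the kind of structural signal it describes, here is a toy Python sketch that measures maximum nesting depth with the standard `ast` module. The function name and the metric itself are my own for illustration, not MistSeeker's formula:

```python
import ast

def max_nesting_depth(source: str) -> int:
    """Toy structural metric: deepest nesting of block constructs.
    (Illustration only; not MistSeeker's actual COI computation.)"""
    block_nodes = (ast.If, ast.For, ast.While, ast.With, ast.Try, ast.FunctionDef)

    def depth(node, current):
        deepest = current
        for child in ast.iter_child_nodes(node):
            extra = 1 if isinstance(child, block_nodes) else 0
            deepest = max(deepest, depth(child, current + extra))
        return deepest

    return depth(ast.parse(source), 0)

flat = "def f(x):\n    return x + 1\n"
nested = (
    "def f(xs):\n"
    "    for x in xs:\n"
    "        if x:\n"
    "            while x:\n"
    "                x -= 1\n"
)
print(max_nesting_depth(flat))    # shallow: edits stay local
print(max_nesting_depth(nested))  # deep: edits ripple further
```

A real tool would combine many such signals (duplication, fan-out, responsibility splits); this shows only the nesting-depth dimension.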

2) ORI — Execution stability

ORI focuses on behavior, not structure.

- Hidden I/O dependencies
- Global state mutations
- Logic dependent on time, randomness, or environment

Code can look clean and well-organized, yet still be fragile at runtime. ORI surfaces these invisible execution risks.
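To make the "clean but fragile" case concrete, here is a small hand-written example (mine, not from MistSeeker) of the kind of code the post says ORI flags: tidy-looking logic with a hidden clock dependency and a global mutation, next to an equivalent version with the dependencies passed in explicitly:

```python
import datetime

_cache = {}  # module-level state, mutated as a side effect

def discount_fragile(price: float) -> float:
    """Looks clean, but depends on wall-clock time and mutates a global."""
    today = datetime.date.today()           # hidden time dependency
    rate = 0.2 if today.weekday() >= 5 else 0.0
    _cache[price] = rate                    # hidden global mutation
    return price * (1 - rate)

def discount_stable(price: float, is_weekend: bool) -> float:
    """Same logic with dependencies made explicit: deterministic, testable."""
    rate = 0.2 if is_weekend else 0.0
    return price * (1 - rate)

print(discount_stable(100.0, is_weekend=True))  # same answer every run
```

Both pass a happy-path test today; only the second behaves the same on every machine, every day.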

3) GSS — Semantic stability

GSS addresses a pattern that appears frequently in AI-assisted coding environments:

Code works correctly and passes tests, but its intent collapses easily with small changes.

MistSeeker does not claim to "understand" code semantics. Instead, it measures how much structural and behavioral change is triggered by small edits.

If minor modifications cause disproportionate shifts, GSS drops. This pattern appears often in generated code or after repeated refactoring.
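As a toy version of "measuring how much change a small edit triggers" (my own sketch; the post does not disclose how GSS is actually computed), one could compare serialized ASTs before and after an edit and treat low similarity as a large structural shift:

```python
import ast
import difflib

def structural_shift(before: str, after: str) -> float:
    """Toy change-amplification proxy: 1 - similarity of AST dumps.
    (Illustration only; not MistSeeker's actual GSS metric.)"""
    a = ast.dump(ast.parse(before))
    b = ast.dump(ast.parse(after))
    return 1.0 - difflib.SequenceMatcher(None, a, b).ratio()

stable = "def f(x):\n    return x + 1\n"
stable_edit = "def f(x):\n    return x + 2\n"

# A one-token edit should barely move the structure.
print(structural_shift(stable, stable_edit))
```

In this framing, a file whose minor edits keep producing large shifts would score low on semantic stability.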

## What the scores tell you

Each file or module ends up with a profile:

- Is it structurally fit for change?
- Is it risky from an execution standpoint?
- How easily does its meaning break when modified?

From this, we derive a single stability score (GSI) and a risk level.

The goal is not to rank code. There is only one question we want to answer:

When upgrades or refactoring are needed, where is the safest place to start? And which areas require extra caution?
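The shape of "three subscores in, one stability score and a risk level out" could be sketched like this. The weights, thresholds, and labels here are entirely hypothetical; the post does not publish the real GSI formula:

```python
def gsi(coi: float, ori: float, gss: float) -> tuple[float, str]:
    """Hypothetical aggregation of the three subscores (assumed in [0, 1]).
    Weights and risk thresholds are illustrative, not MistSeeker's formula."""
    score = 0.4 * coi + 0.3 * ori + 0.3 * gss
    if score >= 0.75:
        level = "safe to start here"
    elif score >= 0.5:
        level = "proceed with caution"
    else:
        level = "high risk: avoid during upgrades"
    return round(score, 2), level

print(gsi(0.9, 0.8, 0.7))  # structurally sound module
print(gsi(0.3, 0.4, 0.2))  # fragile module
```

The point is the output shape (one score plus a triage label per file), not the particular weights.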

## Try it (no signup)

HN readers: 5-day Pro evaluation key (no credit card required)

If the command fails, pull the image directly. Windows (CMD/PowerShell): see the install guide on our site. Just set the license key to the value below.

License key: 716f3617b11685ba1af36bea74f929a3
Docker image: tongro2025/mistseeker
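For the "pull the image directly" route on Linux/macOS, a minimal sketch using standard Docker commands looks like the following. Only the image name and license key come from this post; the license variable name and mount point are my guesses, so check the install guide for the real run flags:

```shell
# Pull the image directly (image name from the post)
docker pull tongro2025/mistseeker

# Run against the current directory. MISTSEEKER_LICENSE and /workspace
# are illustrative guesses -- see the install guide for the actual flags.
docker run --rm \
  -e MISTSEEKER_LICENSE=716f3617b11685ba1af36bea74f929a3 \
  -v "$(pwd)":/workspace \
  tongro2025/mistseeker /workspace
```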

## Where this has been useful

In practice, it helped in situations like:

- Setting refactoring priorities instead of changing things blindly
- Reviewing AI-generated code changes
- Identifying areas that should not be touched during upgrades
- Finding structurally fragile areas even when tests pass

## What this is not

- It does not replace linters
- It is not a bug detector
- It is not an auto-fix tool

MistSeeker is not a mechanic. It's a map.

## Why I’m sharing this

I'm curious about others' experiences.

- How do you decide what is "safe to change" in large codebases?
- Have you had systems that passed tests but became increasingly hard to modify?
- Does the idea of structural fitness and change risk resonate with you?

Opinions, counterarguments, and real-world examples are all welcome. If useful, I'm also happy to discuss boundaries and limitations.

Guide / project manual page: https://convia.vip

Comments

Convia•1mo ago
OP here.

A concrete eval method: run it on a file that caused trouble in the past, then on the same file after patch/refactor. Before vs. after tends to be clearer than looking at scores in isolation.

Tech notes: multi-language via tree-sitter (Python, JS/TS, Java, Go, Rust, C/C++, etc.), 100% local (no telemetry / no external APIs), deterministic (no LLMs for the scores).

Happy to answer questions about the metrics.

Convia•1mo ago
Installation is via Docker image download; everything runs on-prem.
tongro•1mo ago
Glad to finally show this to the HN community! We've been dogfooding this internally for a while to find structural risks that tests often miss. It’s still an evolving map, so we’re really curious about the "edge cases" you might find in different languages.