
Show HN: A luma dependent chroma compression algorithm (image compression)

https://www.bitsnbites.eu/a-spatial-domain-variable-block-size-luma-dependent-chroma-compression-...
28•mbitsnbites•3d ago•2 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
52•momciloo•7h ago•10 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
294•isitcontent•1d ago•39 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
44•sandGorgon•2d ago•20 comments

Show HN: Django-rclone: Database and media backups for Django, powered by rclone

https://github.com/kjnez/django-rclone
2•cui•1h ago•1 comment

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
362•eljojo•1d ago•218 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
374•vecti•1d ago•171 comments

Show HN: Witnessd – Prove human authorship via hardware-bound jitter seals

https://github.com/writerslogic/witnessd
2•davidcondrey•2h ago•1 comment

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
97•antves•2d ago•70 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
85•phreda4•1d ago•17 comments

Show HN: PalettePoint – AI color palette generator from text or images

https://palettepoint.com
2•latentio•4h ago•0 comments

Show HN: Artifact Keeper – Open-Source Artifactory/Nexus Alternative in Rust

https://github.com/artifact-keeper
156•bsgeraci•1d ago•65 comments

Show HN: BioTradingArena – Benchmark for LLMs to predict biotech stock movements

https://www.biotradingarena.com/hn
29•dchu17•1d ago•12 comments

Show HN: I built a <400ms latency voice agent that runs on a 4GB VRAM GTX 1650

https://github.com/pheonix-delta/axiom-voice-agent
2•shubham-coder•6h ago•1 comment

Show HN: Slack CLI for Agents

https://github.com/stablyai/agent-slack
55•nwparker•2d ago•12 comments

Show HN: Stacky – certain block game clone

https://www.susmel.com/stacky/
3•Keyframe•7h ago•0 comments

Show HN: A toy compiler I built in high school (runs in browser)

https://vire-lang.web.app
3•xeouz•8h ago•1 comment

Show HN: Gigacode – Use OpenCode's UI with Claude Code/Codex/Amp

https://github.com/rivet-dev/sandbox-agent/tree/main/gigacode
23•NathanFlurry•1d ago•11 comments

Show HN: ARM64 Android Dev Kit

https://github.com/denuoweb/ARM64-ADK
18•denuoweb•2d ago•2 comments

Show HN: Env-shelf – Open-source desktop app to manage .env files

https://env-shelf.vercel.app/
2•ivanglpz•9h ago•0 comments

Show HN: Nginx-defender – realtime abuse blocking for Nginx

https://github.com/Anipaleja/nginx-defender
3•anipaleja•9h ago•0 comments

Show HN: Micropolis/SimCity Clone in Emacs Lisp

https://github.com/vkazanov/elcity
173•vkazanov•2d ago•49 comments

Show HN: MCP App to play backgammon with your LLM

https://github.com/sam-mfb/backgammon-mcp
3•sam256•11h ago•1 comment

Show HN: Horizons – OSS agent execution engine

https://github.com/synth-laboratories/Horizons
27•JoshPurtell•2d ago•5 comments

Show HN: Daily-updated database of malicious browser extensions

https://github.com/toborrm9/malicious_extension_sentry
14•toborrm9•1d ago•8 comments

Show HN: I'm 75, building an OSS Virtual Protest Protocol for digital activism

https://github.com/voice-of-japan/Virtual-Protest-Protocol/blob/main/README.md
9•sakanakana00•12h ago•2 comments

Show HN: I built Divvy to split restaurant bills from a photo

https://divvyai.app/
3•pieterdy•12h ago•1 comment

Show HN: Falcon's Eye (isometric NetHack) running in the browser via WebAssembly

https://rahuljaguste.github.io/Nethack_Falcons_Eye/
7•rahuljaguste•1d ago•1 comment

Show HN: Local task classifier and dispatcher on RTX 3080

https://github.com/resilientworkflowsentinel/resilient-workflow-sentinel
25•Shubham_Amb•2d ago•2 comments

Show HN: Slop News – HN front page now, but it's all slop

https://dosaygo-studio.github.io/hn-front-page-2035/slop-news
22•keepamovin•17h ago•6 comments

Show HN: Autofix Bot – Hybrid static analysis and AI code review agent

37•sanketsaurav•1mo ago
Hi there, HN! We’re Jai and Sanket from DeepSource (YC W20), and today we’re launching Autofix Bot, a hybrid static analysis + AI agent purpose-built for in-the-loop use with AI coding agents.

AI coding agents have made code generation nearly free, and they've shifted the bottleneck to code review. Static-only analysis with a fixed set of checkers isn't enough, and LLM-only review has several limitations: it's non-deterministic across runs, has low recall on security issues, is expensive at scale, and tends to get 'distracted'.

We spent the last 6 years building a deterministic, static-analysis-only code review product. Earlier this year, we started thinking about this problem from the ground up and realized that static analysis solves key blind spots of LLM-only reviews. Over the past six months, we built a new ‘hybrid’ agent loop that uses static analysis and frontier AI agents together to outperform both static-only and LLM-only tools in finding and fixing code quality and security issues. Today, we’re opening it up publicly.

Here’s how the hybrid architecture works:

- Static pass: 5,000+ deterministic checkers (code quality, security, performance) establish a high-precision baseline. A sub-agent suppresses context-specific false positives.

- AI review: The agent reviews code with the static findings as anchors. It has access to the AST, data-flow graphs, control-flow graphs, and import graphs as tools, not just grep and the usual shell commands.

- Remediation: Sub-agents generate fixes. A static harness validates every edit before emitting a clean git patch.

Static analysis solves key LLM problems: non-determinism across runs, low recall on security issues (LLMs get distracted by style), and cost (static narrowing reduces prompt size and tool calls).
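To make the loop concrete, here's a minimal sketch of the three stages in Python. The helper names (static_pass, ai_review, remediate) and their behavior are hypothetical stand-ins for illustration, not Autofix Bot's actual API.

    # Illustrative sketch only; every helper below is a hypothetical stand-in,
    # not Autofix Bot's real interface.
    from dataclasses import dataclass

    @dataclass
    class Finding:
        file: str
        line: int
        rule: str       # deterministic checker id
        message: str

    def static_pass(files):
        # Stand-in for the deterministic checkers (high-precision baseline).
        return [Finding(f, 1, "EXAMPLE-RULE", "possible issue") for f in files]

    def suppress_false_positives(findings):
        # Stand-in for the sub-agent that drops context-specific false positives.
        return findings

    def ai_review(files, anchors):
        # Stand-in for the AI pass; the real agent also has AST, data-flow,
        # control-flow and import-graph tools, not just grep and a shell.
        return {"confirmed": anchors, "additional": []}

    def remediate(review):
        # Stand-in for fix generation; each edit is re-checked by the static
        # harness before a clean git patch is emitted.
        return "diff --git a/example b/example"   # placeholder patch text

    def hybrid_loop(changed_files):
        findings = suppress_false_positives(static_pass(changed_files))
        review = ai_review(changed_files, anchors=findings)
        return remediate(review)

The point of the shape is that the LLM never reviews unanchored, and no fix lands without passing the static harness again.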

On the OpenSSF CVE Benchmark [1] (200+ real JS/TS vulnerabilities), we hit 81.2% accuracy and 80.0% F1, vs. Cursor Bugbot (74.5% accuracy, 77.42% F1), Claude Code (71.5% accuracy, 62.99% F1), CodeRabbit (59.4% accuracy, 36.19% F1), and Semgrep CE (56.9% accuracy, 38.26% F1). On secrets detection, we hit 92.8% F1, vs. Gitleaks (75.6%), detect-secrets (64.1%), and TruffleHog (41.2%). We use our open-source classification model for this. [2]

Full methodology and how we evaluated each tool: https://autofix.bot/benchmarks

You can use Autofix Bot interactively on any repository using our TUI, as a plugin in Claude Code, or with our MCP on any compatible AI client (like OpenAI Codex).[3] We’re specifically building for AI coding agent-first workflows, so you can ask your agent to run Autofix Bot on every checkpoint autonomously.

Give us a shot today: https://autofix.bot. We’d love to hear any feedback!

---

[1] https://github.com/ossf-cve-benchmark/ossf-cve-benchmark

[2] https://huggingface.co/deepsource/Narada-3.2-3B-v1

[3] https://autofix.bot/manual/#terminal-ui

Comments

nickphx•1mo ago
"shifted bottleneck to code review"... understatement of decade.
_pdp_•1mo ago
What is the difference between this and let's say Claude Code using something like semgrep as a tool?

Also, I don't think this tool should be in the developer flow, as in my experience developers are unlikely to run it regularly. It should be something that is done as part of the QA process, before PR acceptance.

I hope this helps and good luck.

dolftax•1mo ago
On the OpenSSF CVE Benchmark[1], Semgrep CE hits 56.97% accuracy vs. our 81.21%, and our recall is nearly 3x higher (75.61% vs. 26.83%).

On when to run it, fair point. Autofix Bot is currently meant for local use (TUI, Claude Code plugin, MCP). We're integrating this pipeline into DeepSource[2], which will post inline comments on pull requests; that fits the QA/pre-merge flow you're describing.

That said, if you're using AI agents to write code, running it at checkpoints locally keeps feedback tight.

Thanks for the feedback!

[1] https://github.com/ossf-cve-benchmark/ossf-cve-benchmark

[2] https://deepsource.com/

tarun_anand•1mo ago
Congratulations!! Anchoring is important. What about other parts of the code review like coding guidelines, perf issues etc?
dolftax•1mo ago
We flag performance issues today alongside security and code quality. We're working on respecting AGENTS.md, detecting code complexity (AI generated code tends toward verbose, tangled logic), and letting users/teams define custom coding guidelines.
tarun_anand•1mo ago
The AI tools already have a rules engine for coding guidelines, etc.

I guess the real question is whether DeepSource can be the "judge" of whether the guidelines were followed and the NFRs will be met, by humans and AI alike.

ramon156•1mo ago
How does this compare to gemini-code-assist? Right now it's one of the best, imo.
sanketsaurav•1mo ago
We haven't included Gemini Code Assist or Gemini CLI's code review mode in our benchmarks[1] (we should do that), but functionally, it'll do the same thing as any other AI reviewer. Our differentiator is that since we're using static analysis for grounding, you'll see more issues with fewer false positives.

We also do secrets detection out of the box, and OSS scanning is coming soon.

[1] https://autofix.bot/benchmarks/

yoelhacks•1mo ago
$8/100k tokens strikes me as potentially a TON if the idea is that we're going to be running this as part of the iterative local development cycle (or god forbid letting agents run it whenever they decide). As you mentioned, one of the issues with AI generated code is often that it writes too much and needs direction on shrinking down.

I could easily see hitting 10k+ LOC on routine tickets if this is being run on each checkpoint. I have some tickets that require moving some files around, am I being charged on LOC for those files? Deleted files? Newly created test files that have 1k+ lines?

sanketsaurav•1mo ago
> $8/100k tokens strikes me as potentially a TON

It's $8/100K lines of code. Since we're using a mix of models across our main agent and sub-agents, this normalizes our cost.

> I could easily see hitting 10k+ LOC on routine tickets if this is being run on each checkpoint. I have some tickets that require moving some files around, am I being charged on LOC for those files? Deleted files? Newly created test files that have 1k+ lines?

We basically look at the files changed that need to be reviewed + the additional context that is required to make a decision for the review (which is cached internally, so you'd not be double-charged).
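As a rough back-of-the-envelope under that rate (the 10K-LOC checkpoint is just the hypothetical from your question, not a quote):

    RATE_PER_LOC = 8 / 100_000      # stated price: $8 per 100K lines of code
    checkpoint_loc = 10_000         # hypothetical checkpoint size from the question above
    print(f"~${checkpoint_loc * RATE_PER_LOC:.2f}")   # -> ~$0.80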

That said, we're of course open to revising the pricing based on feedback. But if it's helpful, when we ran the benchmarks on 165 pull requests [1], the cost was as follows:

- Autofix Bot: $21.24

- Claude Code: $48.86

- Cursor Bugbot: $40/mo (with a limit of 200 PRs per month)

We have several optimization ideas in mind, and we expect pricing to become more affordable in the future.

[1] https://github.com/ossf-cve-benchmark/ossf-cve-benchmark

yoelhacks•1mo ago
Ah sorry, you were very clear on the pricing page and I meant 100k LoC, not tokens.

In your explanation here, you mention running it per PR - does this mean running it once? Several times?

dlahoda•1mo ago
We use Rust, SQL, and TypeScript. How well are these covered by the static analysis?
dolftax•1mo ago
All three are covered: TypeScript, Rust, and SQL[1].

[1] https://deepsource.com/directory