Show HN: Verifiable server roundtrip demo for a decision interruption system

https://github.com/veeduzyl-hue/decision-assistant-roundtrip-demo
1•veeduzyl•29s ago•0 comments

Impl Rust – Avro IDL Tool in Rust via Antlr

https://www.youtube.com/watch?v=vmKvw73V394
1•todsacerdoti•33s ago•0 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
1•vinhnx•1m ago•0 comments

minikeyvalue

https://github.com/commaai/minikeyvalue/tree/prod
2•tosh•6m ago•0 comments

Neomacs: GPU-accelerated Emacs with inline video, WebKit, and terminal via wgpu

https://github.com/eval-exec/neomacs
1•evalexec•10m ago•0 comments

Show HN: Moli P2P – An ephemeral, serverless image gallery (Rust and WebRTC)

https://moli-green.is/
2•ShinyaKoyano•14m ago•1 comment

How I grow my X presence?

https://www.reddit.com/r/GrowthHacking/s/UEc8pAl61b
2•m00dy•16m ago•0 comments

What's the cost of the most expensive Super Bowl ad slot?

https://ballparkguess.com/?id=5b98b1d3-5887-47b9-8a92-43be2ced674b
1•bkls•17m ago•0 comments

What if you just did a startup instead?

https://alexaraki.substack.com/p/what-if-you-just-did-a-startup
3•okaywriting•23m ago•0 comments

Hacking up your own shell completion (2020)

https://www.feltrac.co/environment/2020/01/18/build-your-own-shell-completion.html
2•todsacerdoti•26m ago•0 comments

Show HN: Gorse 0.5 – Open-source recommender system with visual workflow editor

https://github.com/gorse-io/gorse
1•zhenghaoz•27m ago•0 comments

GLM-OCR: Accurate × Fast × Comprehensive

https://github.com/zai-org/GLM-OCR
1•ms7892•28m ago•0 comments

Local Agent Bench: Test 11 small LLMs on tool-calling judgment, on CPU, no GPU

https://github.com/MikeVeerman/tool-calling-benchmark
1•MikeVeerman•29m ago•0 comments

Show HN: AboutMyProject – A public log for developer proof-of-work

https://aboutmyproject.com/
1•Raiplus•29m ago•0 comments

Expertise, AI and Work of Future [video]

https://www.youtube.com/watch?v=wsxWl9iT1XU
1•indiantinker•29m ago•0 comments

So Long to Cheap Books You Could Fit in Your Pocket

https://www.nytimes.com/2026/02/06/books/mass-market-paperback-books.html
3•pseudolus•30m ago•1 comment

PID Controller

https://en.wikipedia.org/wiki/Proportional%E2%80%93integral%E2%80%93derivative_controller
1•tosh•34m ago•0 comments

SpaceX Rocket Generates 100GW of Power, or 20% of US Electricity

https://twitter.com/AlecStapp/status/2019932764515234159
2•bkls•34m ago•0 comments

Kubernetes MCP Server

https://github.com/yindia/rootcause
1•yindia•35m ago•0 comments

I Built a Movie Recommendation Agent to Solve Movie Nights with My Wife

https://rokn.io/posts/building-movie-recommendation-agent
4•roknovosel•35m ago•0 comments

What were the first animals? The fierce sponge–jelly battle that just won't end

https://www.nature.com/articles/d41586-026-00238-z
2•beardyw•44m ago•0 comments

Sidestepping Evaluation Awareness and Anticipating Misalignment

https://alignment.openai.com/prod-evals/
1•taubek•44m ago•0 comments

OldMapsOnline

https://www.oldmapsonline.org/en
2•surprisetalk•46m ago•0 comments

What It's Like to Be a Worm

https://www.asimov.press/p/sentience
2•surprisetalk•46m ago•0 comments

Don't go to physics grad school and other cautionary tales

https://scottlocklin.wordpress.com/2025/12/19/dont-go-to-physics-grad-school-and-other-cautionary...
2•surprisetalk•46m ago•0 comments

Lawyer sets new standard for abuse of AI; judge tosses case

https://arstechnica.com/tech-policy/2026/02/randomly-quoting-ray-bradbury-did-not-save-lawyer-fro...
5•pseudolus•47m ago•0 comments

AI anxiety batters software execs, costing them combined $62B: report

https://nypost.com/2026/02/04/business/ai-anxiety-batters-software-execs-costing-them-62b-report/
1•1vuio0pswjnm7•47m ago•0 comments

Bogus Pipeline

https://en.wikipedia.org/wiki/Bogus_pipeline
1•doener•48m ago•0 comments

Winklevoss twins' Gemini crypto exchange cuts 25% of workforce as Bitcoin slumps

https://nypost.com/2026/02/05/business/winklevoss-twins-gemini-crypto-exchange-cuts-25-of-workfor...
2•1vuio0pswjnm7•49m ago•0 comments

How AI Is Reshaping Human Reasoning and the Rise of Cognitive Surrender

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6097646
3•obscurette•49m ago•0 comments

Show HN: ContextPacker code context API for your agent without vector databases

https://contextpacker.com/
2•rozetyp•2mo ago
Every time I wanted to give an LLM context from a codebase, I ended up doing this:

- Set up a vector DB
- Write chunking logic
- Run an indexer
- Keep it in sync with git on any change

For "just answer this question about this repo," it felt like too much. So I built a small API instead: you send a repo + question, and it sends back the files an LLM actually needs (I'm not sure how novel this is).

What it does:

You call an HTTP endpoint with a GitHub repo URL + a natural-language question (more specific is better, but things like "How does auth work?" or "What validates webhook signatures?" also work).

The API returns JSON with 1–10 ranked files:

- `path`, `language`, `size`, full `content`
- plus a small `stats` object (token estimate, rough cost savings)

You plug those files into your own LLM / agent / tool. There are no embeddings, no vector DB, and no background indexing job. It works on the very first request.
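To make the contract concrete, here's a minimal sketch of how a client might build the request and unpack the response. The endpoint path isn't in the post, so this only models the payload shapes it describes; the `repo`/`question`/`files` field names are my assumptions, not the real API:

```python
import json

def build_request(repo_url: str, question: str) -> dict:
    """Assumed payload shape: a repo URL plus a natural-language question."""
    return {"repo": repo_url, "question": question}

def unpack_response(body: str) -> tuple[list[dict], dict]:
    """Split the JSON response into the ranked files and the stats object."""
    data = json.loads(body)
    files = data["files"]  # 1-10 ranked entries, per the post
    for f in files:        # each entry carries these documented keys
        assert {"path", "language", "size", "content"} <= f.keys()
    return files, data.get("stats", {})

# A response shaped like the one the post describes:
sample = json.dumps({
    "files": [{"path": "src/auth.py", "language": "python",
               "size": 2048, "content": "def verify(token): ..."}],
    "stats": {"token_estimate": 512},
})
files, stats = unpack_response(sample)
```

From here, `files` goes straight into your own prompt assembly; nothing needs to be indexed first.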

Why I built this:

I just wanted to "ask this repo a question" without:

- Standing up Pinecone/Weaviate/Chroma
- Picking chunk sizes and overlap
- Running an indexer for every repo
- Dealing with sync jobs when code changes

This API skips all of that. It's meant for:

- One-off questions on random repos
- Agents / tools that hop across many repos
- Internal tools where you don't want more infra

Does it work at all?

On a small internal eval (177 questions across 14 repos, mix of Python, TS, monorepos + private ones):

- A cross-model LLM judge rated answers roughly on par with a standard embeddings + vector DB setup
- Latency is about 2–4 seconds on the first request per repo (shallow clone + scan), then faster from cache
- No indexing step: new repos work immediately

Numbers are from our own eval, so treat them as directional, not a paper. Happy to share the setup if anyone wants to dig in.

How it works:

1. On first request, it shallow clones the repo and builds a lightweight index: file paths, sizes, languages, and top-level symbols where possible.

2. It gives an LLM the file tree + question and asks it to pick the most relevant files.

3. It ranks, dedupes, and returns a pack of files that fits in a reasonable context window.

Basically: let an LLM read the file tree and pick files, instead of cosine-searching over chunks.
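The three steps above can be sketched roughly like this. Everything here is illustrative: the heuristic scorer stands in for the real LLM call that picks files from the tree, and none of these function names come from the actual service:

```python
from pathlib import Path

def build_tree_index(root: str) -> list[dict]:
    """Step 1 stand-in: a lightweight index of paths, sizes, and extensions."""
    index = []
    for p in Path(root).rglob("*"):
        if p.is_file():
            index.append({"path": str(p.relative_to(root)),
                          "size": p.stat().st_size,
                          "ext": p.suffix})
    return index

def pick_files(index: list[dict], question: str, limit: int = 10) -> list[dict]:
    """Step 2 stand-in: rank files by filename overlap with the question.
    The real service gives an LLM the file tree + question instead."""
    words = {w.lower().strip("?") for w in question.split()}
    def score(entry):
        name = Path(entry["path"]).stem.lower()
        return sum(w in name or name in w for w in words)
    ranked = sorted(index, key=score, reverse=True)
    # Step 3: dedupe by path and cap the pack at `limit` files.
    seen, pack = set(), []
    for entry in ranked[:limit]:
        if entry["path"] not in seen:
            seen.add(entry["path"])
            pack.append(entry)
    return pack

index = [{"path": "src/auth.py", "size": 100, "ext": ".py"},
         {"path": "README.md", "size": 50, "ext": ".md"}]
pack = pick_files(index, "How does auth work?")
```

Swapping the scorer for an actual "here is the file tree, pick the relevant files" prompt is the whole trick: the selection step is one LLM call over metadata, not a search over embedded chunks.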

Limitations:

- Eval is relatively small (177 questions / 14 repos), all hand-written – directional, not research-grade
- Works best on repos with sane structure and filenames
- First request per repo pays the clone cost (cached after)

Try it:

- Live demo: https://contextpacker.com
- DM me for an API key – keeping it free while I validate the idea.

If you're building code agents, "explain this repo" tools, or internal AI helpers over your company's repos – I'd love to hear how you'd want to integrate something like this (or where you think it will fall over). Very open to feedback and harsh benchmarks.