frontpage.

Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•4m ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•6m ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•8m ago•0 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
1•melvinzammit•8m ago•0 comments

Sony BMG copy protection rootkit scandal

https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal
1•basilikum•10m ago•0 comments

The Future of Systems

https://novlabs.ai/mission/
2•tekbog•11m ago•1 comments

NASA now allowing astronauts to bring their smartphones on space missions

https://twitter.com/NASAAdmin/status/2019259382962307393
2•gbugniot•16m ago•0 comments

Claude Code Is the Inflection Point

https://newsletter.semianalysis.com/p/claude-code-is-the-inflection-point
3•throwaw12•17m ago•1 comments

Show HN: MicroClaw – Agentic AI Assistant for Telegram, Built in Rust

https://github.com/microclaw/microclaw
1•everettjf•17m ago•2 comments

Show HN: Omni-BLAS – 4x faster matrix multiplication via Monte Carlo sampling

https://github.com/AleatorAI/OMNI-BLAS
1•LowSpecEng•18m ago•1 comments

The AI-Ready Software Developer: Conclusion – Same Game, Different Dice

https://codemanship.wordpress.com/2026/01/05/the-ai-ready-software-developer-conclusion-same-game...
1•lifeisstillgood•20m ago•0 comments

AI Agent Automates Google Stock Analysis from Financial Reports

https://pardusai.org/view/54c6646b9e273bbe103b76256a91a7f30da624062a8a6eeb16febfe403efd078
1•JasonHEIN•23m ago•0 comments

Voxtral Realtime 4B Pure C Implementation

https://github.com/antirez/voxtral.c
2•andreabat•26m ago•1 comments

I Was Trapped in Chinese Mafia Crypto Slavery [video]

https://www.youtube.com/watch?v=zOcNaWmmn0A
2•mgh2•32m ago•0 comments

U.S. CBP Reported Employee Arrests (FY2020 – FYTD)

https://www.cbp.gov/newsroom/stats/reported-employee-arrests
1•ludicrousdispla•34m ago•0 comments

Show HN: I built a free UCP checker – see if AI agents can find your store

https://ucphub.ai/ucp-store-check/
2•vladeta•39m ago•1 comments

Show HN: SVGV – A Real-Time Vector Video Format for Budget Hardware

https://github.com/thealidev/VectorVision-SVGV
1•thealidev•41m ago•0 comments

Study of 150 developers shows AI generated code no harder to maintain long term

https://www.youtube.com/watch?v=b9EbCb5A408
1•lifeisstillgood•41m ago•0 comments

Spotify now requires premium accounts for developer mode API access

https://www.neowin.net/news/spotify-now-requires-premium-accounts-for-developer-mode-api-access/
1•bundie•44m ago•0 comments

When Albert Einstein Moved to Princeton

https://twitter.com/Math_files/status/2020017485815456224
1•keepamovin•45m ago•0 comments

Agents.md as a Dark Signal

https://joshmock.com/post/2026-agents-md-as-a-dark-signal/
2•birdculture•47m ago•0 comments

System time, clocks, and their syncing in macOS

https://eclecticlight.co/2025/05/21/system-time-clocks-and-their-syncing-in-macos/
1•fanf2•48m ago•0 comments

McCLIM and 7GUIs – Part 1: The Counter

https://turtleware.eu/posts/McCLIM-and-7GUIs---Part-1-The-Counter.html
2•ramenbytes•51m ago•0 comments

So what's the next word, then? Almost-no-math intro to transformer models

https://matthias-kainer.de/blog/posts/so-whats-the-next-word-then-/
1•oesimania•52m ago•0 comments

Ed Zitron: The Hater's Guide to Microsoft

https://bsky.app/profile/edzitron.com/post/3me7ibeym2c2n
2•vintagedave•55m ago•1 comments

UK infants ill after drinking contaminated baby formula of Nestle and Danone

https://www.bbc.com/news/articles/c931rxnwn3lo
1•__natty__•56m ago•0 comments

Show HN: Android-based audio player for seniors – Homer Audio Player

https://homeraudioplayer.app
3•cinusek•56m ago•2 comments

Starter Template for Ory Kratos

https://github.com/Samuelk0nrad/docker-ory
1•samuel_0xK•58m ago•0 comments

LLMs are powerful, but enterprises are deterministic by nature

2•prateekdalal•1h ago•0 comments

Make your iPad 3 a touchscreen for your computer

https://github.com/lemonjesus/ipad-touch-screen
2•0y•1h ago•1 comments

Show HN: ContextPacker – code context API for your agent without vector databases

https://contextpacker.com/
2•rozetyp•2mo ago
Every time I wanted to give an LLM context from a codebase, I ended up doing this:

- Set up a vector DB
- Write chunking logic
- Run an indexer
- Keep it in sync with git on any change

For "just answer this question about this repo", that felt like too much. So I built a small API instead: you send a repo + a question, and it sends back the files an LLM actually needs (not sure how novel this is, honestly).

What it does:

You call an HTTP endpoint with a GitHub repo URL + a natural-language question (specific questions work best, but things like "How does auth work?" or "What validates webhook signatures?" also work).

The API returns JSON with 1–10 ranked files:

- `path`, `language`, `size`, full `content`
- plus a small `stats` object (token estimate, rough cost savings)

You plug those files into your own LLM / agent / tool. There are no embeddings, no vector DB, no background indexing job. It works on the very first request.
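To make the shape concrete, here's a rough sketch in Python of what a call could look like. The endpoint path, auth header, and the top-level `files` key are my assumptions for illustration; only the per-file fields (`path`, `language`, `size`, `content`) and the `stats` object come from the description above.

    import requests

    # Hypothetical endpoint, auth scheme, and request field names.
    resp = requests.post(
        "https://contextpacker.com/api/pack",          # assumed path
        headers={"Authorization": "Bearer YOUR_KEY"},  # assumed auth
        json={
            "repo": "https://github.com/org/repo",
            "question": "What validates webhook signatures?",
        },
        timeout=60,
    )
    pack = resp.json()

    # Build a prompt context from the returned files and hand it to your own LLM.
    context = "\n\n".join(
        f"### {f['path']} ({f['language']}, {f['size']} bytes)\n{f['content']}"
        for f in pack["files"]
    )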

Why I built this:

I just wanted to ask this repo a question without:

- Standing up Pinecone/Weaviate/Chroma
- Picking chunk sizes and overlap
- Running an indexer for every repo
- Dealing with sync jobs when code changes

This API skips all of that. It's meant for:

- One-off questions on random repos
- Agents / tools that hop across many repos
- Internal tools where you don't want more infra

Does it work at all?

On a small internal eval (177 questions across 14 repos: a mix of Python and TS, including monorepos and private repos):

- A cross-model LLM judge rated answers roughly on par with a standard embeddings + vector DB setup
- Latency is about 2–4 seconds on the first request per repo (shallow cloning + scanning), then faster from cache
- No indexing step: new repos work immediately

Numbers are from our own eval, so treat them as directional, not a paper. Happy to share the setup if anyone wants to dig in.

How it works:

1. On first request, it shallow clones the repo and builds a lightweight index: file paths, sizes, languages, and top-level symbols where possible.

2. It gives an LLM the file tree + question and asks it to pick the most relevant files.

3. It ranks, dedupes, and returns a pack of files that fits in a reasonable context window.

Basically: let an LLM read the file tree and pick files, instead of cosine-searching over chunks.
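A minimal sketch of that idea in Python, assuming nothing about the actual implementation; the clone step, the tree-to-prompt formatting, and the "return a JSON array of paths" response format are all placeholders:

    import json, pathlib, subprocess

    def list_files(repo_url, dest):
        # Step 1: shallow clone, then build a lightweight index (just paths + sizes here).
        subprocess.run(["git", "clone", "--depth", "1", repo_url, dest], check=True)
        root = pathlib.Path(dest)
        return [(str(p.relative_to(root)), p.stat().st_size)
                for p in root.rglob("*") if p.is_file() and ".git" not in p.parts]

    def pick_files(files, question, ask_llm):
        # Step 2: the model sees the file tree + question, not the code, and names relevant paths.
        tree = "\n".join(f"{path} ({size} bytes)" for path, size in files)
        prompt = ("Question: " + question + "\n\nFile tree:\n" + tree +
                  "\n\nReturn a JSON array of up to 10 file paths most relevant to the question.")
        return json.loads(ask_llm(prompt))  # assumes the model returns clean JSON

    def build_pack(repo_url, question, ask_llm, dest="repo"):
        # Step 3: dedupe, keep the model's ranking order, and return full file contents.
        files = list_files(repo_url, dest)
        picked = dict.fromkeys(pick_files(files, question, ask_llm))
        return [{"path": p, "content": (pathlib.Path(dest) / p).read_text(errors="ignore")}
                for p in picked]

`ask_llm` here stands in for whatever model call you already have; the real service presumably also handles the symbol extraction and context-window budgeting mentioned above.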

Limitations:

- Eval is relatively small (177 questions / 14 repos), all hand-written – directional, not research-grade
- Works best on repos with sane structure and filenames
- First request per repo pays the clone cost (cached after)

Try it:

- Live demo: https://contextpacker.com
- DM me for an API key – keeping it free while I validate the idea.

If you're building code agents, "explain this repo" tools, or internal AI helpers over your company's repos – I'd love to hear how you'd want to integrate something like this (or where you think it will fall over). Very open to feedback and harsh benchmarks.