frontpage.

The Data Box: Why "Smarter" AI Feels Dumber

https://blog.nimbial.com/pages/the_data_box
1•ajayarama•1m ago•0 comments

Erdős Problem #347 Solved (AI assisted math)

https://www.erdosproblems.com/forum/thread/347
1•tzury•3m ago•0 comments

Designing an Authentication System: A Dialogue in Four Scenes (1997)

https://web.mit.edu/kerberos/www/dialogue.html
1•vismit2000•10m ago•0 comments

Oldest cave painting could rewrite human creativity timeline

https://www.bbc.com/news/articles/czx1pnlzer5o
1•griffzhowl•15m ago•0 comments

Anthropic's new Claude 'constitution': be helpful, and don't destroy humanity

https://www.theverge.com/ai-artificial-intelligence/865185/anthropic-claude-constitution-soul-doc
1•xparadigm•16m ago•0 comments

Starlink in Iran: How the regime jams the service and what helps against it

https://www.heise.de/en/background/Starlink-in-Iran-How-the-regime-jams-the-service-and-what-help...
2•DeathArrow•24m ago•0 comments

Semantica: Open-source semantic layers, knowledge graphs, and GraphRAG

https://github.com/Hawksight-AI/semantica
2•kaifahmad1•29m ago•1 comment

New Security Vulnerability Database Launches in the EU

https://www.forbes.com/sites/kateoflahertyuk/2026/01/20/new-security-vulnerability-database-launc...
2•cedricbonhomme•31m ago•1 comment

Why Greenland Looks (It's Not) [video]

https://www.youtube.com/watch?v=tK7yTJ8Mk7A
1•handfuloflight•35m ago•0 comments

Graph of All Human Languages

https://dr.eamer.dev/datavis/poems/language/network.html
3•samwho•36m ago•0 comments

Mixing incentives and penalties found key to cutting carbon emissions long term

https://phys.org/news/2025-12-incentives-penalties-key-carbon-emissions.html
1•PaulHoule•36m ago•0 comments

With this tool, you can enjoy NAS functionality even without a NAS

https://quicksend.chat/
1•foodhome•38m ago•0 comments

The Tighter Weave: On Editing and Not Editing

https://hedgehogreview.com/issues/place-and-revolution/articles/the-tighter-weave
1•samclemens•39m ago•0 comments

OpenSkills – Stop bloating your LLM context with unused agent instructions

1•twwch•39m ago•0 comments

Rare Data Hunters [video]

https://www.youtube.com/watch?v=IU4ByUbDKNc
1•DiscourseFan•42m ago•0 comments

Video for ROS2

https://github.com/stryngs/rosVid
1•stryngs42•45m ago•1 comment

We are updating Dokploy's Open Source license

https://dokploy.com/blog/we-are-updating-dokploys-open-source-license
1•raybb•53m ago•1 comment

Show HN: Scribefully is a portfolio/HN-style community for academics & pros

https://scribefully.com/
1•hoag•55m ago•0 comments

CAP theorem: Why Pick Two Misses the Point

https://www.blog.ahmazin.dev/p/cap-theorem-explained
1•artmonk•57m ago•0 comments

US science after a year of Trump

https://www.nature.com/immersive/d41586-026-00088-9/index.html
6•newman314•1h ago•0 comments

Blue4est Paper – BPA-Free Thermal Print Camera Compendium

https://thermalprintcameras.wordpress.com/blue4est-paper/
1•walterbell•1h ago•0 comments

Ask HN: Why does Google Maps still use mercator projection?

2•hbarka•1h ago•1 comment

Show HN: Aident, agentic automations as plain-English playbooks

https://aident.ai/
4•ljhskyso7•1h ago•0 comments

Why AGI Would Shape Humanity in the Shadows: The Revelation Trap

1•unspokenlayer•1h ago•0 comments

Governance in the Age of AI, Nuclear Threats, and Geopolitical Brinkmanship [video]

https://www.youtube.com/watch?v=XACETcmQAeM
1•measurablefunc•1h ago•0 comments

Ask HN: Is there any good open source model with reliable agentic capabilities?

1•baalimago•1h ago•0 comments

Show HN: MCP server for searching and retrieving 200k icons

https://github.com/better-auth/better-icons
2•bekacru•1h ago•0 comments

Government Agencies Mandate CSPM for Federal Cloud Contracts

https://www.systemtek.co.uk/2025/05/executive-protection-in-the-digital-age-how-ceos-are-becoming...
2•cybleinc•1h ago•0 comments

DRAM are the mini-mills of our time

https://siliconimist.substack.com/p/dram-the-steel-mini-mills-of-our
1•johncole•1h ago•0 comments

How Shopify's Tobi Lütke Works – David Senra [video]

https://www.youtube.com/watch?v=ZSM2uFnJ5bs
2•simonebrunozzi•1h ago•0 comments

Show HN: Deterministic, machine-readable context for TypeScript codebases

https://github.com/LogicStamp/logicstamp-context
2•AmiteK•1h ago
Hi HN,

I built a CLI that extracts a deterministic, structured representation of a TypeScript codebase (components, hooks, APIs, routes) directly from the AST.

The goal is to produce stable, diffable “codebase context” that can be used in CI, tooling, or reasoning workflows, without relying on raw source text or heuristic inference.

It supports incremental watch mode, backend route extraction (Express/Nest), and outputs machine-readable data designed for automation.

Repo + docs: https://github.com/LogicStamp/logicstamp-context

Happy to answer questions or hear where this would (or wouldn’t) be useful.
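As an editorial aside, here is a minimal sketch of what "stable, diffable codebase context" can mean in practice: serialize with a fixed key and entry order, then hash. The entry shape and field names are illustrative assumptions, not LogicStamp's actual bundle format.

```typescript
import { createHash } from "node:crypto";

// Hypothetical shape of one extracted entry; the real format may differ.
interface ContextEntry {
  kind: "component" | "hook" | "route";
  name: string;
  file: string;
}

// Serialize with sorted entries and a fixed key order so the same input
// always yields byte-identical output (and therefore clean diffs).
function stableSerialize(entries: ContextEntry[]): string {
  const sorted = [...entries].sort((a, b) =>
    (a.file + a.name).localeCompare(b.file + b.name)
  );
  return JSON.stringify(
    sorted.map((e) => ({ file: e.file, kind: e.kind, name: e.name })),
    null,
    2
  );
}

function contentHash(serialized: string): string {
  return createHash("sha256").update(serialized).digest("hex");
}

// Same entries in a different order produce the same bytes and hash.
const a = stableSerialize([
  { kind: "hook", name: "useUser", file: "src/hooks.ts" },
  { kind: "route", name: "GET /users", file: "src/api.ts" },
]);
const b = stableSerialize([
  { kind: "route", name: "GET /users", file: "src/api.ts" },
  { kind: "hook", name: "useUser", file: "src/hooks.ts" },
]);
```

Order-independence of the output bytes is one concrete property behind a claim like "same repo state ⇒ identical output".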

Comments

verdverm•1h ago
How does this work mid-chat if the agent changes code that would require these mappings to be updated?

I put this information in my AGENTS.md for similar goals. Why might I prefer the option you are presenting instead? It seems to ensure all code parts are referenced in a JSON object, but I heavily filter those down because most are unimportant. It does not seem like I can do that here, which makes me think this would be less token efficient than the AGENTS.md files I already have. Also, JSON syntax eats up tokens with the quotes, commas, and curlies.

Another alternative to this: give your agents access to LSP servers so they can decide what to query. You should address this in the readme as well.

How is it deterministic? I searched the term in the readme and only found claims, no explanation

AmiteK•1h ago
Good question.

LogicStamp treats context as deterministic output derived from the codebase, not a mutable agent-side model.

When code changes mid-session, watch mode regenerates the affected bundles, and the agent consumes the latest output. This avoids desync by relying on regeneration rather than keeping long-lived agent state in sync.

verdverm•1h ago
> watch mode regenerates the affected bundles, and the agent consumes the latest output

How does this work in practice? How does the agent "consume" (reread) the files, with a tool call it has to decide to invoke?

AmiteK•1h ago
Yes. In the MCP setup the agent doesn’t decide to regenerate arbitrarily.

When stamp context --watch is active, the MCP server detects it. The agent first calls logicstamp_watch_status to see whether context is being kept fresh.

If watch mode is active, the agent can directly call list_bundles(projectPath) → read_bundle(projectPath) and will always read the latest regenerated output. No snapshot refresh is needed.

If watch mode isn’t active, the workflow falls back to refresh_snapshot → list_bundles → read_bundle.

So “consume” just means reading deterministic files via MCP tools, with watch mode ensuring those files stay up to date.
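The flow described above can be sketched as agent-side logic with the MCP tools stubbed out. The tool names come from this thread, but their signatures and return shapes here are assumptions for illustration.

```typescript
// Stubbed MCP tool interfaces; real signatures and shapes are assumptions.
interface WatchStatus { watching: boolean; }

type Tool<T> = (projectPath: string) => T;

function readFreshContext(
  projectPath: string,
  tools: {
    logicstamp_watch_status: Tool<WatchStatus>;
    refresh_snapshot: Tool<void>;
    list_bundles: Tool<string[]>;
    read_bundle: (projectPath: string, bundle: string) => string;
  }
): string[] {
  // 1. Check whether watch mode is keeping the bundles fresh.
  const status = tools.logicstamp_watch_status(projectPath);
  // 2. If not, fall back to an explicit snapshot refresh first.
  if (!status.watching) tools.refresh_snapshot(projectPath);
  // 3. Either way, "consume" just means reading the latest files.
  return tools
    .list_bundles(projectPath)
    .map((b) => tools.read_bundle(projectPath, b));
}

// Toy in-memory stubs to make the flow runnable.
let refreshed = false;
const contents = readFreshContext("/repo", {
  logicstamp_watch_status: () => ({ watching: false }),
  refresh_snapshot: () => { refreshed = true; },
  list_bundles: () => ["src/components", "src/api"],
  read_bundle: (_p, b) => `bundle:${b}`,
});
```

With watch mode inactive, the stub takes the refresh_snapshot → list_bundles → read_bundle fallback path; with it active, the refresh step is skipped.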

verdverm•1h ago
As soon as you involve the agent and having it make tool calls, it is no longer deterministic

This is in fact the very reason I set out to build my own agent, because Copilot does this with their `.vscode/instruction/...` files and the globs for file matching therein. It was, in fact, not deterministic like I wanted.

My approach is to look at the files the agent has read/written and if there is an AGENTS.md in that or parent dirs, I put it in the system prompt. The agent doesn't try to read them, which saves a ton on token usage. You can save 50% on tokens per message, yet my method will still use fewer over the course of a session because I don't have to make all those extra tool calls
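A rough sketch of the AGENTS.md lookup just described, as pure path logic (no file I/O); the function name and details are mine, not verdverm's actual implementation.

```typescript
import * as path from "node:path";

// For a file the agent touched, list every AGENTS.md location from the
// repo root down to the file's own directory. These candidates would then
// be filtered to the ones that actually exist and injected into the
// system prompt, rather than read by the agent via tool calls.
function agentsMdCandidates(touchedFile: string, repoRoot: string): string[] {
  const out: string[] = [];
  const root = path.resolve(repoRoot);
  let dir = path.dirname(path.resolve(root, touchedFile));
  while (true) {
    out.push(path.join(dir, "AGENTS.md"));
    if (dir === root) break;
    const parent = path.dirname(dir);
    if (parent === dir) break; // reached filesystem root, safety stop
    dir = parent;
  }
  return out.reverse(); // root-most first, nearest directory last
}

const candidates = agentsMdCandidates("src/api/users.ts", "/repo");
```

For `src/api/users.ts` under `/repo`, this yields the root, `src`, and `src/api` candidates, so nearer files can override or extend guidance from parent directories.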

AmiteK•56m ago
I think there’s a conflation here between artifact determinism and agent behavior.

LogicStamp’s determinism claim is about the generated context: same repo state + config ⇒ identical bundles. That property holds regardless of how or when an agent chooses to read them.

Tool calls don’t make the artifacts non-deterministic; they only affect when an agent consumes already-deterministic output.

verdverm•47m ago
LSP is equally deterministic in that regard and more token efficient

Can you address the token inefficiencies from having to make more tool calls with this method?

AmiteK•41m ago
On tool-call overhead: in the MCP flow it’s typically 1–2 calls per task (watch_status once, then read_bundle for the relevant slice). In watch mode we also skip snapshot refresh entirely.

Token-wise, the intent isn’t “dump everything”; it’s selective reads of the smallest relevant bundles. If your workflow already achieves what you want with AGENTS.md + LSP querying, that may indeed be more token-efficient for many sessions.

The trade-off LogicStamp is aiming for is different: verifiable, diffable ground-truth artifacts (CI/drift detection/cross-run guarantees). Tokens aren’t the primary optimization axis.

verdverm•33m ago
I'm not sure the comparison I'm hoping to draw out is coming through

This seems more similar in spirit to AGENTS.md than to LSP, so I'll make the comparison there. Today, I require zero tool calls to bring my AGENTS.md into context, so this would require me to make more tool calls, each of which is a round trip to the LLM with the current context. So if I have a 30k context right now, and you are saying 1-2 calls per task, that is 30-60k extra tokens I need to pay for, for every one of these AGENTS.md files that needs to be read / checked to see if it's in sync.
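The arithmetic above, made explicit. This is a toy cost model: each extra tool call replays the current context as input tokens, so the marginal cost scales with context size times call count.

```typescript
// Toy cost model: every extra tool call is a round trip that resends the
// current context, so the marginal input cost is contextTokens per call.
function extraTokens(contextTokens: number, extraCalls: number): number {
  return contextTokens * extraCalls;
}

// 30k context, 1-2 extra calls per task -> 30k-60k extra input tokens.
const low = extraTokens(30_000, 1);
const high = extraTokens(30_000, 2);
```

This ignores caching and output tokens, but it captures why round trips dominate the cost comparison in verdverm's framing.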

I use git for the verifiable / diffable ground truth artifacts. I can have my LSP query at different commits, there is no rule it can only access the current state of the code

AmiteK•27m ago
I think we’re optimizing for different constraints. If your goal is zero extra round trips and you’re happy with AGENTS.md auto-injection + LSP queries, then I agree LogicStamp won’t be a win on token cost for that setup.

The AGENTS.md comparison isn’t “same thing” - it’s a different layer. AGENTS.md encodes human intent/heuristics. LogicStamp generates semantic ground-truth contracts (exports, APIs, routes) from the AST so they can be diffed and validated mechanically (e.g. CI drift detection).

Git + LSP can diff/query source across commits, but that’s still a query workflow. LogicStamp’s goal is a persistent, versioned semantic artifact. If your workflow already covers that, then it may simply not add value - which is totally fine.

verdverm•11m ago
I think none of us knows what we are doing with the new toy and we're all still trying to figure it out lol. So in some sense, there are a lot of ideas being offered, often in many shapes themselves. Just look at the agent sandboxing space.

I still have a hard time seeing why I would want something like this in my agentic or organic software development. I tried something nearly identical for Go, and having all that extra bookkeeping in context wrecked things. So, on an experiential level, the custom DSL for giving the agent an index to the code base hurt overall coding-agent performance and effectiveness.

What works far better is a "table of contents" for the agent that is very similar in content but heavily curated. Yes, I also use the same methods to break it down by directory or other variables. But when the agent reads one of these, the majority is still noise, which is why overall performance degraded and why curation is such a difference maker.

Do you have evaluations for your project that it leads to better overall model capabilities, especially as they compare to a project that already uses AGENTS.md?

btw, I put this stuff in AGENTS.md now; you can put whatever you want in there. For example, I auto-generate some sections with Go's tooling to keep a curated version of what your project does. I don't see it as a "different layer" because it is all context engineering in the end.

AmiteK•1h ago
Adding a bit more context since I didn’t see your expanded comment at first:

AGENTS.md and LogicStamp aren’t mutually exclusive. AGENTS.md is great for manual, human-curated guidance. LogicStamp focuses on generated ground-truth contracts derived from the AST, which makes them diffable, CI-verifiable, and resistant to drift.

On token usage: the output is split into per-folder bundles, so you can feed only the slices you care about (or post-filter to exported symbols / public APIs). JSON adds some overhead, but the trade-off is reliable machine selectability and deterministic diffs.

Determinism here means: same repo state + config ⇒ identical bundle output.

verdverm•1h ago
Having been working on my agent's context engineering heavily of late, the following is based on my personal experience messing with how that stuff works in some fundamental ways

I don't really think dumping all this unnecessary information into the context is a good idea

1. search tools like an LSP are far superior, well established, and zero maintenance

2. it pollutes context with irrelevant information, because most of the time you don't need all the details you are putting in there - especially the breadth, which is really the main issue I see here. There is no control over breadth or over what is or is not included, so it's mostly noise for any given session, even with the folder separation. You would need to provide evals for outcomes, not for minimized token usage, because that is the wrong thing to make your primary optimization target

AmiteK•59m ago
That’s fair, and I agree LSP-style search is excellent for interactive, local exploration. LogicStamp isn’t trying to replace that.

The problem it targets is different: producing stable, explicit structure (public APIs, components, routes) that can be diffed, validated, and reasoned about across runs - e.g. in CI or long-running agents. LSPs are query-oriented and ephemeral; they don’t give you a persistent artifact to assert against.

On breadth/noise: the intent isn’t to dump everything into one prompt. Output is sliced (per-folder / per-contract), and the assumption is that only relevant bundles are selected. Token minimization isn’t the primary goal; predictability and selectability are.

In practice I see them as complementary: LSPs for live search, generated contracts for ground truth. If your workflow is already LSP-driven, LogicStamp may simply not add much value - and that’s fine.

verdverm•27m ago
From the readme

> Pre-processed relationships - Dependency graphs are explicit (graph.edges) rather than requiring inference

I suspect this is actually the opposite. Injecting some extra, non-standard format or syntax for expressing something requires more cycles for the LLM to understand. They have seen a lot of TypeScript, so the inference overhead is minimal. This is similar to the difference between a chess grandmaster and a new player. The grandmaster (or the LLM) has specialized pathways dedicated to their domain (chess / TypeScript). A grandmaster does not think about how pieces move (what does "graph.edges" mean?); they see the board in terms of space control. Operational and minor details have been conditioned into the low-level pathways, leaving more neurons free to work on higher-level tasks and reasoning.

I don't have evals to prove one way or the other, but the research generally seems to suggest this pattern holds up, and it makes sense with how they are trained and the mathematics of it all.

Thoughts?

AmiteK•14m ago
That’s a reasonable hypothesis, and I agree LLMs are very good at inferring structure from raw TypeScript within a local reasoning window. However, that inference has to be repeated as context shifts or resets.

The claim I’m making is narrower: pre-processed structure isn’t about helping the model understand syntax, it’s about removing the need to re-infer relationships every time. The output isn’t a novel language - it’s a minimal, explicit representation of facts (e.g. dependencies, exports, routes) that would otherwise be reconstructed from source.

Inference works well per session, but it doesn’t give you a persistent artifact you can diff, validate, or assert against in CI. LogicStamp trades some inference convenience for explicitness and repeatability across runs.

I don’t claim one dominates the other universally - they optimize for different failure modes.
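For concreteness, here is a hypothetical graph.edges fragment and the kind of lookup it enables without re-reading source. The field names and shape are illustrative assumptions, not LogicStamp's actual schema.

```typescript
// Illustrative shape only; not LogicStamp's actual schema.
interface Graph {
  edges: { from: string; to: string }[]; // "from" imports "to"
}

const graph: Graph = {
  edges: [
    { from: "src/api/users.ts", to: "src/db.ts" },
    { from: "src/api/orders.ts", to: "src/db.ts" },
    { from: "src/app.ts", to: "src/api/users.ts" },
  ],
};

// "What depends on src/db.ts?" becomes a lookup over stated facts,
// rather than something re-inferred from import statements each session.
function dependentsOf(g: Graph, file: string): string[] {
  return g.edges.filter((e) => e.to === file).map((e) => e.from);
}

const deps = dependentsOf(graph, "src/db.ts");
```

Whether this beats letting the model infer the same edges from raw TypeScript is exactly the open question debated above; the artifact's value is that the answer is fixed, diffable, and assertable across runs.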