
OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
624•klaussilveira•12h ago•182 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
926•xnx•18h ago•548 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
32•helloplanets•4d ago•24 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
109•matheusalmeida•1d ago•27 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
9•kaonwarb•3d ago•7 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
40•videotopia•4d ago•1 comment

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
219•isitcontent•13h ago•25 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
210•dmpetrov•13h ago•103 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
322•vecti•15h ago•143 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
369•ostacke•18h ago•94 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
358•aktau•19h ago•181 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
477•todsacerdoti•20h ago•232 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
272•eljojo•15h ago•160 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
402•lstoll•19h ago•271 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
85•quibono•4d ago•20 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
14•jesperordrup•2h ago•6 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
25•romes•4d ago•3 comments

Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
3•theblazehen•2d ago•0 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
56•kmm•5d ago•3 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
12•bikenaga•3d ago•2 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
244•i5heu•15h ago•188 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
52•gfortaine•10h ago•21 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
140•vmatsiiako•17h ago•62 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
280•surprisetalk•3d ago•37 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1058•cdrnsf•22h ago•433 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
132•SerCe•8h ago•117 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
70•phreda4•12h ago•14 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
28•gmays•7h ago•11 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
176•limoce•3d ago•96 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
63•rescrv•20h ago•22 comments

Show HN: Pickaxe – A TypeScript library for building AI agents

https://github.com/hatchet-dev/pickaxe
70•abelanger•7mo ago
Hey HN, Gabe and Alexander here from Hatchet. Today we're releasing Pickaxe, a TypeScript library for building scalable, fault-tolerant AI agents.

Here's a demo: https://github.com/user-attachments/assets/b28fc406-f501-442...

Pickaxe provides a simple set of primitives for building agents which can automatically checkpoint their state and suspend or resume processing (also known as durable execution) while waiting for external events (like a human in the loop). The library is based on common patterns we've seen when helping Hatchet users run millions of agent executions per day.

Unlike other tools, Pickaxe is not a framework. It does not have any opinions or abstractions for implementing agent memory, prompting, context, or calling LLMs directly. Its only focus is making AI agents more observable and reliable.

As agents start to scale, there are generally three big problems that emerge:

1. Agents are long-running compared to other parts of your application. Extremely long-running processes are tricky because deploying new infra or hitting request timeouts on serverless runtimes will interrupt their execution.

2. They are stateful: they generally store internal state which governs the next step in the execution path.

3. They require access to lots of fresh data, which can either be queried during agent execution or needs to be continuously refreshed from a data source.

(These problems are more specific to agents which execute remotely -- locally running agents generally don't have these problems)

Pickaxe is designed to solve these issues by providing a simple API which wraps durable execution infrastructure for agents. Durable execution is a way of automatically checkpointing the state of a process, so that if the process fails, it can be replayed from the checkpoint rather than starting over from the beginning. This model is also particularly useful when your agent needs to wait for an external event or human review in order to continue execution. To support this pattern, Pickaxe uses a Hatchet feature called `waitFor` which durably registers a listener for an event: even if the agent isn't actively listening, Hatchet guarantees that the event is processed and stored in the execution history, and that the agent resumes processing. This infrastructure is powered by what is essentially a linear event log, which stores the entire execution history of an agent in a Postgres database managed by Hatchet.
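The checkpoint-and-replay model described above can be sketched in a few lines of self-contained TypeScript. This is illustrative only: `durableStep`, `agentRun`, and the in-memory log are assumptions for the sketch, not Pickaxe's real API, which persists the event log in Postgres via Hatchet.

```typescript
// A "durable" step: if the step's result is already in the event log,
// replay it instead of re-executing; otherwise run it and checkpoint.
type EventLog = Record<string, unknown>;

async function durableStep<T>(
  log: EventLog,
  key: string,
  fn: () => Promise<T>
): Promise<T> {
  if (key in log) return log[key] as T; // replay from checkpoint
  const result = await fn();
  log[key] = result; // checkpoint before moving on
  return result;
}

// A toy agent control loop built from durable steps. If the process
// crashes between steps, re-running with the same log skips completed work.
async function agentRun(log: EventLog): Promise<string> {
  const docs = await durableStep(log, "fetchDocs", async () => ["doc1", "doc2"]);
  return durableStep(log, "summarize", async () => `summarized ${docs.length} docs`);
}
```

A `waitFor`-style listener fits the same model: the awaited event is appended to the log, so a restarted agent finds it there and resumes past the wait.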

Full docs are here: https://pickaxe.hatchet.run/

We'd greatly appreciate any feedback you have and hope you get the chance to try out Pickaxe.

Comments

almosthere•7mo ago
What I really like about it is that this kind of project helps people learn what an agent is.
abelanger•7mo ago
Thanks! Our favorite resources on this (both have been posted on HN a few times):

- https://www.anthropic.com/engineering/building-effective-age...

- https://github.com/humanlayer/12-factor-agents

That's also why we implemented pretty much all relevant patterns in the docs (i.e. https://pickaxe.hatchet.run/patterns/prompt-chaining).

If there's an example or pattern that you'd like to see, let me know and we can get it released.

randomcatuser•7mo ago
Oh this is really cool! I was building out a bit of this with Restate this past week, but this seems really well put together :) will give it a try!
abelanger•7mo ago
Thanks! Would love to hear more about what type of agent you're building.

We've heard pretty often that durable execution is difficult to wrap your head around, and we've also seen more of our users (including experienced engineers) relying on Cursor and Claude Code while building. So one of the experiments we've been running is ensuring that the agent code is durable when written by LLMs by using our MCP server so the agents can follow best practices while generating code: https://pickaxe.hatchet.run/development/developing-agents#pi...

Our MCP server is super lightweight and basically just tells the LLM to read the docs here: https://pickaxe.hatchet.run/mcp/mcp-instructions.md (along with some tool calls for scaffolding)

I have no idea if this is useful or not, but we were able to get Claude to generate complex agents which were written with durable execution best practices (no side effects or non-determinism between retries), which we viewed as a good sign.

zegl•7mo ago
As a long time Hatchet user, I understand why you’ve created this library, but it also disappoints me a little bit. I wish more engineering time was spent on making the core platform more stable and performant.
abelanger•7mo ago
Definitely understand the frustration. The difficulty of Hatchet being general-purpose is that being performant for every use case can be tricky, particularly when combining many features (concurrency, rate limiting, priority queueing, retries with backoff, etc.). We should be more transparent about which combinations of use cases we're focused on optimizing.

We spent a long time optimizing the single-task FIFO use case, which is what we typically benchmark against. Performance for that pattern is i/o-bound at >10k/s, which is a good sign (we just need better disks). So a pure durable-execution workload should run very performantly.

We're focused on improving multi-task and concurrency use-cases now. Our benchmarking setup recently added support for those patterns. More on this soon!

revskill•7mo ago
Hatchet is not stable.
jskalc92•7mo ago
If I understand it correctly, the tool and agent run() methods work in a similar way to React hooks, correct?

Depending on execution order, a tool is either called or a cached value is returned. That way local state can be replayed, and that's why the "no side effects" rule is in place.

I like it. Just, what's the recommended way to have a chat assistant agent with multiple tools? Message history would need to be passed to the very top-level agent.run call, wouldn't it?

gabrielruttner•7mo ago
yes, similar. based on some feedback we've been toying with a `pickaxe.memo(() => {})` utility to quickly wrap small chunks of code, similar to `useMemo`.

we'll be continuously improving docs on this project, but since pickaxe is built on hatchet it supports concurrency [1]. so for a chat use case, you can pass the chat history to the top-level agent but propagate cancellation for other message runs in the session, to handle the case where the user sends a few messages in a row. we'll work up an example in the patterns section for this!

[1] https://docs.hatchet.run/home/concurrency#cancel-in-progress
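The call-order replay semantics described in this exchange (and the proposed `pickaxe.memo` utility, which is not a shipped API) can be sketched as a `useMemo`-style cache keyed by call position. Everything here is a hypothetical illustration:

```typescript
// Results are cached by call order, like React hooks. A replayed run must
// make the same calls in the same order, which is why deterministic,
// side-effect-free code between steps matters.
function makeMemo(cache: unknown[]) {
  let i = 0;
  return async function memo<T>(fn: () => Promise<T>): Promise<T> {
    const idx = i++;
    if (idx < cache.length) return cache[idx] as T; // replay: return cached value
    const value = await fn();
    cache[idx] = value; // first run: execute and record
    return value;
  };
}
```

Because cache entries are matched by position rather than by name, conditionally skipping a call on replay would mis-align every later entry, mirroring the "rules of hooks" constraint in React.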

blixt•7mo ago
I see the API rarely mentions exact message structure (system prompt, assistant/user history, etc.) or the choice of model (other than defaultLanguageModel). And it's not immediately clear to me how `toolbox.pickAndRun` can access any context from an ongoing agentic flow other than within the one prompt. But this is just from skimming the docs, maybe all of this is supported?

The reason I ask is that I've had a lot of success using different models for different tasks, constructing the system prompt specifically for each task, and also choosing between the "default" long assistant/tool_call/user/(repeat) message history vs. constantly pruning it (bad for caching but sometimes good for performance). And it would be nice to know that a library like this allows experimenting with these strategies.

gabrielruttner•7mo ago
gabe, hatchet cofounder here. thanks for this feedback and i agree!

under the hood we're using the vercel ai sdk to make tool calls, so this is easily extended [1]. this is the only "opinionated" api for calling llm apis that is "bundled" within the sdk, and we were torn on how to expose it for this exact reason, but since it's so common we decided to include it.

some things we're considering are overloading `defaultLanguageModel` with a map for different use cases, or allowing users to "eject" the tool picker and customize it as needed. i've opened a discussion [2] to track this.

[1] https://github.com/hatchet-dev/pickaxe/blob/main/sdk/src/cli...

[2] https://github.com/hatchet-dev/pickaxe/discussions/3
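The "map for different use cases" idea floated here could look something like the sketch below. The task names, model names, and `pickModel` helper are all invented for illustration; nothing here is a Pickaxe API:

```typescript
// Per-task model selection instead of a single defaultLanguageModel:
// each step of an agent looks up the model suited to its task, with
// optional per-run overrides for experimentation.
type Task = "extract" | "summarize" | "review";

const modelForTask: Record<Task, string> = {
  extract: "small-fast-model",      // hypothetical model identifiers
  summarize: "mid-tier-model",
  review: "large-reasoning-model",
};

function pickModel(task: Task, overrides?: Partial<Record<Task, string>>): string {
  return overrides?.[task] ?? modelForTask[task];
}
```

This keeps model choice a plain data concern: swapping models per task (or A/B testing one task's model) is a one-line override rather than a framework change.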

cmdtab•7mo ago
I think providing examples and sample code is better than tying your API to AI sdk.

Because AI providers iterate on their APIs so fast, many features arrive in AI SDK weeks or months late (support for OpenAI computer use has been pending forever, for example).

I like the current API where you can wait for an event. Similarly, it would be great to have an API for streaming and receiving messages where everything else is handled by the user, so they could use AI SDK and stream the end response manually.

blixt•7mo ago
Appreciate it, looking forward to see how Pickaxe evolves!
golergka•7mo ago
Fantastic. That's exactly what I wanted to make for a long time but never got around to, writing ad-hoc, lacking, overlapping stuff each time.
movedx01•7mo ago
This is great, and I keep my fingers crossed for Hatchet!

One use case I imagine is key here is background/async agents, OpenAI Codex/Jules style, so it's great that I can durably run them with Pickaxe (btw I believe I've read somewhere, in the temporal docs or some webinar, that Codex was built on that ;). But how do I get that real-time, resumable message stream back to the client? The user might reload the page or return after 15 minutes, etc. I wasn't able to think of an elegant way to model this in a distributed system.

gabrielruttner•7mo ago
perfect use case, and this was one of the reasons we built pickaxe. we have a number of coding agent/pr review platforms powered by hatchet with similar patterns already... more to come on the compute side for this use case soon

we'll have agent->client streaming on the very short-term roadmap (order of weeks), but haven't broadly rolled it out since it's not 100% ready for prime time.

we do already have wait-for-event support for client->agent eventing [1] in this release!

[1] https://pickaxe.hatchet.run/patterns/human-in-the-loop

cmdtab•7mo ago
Support for streaming events was pretty much my first question. Any way we could be beta testers? ༼ つ ◕_◕ ༽つ
gabrielruttner•7mo ago
100%, shoot me a note at gabe [at] hatchet [dot] run and we can share some details on the signatures that exist but are going to change
awaseem•7mo ago
Love to see more frameworks like this in the TypeScript ecosystem! How does this compare to Mastra: https://mastra.ai/
gabrielruttner•7mo ago
thanks! we think of mastra and other frameworks as "batteries included" for patterns like memory and reasoning. this is great for many but not all projects. i think mastra is doing a great job balancing some of this by simply wrapping vercel's ai sdk (we took some inspiration here in our tool picker, and it's our recommendation for llm calls).

we're leaning away from being a framework in favor of being a library specifically because we're seeing teams looking to implement their own business logic for most core agentic capabilities where things like concurrency, fairness, or resource contention become problematic (think many agents reading 1000s of documents in parallel).

unlike most frameworks, we've been working on the orchestrator, hatchet [1], first for over a year, and are basing these patterns on what we've seen our most successful companies already doing.

put shortly - pickaxe brings orchestration and best practices, but you're free to implement to your requirements.

[1] https://github.com/hatchet-dev/hatchet

j_rosenthal•7mo ago
The library name is confusing given https://pickaxe.co/, a nicely done low-code platform for building/monetizing chatbots and agents that's been around for 2.5 years or so.

(No connection to pickaxe.co other than using the platform)

nathanielmhld•7mo ago
Agreed, but great job nonetheless!
muratsu•7mo ago
How does this compare to agent-kit by inngest?
abelanger•7mo ago
(we haven't looked too deeply into agent-kit, so this is based on my impression from reading the docs)

At a high level, in Pickaxe agents are just functions that execute durably, where you write the function for their control loop. With agent-kit, agents execute in fully "autonomous" mode where they automatically pick the next tool. In our experience this isn't how agents should be architected: you generally want them to be more constrained than that, even for somewhat autonomous agents.

Also to compare Inngest vs Hatchet (the underlying execution engines) more directly:

- Hatchet is built for stateful container-based runtimes like Kubernetes, Fly.io, Railway, etc. Inngest is a better choice if you're deploying your agent into a serverless environment like Vercel.

- We've invested quite a bit more in self-hosting (https://docs.hatchet.run/self-hosting), open source (MIT license), and benchmarking (https://docs.hatchet.run/self-hosting/benchmarking).

Can also compare specific features if there's something you're curious about, though the feature sets are very overlapping.

gerardosuarez•7mo ago
It would be great to have a section in the README about how the code looks without using the library, contrasting it with the example you already have. I would need a significant time-saving reason to use a new external library. This is because, for new libraries like yours, we don't know how long you plan to support it. For that reason, using an external library for a core part of my business is a huge risk for me.

My use case: cursor for open-source terminal-based coding agents.

necatiozmen•7mo ago
Cool work, durable execution and resumability are hugely underrated problems in the agent space, especially once you move beyond toy demos. Love the checkpointing + waitFor pattern, and the Hatchet integration makes a lot of sense for infra-heavy setups.

I’m one of the maintainers of VoltAgent, a TypeScript framework focused on agent composition, modular memory/tool flows, and built-in observability (tracing, input/output history, ...) (https://github.com/VoltAgent/voltagent)

We’re solving a different layer of the stack, more on the orchestration/dev experience side.

Definitely bookmarking this.