frontpage.

Why there is no official statement from Substack about the data leak

https://techcrunch.com/2026/02/05/substack-confirms-data-breach-affecting-email-addresses-and-pho...
1•witnessme•52s ago•1 comments

Effects of Zepbound on Stool Quality

https://twitter.com/ScottHickle/status/2020150085296775300
1•aloukissas•4m ago•0 comments

Show HN: Seedance 2.0 – The Most Powerful AI Video Generator

https://seedance.ai/
1•bigbromaker•7m ago•0 comments

Ask HN: Do we need "metadata in source code" syntax that LLMs will never delete?

1•andrewstuart•13m ago•1 comments

Pentagon cutting ties w/ "woke" Harvard, ending military training & fellowships

https://www.cbsnews.com/news/pentagon-says-its-cutting-ties-with-woke-harvard-discontinuing-milit...
2•alephnerd•15m ago•1 comments

Can Quantum-Mechanical Description of Physical Reality Be Considered Complete? [pdf]

https://cds.cern.ch/record/405662/files/PhysRev.47.777.pdf
1•northlondoner•16m ago•1 comments

Kessler Syndrome Has Started [video]

https://www.tiktok.com/@cjtrowbridge/video/7602634355160206623
1•pbradv•18m ago•0 comments

Complex Heterodynes Explained

https://tomverbeure.github.io/2026/02/07/Complex-Heterodyne.html
3•hasheddan•19m ago•0 comments

EVs Are a Failed Experiment

https://spectator.org/evs-are-a-failed-experiment/
2•ArtemZ•30m ago•4 comments

MemAlign: Building Better LLM Judges from Human Feedback with Scalable Memory

https://www.databricks.com/blog/memalign-building-better-llm-judges-human-feedback-scalable-memory
1•superchink•31m ago•0 comments

CCC (Claude's C Compiler) on Compiler Explorer

https://godbolt.org/z/asjc13sa6
2•LiamPowell•33m ago•0 comments

Homeland Security Spying on Reddit Users

https://www.kenklippenstein.com/p/homeland-security-spies-on-reddit
3•duxup•36m ago•0 comments

Actors with Tokio (2021)

https://ryhl.io/blog/actors-with-tokio/
1•vinhnx•37m ago•0 comments

Can graph neural networks for biology realistically run on edge devices?

https://doi.org/10.21203/rs.3.rs-8645211/v1
1•swapinvidya•49m ago•1 comments

Deeper into the sharing of one air conditioner for 2 rooms

1•ozzysnaps•51m ago•0 comments

Weatherman introduces fruit-based authentication system to combat deep fakes

https://www.youtube.com/watch?v=5HVbZwJ9gPE
3•savrajsingh•52m ago•0 comments

Why Embedded Models Must Hallucinate: A Boundary Theory (RCC)

http://www.effacermonexistence.com/rcc-hn-1-1
1•formerOpenAI•54m ago•2 comments

A Curated List of ML System Design Case Studies

https://github.com/Engineer1999/A-Curated-List-of-ML-System-Design-Case-Studies
3•tejonutella•58m ago•0 comments

Pony Alpha: New free 200K context model for coding, reasoning and roleplay

https://ponyalpha.pro
1•qzcanoe•1h ago•1 comments

Show HN: Tunbot – Discord bot for temporary Cloudflare tunnels behind CGNAT

https://github.com/Goofygiraffe06/tunbot
2•g1raffe•1h ago•0 comments

Open Problems in Mechanistic Interpretability

https://arxiv.org/abs/2501.16496
2•vinhnx•1h ago•0 comments

Bye Bye Humanity: The Potential AMOC Collapse

https://thatjoescott.com/2026/02/03/bye-bye-humanity-the-potential-amoc-collapse/
3•rolph•1h ago•0 comments

Dexter: Claude-Code-Style Agent for Financial Statements and Valuation

https://github.com/virattt/dexter
1•Lwrless•1h ago•0 comments

Digital Iris [video]

https://www.youtube.com/watch?v=Kg_2MAgS_pE
1•vermilingua•1h ago•0 comments

Essential CDN: The CDN that lets you do more than JavaScript

https://essentialcdn.fluidity.workers.dev/
1•telui•1h ago•1 comments

They Hijacked Our Tech [video]

https://www.youtube.com/watch?v=-nJM5HvnT5k
2•cedel2k1•1h ago•0 comments

Vouch

https://twitter.com/mitchellh/status/2020252149117313349
41•chwtutha•1h ago•6 comments

HRL Labs in Malibu laying off 1/3 of their workforce

https://www.dailynews.com/2026/02/06/hrl-labs-cuts-376-jobs-in-malibu-after-losing-government-work/
4•osnium123•1h ago•1 comments

Show HN: High-performance bidirectional list for React, React Native, and Vue

https://suhaotian.github.io/broad-infinite-list/
2•jeremy_su•1h ago•0 comments

Show HN: I built a Mac screen recorder Recap.Studio

https://recap.studio/
1•fx31xo•1h ago•1 comments

Show HN: Realizing Karpathy's dream of Natural Language Programming

3•amthewiz•3mo ago
https://github.com/playbooks-ai/playbooks

Comments

amthewiz•3mo ago
Andrej Karpathy posted in early 2023 (https://x.com/karpathy/status/1617979122625712128) -

> "The hottest new programming language is English"

I've built a Natural Language Programming stack for building AI Agents. I think it is the first true Software 3.0 stack.

The core idea: Use LLMs as CPUs! You can finally step-debug through your prompts and get reliable, verifiable execution. The stack includes a new language, a compiler, and developer tooling such as a VS Code extension.

Programs are written as markdown. H1 headings are agents, and H2 headings are playbooks (i.e. functions), which can be natural language or Python. All playbooks in an agent run on the same call stack, and natural language and Python playbooks can call each other.

Quick intro video: https://www.youtube.com/watch?v=ZX2L453km6s

Github: https://github.com/playbooks-ai/playbooks (MIT license)

Documentation: https://playbooks-ai.github.io/playbooks-docs/getting-starte...

Project website: runplaybooks.ai

Example Playbooks program -

    # Country facts agent
    This agent prints interesting facts about nearby countries

    ## Main
    ### Triggers
    - At the beginning
    ### Steps
    - Ask user what $country they are from
    - If user did not provide a country, engage in a conversation and gently nudge them to provide a country
    - List 5 $countries near $country
    - Tell the user the nearby $countries
    - Inform the user that you will now tell them some interesting facts about each of the countries
    - process_countries($countries)
    - End program

    ```python
    from typing import List

    @playbook
    async def process_countries(countries: List[str]):
        for country in countries:
            # Calls the natural language playbook 'GetCountryFact' for each country
            fact = await GetCountryFact(country)
            await Say("user", f"{country}: {fact}")
    ```

    ## GetCountryFact($country)
    ### Steps
    - Return an unusual historical fact about $country

There are a bunch of very interesting capabilities. A quick sample -

- "Queue calls to extract table of contents for each candidate file" - Effortless calling MCP tools, multi-threading, artifact management, context management

- "Ask Accountant what the tax rate would be" is how you communicate with other agents

- You can mix procedural natural language playbooks, ReAct playbooks, raw prompt playbooks, Python playbooks, and external playbooks like MCP tools seamlessly on the same call stack

- "Have a meeting with Chef, Marketing expert and the user to design a new menu" is how you can spawn multi-agent workflows, where each agent follows their own playbook for the meeting

- Coming soon: Observer agents (agents observing other agents - automated memory storage, verify/certify execution, steer observed agents), dynamic playbook generation for procedural memory, etc.
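
To make the cross-agent communication bullet concrete, here is a rough illustrative sketch in the same markdown format as the example above. The agent names, playbooks, and exact routing are made up for illustration, not taken from the repo:

    # Shopping assistant
    This agent helps the user estimate the total cost of a purchase

    ## Main
    ### Triggers
    - At the beginning
    ### Steps
    - Ask user what $item they want to buy and in which $state
    - Ask Accountant what the $taxRate would be for $item in $state
    - Tell the user the $taxRate and the estimated total cost
    - End program

    # Accountant
    This agent answers tax questions for other agents

    ## GetTaxRate($item, $state)
    ### Steps
    - Return the sales tax rate that applies to $item in $state

The intent (per the bullets above) is that the "Ask Accountant ..." step becomes a cross-agent call to the Accountant's public playbook, with the runtime handling the routing.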

I hope this changes how we build AI agents for the better. Looking forward to the discussion! I'll be in the comments.

amthewiz•3mo ago
It has puzzled me why no one has done this already, given how good LLMs are at language now.

The short answer is probably that it is hard to get this to actually work. There are many open questions that have to be tackled simultaneously -

- What is the right balance between relying on LLMs to do the right thing vs. the runtime around them? For example, I went back and forth a few times on having the LLM manage the call stack as one playbook calls another. I finally decided it is most reliable to let the runtime take care of that (rough sketch below).

- Context engineering - what to put in the prompt and in what order, how to represent state, how to handle artifacts, how to use the LLM cache optimally as context grows, how to "unwind" context as playbook calls return (sketched below), how to compact specific types of information, how to make sure important context isn't lost, etc.

- LLMs today have vastly different capabilities than they did 2 years ago. I have had to rewrite the whole stack from scratch 4 times to adjust. It wasn't fun, but it had to be done.

- Language(s): How to represent the pseudocode so that it is both fluid natural language and a capable programming language? How to transform it so that it can be executed reliably through LLMs? How to NOT lose the flexibility and fluidity in the process (it is easy to convert to a graph, as LangGraph does, but then you are stuck with that fixed control flow)? What semantic compiler to build for that transition, and what primitives to use for the compiled language, which I call Playbooks Assembly Language [1]?

- Agents and multi-agent system considerations - how to represent agents and how they should communicate. Agents are classes, and they expose public playbooks that other agents can call. Agents can send natural language messages to each other and engage in conversations, and they can call multi-party meetings. How can the behavior across all these interaction patterns be defined so that it stays intuitive (also sketched below)? For example, a meeting's lifetime is tied to its "meeting: true" playbook: an agent exits a meeting simply by returning from that playbook, and the meeting itself ends when the host returns from its meeting playbook.

- Which LLMs to support? Go for "bring your own LLM" or restrict the set, and if restricted, to which LLM? LLM selection affects how all the internal prompts are implemented, so prompt building had to happen simultaneously with LLM selection.

It felt like playing an N-dimensional game of chess!
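
For the curious, here is a minimal Python sketch of what "let the runtime take care of that" means for the call stack. The class and method names are hypothetical, not the actual Playbooks internals; the point is only that frames are pushed and popped deterministically by the runtime rather than tracked by the LLM.

    from dataclasses import dataclass, field

    @dataclass
    class Frame:
        playbook: str                               # playbook being executed
        vars: dict = field(default_factory=dict)    # $variables local to this call

    class PlaybookStack:
        # Hypothetical runtime-owned call stack; the LLM never mutates it directly.
        def __init__(self):
            self.frames: list[Frame] = []

        def call(self, playbook: str, **kwargs) -> Frame:
            # The runtime pushes a frame whenever one playbook calls another.
            frame = Frame(playbook, dict(kwargs))
            self.frames.append(frame)
            return frame

        def ret(self, value=None):
            # ...and pops it when that playbook returns, so nesting can never drift.
            frame = self.frames.pop()
            return frame.playbook, value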
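And a similarly hypothetical sketch of the "unwind context as playbook calls return" idea - replace a finished call's intermediate turns with a one-line summary so the prompt stays small and the earlier prefix stays cache-friendly:

    class ContextWindow:
        # Hypothetical sketch, not the real implementation.
        def __init__(self):
            self.messages: list[str] = []
            self.marks: list[int] = []    # index where each in-flight playbook call began

        def enter_playbook(self, name: str):
            self.marks.append(len(self.messages))
            self.messages.append(f"[call] {name}")

        def add(self, message: str):
            self.messages.append(message)

        def exit_playbook(self, name: str, result: str):
            # "Unwind": everything since the call site collapses to a compact summary,
            # so earlier messages are never rewritten (good for provider prompt caches).
            start = self.marks.pop()
            self.messages[start:] = [f"[done] {name} -> {result}"]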
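Finally, a toy sketch of the "agents are classes with public playbooks" and meeting-lifetime points; again, the names are made up for illustration, not the real API:

    class Agent:
        # Hypothetical sketch of "agents are classes", not the real implementation.
        def __init__(self, name: str):
            self.name = name
            self.public_playbooks = {}    # playbooks other agents are allowed to call

        def expose(self, playbook_name: str, playbook) -> None:
            self.public_playbooks[playbook_name] = playbook

        async def ask(self, other: "Agent", playbook_name: str, **kwargs):
            # Cross-agent calls go through the other agent's public surface.
            return await other.public_playbooks[playbook_name](**kwargs)

    async def run_meeting(host_meeting_playbook, host: Agent, attendees: list[Agent]):
        # The meeting lives exactly as long as the host's "meeting: true" playbook:
        # when the host returns from it, the meeting ends for all attendees.
        return await host_meeting_playbook(host, attendees)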

[1] https://playbooks-ai.github.io/playbooks-docs/reference/play...