
Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
1•jesperordrup•4m ago•0 comments

Write for Your Readers Even If They Are Agents

https://commonsware.com/blog/2026/02/06/write-for-your-readers-even-if-they-are-agents.html
1•ingve•5m ago•0 comments

Knowledge-Creating LLMs

https://tecunningham.github.io/posts/2026-01-29-knowledge-creating-llms.html
1•salkahfi•5m ago•0 comments

Maple Mono: Smooth your coding flow

https://font.subf.dev/en/
1•signa11•12m ago•0 comments

Sid Meier's System for Real-Time Music Composition and Synthesis

https://patents.google.com/patent/US5496962A/en
1•GaryBluto•20m ago•1 comment

Show HN: Slop News – HN front page now, but it's all slop

https://dosaygo-studio.github.io/hn-front-page-2035/slop-news
3•keepamovin•21m ago•1 comment

Show HN: Empusa – Visual debugger to catch and resume AI agent retry loops

https://github.com/justin55afdfdsf5ds45f4ds5f45ds4/EmpusaAI
1•justinlord•23m ago•0 comments

Show HN: Bitcoin wallet on NXP SE050 secure element, Tor-only open source

https://github.com/0xdeadbeefnetwork/sigil-web
2•sickthecat•25m ago•1 comment

White House Explores Opening Antitrust Probe on Homebuilders

https://www.bloomberg.com/news/articles/2026-02-06/white-house-explores-opening-antitrust-probe-i...
1•petethomas•26m ago•0 comments

Show HN: MindDraft – AI task app with smart actions and auto expense tracking

https://minddraft.ai
2•imthepk•31m ago•0 comments

How do you estimate AI app development costs accurately?

1•insights123•32m ago•0 comments

Going Through Snowden Documents, Part 5

https://libroot.org/posts/going-through-snowden-documents-part-5/
1•goto1•32m ago•0 comments

Show HN: MCP Server for TradeStation

https://github.com/theelderwand/tradestation-mcp
1•theelderwand•35m ago•0 comments

Canada unveils auto industry plan in latest pivot away from US

https://www.bbc.com/news/articles/cvgd2j80klmo
3•breve•36m ago•1 comment

The essential Reinhold Niebuhr: selected essays and addresses

https://archive.org/details/essentialreinhol0000nieb
1•baxtr•39m ago•0 comments

Rentahuman.ai Turns Humans into On-Demand Labor for AI Agents

https://www.forbes.com/sites/ronschmelzer/2026/02/05/when-ai-agents-start-hiring-humans-rentahuma...
1•tempodox•40m ago•0 comments

StovexGlobal – Compliance Gaps to Note

1•ReviewShield•43m ago•1 comment

Show HN: Afelyon – Turns Jira tickets into production-ready PRs (multi-repo)

https://afelyon.com/
1•AbduNebu•44m ago•0 comments

Trump says America should move on from Epstein – it may not be that easy

https://www.bbc.com/news/articles/cy4gj71z0m0o
6•tempodox•45m ago•3 comments

Tiny Clippy – A native Office Assistant built in Rust and egui

https://github.com/salva-imm/tiny-clippy
1•salvadorda656•49m ago•0 comments

LegalArgumentException: From Courtrooms to Clojure – Sen [video]

https://www.youtube.com/watch?v=cmMQbsOTX-o
1•adityaathalye•52m ago•0 comments

US moves to deport 5-year-old detained in Minnesota

https://www.reuters.com/legal/government/us-moves-deport-5-year-old-detained-minnesota-2026-02-06/
8•petethomas•55m ago•3 comments

If you lose your passport in Austria, head for McDonald's Golden Arches

https://www.cbsnews.com/news/us-embassy-mcdonalds-restaurants-austria-hotline-americans-consular-...
1•thunderbong•1h ago•0 comments

Show HN: Mermaid Formatter – CLI and library to auto-format Mermaid diagrams

https://github.com/chenyanchen/mermaid-formatter
1•astm•1h ago•0 comments

RFCs vs. READMEs: The Evolution of Protocols

https://h3manth.com/scribe/rfcs-vs-readmes/
3•init0•1h ago•1 comment

Kanchipuram Saris and Thinking Machines

https://altermag.com/articles/kanchipuram-saris-and-thinking-machines
1•trojanalert•1h ago•0 comments

Chinese chemical supplier causes global baby formula recall

https://www.reuters.com/business/healthcare-pharmaceuticals/nestle-widens-french-infant-formula-r...
2•fkdk•1h ago•0 comments

I've used AI to write 100% of my code for a year as an engineer

https://old.reddit.com/r/ClaudeCode/comments/1qxvobt/ive_used_ai_to_write_100_of_my_code_for_1_ye...
2•ukuina•1h ago•1 comment

Looking for 4 Autistic Co-Founders for AI Startup (Equity-Based)

1•au-ai-aisl•1h ago•1 comment

AI-native capabilities, a new API Catalog, and updated plans and pricing

https://blog.postman.com/new-capabilities-march-2026/
1•thunderbong•1h ago•0 comments

Show HN: ART – a new open-source RL framework for training agents

https://github.com/OpenPipe/ART
116•kcorbitt•9mo ago
Hey HN, I wanted to share a new project we've been working on for the last couple of months called ART (https://github.com/OpenPipe/ART).

ART is a new open-source framework for training agents using reinforcement learning (RL). RL allows you to train an agent to perform better at any task whose outcome can be measured and quantified.

There are many excellent projects focused on training LLMs with RL, such as GRPOTrainer (https://huggingface.co/docs/trl/main/en/grpo_trainer) and verl (https://github.com/volcengine/verl). We've used these frameworks extensively for customer-facing projects at OpenPipe, but grew frustrated with some key limitations:

- Multi-turn workflows, where the agent calls a tool, gets a response, and calls another, are not well supported (see the sketch after this list). This makes them a non-starter for any task that requires an agent to perform a sequence of actions.

- Other frameworks typically have low GPU efficiency. They may require multiple H100 GPUs just to train a small 7B parameter model, and aren't able to keep the GPUs busy consistently during both the "rollout" and "training" phases of the training loop.

- Existing frameworks are typically not a convenient shape for integrating with existing agentic codebases: their trainers expect you to call raw text-completion endpoints and don't automatically provide industry-standard chat-completion APIs.
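
To make the multi-turn point concrete, here's the shape of the loop we want a trainer to support end-to-end. This is only a sketch: the tool definition, `run_tool`, and model name are made-up placeholders, not part of ART.

    from openai import OpenAI

    client = OpenAI()  # during training, base_url would point at the trainer's inference server

    # Placeholder tool; a real agent would define something task-specific.
    TOOLS = [{"type": "function", "function": {
        "name": "search_inbox",
        "parameters": {"type": "object",
                       "properties": {"query": {"type": "string"}}}}}]

    def run_tool(call):
        return "no results"  # placeholder: dispatch on call.function.name in real code

    messages = [{"role": "user", "content": "Find the invoice from ACME"}]
    while True:
        response = client.chat.completions.create(
            model="gpt-4o-mini", messages=messages, tools=TOOLS)
        message = response.choices[0].message
        messages.append(message)
        if not message.tool_calls:
            break  # final answer reached; in RL, the whole trajectory gets one reward
        for call in message.tool_calls:
            messages.append({"role": "tool", "tool_call_id": call.id,
                             "content": run_tool(call)})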

ART is designed to address these limitations and make it easy to train high-quality agents. We've also shared many details and practical lessons learned in this post, which walks through a demo of training an email research agent that outperforms o3 (https://openpipe.ai/blog/art-e-mail-agent). You can also find out more about ART's architecture in our announcement post (https://openpipe.ai/blog/art-trainer-a-new-rl-trainer-for-ag...).

Happy to answer any questions you have!

Comments

kcorbitt•9mo ago
Figured now was a good time to post this since we recently got surprisingly good results on training an email research agent. Link is above, but will put it here as well since I think it's a good example of RL's promise: https://openpipe.ai/blog/art-e-mail-agent
bradhilton•9mo ago
Contributor here. We developed the Agent Reinforcement Trainer (ART) library to make it easy to train LLMs for anything.

No callbacks or straitjacket flows. Instead we serve an OpenAI API-compatible endpoint that you can use as a drop-in replacement for any proprietary APIs you may be hitting.

After collecting responses from the inference API, you can tune the model with your own custom rewards and repeat the process as long as you like, until performance converges. We believe this level of flexibility will make it easier for you to train state-of-the-art models for your own use cases, much like Kyle's new email agent[1].
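
For a sense of what that loop looks like in practice, here's a rough sketch. The endpoint URL, model name, reward, and `train_on` are hypothetical stand-ins, not ART's literal API; see the repo for the real thing.

    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8000/v1",  # hypothetical ART-served endpoint
                    api_key="unused")

    def score(text: str) -> float:
        return 1.0 if "DONE" in text else 0.0  # your own custom reward goes here

    def train_on(batch):
        pass  # placeholder for handing scored rollouts back to the trainer

    for _ in range(10):  # repeat as long as you like, until performance converges
        batch = []
        for prompt in ["task 1", "task 2"]:  # your own task inputs
            resp = client.chat.completions.create(
                model="my-agent", messages=[{"role": "user", "content": prompt}])
            text = resp.choices[0].message.content
            batch.append((prompt, text, score(text)))
        train_on(batch)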

Also happy to answer any questions you have about the framework.

[1] https://openpipe.ai/blog/art-e-mail-agent

tcdent•9mo ago
I really like this concept.

Do you have documentation for the API response from the `/_train_model` endpoint?

bradhilton•9mo ago
Hi, we don't have reliable documentation for the HTTP API endpoints yet, mostly because they are still subject to change.

However, to briefly provide some context: `/_train_model` returns a stream of line-delimited JSON objects, one per gradient step as the model trains on the provided trajectories, so the client can monitor progress. The final version of this endpoint may offer both streaming and non-streaming responses, and/or return a "training job" that can be polled instead.
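
For example, a client might consume that stream like so (hypothetical sketch: the URL and payload schema are placeholders, since the endpoint is still in flux):

    import json
    import requests

    with requests.post("http://localhost:8000/_train_model",
                       json={"trajectories": []},  # placeholder payload
                       stream=True) as resp:
        for line in resp.iter_lines():
            if line:
                step = json.loads(line)  # one JSON object per gradient step
                print(step)              # e.g. log loss per step to monitor progress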

someguy101010•9mo ago
Thanks for sharing this! A couple of questions come to mind:

- How does training with RL differ from fine tuning?

- When would it make sense to fine tune instead of using RL?

kcorbitt•9mo ago
Ok, good questions here.

By fine-tuning in this context I assume you mean "supervised fine-tuning", or SFT. SFT trains a model to produce a specific string of output tokens, given an input. With SFT, if you were trying to train an assistant to solve math problems using a code interpreter, you might train it on a dataset that looks like:

    input: 'What is 934+1208'  
    output: `print(934+1208)`

    input: 'how many "r"s in strawberry'
    output: `print(len([l for l in "strawberry" if l == 'r']))`
etc, etc.

RL, on the other hand, trains the model not to produce a specific string of output tokens, but to produce an output that maximizes some reward function (you get to define the reward).

For the example above, you might create the following dataset for RL training:

    input: 'What is 934+1208'
    ground_truth: 2142

    input: 'how many "r"s in strawberry'
    ground_truth: 3
You would then train the model to write python code that produces the ground_truth output. Your training code would take the model's output, run the python it produced, and check whether the result matches the expected ground_truth. Importantly, this doesn't require you to actually write the code that solves the problem (technically, you don't even have to know whether it's solvable!). Over time, the training loop makes the model more likely to produce outputs that earn high rewards, which hopefully means it gets better at producing valid and applicable python.
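
In code, that check might look something like this (a sketch only; a real trainer would sandbox the execution rather than run untrusted model output directly):

    import subprocess

    def reward(model_code: str, ground_truth) -> float:
        try:
            result = subprocess.run(["python", "-c", model_code],
                                    capture_output=True, text=True, timeout=5)
        except subprocess.TimeoutExpired:
            return 0.0  # hung or looping code earns nothing
        return 1.0 if result.stdout.strip() == str(ground_truth) else 0.0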

This is useful in lots of domains where it's easier to check the answer than actually produce it. In the blog post[1] linked above, we train the agent to effectively use keyword search to try to find the correct emails in an inbox. As the model trainer, I didn't actually know what the right strategy was to choose keywords that would most quickly find the relevant email, but through training with RL, the model was able to figure it out on its own!

[1]: https://openpipe.ai/blog/art-e-mail-agent?refresh=1746030513...

someguy101010•9mo ago
Thank you for the detailed response!
jeffchuber•9mo ago
the table comparing models is a really great way to show things off here
pama•9mo ago
Was the name influenced by the ship in the Murderbot Diaries?
schainks•9mo ago
Seconded.
gitroom•9mo ago
Perfect, I've always wanted an easier way to mess with RL frameworks. Gonna mess around with this asap.
bradhilton•9mo ago
Awesome! If you run into any problems or have questions, feel free to open an issue or drop by the Discord server [1].

[1] https://discord.gg/zbBHRUpwf4