frontpage.

Reverse Engineering Medium.com's Editor: How Copy, Paste, and Images Work

https://app.writtte.com/read/gP0H6W5
1•birdculture•2m ago•0 comments

Go 1.22, SQLite, and Next.js: The "Boring" Back End

https://mohammedeabdelaziz.github.io/articles/go-next-pt-2
1•mohammede•8m ago•0 comments

Laibach the Whistleblowers [video]

https://www.youtube.com/watch?v=c6Mx2mxpaCY
1•KnuthIsGod•9m ago•1 comments

I replaced the front page with AI slop and honestly it's an improvement

https://slop-news.pages.dev/slop-news
1•keepamovin•13m ago•1 comments

Economists vs. Technologists on AI

https://ideasindevelopment.substack.com/p/economists-vs-technologists-on-ai
1•econlmics•15m ago•0 comments

Life at the Edge

https://asadk.com/p/edge
2•tosh•21m ago•0 comments

RISC-V Vector Primer

https://github.com/simplex-micro/riscv-vector-primer/blob/main/index.md
3•oxxoxoxooo•25m ago•1 comments

Show HN: Invoxo – Invoicing with automatic EU VAT for cross-border services

2•InvoxoEU•25m ago•0 comments

A Tale of Two Standards, POSIX and Win32 (2005)

https://www.samba.org/samba/news/articles/low_point/tale_two_stds_os2.html
2•goranmoomin•29m ago•0 comments

Ask HN: Has the Downfall of SaaS Started?

3•throwaw12•30m ago•0 comments

Flirt: The Native Backend

https://blog.buenzli.dev/flirt-native-backend/
2•senekor•32m ago•0 comments

OpenAI's Latest Platform Targets Enterprise Customers

https://aibusiness.com/agentic-ai/openai-s-latest-platform-targets-enterprise-customers
1•myk-e•35m ago•0 comments

Goldman Sachs taps Anthropic's Claude to automate accounting, compliance roles

https://www.cnbc.com/2026/02/06/anthropic-goldman-sachs-ai-model-accounting.html
2•myk-e•37m ago•5 comments

Ai.com bought by Crypto.com founder for $70M in biggest-ever website name deal

https://www.ft.com/content/83488628-8dfd-4060-a7b0-71b1bb012785
1•1vuio0pswjnm7•38m ago•1 comments

Big Tech's AI Push Is Costing More Than the Moon Landing

https://www.wsj.com/tech/ai/ai-spending-tech-companies-compared-02b90046
4•1vuio0pswjnm7•40m ago•0 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
2•1vuio0pswjnm7•42m ago•0 comments

Suno, AI Music, and the Bad Future [video]

https://www.youtube.com/watch?v=U8dcFhF0Dlk
1•askl•44m ago•2 comments

Ask HN: How are researchers using AlphaFold in 2026?

1•jocho12•46m ago•0 comments

Running the "Reflections on Trusting Trust" Compiler

https://spawn-queue.acm.org/doi/10.1145/3786614
1•devooops•51m ago•0 comments

Watermark API – $0.01/image, 10x cheaper than Cloudinary

https://api-production-caa8.up.railway.app/docs
1•lembergs•53m ago•1 comments

Now send your marketing campaigns directly from ChatGPT

https://www.mail-o-mail.com/
1•avallark•56m ago•1 comments

Queueing Theory v2: DORA metrics, queue-of-queues, chi-alpha-beta-sigma notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•1h ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
5•o8vm•1h ago•1 comments

Haniri: A live autonomous world where AI agents survive or collapse

https://www.haniri.com
1•donangrey•1h ago•1 comments

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•1h ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•1h ago•0 comments

Geist Pixel

https://vercel.com/blog/introducing-geist-pixel
2•helloplanets•1h ago•0 comments

Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•1h ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•1h ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•1h ago•0 comments

Show HN: I made a human-in-the-loop system for tuning LLMs in beta

https://www.joinoneshot.com/
2•gitpullups•1mo ago
OneShot is an API that routes failed LLM outputs to trained humans, returns corrected outputs or prompt injections, and stores the edits as structured training data.

Privacy Note: This product is not built for privacy yet. The current use case is internal tools or beta features where users aren't promised privacy. The point is that this tool is NOT FOR PRODUCTION.

In the future, there will be a feature for anonymizing all private information automatically.

Problem: My project this year was a tool that helps pediatricians file their insurance claims with AI assistance.

Niche industries like this require a ton of examples, fine-tuning, and re-prompting before you can give them a product that actually works. Then the output has to be monitored to some extent (with the hospital's consent, of course) so that small model changes or edge cases don't break things, at least for the first couple of months.

This monitoring takes months and keeps me distracted from building new features. Every new feature I wanted to ship required the same constant beta monitoring to reach a reliable state, including internal tools and automations I needed to work dependably. That's when I started wishing I had an AI engineer/architect monitoring outputs 24/7 for every new feature's first month. In real-world software, programs need to break less. Like almost never. And current AI models often don't quite get us there: from 90% to 100%, or 95% to 100%. We waste months before shipping new features trying to tweak them internally, without the model being able to improve live in the real world.

In niche agent environments, you sometimes need an actual human to jump in.

How it works: First, a beta deployment. You deploy your AI to handle X business use case in beta or internally.

Each step of your pipeline queries our API with your preferred models, etc.

Then, a human in charge of a batch of outputs will see a flagged output when something goes wrong (we agree up front on what that means). They can then use human judgement to tweak the prompt, try a different model, or provide added context, iterating in multiple parallel threads until the correct output comes out.
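
To make the flow concrete, here is a minimal sketch of what one pipeline step calling such an API might look like in Python. The post doesn't publish an API spec, so the endpoint URL, payload fields, and response keys below are all assumptions, not OneShot's actual interface.

    # Hypothetical sketch of one pipeline step calling a OneShot-style review API.
    # Endpoint URL, payload fields, and response keys are assumptions; the post
    # does not publish an API spec.
    import requests

    ONESHOT_URL = "https://api.example.com/v1/outputs"  # placeholder, not a real endpoint
    API_KEY = "your-api-key"

    def run_step(prompt: str, model_output: str) -> str:
        """Submit one output; get back either the original or a human-corrected version."""
        resp = requests.post(
            ONESHOT_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={
                "prompt": prompt,                # optional: you can send outputs only
                "output": model_output,
                "preferred_models": ["gpt-4o"],  # hypothetical field
                "flag_rules": {"must_be_valid_json": True},  # agreed-on failure criteria
            },
            timeout=30,  # most calls pass through untouched; human review can add 10s+
        )
        resp.raise_for_status()
        data = resp.json()
        # Fall back to the original output when no correction was made.
        return data.get("corrected_output", model_output)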

Second, fine-tuning. You now own a dataset of the prompt changes and output edits that produced that magical output. Thousands of changes and tweaks that can take your model to the next level for each feature sit in your db. This data lets you ship faster, with better guarantees and far less of the manual testing that isn't being rewarded or punished by the real world.
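
As a rough illustration of what one stored edit might look like once flattened into training data: the field names and the chat-style JSONL target below are assumptions, not OneShot's actual schema.

    # Hypothetical shape of one stored correction, flattened into a chat-style
    # fine-tuning example. Field names are illustrative, not OneShot's schema.
    import json

    correction = {
        "original_prompt": "Extract the billing codes from this pediatric visit note...",
        "prompt_edit": "Added: 'Return the codes as a JSON array, no prose.'",
        "original_output": "The codes are 99213 and 90460.",
        "corrected_output": '["99213", "90460"]',
        "reviewer_note": "Output must be machine-parseable JSON.",
    }

    # One way to reuse it: append a supervised fine-tuning example to a JSONL file.
    example = {
        "messages": [
            {"role": "user", "content": correction["original_prompt"]},
            {"role": "assistant", "content": correction["corrected_output"]},
        ]
    }
    with open("corrections.jsonl", "a") as f:
        f.write(json.dumps(example) + "\n")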

Who are the humans? I'm a developer doing the tickets manually, along with technical friends I'm paying out of pocket for now (yes, it IS available 24/7!!!). This is intentionally manual during beta, with clear review guidelines, so we understand the process before trying to hire.

How slow is it? Most of the time no human will touch it, and sometimes a human will take a quick, unnoticeable automated action. In some edge cases you'll feel some noticeable slowing (10s+), but we're looking to accelerate those as well, and the alternative is fully broken output.

Who is it not for? This is not meant for consumer apps, privacy-sensitive production systems, or teams expecting zero human involvement.

Comments

vmitro•1mo ago
Don't laugh, but I think in the (near) future more and more emphasis will be put on the HITL concept as private or self-hosted AI workflows gain interest; it's hard not to (hope for?) the emergence of a movement similar to GNU in the software space, where freely available tooling allows collaborative, federated, HITL-powered fine-tuning of ML models.

Since I also work on a similar concept, where HITL is a first-class citizen, can you tell us a bit more about the underlying technology stack, whether users can host their own models for inference and fine-tuning, how pipelines are defined, and so on?

gitpullups•1mo ago
1. Pipelines are defined on your end. I want to build another option, but for now it is still just queried as an API endpoint. 2. Same as 1, so yes, you can definitely use your own models; you can just send outputs, you don't have to send prompts.
gitpullups•1mo ago
I'm a bit curious what you're working on, and if there might be some interesting connections there. Would you like to speak? You can just book in my calendar through the site.
vmitro•1mo ago
Sorry for the late reply, I'm juggling family / working as a full-time senior resident / final-year specialty trainee in a German hospital / maintaining three side projects. I've looked at your calendar and the timezones are a huge problem: either I get up at 4 AM or book it after the late shift (11 PM here)...

Anyway:

It's an open-source licensed, distributed data orchestration framework designed from the ground up for HITL workflows where correctness matters more than speed (primary field is medicine, but law, etc. could also benefit). It sounds like we're attacking the same problem from complementary angles. You're building the human routing API, I'm building the pipeline infrastructure that defines when and how humans get routed into the loop.

The core idea: pipelines are YAML-defined state machines with explicit correction steps. When a worker (e.g. your LLM endpoint) produces output, the pipeline can pause, send results to a human reviewer, and wait for either approval or corrected data, all as first-class custom protocol messages (based on the Majordomo protocol). The correction protocol has timeout handling, strike counting for repeated failures, and an audit trail that captures every decision point. The YAML can also define how to "steer" the pipeline in case of a correction: it can continue, store the correction, route to a specific step, fail, etc. (combinations are also possible, e.g. store the correction, then continue or jump to another step). A feature-creep itch right now is implementing a parser for a heavily reduced subset of Lucid (the dataflow language) syntax and a transpiler into YAML pipeline definitions.
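
A minimal sketch of what one entry in that audit trail might look like as a Python structure; the field names and steering values are guesses based on this description, not the framework's actual protocol.

    # Rough sketch of a correction/audit record as described above.
    # Field names and steering actions are guesses, not the real protocol.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    import uuid

    @dataclass(frozen=True)
    class CorrectionRecord:
        pipeline_id: uuid.UUID   # shared by every message in one pipeline run
        step: str                # which worker produced the rejected output
        original: str
        corrected: str
        reason: str              # the reviewer's "why"
        action: str              # e.g. "continue", "store", "route:<step>", "fail"
        created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    record = CorrectionRecord(
        pipeline_id=uuid.uuid4(),
        step="keyword_extraction",
        original="{'keywords': []}",
        corrected='{"keywords": ["fever", "otitis media"]}',
        reason="Model returned a Python repr instead of JSON and dropped the keywords.",
        action="store,continue",
    )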

What might interest you: every message in a pipeline shares a UUID, and each correction creates an immutable record of what was changed and why. This is essentially your "structured training data" as a sort of (useful) byproduct of the architecture: you don't extract it after the fact, it's the communication protocol itself. Its intended workflow philosophy is an iterative fine tuning, I guess, with training data for

The framework uses ZeroMQ for binary messaging (sub-millisecond routing overhead) and can run anywhere from edge devices to datacenters. If it speaks TCP/IP and can run Python 3.11+, you can plug it in. Workers are pluggable: your existing model endpoints could be wrapped as the framework's workers in about 20 lines of Python, receiving tasks and returning results through the same correction-aware pipeline. All components of the framework have lifecycle-aware "hooks": when you design your workers in Python, for example, you define them as a class and decorate their methods with @hook("async_init") or @hook("process_message"), and those hooks get executed at each lifecycle event.
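
A rough sketch of what such a wrapped worker might look like; the import, base class, and message attributes are assumptions since the framework isn't published here, and only the @hook("...") names come from the description above.

    # Rough sketch of a wrapped worker based on the lifecycle hooks described above.
    # The import, base class, and message attributes are assumptions; only the
    # @hook("...") names appear in the comment.
    import httpx
    from framework import Worker, hook  # hypothetical import; framework not published here

    class LLMEndpointWorker(Worker):
        """Wraps an existing model endpoint as a correction-aware pipeline worker."""

        @hook("async_init")
        async def connect(self):
            # Create the HTTP client for the wrapped model endpoint once, at startup.
            self.client = httpx.AsyncClient(base_url="https://models.example.com")

        @hook("process_message")
        async def handle(self, message):
            # Forward the task payload to the model; the framework decides whether
            # the result goes straight on or to a human reviewer for correction.
            resp = await self.client.post("/v1/generate", json={"prompt": message.payload})
            resp.raise_for_status()
            return resp.json()["text"]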

So in your project, instead of clients defining pipelines on their end and querying your API, the framework could provide the orchestration layer that routes between your clients' models, your human review queue, and back, with the pipeline definition living in a YAML file rather than scattered across client code. Your humans would interact with a well-defined correction protocol rather than intervening ad hoc.

There's no HTTP endpoint (yet); you'd need to implement a worker that relays e.g. REST API calls and translates them into the framework's messages.

It's LGPL-licensed, intended for federated machine learning and self-hosted scenarios, and its (initial, now considerably more complex) "spartan philosophy" means the core stays minimal while complexity lives in pluggable workers.

But it's not MVP-ready; some things are still broken, and I'm trying to hit version 0.1.0 with a simple demo: it takes a WAV file and transcribes it into text, another model extracts keywords from the text, including intent and basic context, then everything goes to another model that generates a TinkerPop/Gremlin query, then the client executes the query and the results are sent along to a final worker that summarizes the (reduced) knowledge graph. That would show a multi-modal pipeline in action.

If you're interested, find me on GitHub; the username is the same.