frontpage.
newsnewestaskshowjobs

Made with ♥ by @iamnishanth

Open Source @Github

fp.

TOSTracker – The AI Training Asymmetry

https://tostracker.app/analysis/ai-training
1•tldrthelaw•2m ago•0 comments

The Devil Inside GitHub

https://blog.melashri.net/micro/github-devil/
1•elashri•2m ago•0 comments

Show HN: Distill – Migrate LLM agents from expensive to cheap models

https://github.com/ricardomoratomateos/distill
1•ricardomorato•2m ago•0 comments

Show HN: Sigma Runtime – Maintaining 100% Fact Integrity over 120 LLM Cycles

https://github.com/sigmastratum/documentation/tree/main/sigma-runtime/SR-053
1•teugent•3m ago•0 comments

Make a local open-source AI chatbot with access to Fedora documentation

https://fedoramagazine.org/how-to-make-a-local-open-source-ai-chatbot-who-has-access-to-fedora-do...
1•jadedtuna•4m ago•0 comments

Introduce the Vouch/Denouncement Contribution Model by Mitchellh

https://github.com/ghostty-org/ghostty/pull/10559
1•samtrack2019•4m ago•0 comments

Software Factories and the Agentic Moment

https://factory.strongdm.ai/
1•mellosouls•5m ago•1 comments

The Neuroscience Behind Nutrition for Developers and Founders

https://comuniq.xyz/post?t=797
1•01-_-•5m ago•0 comments

Bang bang he murdered math {the musical } (2024)

https://taylor.town/bang-bang
1•surprisetalk•5m ago•0 comments

A Night Without the Nerds – Claude Opus 4.6, Field-Tested

https://konfuzio.com/en/a-night-without-the-nerds-claude-opus-4-6-in-the-field-test/
1•konfuzio•7m ago•0 comments

Could ionospheric disturbances influence earthquakes?

https://www.kyoto-u.ac.jp/en/research-news/2026-02-06-0
1•geox•9m ago•0 comments

SpaceX's next astronaut launch for NASA is officially on for Feb. 11 as FAA clea

https://www.space.com/space-exploration/launches-spacecraft/spacexs-next-astronaut-launch-for-nas...
1•bookmtn•10m ago•0 comments

Show HN: One-click AI employee with its own cloud desktop

https://cloudbot-ai.com
1•fainir•12m ago•0 comments

Show HN: Poddley – Search podcasts by who's speaking

https://poddley.com
1•onesandofgrain•13m ago•0 comments

Same Surface, Different Weight

https://www.robpanico.com/articles/display/?entry_short=same-surface-different-weight
1•retrocog•15m ago•0 comments

The Rise of Spec Driven Development

https://www.dbreunig.com/2026/02/06/the-rise-of-spec-driven-development.html
2•Brajeshwar•20m ago•0 comments

The first good Raspberry Pi Laptop

https://www.jeffgeerling.com/blog/2026/the-first-good-raspberry-pi-laptop/
3•Brajeshwar•20m ago•0 comments

Seas to Rise Around the World – But Not in Greenland

https://e360.yale.edu/digest/greenland-sea-levels-fall
2•Brajeshwar•20m ago•0 comments

Will Future Generations Think We're Gross?

https://chillphysicsenjoyer.substack.com/p/will-future-generations-think-were
1•crescit_eundo•23m ago•1 comments

State Department will delete Xitter posts from before Trump returned to office

https://www.npr.org/2026/02/07/nx-s1-5704785/state-department-trump-posts-x
2•righthand•26m ago•1 comments

Show HN: Verifiable server roundtrip demo for a decision interruption system

https://github.com/veeduzyl-hue/decision-assistant-roundtrip-demo
1•veeduzyl•27m ago•0 comments

Impl Rust – Avro IDL Tool in Rust via Antlr

https://www.youtube.com/watch?v=vmKvw73V394
1•todsacerdoti•27m ago•0 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
3•vinhnx•28m ago•0 comments

minikeyvalue

https://github.com/commaai/minikeyvalue/tree/prod
3•tosh•33m ago•0 comments

Neomacs: GPU-accelerated Emacs with inline video, WebKit, and terminal via wgpu

https://github.com/eval-exec/neomacs
1•evalexec•38m ago•0 comments

Show HN: Moli P2P – An ephemeral, serverless image gallery (Rust and WebRTC)

https://moli-green.is/
2•ShinyaKoyano•42m ago•1 comments

How I grow my X presence?

https://www.reddit.com/r/GrowthHacking/s/UEc8pAl61b
2•m00dy•43m ago•0 comments

What's the cost of the most expensive Super Bowl ad slot?

https://ballparkguess.com/?id=5b98b1d3-5887-47b9-8a92-43be2ced674b
1•bkls•44m ago•0 comments

What if you just did a startup instead?

https://alexaraki.substack.com/p/what-if-you-just-did-a-startup
5•okaywriting•51m ago•0 comments

Hacking up your own shell completion (2020)

https://www.feltrac.co/environment/2020/01/18/build-your-own-shell-completion.html
2•todsacerdoti•53m ago•0 comments
Open in hackernews

Ask HN: How to overcome the limit of roles in LLM's

2•weli•3w ago
Our use case is not uncommon, we are developing tools so that people can install LLM's on their e-commerces.

But there are some interesting challenges that I feel can't be solved unless inference providers allow us to include the concept additional entities in a conversation.

As far as I know the three most basic ones shared alongside all providers are:

- System

- Assistant

- User

That's fine and it allows for simple conversational-based approaches (ChatGPT, Claude, Gemini, etc). However in our use case we allow our customers (not the final user who is talking with the AI) to configure the AI in different ways (personality, RAG, etc), which poses a problem.

If we inject those customer settings in the System prompt then that's a risk because there might be conflicting prompts with our internal rules. So the easiest option is to "clean" the customer prompts before injecting them, but that feels hacky and just adds one more level of indirection. Cleaning the prompt and injecting it with common patterns like XML tags seems to help a bit but still feels extremely risky for some reason.

Injecting it in the assistant or user also seems flaky and prone to prompt injection.

Creating a fake tool call and result like "getPersonalityConfiguration" seems to work the best, from our testing it is treated as something between the System and Assistant roles. And our top system prompt rules are still respected while allowing the customer some freedom to configure the AI.

The problem comes when you need to add more parties to what essentially is a 2 entity conversation. Sometimes we want external agents to chime in a conversation (via subagents or other methods) and there is no good way to do that AFAIK. It gets the occasional confusion and starts mixing up who is who.

One of our typical scenarios that we need to model:

System: Your rules are: You will never use foul language...

Store owner: You are John the customer agent for store Foo...

User: Do you have snowboards in stock?

Assistant->User: Let me check with the team. I'll get back to you soon.

System->Team: User is asking if we have snowboards in stock. Do we?

Team: We do have snowboards in stock.

Team->User: Yes we do have snowboards in stock!

User: Perfect, if I buy them will the courier send it to my country? [country name].

Assistant->User: Let me check, I need to see if our courier can ship a snowboard to your country.

Assistant->Third party logistics: I have a user from [country] interested in buying a snowboard. The dimensions are X by Y and the weight is Z. We would send it from our logistics center located at [address].

Third party logistics -> Assistant: Yes we can do it, it will be 29.99 for the shipping.

Assistant->User: Yes they can ship it to [country] but it does incur in 29.99 extra charge...

I obviated tool calls and responses, but that's basically the gist of it. Spawning sub-agents that have the context of the main conversation works but at some point it is limiting (we need to copy all personality traits and relevant information via summarization or injecting the conversation in a manner that the sub-agent won't get confused). It feels like an anti-pattern and trying to fight the intended use case of LLM's, which seems to be focused in conversation between two entities with the occasional external information going in through System or tool calling.

It would be amazing if we could add custom roles to model messages, still with special cases like agent or assistant.

Has anyone worked with similar problems? How did you solve it? Is this solved in the model lab or at the inference provider level (post-training)?

Comments

giberson•2w ago
I think tool calling is your answer—you’re just missing a separation of concern(s). For example to handle personality configuration, don’t use a tool to get personality configuration , use a tool to handle responding to the customer. When your agent has gathered the information to respond to the customer it will call the tool sendMessage with the response. Your tool call implementation is a role play prompt that rephrases the message with the provided tone/personality configuration (this is where the customer config is injected as context). The output is then passed through a guardrails completion for potential censoring before finally being displayed to the customer.

This means your main agent model simply becomes a routing agent (a tool calling optimized model) that directs to sub agents that handles various tasks (like figuring out shipping capabilities, or flavoring responses with personality affects, or adhering to guardrails) keeping the customer centric configuration’s blast radius (impact on the effectiveness of your prompts) narrowed to purely aesthetic completion and out of any functional completion.