
Show HN: I built a free dictionary API to avoid API keys

https://github.com/suvankar-mitra/free-dictionary-rest-api
1•suvankar_m•53s ago•0 comments

Show HN: Kybera – Agentic Smart Wallet with AI Osint and Reputation Tracking

https://kybera.xyz
1•xipz•2m ago•0 comments

Show HN: brew changelog – find upstream changelogs for Homebrew packages

https://github.com/pavel-voronin/homebrew-changelog
1•kolpaque•6m ago•0 comments

Any chess position with 8 pieces on board and one pair of pawns has been solved

https://mastodon.online/@lichess/116029914921844500
1•baruchel•7m ago•1 comment

LLMs as Language Compilers: Lessons from Fortran for the Future of Coding

https://cyber-omelette.com/posts/the-abstraction-rises.html
2•birdculture•9m ago•0 comments

Projecting high-dimensional tensor/matrix/vect GPT->ML

https://github.com/tambetvali/LaegnaAIHDvisualization
1•tvali•10m ago•1 comment

Show HN: Free Bank Statement Analyzer to Find Spending Leaks and Save Money

https://www.whereismymoneygo.com/
1•raleobob•14m ago•1 comment

Our Stolen Light

https://ayushgundawar.me/posts/html/our_stolen_light.html
2•gundawar•14m ago•0 comments

Matchlock: Linux-based sandboxing for AI agents

https://github.com/jingkaihe/matchlock
1•jingkai_he•17m ago•0 comments

Show HN: A2A Protocol – Infrastructure for an Agent-to-Agent Economy

1•swimmingkiim•21m ago•1 comment

Drinking More Water Can Boost Your Energy

https://www.verywellhealth.com/can-drinking-water-boost-energy-11891522
1•wjb3•24m ago•0 comments

Proving Laderman's 3x3 Matrix Multiplication Is Locally Optimal via SMT Solvers

https://zenodo.org/records/18514533
1•DarenWatson•27m ago•0 comments

Fire may have altered human DNA

https://www.popsci.com/science/fire-alter-human-dna/
4•wjb3•27m ago•1 comment

"Compiled" Specs

https://deepclause.substack.com/p/compiled-specs
1•schmuhblaster•32m ago•0 comments

The Next Big Language (2007) by Steve Yegge

https://steve-yegge.blogspot.com/2007/02/next-big-language.html?2026
1•cryptoz•33m ago•0 comments

Open-Weight Models Are Getting Serious: GLM 4.7 vs. MiniMax M2.1

https://blog.kilo.ai/p/open-weight-models-are-getting-serious
4•ms7892•43m ago•0 comments

Using AI for Code Reviews: What Works, What Doesn't, and Why

https://entelligence.ai/blogs/entelligence-ai-in-cli
3•Arindam1729•43m ago•0 comments

Show HN: Solnix – an early-stage experimental programming language

https://www.solnix-lang.org/
2•maheshbhatiya•43m ago•0 comments

DoNotNotify is now Open Source

https://donotnotify.com/opensource.html
5•awaaz•45m ago•2 comments

The British Empire's Brothels

https://www.historytoday.com/archive/feature/british-empires-brothels
2•pepys•45m ago•0 comments

What rare disease AI teaches us about longitudinal health

https://myaether.live/blog/what-rare-disease-ai-teaches-us-about-longitudinal-health
2•takmak007•51m ago•0 comments

The Brand Savior Complex and the New Age of Self Censorship

https://thesocialjuice.substack.com/p/the-brand-savior-complex-and-the
2•jaskaransainiz•52m ago•0 comments

Show HN: A Prompting Framework for Non-Vibe-Coders

https://github.com/No3371/projex
2•3371•53m ago•0 comments

Kilroy is a local-first "software factory" CLI

https://github.com/danshapiro/kilroy
2•ukuina•1h ago•0 comments

Mathscapes – Jan 2026 [pdf]

https://momath.org/wp-content/uploads/2026/02/1.-Mathscapes-January-2026-with-Solution.pdf
1•vismit2000•1h ago•0 comments

80386 Barrel Shifter

https://nand2mario.github.io/posts/2026/80386_barrel_shifter/
2•jamesbowman•1h ago•0 comments

Training Foundation Models Directly on Human Brain Data

https://arxiv.org/abs/2601.12053
1•helloplanets•1h ago•0 comments

Web Speech API on HN Threads

https://toulas.ch/projects/hn-readaloud/
1•etoulas•1h ago•0 comments

ArtisanForge: Learn Laravel through a gamified RPG adventure – 100% free

https://artisanforge.online/
2•grazulex•1h ago•1 comment

Your phone edits all your photos with AI – is it changing your view of reality?

https://www.bbc.com/future/article/20260203-the-ai-that-quietly-edits-all-of-your-photos
1•breve•1h ago•0 comments

Would you use an LLM that follows instructions reliably?

3•gdevaraj•8mo ago
I'm considering a startup idea and want to validate whether others see this as a real problem.

In my experience, current LLMs (like GPT-4 and Claude) often fail to follow detailed user instructions consistently. For example, even after being explicitly told to avoid certain phrases, to follow a strict structure, or to maintain a particular style, the model frequently ignores part of the prompt or gives a different output every time. This becomes especially frustrating on complex, multi-step tasks, or when working across multiple sessions where the model forgets the context and preferences you've already given it.

This isn't just an issue in writing tasks; I've seen the same problem in coding assistance, task planning, structured data generation (like JSON/XML), tutoring, and research workflows.

I’m thinking about building a layer on top of existing LLMs that allows users to define hard constraints and persistent rules (like tone, logic, formatting, task goals), and ensures the model strictly follows them, with memory across sessions.
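
To make the layer idea concrete, here is the rough shape of the enforcement loop I'm imagining (a minimal Python sketch, not a design commitment: llm_call is a placeholder for whatever chat-completion API you use, and the two rules are just example "hard constraints"):

    import json

    def llm_call(prompt: str) -> str:
        """Placeholder: send the prompt to your LLM backend and return its reply."""
        raise NotImplementedError

    # Persistent rules, stored once and re-applied in every session.
    RULES = {
        "banned_phrases": ["as an AI language model", "in conclusion"],
        "must_be_json": True,
    }

    def violations(output: str, rules: dict) -> list[str]:
        """Describe every rule the output breaks; an empty list means compliant."""
        found = []
        for phrase in rules["banned_phrases"]:
            if phrase.lower() in output.lower():
                found.append(f"uses banned phrase: {phrase!r}")
        if rules["must_be_json"]:
            try:
                json.loads(output)
            except ValueError:
                found.append("output is not valid JSON")
        return found

    def constrained_call(prompt: str, rules: dict, max_tries: int = 3) -> str:
        """Call the model, check the hard constraints in code, and re-prompt
        with the concrete violations until the output complies."""
        feedback = ""
        for _ in range(max_tries):
            output = llm_call(prompt + feedback)
            problems = violations(output, rules)
            if not problems:
                return output
            feedback = ("\n\nYour previous answer broke these rules:\n- "
                        + "\n- ".join(problems)
                        + "\nAnswer again without breaking them.")
        raise RuntimeError("hard constraints still violated after retries")

The point is that the rules are verified in code rather than trusted to the model, and the same RULES dict, plus any remembered preferences, gets injected into every new session.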

Before pursuing this as a startup, I’d like to understand:

Have you experienced this kind of problem?

In what tasks does it show up most for you?

Would solving it be valuable enough to pay for?

Do you see this as something LLM providers will solve themselves soon, or is there room for an external solution?

Comments

ggirelli•8mo ago
> Have you experienced this kind of problem? In what tasks does it show up most for you?

I have experienced this type of problem. A colleague asked an LLM to convert a list of items in a text to a table. The model managed to skip 3 out of 7 items from the list somehow.

> Would solving it be valuable enough to pay for? Do you see this as something LLM providers will solve themselves soon, or is there room for an external solution?

The solution I have found so far is to prompt the model to write and execute code, which makes responses more reproducible. That way, most of the variability ends up in the generated code, while the code's outputs tend to be more consistent, at least in my experience.
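
Roughly, the pattern looks like this (a quick Python sketch: llm_call stands in for whatever API you use, and executing model-written code like this belongs in a sandbox):

    def llm_call(prompt: str) -> str:
        """Placeholder: send the prompt to the model and return its reply."""
        raise NotImplementedError

    def items_to_table(items: list[str]) -> str:
        # Ask for a program rather than an answer: the model's variability
        # lands in the code it writes, and the code runs deterministically.
        prompt = (
            "Write a Python function to_table(items) that takes a list of "
            "strings and returns a Markdown table with exactly one row per "
            "item, in the original order. Reply with only the code."
        )
        code = llm_call(prompt)
        namespace: dict = {}
        exec(code, namespace)  # sandbox this in any real setting
        return namespace["to_table"](items)

Once you have eyeballed the generated to_table once, it treats every list the same way on every run, instead of the model re-deciding item by item whether to keep things.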

That said, I do feel like current providers are already working on this, or will start to soon.

gdevaraj•8mo ago
Thank you for your time and feedback.
proc0•8mo ago
It's the central problem with AI right now! If this were fixed, it wouldn't matter if they were elementary-school-level AIs; they would still be useful as long as the output was consistent. If they were reliable, you could find an upper bound on their capabilities and instantly know that anything below it can be automated with confidence. Right now they might do certain tasks at PhD level, but there is no guarantee they won't fail miserably at some completely trivial one.
gdevaraj•8mo ago
Thank you for your feedback.
dtagames•8mo ago
Prompting and RAG are the only tools you have, same as everyone else. What is "tone"? That's not deterministic; you're asking an LLM to predict tone. And logic? Forget it.

To validate (or really, dismiss) this idea, try it with your own RAG app or even with Cursor. There's just no way you can stack enough prompts to turn predictions into determinism.