In my experience, current LLMs (like GPT-4 and Claude) often fail to follow detailed user instructions consistently. For example, even after explicitly telling the model not to use certain phrases, to follow a strict structure, or to maintain a certain style, it frequently ignores part of the prompt or gives a different output every time. This becomes especially frustrating on complex, multi-step tasks, or when working across multiple sessions where the model forgets the context or preferences you've already given.
This isn’t just an issue in writing tasks—I've seen the same problem in coding assistance, task planning, structured data generation (like JSON/XML), tutoring, and research workflows.
I’m thinking about building a layer on top of existing LLMs that allows users to define hard constraints and persistent rules (like tone, logic, formatting, task goals), and ensures the model strictly follows them, with memory across sessions.
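To make the idea concrete, here is a minimal sketch of what such a layer could do, assuming a hypothetical call_llm() stand-in for whatever provider API is used; the rules.json file, the constraint names, and the retry-with-feedback loop are all illustrative, not a real design.

```python
import json
import re
from pathlib import Path

RULES_FILE = Path("rules.json")  # persistent rules, shared across sessions


def load_rules() -> dict:
    # Load previously saved user rules, or start with an empty set.
    if RULES_FILE.exists():
        return json.loads(RULES_FILE.read_text())
    return {"banned_phrases": [], "required_keys": []}


def violations(output: str, rules: dict) -> list[str]:
    # Check the model output against the user's hard constraints.
    problems = []
    for phrase in rules["banned_phrases"]:
        if re.search(re.escape(phrase), output, re.IGNORECASE):
            problems.append(f"used banned phrase: {phrase!r}")
    if rules["required_keys"]:
        try:
            data = json.loads(output)
            problems += [f"missing key: {k}" for k in rules["required_keys"] if k not in data]
        except json.JSONDecodeError:
            problems.append("output is not valid JSON")
    return problems


def call_llm(prompt: str) -> str:
    # Placeholder for a real provider call (OpenAI, Anthropic, local model, ...).
    raise NotImplementedError


def constrained_call(prompt: str, max_retries: int = 3) -> str:
    rules = load_rules()
    system = "Hard constraints:\n" + json.dumps(rules, indent=2)
    problems: list[str] = []
    for _ in range(max_retries):
        output = call_llm(system + "\n\n" + prompt)
        problems = violations(output, rules)
        if not problems:
            return output
        # Feed the violations back so the next attempt can correct them.
        prompt += "\n\nYour previous answer violated these rules: " + "; ".join(problems)
    raise RuntimeError(f"could not satisfy constraints after {max_retries} attempts: {problems}")
```

The key point is that the constraints are checked and enforced outside the model, and persisted to disk so they survive between sessions, rather than relying on the model to remember them.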
Before pursuing this as a startup, I’d like to understand:
Have you experienced this kind of problem?
In what tasks does it show up most for you?
Would solving it be valuable enough to pay for?
Do you see this as something LLM providers will solve themselves soon, or is there room for an external solution?
ggirelli•1d ago
> Would solving it be valuable enough to pay for? Do you see this as something LLM providers will solve themselves soon, or is there room for an external solution?
The solution I have found so far is to prompt the model to write and execute code to make responses more reproducible. That way, most of the variability ends up in the code itself, while the code's outputs tend to be more consistent, at least in my experience.
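A minimal sketch of that pattern, assuming a stand-in call_llm() for the provider API; the RESULT variable convention and the exec() call are illustrative only, and running model-written code like this would need sandboxing in practice.

```python
import re


def call_llm(prompt: str) -> str:
    # Stand-in for a real provider call; not a real API.
    raise NotImplementedError


def answer_via_code(task: str):
    # Ask for a program instead of a direct answer, so the result is reproducible.
    prompt = (
        "Write a self-contained Python script that solves the task below and "
        "stores its final answer in a variable named RESULT. Reply with a single "
        "```python code block and nothing else.\n\nTask: " + task
    )
    reply = call_llm(prompt)
    match = re.search(r"```python\n(.*?)```", reply, re.DOTALL)
    if not match:
        raise ValueError("model did not return a python code block")
    namespace: dict = {}
    exec(match.group(1), namespace)  # caution: run untrusted code only in a sandbox
    return namespace["RESULT"]
```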
That said, I do feel like current providers are already working on this, or will start to soon.
gdevaraj•1d ago