
Ask HN: Building LLM apps? How are you handling user context?

31•marcospassos•8mo ago
I've been building stuff with LLMs, and every time I need user context, I end up manually wiring up a context pipeline.

Sure, the model can reason and answer questions well, but it has zero idea who the user is, where they came from, or what they've been doing in the app. Without that, I either have to make the model ask awkward initial questions to figure it out or let it guess, which is usually wrong.

So I keep rebuilding the same setup: tracking events, enriching sessions, summarizing behavior, and injecting that into prompts.

It makes the app way more helpful, but it's a pain.

What I wish existed is a simple way to grab a session summary or user context I could just drop into a prompt. Something like:

```
const context = await getContext();

const response = await generateText({
  system: `Here's the user context: ${context}`,
  messages: [...],
});
```

Some examples of how I use this:

- For support, I pass in the docs they viewed or the error page they landed on.

- For marketing, I summarize their journey, like 'ad clicked' → 'blog post read' → 'pricing page'.

- For sales, I highlight behavior that suggests whether they're a startup or an enterprise.

- For product, I classify the session as 'confused', 'exploring plans', or 'ready to buy'.

- For recommendations, I generate embeddings from recent activity and use that to match content or products more accurately.

In all of these cases, I usually inject things like recent activity, timezone, currency, traffic source, and any signals I can gather that help guide the experience.
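As a rough sketch of what injecting those signals could look like, here is a minimal, hypothetical `formatContext` helper (all field names are made up for illustration):

```javascript
// Sketch: turn gathered signals into a prompt-ready context string.
// Every field name here is hypothetical, not from any specific tool.
function formatContext(signals) {
  const parts = [];
  if (signals.trafficSource) parts.push(`arrived via ${signals.trafficSource}`);
  if (signals.recentActivity?.length) {
    parts.push(`recent activity: ${signals.recentActivity.join(" → ")}`);
  }
  if (signals.timezone) parts.push(`timezone ${signals.timezone}`);
  if (signals.currency) parts.push(`currency ${signals.currency}`);
  return parts.join("; ");
}

const context = formatContext({
  trafficSource: "Google ad",
  recentActivity: ["ad clicked", "blog post read", "pricing page"],
  timezone: "America/Sao_Paulo",
  currency: "BRL",
});
// `context` can then be dropped into a system prompt:
// `Here's the user context: ${context}`
```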

Has anyone else run into this same issue? Found a better way?

I'm considering building something around this, initially to solve my own problem. I'd love to hear how others are handling it or whether this sounds useful to you.

Comments

barbazoo•8mo ago
MCP maybe? You could provide tools for the LLM to discover that data at runtime.
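For illustration, such a tool could be declared with a generic JSON Schema tool definition like the one below. This is only a sketch of the shape, not tied to any specific MCP SDK, and the names are made up:

```javascript
// Hypothetical tool declaration an MCP server (or any tool-calling API)
// could expose so the model can fetch user context on demand at runtime.
const getUserContextTool = {
  name: "get_user_context",
  description:
    "Returns a summary of the current user's session: journey, device, location.",
  inputSchema: {
    type: "object",
    properties: {
      sessionId: { type: "string", description: "Current session identifier" },
    },
    required: ["sessionId"],
  },
};

// The handler behind the tool would do the actual tracking/enrichment work;
// here it is a stub returning a canned summary.
function handleGetUserContext({ sessionId }) {
  return `Session ${sessionId}: landed on pricing from a Google ad.`;
}
```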
marcospassos•8mo ago
It might help with context generation. But honestly, most of the work is still in tracking, processing, and enriching (calling different services, like IP geolocation, etc.), plus all the plumbing around it.
max_on_hn•8mo ago
I don't know of anything off-the-shelf, but you could query analytics tools at runtime (e.g. Mixpanel, PostHog) to gather the raw data, and use a generic summarizer to turn that into behavioral context that's usable downstream.
marcospassos•8mo ago
Yeah, exactly. My whole point is to avoid doing all that. It adds up fast. What I really want is something that handles the heavy lifting end-to-end: tracking, interpreting, and outputting a prompt-ready summary like:

"The user landed on the pricing page from a Google ad, clicked to compare plans, then visited the enterprise section before initiating a support chat."

rcarmo•8mo ago
That reads like the kind of session context you’d use for things like breadcrumbs. Just keep a running summary in the user session, and re-pack or re-summarize it as soon as it gets above a threshold.
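A minimal sketch of that running-summary idea, with a stub `summarize` standing in for an LLM summarization call and a hypothetical threshold:

```javascript
const MAX_EVENTS = 5; // hypothetical re-pack threshold

function summarize(events) {
  // Stand-in: a real implementation would call an LLM to compress the events.
  return `Earlier: ${events.length} events ending with "${events[events.length - 1]}"`;
}

function createSessionContext() {
  let packed = ""; // compressed history
  let recent = []; // raw recent events

  return {
    track(event) {
      recent.push(event);
      if (recent.length > MAX_EVENTS) {
        // Re-pack once the raw event list grows past the threshold.
        packed = summarize(packed ? [packed, ...recent] : recent);
        recent = [];
      }
    },
    // Prompt-ready context: compressed history plus raw recent events.
    getContext() {
      return [packed, ...recent].filter(Boolean).join("; ");
    },
  };
}
```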
matt_s•8mo ago
Interacting with LLMs or AI APIs follows the same patterns as other software; it doesn't really matter that it's AI or an LLM. You are calling a function, providing inputs, and expecting output. You get better output when your inputs are tuned to the scenario. Some of your inputs in this paradigm could be considered optional parameters, because you still get output without them.

If you need to remember parts of the inputs between user sessions, then you need to save that state somewhere on disk. Databases are a common choice, especially in web development, but you could also just put things in a file. If this isn't a web development context, another option is something like SQLite, since it will organize the data a little better than, say, CSVs.

ProfessorZoom•8mo ago
I embed tons of separate pieces of information and save the vectors in a db. Then I embed the user's question and have a stored procedure in the db calculate the top 10 (or 20 or 50, depending on the model) most similar pieces of information.

I also have an editor where I can ask a question and it brings up the most related pieces of info; if I change any of those pieces, it updates the embedding in the db.
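The ranking step described above can be sketched in a few lines. The vectors here are tiny fake embeddings for illustration; in practice they would come from an embedding model, and the ranking would live in the db's stored procedure or a vector index:

```javascript
// Cosine similarity between two equal-length vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank stored items by similarity to the query vector, keep the top k.
function topK(queryVec, items, k) {
  return items
    .map((item) => ({ ...item, score: cosine(queryVec, item.vector) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}

const store = [
  { text: "pricing details", vector: [1, 0, 0] },
  { text: "api reference", vector: [0, 1, 0] },
  { text: "enterprise plans", vector: [0.9, 0.1, 0] },
];
const results = topK([1, 0, 0], store, 2);
```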

marcospassos•8mo ago
That's a good approach. But what I'm looking for is a bit different, more like Segment, but for LLMs. Something that when a user lands on your website, clicks around, and interacts with your app, you get a full behavioral context out of the box, including click path, location, language, currency, etc. You can then inject that context directly into your prompt so the LLM understands what the user is doing and responds without guessing or asking.
enos_feedler•8mo ago
What is the application-specific scenario that requires this context? Everyone has different scenarios, and this might not make sense for all of them.
coolKid721•8mo ago
Proper usage of LLMs, so you don't just flood them with useless context, comes down to custom-tailored prompts that include only the pertinent context, with the prompts explaining how it relates to what you're looking for. I don't think there's a cheap way around it; on the plus side, you can tune them using AI-generated code. I think tools are really overused and overrated, and I've had horrible experiences with them; nothing beats custom tailoring stuff and setting up a system around it.

What I do is use Elixir Phoenix: a GenServer keeps track of the user state, I include the related state in the request, and helper functions generate the related prompts per type of state/context and append them wherever makes the most sense.

I think LLMs make the most sense viewed as singular atomic interactions where you have the whole input (prompt/context/data) and get a concrete output. Everything else seems like people being lazy, trying to avoid thinking about the best way of structuring it. Where you put the context/data and how you include it will vary per prompt, or per specific atomic interaction; there is no standard rule, each interaction is unique. You have to experiment and see what provides the best output for each kind of request. I'd read Anthropic's prompting docs if you haven't; they're very good. https://docs.anthropic.com/en/docs/build-with-claude/prompt-...

My way of thinking is to view every isolated LLM request as a unique function: prompt + LLM = a unique function. Context is just the data you pass into that function, (prompt + LLM + settings (temp, etc.))(data), to get whatever specific output you want. The prompt includes prewritten user/system messages, the system prompt, structured output stuff, or whatever. Any single request might lead to 1 or 30 of these feeding back into each other. So it comes down to custom tailoring them for each case; it's pretty conceptual and intellectual, and I find it fun, but I don't think there's any easy way around it. Having all your requests be stateful, and modifying what goes into the prompt based on the current user state (which GenServers/Elixir make very easy), is a nice technical thing that helps, though.
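That "prompt + LLM = unique function" framing could be sketched roughly like this, with one tailored prompt builder per request type (the builder names and state shape are hypothetical):

```javascript
// One builder per scenario, each including only the pertinent state.
const promptBuilders = {
  support: (state) =>
    `You are a support agent. The user last viewed: ${state.lastDocs.join(", ")}.`,
  sales: (state) =>
    `You are a sales assistant. Company size signal: ${state.companySize}.`,
};

// (prompt + settings)(state) -> the concrete input for one atomic request.
function buildPrompt(kind, state) {
  const builder = promptBuilders[kind];
  if (!builder) throw new Error(`No prompt builder for kind: ${kind}`);
  return builder(state);
}

const prompt = buildPrompt("support", {
  lastDocs: ["error 500 page", "billing FAQ"],
});
```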

bilater•8mo ago
You might find this useful: https://context7.com/
marcospassos•8mo ago
Super interesting! However, it focuses on external sources rather than the user journey.
nico•8mo ago
I haven’t solved this, but sounds super useful!

Would love to have something like a hotjar/analytics script that could automatically collect context and then I could query it to produce context for a prompt

Great idea, you should build it. Then do a Show HN with it

marcospassos•8mo ago
Exactly! Something like a tag you install and then query prompt-ready contexts.
esafak•8mo ago
I think MCP is the right place to declare the context management API; the C in MCP is Context. As far as building goes, you could build a (universal) context store. I guess the value would be to bring the context closer to the model?
marcospassos•8mo ago
The value is building the context itself.

Using MCP, this could be exposed as a method that fetches the context the model needs to make decisions.

Here's how I use it currently:

```
const context = await getContext();

const response = await generateText({
  system: `Here's the user context: ${context}`,
  messages: [...],
});

console.log(context);
// "First-time visitor using Google Chrome on a MacBook, browsing from San Francisco.
// Landed on the pricing page from a Google ad, clicked to compare plans,
// then visited the enterprise section before initiating a support chat."
```

It's like a session recorder for LLMs that captures rich user behavior and traits (like device, browser, location, and journey) and turns them into LLM context. Your agent or app instantly becomes more helpful, relevant, and aware without wiring up your own tracking and enrichment pipeline.

esafak•8mo ago
A context inference service sounds valuable but I wonder what your moat would be.
marcospassos•8mo ago
Yep, that's something I'd have to figure out.
