
Ask HN: How do you feel when your coding assistant loses context?

3•noduerme•3h ago
Background: My dad, my mom's dad, and my uncle all suffered from dementia. Having a deep, multi-threaded conversation you were invested in, where you suddenly need to remind the other person what you were talking about, or who they are, has emotional consequences that range from deep frustration to helpless anger.

Can you feel when your agent has just compressed or lost context? Can you tell by the way it bullshits you, pretending it knows where it is while it's trying to grasp what was going on? What's your emotional response to that?

Comments

cdbattags•3h ago
I just posted this on HN this morning and was looking through "new" but I'm trying to solve this exact problem:

https://annealit.ai

noduerme•3h ago
That's interesting. I mean, I've got an openclaw setup with Claude that is merging and storing chats from whatsapp and the web client once a day, and has a ton of context accessible... but there's something about being right in the middle of solving a hard technical problem, deep in the weeds about which columns should represent which data, and suddenly it's like: what were we talking about? Oh, I should try reading the database structure again from scratch. I don't think that's a problem that any clever arrangement of memory or personality files can actually solve.
cdbattags•3h ago
But I think when you actually structure memory in the right form based on "workload" (e.g. Google Spreadsheet, Microsoft Word XML, coding lang AST/DAGs), then additive "unforgetting" is truly possible.

Edit:

I truly believe this is solvable just like we're doing for natural language but with code/schema/etc! Relational, document, graph, vector!
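A minimal sketch of the idea above, with all names hypothetical: store facts in a structured form, namespaced by workload, so they can be re-injected verbatim into a fresh context instead of being re-derived from scratch.

```python
import json
from pathlib import Path

# Hypothetical sketch: persist per-workload "memory" (e.g. a database
# schema) as structured JSON so it survives a context reset.
MEMORY_PATH = Path("memory.json")

def save_memory(workload: str, key: str, value: dict) -> None:
    """Record a structured fact under a workload namespace."""
    memory = json.loads(MEMORY_PATH.read_text()) if MEMORY_PATH.exists() else {}
    memory.setdefault(workload, {})[key] = value
    MEMORY_PATH.write_text(json.dumps(memory, indent=2))

def recall(workload: str) -> dict:
    """Fetch everything known about a workload, for prompt re-injection."""
    if not MEMORY_PATH.exists():
        return {}
    return json.loads(MEMORY_PATH.read_text()).get(workload, {})

# Store the schema once, replay it after a context reset.
save_memory("database", "orders_table",
            {"columns": ["id", "user_id", "total_cents"]})
print(recall("database"))
```

The point of the structured form (rather than free-text chat logs) is that the recalled fact is exact and additive: re-injecting it can't drift the way a lossy summary can.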

gibbitz•2h ago
This is indicative of too much context. Remember: these systems don't "think", they predict. If you think of the context as an insanely large map with shifting and duplicate keys and queries, the hallucinating and seeming loss of context make sense. Find ways to reduce the context for better results: reduce sample sizes, exclude unrelated repositories and code. Remember that more context also means more cost, and when the AI investment money dries up, this will be untenable for developers.

If you can't reduce context, it suggests the scope of your prompt is too large. The system doesn't "think" about the best solution to a prompt; it uses logic to determine what outputs you'll accept. So if you prompt for an online casino website with user accounts and logins, games, bank card processing, analytics, advertising networks, etc., the agent will require far more context than a prompt for just the login page.

So to answer the question, if my agent loses context, I feel like I've messed up.
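The "reduce the context" advice above can be sketched mechanically. This is a hypothetical illustration, not any particular tool's behavior: cap what goes into the prompt under a rough token budget, keeping the most recent material and dropping old, unrelated context.

```python
# Hypothetical sketch: trim prompt context to a token budget,
# preferring the newest snippets over stale history.
def trim_context(snippets: list[str], budget_tokens: int) -> list[str]:
    """Keep the newest snippets that fit a crude 4-chars-per-token budget."""
    kept: list[str] = []
    used = 0
    for snippet in reversed(snippets):     # walk newest-first
        cost = max(1, len(snippet) // 4)   # rough token estimate
        if used + cost > budget_tokens:
            break                          # budget exhausted: drop the rest
        kept.append(snippet)
        used += cost
    return list(reversed(kept))            # restore chronological order

history = [
    "old unrelated repo notes " * 50,  # stale, expensive
    "current schema discussion",
    "latest error message",
]
print(trim_context(history, budget_tokens=50))
# → ['current schema discussion', 'latest error message']
```

The recency heuristic is the simplest possible policy; real tools would weigh relevance too, but the cost argument in the comment above holds either way: every snippet you keep is tokens you pay for.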

Someone1234•2h ago
Context management is a core skill of using an LLM. So if it loses key context (e.g. tasks, instructions, or constraints), I screwed up, and I need to up my game.

Just throwing stuff into an LLM and expecting it to remember what you want without any involvement on your part isn't how the technology works (or could ever work).

An LLM is a tool, not a person, so I don't have an emotional response to hitting its innate limitations. If you get "deeply frustrated" or feel "helpless anger", instead of just working the problem, that feels like it would be an unconstructive reaction to say the least.

LLMs are a limited tool: learn what they can and cannot do and how to get the best out of them, and leave emotions at the door. Getting upset at a tool won't accomplish anything.

jacquesm•2h ago
If you have an emotional response to anything an agent or LLM does, then you should lay off the sauce for a while and take a walk or something. This stuff is just dumb tech, no matter the appearances, and it does not warrant getting emotionally invested in your interaction with it. It's a tool. There is no point in getting upset at a hammer or a chainsaw. You are in control; you are the user.
setnone•1h ago
I can totally feel the shift, the rot or whatever, when it happens. With Opus 1M it seems to happen more often in my recent experience, even though my approach hasn't changed a bit.

So I've taught myself not to have an emotional response while working with LLMs. The practical response is starting a new session, or diving into the code myself.

kojeovo•28m ago
Now your coding assistant is suffering from dementia too. How sad. I ask it to save important stuff to a file.
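The save-it-to-a-file habit can be as simple as an append-only notes file that gets prepended to every fresh session. A minimal sketch, with the file name and helpers hypothetical:

```python
from pathlib import Path

# Hypothetical sketch: an append-only NOTES.md that the assistant is
# asked to re-read whenever a new session starts.
NOTES = Path("NOTES.md")

def note(fact: str) -> None:
    """Append one decision or fact worth surviving a context reset."""
    with NOTES.open("a") as f:
        f.write(f"- {fact}\n")

def session_preamble() -> str:
    """Text to prepend to a fresh session so past decisions aren't lost."""
    if not NOTES.exists():
        return ""
    return "Project notes so far:\n" + NOTES.read_text()

note("Orders are stored in cents, not dollars.")
print(session_preamble())
```

Because the file is append-only plain text, it works with any assistant that can read files, and the human can audit or prune it by hand.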
