Can you feel when your agent has just compressed or lost context? Can you tell by the way it bullshits you, pretending it knows where it is while it's still trying to grasp what was going on? What's your emotional response to that?
If you can't reduce context, that suggests the scope of your prompt is too large. The system doesn't "think" about the best solution to a prompt; it predicts what outputs you'll accept. So if you prompt it to build an online casino website with user accounts and logins, games, bank card processing, analytics, advertising networks, and so on, the agent will require far more context than a prompt for just the login page.
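To make that concrete, here's a rough sketch of what scoping down looks like (call_llm is a hypothetical stand-in for whatever client you actually use): instead of one monolithic prompt carrying the whole project, each feature gets its own small, fresh context.

    # Hypothetical helper: swap in your real LLM client here.
    def call_llm(prompt: str) -> str:
        return f"<model output for: {prompt[:50]}...>"

    # One monolithic prompt forces the model to juggle every
    # feature inside a single context window:
    monolithic = ("Build an online casino website with user accounts "
                  "and logins, games, bank card processing, analytics, "
                  "and advertising networks.")

    # Scoped prompts keep each task's context small and focused:
    features = [
        "Implement the login page with session handling.",
        "Implement the signup flow and user account model.",
        "Integrate the card processor's hosted checkout.",
    ]
    results = [call_llm(f"In the casino project: {task}") for task in features]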
So to answer the question, if my agent loses context, I feel like I've messed up.
Just throwing stuff at an LLM and expecting it to remember what you want, without any involvement from yourself, isn't how the technology works (or could ever work).
An LLM is a tool, not a person, so I don't have an emotional response to hitting its innate limitations. Getting "deeply frustrated" or feeling "helpless anger" instead of just working the problem seems like an unconstructive reaction, to say the least.
LLMs are a limited tool: learn what they can and cannot do, and how to get the best out of them, and leave emotions at the door. Getting upset at a tool won't accomplish anything.
cdbattags•1h ago
https://annealit.ai
cdbattags•55m ago
Edit:
I truly believe this is solvable, just like we're doing for natural language, but with code/schemas/etc.: relational, document, graph, vector!
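Purely as one reading of "vector" in that list (not their actual system): archive conversation chunks that fall out of the window in a small index, then retrieve only the relevant ones back into context after a compaction. A toy bag-of-words sketch; a real setup would use learned embeddings:

    import math
    from collections import Counter

    def vectorize(text: str) -> Counter:
        # Toy bag-of-words "embedding"; real systems use learned embeddings.
        return Counter(text.lower().split())

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[w] * b[w] for w in a)
        norm = math.sqrt(sum(v * v for v in a.values())) * \
               math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    # Conversation chunks that fell out of the context window:
    archive = [
        "login page uses session cookies with a 24h expiry",
        "card processing goes through the provider's hosted checkout",
        "analytics events are batched and flushed every 30 seconds",
    ]

    def rehydrate(query: str, k: int = 2) -> list[str]:
        # Pull the k most relevant archived chunks back into context.
        qv = vectorize(query)
        ranked = sorted(archive, key=lambda c: cosine(qv, vectorize(c)),
                        reverse=True)
        return ranked[:k]

    print(rehydrate("how does the login session work?"))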