1/ Prompting - making sure agents behave how you want without overly long, fragile instructions
2/ Context sharing – passing memory, results, and state across time or between agents w/o flooding the system (rough sketch of what I mean below)
3/ Cost – tokens get expensive fast, especially as things scale
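
For point 2, here's roughly the kind of pattern I'm picturing: a shared scratchpad where each agent writes a short summary of its result and downstream agents pull only the keys they need, so you're not dumping the full transcript into every prompt. This is just a minimal sketch, not any particular framework's API; all names (`Scratchpad`, `write`, `read`) are made up.

```python
# Hypothetical shared scratchpad for passing state between agents
# without flooding every prompt with the whole history.
from dataclasses import dataclass, field


@dataclass
class Scratchpad:
    """Stores per-task summaries; agents read only the keys they ask for."""
    entries: dict[str, str] = field(default_factory=dict)

    def write(self, key: str, summary: str, max_chars: int = 500) -> None:
        # Store a truncated summary instead of raw output to cap token growth.
        self.entries[key] = summary[:max_chars]

    def read(self, keys: list[str]) -> str:
        # Return only the requested slices, formatted for prompt injection.
        return "\n".join(f"[{k}] {self.entries[k]}" for k in keys if k in self.entries)


# Usage: a research agent writes its findings; a writer agent reads just that key.
pad = Scratchpad()
pad.write("research", "Top 3 competitors are X, Y, Z; pricing ranges $10-30/mo.")
writer_context = pad.read(["research"])  # only this slice goes into the writer's prompt
```

That keeps per-call context small, but the open question for me is still who decides what gets summarized and which keys a given agent should read.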
Curious what others think is the real bottleneck here, and any tips/tricks for solving it. Are you optimizing around token limits, memory persistence, or better prompt engineering?
Would love to hear how you’re thinking about this or if there’s a smarter approach we’re all missing. ty in advance!