I don't know if the investments in AI are worth it, but am I blind for not seeing any hope for AGI any time soon?
Agentic AI is interesting, perhaps, but I have hardly ever had it work perfectly; I have to hold its hand through everything.
People making random claims about AGI arriving soon really weakens my confidence in AI in general, given that I haven't seen much improvement in the last few years other than better tools and wrappers, and models that work better with those tools and wrappers.
teaearlgraycold•13h ago
bigstrat2003•11h ago
nitroedge•9h ago
Context rot is real, but as for people who complain about AIs hallucinating and running wild: I don't see it when the context window is managed properly.
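A minimal sketch of what "managing the context window" can mean in practice: keep the system prompt and only as many recent messages as fit a token budget. The function and the `count_tokens` callback here are hypothetical illustrations, not any particular tool's API.

```python
def trim_context(messages, budget, count_tokens):
    """Keep the system prompt plus the most recent messages that fit
    within `budget` tokens. `messages` is a list of dicts with
    "role" and "content" keys; `count_tokens` is any token counter."""
    system, rest = messages[0], messages[1:]
    kept, used = [], count_tokens(system["content"])
    # Walk backwards from the newest message, keeping what fits.
    for msg in reversed(rest):
        cost = count_tokens(msg["content"])
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return [system] + list(reversed(kept))


msgs = [
    {"role": "system", "content": "be brief"},
    {"role": "user", "content": "one two three"},
    {"role": "assistant", "content": "four five"},
    {"role": "user", "content": "six"},
]
# With a crude whitespace tokenizer and a 5-token budget, the oldest
# user message is dropped and the system prompt survives.
trimmed = trim_context(msgs, 5, lambda s: len(s.split()))
print([m["content"] for m in trimmed])  # ['be brief', 'four five', 'six']
```

Real agent frameworks layer summarization and retrieval on top, but the core idea is the same: bound what the model sees so old noise can't dominate.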
teaearlgraycold•9h ago
> If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilization.
I think you’re taking my analogy too literally. I just mean they help you go faster. When building software you have a huge advantage in that there is very little risk in exploring an idea. You can’t hurt yourself in the process. You can’t waste materials. You can always instantly go back to a previous state. You can explore multiple options simultaneously with a favorable cost to doing so.
You don’t have to let your standards drop. Just consider AI coding an interactive act of refinement. Keep coding manually where you meet too much resistance. Accept that the LLM can only do so much and that you often can’t predict when or why it will fail. Review its output. Rewrite it if you like.
Everything always has a chance of being wrong, whether or not you use AI. Understand that an AI getting something wrong with your code because of statistical noise is not user error. It’s not a complete failure of the system either.
It’s a mega-library that either inlines an adjustment of a common bit of code or makes up something it thinks looks good. The game is in finding a situation and set of rules which provide a favorable return on the time you put into it.
Imagine LLMs were right 99% of the time, magically doing most tasks of a certain complexity 10x faster than you could do them. Even when they’re wrong, you only waste so much time fixing the 1% of the AI’s work that failed, so it’s a net positive. Find a system that works for you and figure out where it makes sense to use the AI. Maybe even 50% accuracy at 3x your speed makes it worthwhile.
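The break-even arithmetic above can be sketched as a small expected-value model. Assumptions here are mine: manual time is normalized to 1, and a failed AI attempt costs the attempt plus a full manual redo (the worst case).

```python
def net_speedup(p, speedup, fix_cost=1.0):
    """Expected net speedup from delegating a task to an AI.
    p        -- probability the AI output is usable as-is
    speedup  -- how much faster the AI attempt is than doing it manually
    fix_cost -- fraction of the manual time spent recovering on failure
    Returns expected speedup relative to doing it yourself (>1 is a win)."""
    ai_time = 1.0 / speedup
    expected_time = p * ai_time + (1 - p) * (ai_time + fix_cost)
    return 1.0 / expected_time


# The comment's optimistic numbers: 99% right, 10x faster -> ~9.1x net
print(round(net_speedup(0.99, 10), 2))  # 9.09
# The marginal case: 50% right, 3x faster -> still a ~1.2x net win
print(round(net_speedup(0.50, 3), 2))   # 1.2
```

Even under the pessimistic "full redo on failure" assumption, 50% accuracy at 3x speed comes out slightly ahead, which is the point being made: the threshold for "worth it" is lower than perfect reliability.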
In some domains you can absolutely learn some basic rules for AI use that make it a net positive right away. Like as a boilerplate writer, or code-code translator. You can find other high success likelihood tasks and add them to the list of things you’ll use AI for. These categories can be narrow or wide.
SkiFire13•9h ago
This is a hypothetical that's not here yet.
Of course if LLMs had human-level accuracy they would be ideal.
bigbuppo•9h ago