Anyway. To me it just speaks to the disdain for semi-intellectual work. People seem to think producing text has some value of its own. They think they can short-circuit the basic assumption that behind every text is an intention that can be relied upon. They think that if they substitute this intention with a prompt, they can create the same value. I expect there to be some kind of bureaucratic collapse because of this, with parties unable to figure out responsibility around these zombie-texts. After that comes the cleanup: legislating and capturing in policy what the status of a given text is, etc. Altman & co. will have cashed out by then.
> People seem to think producing text has some value of its own.
Reading this sentence makes me think the author has actually never seen agentic work in action? Producing value out of text does work, and one good example is putting a model in a loop with some form of verification output. It's easy to do with programming (type checker, tests, linter, etc.), so it can chat by itself with its own results until the problem is solved.
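A minimal sketch of that loop, assuming a hypothetical ask_model() stand-in for whatever LLM API you use, and mypy/pytest as one possible choice of verifiers:

```python
# Minimal sketch of an LLM-plus-verifier loop. ask_model() is a
# hypothetical placeholder, not a real API; mypy and pytest are
# just one choice of verification tools.
import subprocess

def ask_model(prompt: str) -> str:
    """Hypothetical LLM call; swap in your provider's API here."""
    raise NotImplementedError

def verify(path: str) -> str:
    """Run a type checker, then the tests; return error output ('' on success)."""
    typecheck = subprocess.run(["mypy", path], capture_output=True, text=True)
    if typecheck.returncode != 0:
        return typecheck.stdout
    tests = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return "" if tests.returncode == 0 else tests.stdout

def solve(task: str, path: str, max_rounds: int = 5) -> bool:
    """Let the model 'chat with its own results' until the verifiers pass."""
    prompt = task
    for _ in range(max_rounds):
        with open(path, "w") as f:
            f.write(ask_model(prompt))
        errors = verify(path)
        if not errors:
            return True  # verifiers are happy, problem solved
        # Feed the verifier's complaints back so the model can self-correct.
        prompt = f"{task}\n\nYour previous attempt failed:\n{errors}\n\nFix it."
    return False
```

The point is that each pass through verify() gives the model ground truth to react to, which is what makes "text in a loop" worth something.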
I also find it personally strange that discussions so often require a reminder that the rate of change in capabilities is also a big part of "the thing" (as opposed to pure capabilities today). It changes on a weekly/monthly basis, and it changes in one direction only.
If the big corporations can't move fast enough and 100 startups gamble on getting there, eventually one of them will be successful.
Unless it's something like Meta, where they have a Zuck: someone smart, with enough oversight and power to drain the swamp and make the whole machine move.
From the Chatterbox site:
> Our patented AIMI platform independently validates your AI models & data, generating quantitative AI risk metrics at scale.
The article's subtitle:
> Security, not model performance, is what's stalling adoption
neepi•3h ago
No, you can't solve everything with a chatbot, but your CEO needs an AI proposition or he's going to look silly down at the golf course with all the other CEOs who aren't talking about how theirs are failing...
tough•2h ago
Hopefully more companies will encourage their own employees to explore how AI can fit into their current workflows or improve them, instead of hoping that some magical thinking will solve their problems
SirBomalot•2h ago
Speaking with the consultants lets me assume that they too get pressure from the top to do AI stuff, maybe because they fear that otherwise they will be replaced by AI. It really seems somewhat desperate.
delusional•2h ago
At my job it's been coming through the regular channels, but is empowered by aligning with current trends. It's easier to sell an AI project, even internally, when the whole world is talking about it.
tough•2h ago
maybe they're not directly pushing AI (cause they don't need to), but they're happy to accept shitty jobs that make no sense just cause
delusional•7m ago
I don't think that's the right distinction to draw here. It's definitely being pushed, just not by consultants.
> big consultancies are happy to take customers with -absurd- requests
This is of course always true. Consultants usually don't really care where they make the money; as long as you pay them, they'll find someone stupid enough to take on your task.
That's not what I'm seeing though. We're not hiring outside consultants to do big AI projects; we have people within our organization who have been convinced by the public marketing and are pushing for these projects internally. I'm not seeing big consultancies accepting contracts, I'm seeing normal working people getting consultant brain and taking this as their chance to sell a "cutting edge" project that'll disrupt all those departments whose work they don't understand.