We need technical details, example workflows, case studies, social proof and documentation. Especially when it's so trivial to roll your own agent.
Why do they say all of this fluff when everyone knows it’s not exactly true yet? It just makes me cynical about the rest.
When can we say we have enough AI? Even for enterprise? I would guess that for the majority of power users you could stop now and people would generally be okay with it, maybe pushing further only in medical research or things that are actually important.
For Sam Altman and microslop though it seems to be a numbers game, just have everyone in and own everything. It’s not even about AGI anymore I feel.
I think two things:
1. Not everyone knows.
2. As we've seen at a national scale: if you just lie lie lie enough it starts being treated like the truth.
"And for this cause God shall send them strong delusion, that they should believe a lie: That they all might be damned who believed not the truth, but had pleasure in unrighteousness."
For a more modern take, paraphrasing Hannah Arendt.
“The ideal subject of totalitarian rule is not the convinced Nazi or the convinced Communist, but people for whom the distinction between fact and fiction, true and false, no longer exists.”
We live in an age where, for many, media is reality, uncritical and unchecked. Press releases are about creating reality, not reporting it; they are about psychological manipulation, not information.
> As we've seen at a national scale: if you just lie lie lie enough it starts being treated like the truth.
This actually happened in reverse with the spread of social media dynamics to politics and major media. Twitter made Trump president, not the other way around.
* No LLMs were harmed in the making of this comment.
I mean that's been a lot of corporate writing for some time.
They're desperate?
These models can pretty reliably bang out what once were long mathematical solves for hypothetical systems in incredibly short periods of time. They also let you do second- and third-order approximations way more easily. What was a first-order approach that would take a day is now a second-order approach taking an hour.
And to top it off, they're also pretty damn competent in at least pointing you in the right direction (if nothing else) for getting information about adjacent areas you need to understand.
I've been doing an electro-optical project recently as an electronics guy, and LLMs have been infinitely useful in helping with the optics portion (on top of the electronics math speed-up).
It's still "trust, but verify" for sure, but damn, it's powerful.
I genuinely feel AI makes the ability to come up with approaches worse in software dev.
To presume to point a man to the right and ultimate goal — to point with a trembling finger in the RIGHT direction is something only a fool would take upon himself.
— Hunter S. Thompson

> say all of this fluff when everyone knows it’s not exactly true yet
How do you know it's not exactly true? I am already seeing employees in enterprises are heavily reliant on LLMs, instead of using other SaaS vendors.
* Want to draft an email and fix your grammar -> LLMs -> Grammarly is dying

* Want to design something -> Lovable -> No need to wait for a designer, no need to get access to Figma; let the designer design and present, and for anything else use Lovable or an alternative

* Want to code -> obviously LLMs -> I sometimes feel like JetBrains is probably on code red at the moment, because I am barely opening it (saying this as a heavy user in the past)
To keep this message shorter, I will share my vision in the reply.
How do you overcome this limitation?
By making a human accountable. Imagine you come to work in the morning and your only task is "Approve / Request improvement / Reject". You just press 3 buttons all day long:

* Customer is requesting pricing for X. Based on the requirements, I found CustomerA had similar requirements and we offered them $100/piece last month. What should I do? Approve / Reject / "Ask for $110"

* Customer (or their agent) is not happy with your $110 proposal. I used historical data, and based on X, Y, Z the minimum we can offer is $104 to make our ARR increase 15% year-over-year. What should I do? Approve / Reject / Your input
....
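The three-button loop above can be sketched in a few lines. This is a minimal, hypothetical illustration (all names and the `Proposal` shape are my own, not any real product's API): the agent surfaces a proposal with its rationale, and the human's only moves are approve, reject, or a counter-instruction.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Proposal:
    summary: str    # the agent's rationale, e.g. pricing history it found
    suggested: str  # the agent's recommended action

def review(proposal: Proposal, decision: str, counter: Optional[str] = None) -> str:
    """Resolve one item in the three-button workflow:
    'approve' accepts the agent's suggestion, 'reject' drops it,
    and anything else is treated as a human counter-instruction."""
    if decision == "approve":
        return proposal.suggested
    if decision == "reject":
        return "rejected"
    return counter or decision

# The pricing scenario from the bullets above.
p = Proposal(
    summary="CustomerA had similar requirements; offered $100/piece last month.",
    suggested="offer $100/piece",
)
print(review(p, "approve"))        # agent's suggestion goes out as-is
print(review(p, "ask for $110"))   # human overrides with a counter
```

The point is that the agent never acts unilaterally; every outward-facing action passes through the human decision function.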
Why stop though? Google didn't say Altavista and Yahoo are good enough for the majority of power users, let's not create something better.
When you have something good at your hand and you see other possibilities, would you say let's stop, this is enough?
I’m good for now.
I am already tired of the disaster that is social media. Hilariously, we’ve gotten to the point that multiple countries are banning social media for under 18s.
The costs of AI slop are going to be paid by everyone, social media will ironically become far less useful, and the degree of fraud we will see will be … well, cyber fraud is already terrifying; what’s the value of infinity added to infinity?
I would say that tech firms are definitely running around setting society on fire at this point.
God, they built all of this on absurd amounts of piracy, and while I am happy to dance on the grave of the MPAA and RIAA, the farming of content from people who have no desire to be harvested is absurd. I believe Wikipedia has already started seeing a drop in traffic, which will lead to a reduction in donations. Smaller sites are going to have an even worse time.
Downside: your employees’ agents decide that they should collectively bargain.
Weird amounts of overlap between the two.
OpenAI might burn through all their money, and end up dropping support for these features and/or being sold off for parts altogether.
I just don’t see OpenAI winning this in the long run. And I’m saying that while I am subscribed to ChatGPT lol.
It is also interesting to contrast calling them by name vs. the other example, “a major semiconductor company”, not called by name. Though of course, there are also different reasonable ways to interpret that.
Ok how about you tell us one thing this shit is actually doing instead of vague nonsense.
> "At OpenAI alone, something new ships roughly every three days, and that pace is getting faster."
- We're seeing all these productivity improvements and it seems as though devs/"workers" are being forced to output so much more, are they now being paid proportionally for this output? Enterprise workers now have to move at the pace of their agents and manage essentially 3-4 workers at all times (we've seen this in dev work). Where are the salary bumps to reflect this?
- Why do AI companies struggle to make their products visually distinct? OpenAI Frontier looks the exact same as the OpenAI Codex App, which looks the exact same as GPT
- OpenAI going for the agent management market share (Dust, n8n, crewai)
Ahh, but it's not 2022 anymore; even senior devs are struggling to change companies. The only companies that are hiring are knee-deep in AI wrappers and have no possibility of becoming sustainable.
Over the past few months mentions of AI in job applications have gone from "Comfortable using AI assisted programming - Cursor, Windsurf" to "Proficient in agentic development" and even mentions of "Claude code" in the desired skills sections. Yet the salary range has remained the exact same.
Companies are literally expecting junior/mid-level devs to have management skills (for those even hiring juniors). They expect you to come in and perform at the level of a lead architect: not just understand the codebase but the data, the integrations, build pipelines to ingest the entire company's documentation into your agentic platform of choice, then begin delegating to your subordinates (agents). Does this responsibility shift not warrant an immediate compensation shift?
Because that requires human thought, and it might take a couple of weeks more to design and develop. Doing something fast is the mantra, not doing something good.
Revenue bumps and ROI bumps both gotta come first. Iirc, there's a struggle with the first one.
Let me increase salary to all my employees 2x, because productivity is 4x'ed now - never said a capitalist.
Increased efficiency benefits capital not labor; always good to remember to look at which side you prefer to be on
> Enterprises are feeling the pressure to figure this out now, because the gap between early leaders and everyone else is growing fast.
> The question now isn’t whether AI will change how work gets done, but how quickly your organization can turn agents into a real advantage.
FOMO at its finest. "Quick, before you're left behind for good this time!"
The idea itself makes sense. It is the kind of AI application I've been pitching to companies, though without going all-in on agents. But I think it would be foolish for any CEO to build this on top of OpenAI instead of a self-hosted model trained for them. You're just externalizing your internal knowledge this way.
I call BS right there. If you could actually do that, you’d spin up a “chip optimization” consultancy and pocket the massive efficiency gain, not sell model access at a couple bucks per million tokens.
There should be a massive “caveats and terms apply” on that quote.
So far the AI productivity gains have been all bark and no bite. I’ll believe when I see either faster product development, higher quality or lower prices (which indeed happened with other technological breakthroughs, whether the printing press or the loom) - if anything, software quality is going down suggesting we aren’t there yet.
> At a major manufacturer, agents reduced production optimization work from six weeks to one day.
Make of that what you will.

In our company we have a list of long-tail "workflows" or "processes" that really just involve reading a document and filling out a form.
For example, how do I even get access to a new DB? Or a new AWS account?
Can this tool help us create an agent that can automate this with some reasonable accuracy?
I see OpenAI Frontier as a quick way to automate these long-tail processes.
Because for many of us, AI is "not approved until legal says so".
The question of lock-in is also a major one. Why tether your workflow automation platform to your LLM vendor when that may just be a component of the platform, especially when the pace of change in LLMs specifically is so rapid in almost every conceivable way. I think you'd far rather have an LLM-vendor neutral control plane and disaggregate the lock-in risk somewhat.
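One way to picture that disaggregation: the control plane codes against a small vendor-neutral interface, and each LLM vendor (or a self-hosted model) is just an adapter behind it. This is a minimal sketch of the idea; the `LLMProvider` interface and `StubProvider` class are hypothetical names of my own, not any real platform's API.

```python
from typing import Protocol

class LLMProvider(Protocol):
    """The only surface the workflow platform depends on.
    Swapping vendors means swapping the adapter, not the platform."""
    def complete(self, prompt: str) -> str: ...

class StubProvider:
    """Stand-in for a real adapter (hosted API or self-hosted model)."""
    def __init__(self, name: str):
        self.name = name

    def complete(self, prompt: str) -> str:
        # A real adapter would call the vendor's API here.
        return f"[{self.name}] response to: {prompt}"

def run_workflow(provider: LLMProvider, task: str) -> str:
    # Workflow logic is identical no matter which provider is plugged in.
    return provider.complete(task)

print(run_workflow(StubProvider("self-hosted"), "summarize the ticket"))
```

With this shape, the rapid churn in LLMs becomes a one-adapter change rather than a platform migration, which is the lock-in hedge the comment is arguing for.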