Seriously though, "AI fucks up" is a known thing (as is "humans fuck up"!), and the people who are using the tech successfully account for that and build guardrails into their systems. Use version control, build automated tests (e2e/stress, not just unit), update your process so you're not incentivizing dumb shit like employees dumping unchecked AI PRs, etc.
But in that article, metrics and facts without context or deeper explanation don't mean much either.
> 95% of AI pilots didn’t increase a company’s profit or productivity
If 5% do, that could very well be enough to justify it, depending on why and after how much time the pilots fail. It's widely touted that only 5% of startups succeed, yet startups overall have brought immense technological and productivity gains to the world. You could live in a hut and be happy, and argue none of it is needed, but nonetheless the gains by some metrics are here, despite 95% failing.
The article throws out numbers to make a point that it wanted to make, but fails to account for any nuance.
If there's a promising new tech, it makes sense that there will be many failed attempts to make use of it, and it makes sense that a lot of money will be thrown in. If 5% succeed, each attempt costs 1 million, and a success pays off 1 billion, that's already a 50x expected return.
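To make the arithmetic above explicit (using the comment's illustrative figures of a 5% success rate, $1M per attempt, and $1B payoff on success):

```python
# Expected-return arithmetic from the figures above.
p_success = 0.05
cost_per_attempt = 1_000_000    # $1M per pilot/attempt
payoff_on_success = 1_000_000_000  # $1B if it works

# Expected payoff per dollar spent:
expected_return = p_success * payoff_on_success / cost_per_attempt
print(expected_return)  # 50.0, i.e. a 50x expected return
```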
In my personal experience, used correctly it increases my own productivity a lot, and I've been using AI daily ever since the GPT-3.5 release. I would say I use it during most of what I do.
> AI Pullback Has Officially Started
I'm personally not seeing this at all, based on how much I pay for AI, how much I use it, and how I see it iteratively improving while already being so useful to me.
We are building and seeing things that weren't realistic or feasible before now.
And that’s ignoring the rampant copyright infringement, the exploding power use and accompanying increase in climate change, the harm it already does to people who are incapable of dealing with a sycophantic lying machine, the huge amounts of extremely low quality text and code and social media clips it produces. Oh and the further damage it is going to do to civil society because while we already struggled with fake news, this is turning the dial not to 11 but to 100.
I'd bet that some sort of "exponentially grow the learning rate until shit goes haywire, then roll back the weights" scheme is actually a fairly decent algorithm (something like backtracking line search).
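A minimal sketch of that scheme, on a toy quadratic loss (the loss function, growth/shrink factors, and checkpoint logic are all my own illustrative choices, not anything from the thread):

```python
import numpy as np

# Sketch: aggressively grow the learning rate each step; if the loss
# blows up, roll the weights back to the last good checkpoint and cut
# the rate, akin to backtracking line search.
# Toy loss f(w) = ||w||^2, gradient 2w.

def loss(w):
    return float(np.dot(w, w))

def grad(w):
    return 2.0 * w

def train(w, lr=0.01, grow=2.0, shrink=0.5, steps=50):
    best = loss(w)
    checkpoint = w.copy()
    for _ in range(steps):
        w = w - lr * grad(w)
        cur = loss(w)
        if cur > best:               # went haywire: rollback, back off
            w = checkpoint.copy()
            lr *= shrink
        else:                        # progress: checkpoint, push lr up
            best = cur
            checkpoint = w.copy()
            lr *= grow
    return w

w_final = train(np.array([3.0, -4.0]))
print(loss(w_final))  # converges to (near) 0
```

The learning rate keeps doubling until a step overshoots, then halves back to the largest rate that still made progress, so it spends most steps near the biggest stable step size.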
Use it where it works; ignore the agent hype and other bullshit peddled by 19-year-old dropouts.
Unlike the 19-year-old dropouts of the 2010s, these guys have brain rot, and I don't trust them after talking to such people at startup events and hearing their black-pill takes. They have products that don't work and they lie about their numbers.
I’ll trust people like Karpathy and others who are genuinely smart as hell and not Kumon products.