Almost exactly a year ago, Goldman Sachs produced a report showing that the investment in AI is untenable. They showed up with the receipts. It was quite good. They also gave a fair shake to what's working and what isn't in the space, but most people didn't read it.
https://www.goldmansachs.com/insights/top-of-mind/gen-ai-too...
Useful insights, but definitely a PR article.
> Don’t overinvest in large language models
> Don’t let LLM success cloud your judgment
> Explore the world of AI beyond LLMs
> Let employees experiment
a.k.a. nuffing.
Reject LLM, embrace traditional keyword SEO?
Regarding "AI", at least this CTO is cautious, but also says: “Most jobs at this point can benefit from AI.”
That is of course incorrect. I'm waiting for a CTO who dismisses "AI" outright. That is probably too dangerous in the current climate.
AI is shorthand for “we have no clue how to solve this problem, but AI can do it for us.”
I wish that weren't the case. Generative AI is very cool tech, but it's not magic. And machine learning, the "other" AI, powers much of what we consider normal in our tech interactions.
I find this ironic, given that it is extremely difficult to have an AI solve a problem that you cannot, yourself, solve in some capacity.
Triggers will come from outside. But thanks to "scale," even after that their response will be "too big to fail, protect our ass please."
That's how they survived after the Iraq/Afghanistan fiasco, the 2008 GFC, Wikileaks/Panama Papers, etc.
Quantum computing is probably due for another round, or perhaps superconductivity.
To me it speaks to a fundamental weakness of the human psyche: we're all still mostly driven by herd behavior.
And to your point, I have seen more and more stories about quantum lately. Not a lot, but the noise floor is coming up a little bit.
Now the hype is insane again.