changing the input (data) means you get a different output (model).
changing the source data doesn’t make a model non-deterministic; it just produces a different (still deterministic) model.
as an end-user of AI products, your perspective might be that the models are non-deterministic, but really it’s just different models returning different results … because they are different models.
“end-user non-determinism” is only really solved by pinning the same version of a trained model (like a normal software dependency), and accepting that upgrading the (model) dependency later may take a bunch of work.
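To make that concrete, here’s a minimal sketch of what “pin the model like a dependency” looks like in practice. This assumes the OpenAI Python client; the snapshot name and prompt are just placeholders, not anything from the article:

    # Illustration only: pin an exact dated model snapshot the way you'd pin a
    # library version, so every call hits the same trained weights.
    from openai import OpenAI

    PINNED_MODEL = "gpt-4o-2024-08-06"  # exact snapshot, not a moving alias like "gpt-4o"

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=PINNED_MODEL,
        temperature=0,   # remove sampling randomness as far as the API allows
        seed=42,         # best-effort reproducibility, not a hard guarantee
        messages=[{"role": "user", "content": "Summarise this contract clause: ..."}],
    )
    print(response.choices[0].message.content)

Upgrading later then means bumping the pinned name and re-running whatever evaluation you trust, much like bumping any other dependency.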
Anyway. To me it just speaks to the disdain for semi-intellectual work. People seem to think producing text has some value of its own. They think they can short-circuit the basic assumption that behind every text is an intention that can be relied upon. They think that if they substitute this intention with a prompt, they can create the same value. I expect there to be some kind of bureaucratic collapse because of this, with parties unable to figure out responsibility around these zombie-texts. After that begins cleaning up, legislating and capturing in policy what the status of a given text is etc. Altman & co will have cashed out by then.
> People seem to think producing text has some value of its own.
Reading this sentence makes me think the author has never actually seen agentic work in action. Producing value out of text does work, and one good example is putting the model in a loop with some form of output verification. It's easy to do with programming – type checker, tests, linter etc. – so it can chat by itself with its own results until the problem is solved.
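For what it's worth, a rough sketch of that loop looks something like the following. ask_llm is a placeholder for whatever model call you use, and ruff/pytest just stand in for whatever verifiers you actually run:

    # Minimal sketch of "LLM in a loop with a verifier".
    import subprocess

    def ask_llm(prompt: str) -> str:
        raise NotImplementedError  # plug in your model call here

    def verify(path: str) -> tuple[bool, str]:
        """Run the checks; return (passed, combined output)."""
        checks = [["ruff", "check", path], ["pytest", "-q"]]
        output = []
        for cmd in checks:
            result = subprocess.run(cmd, capture_output=True, text=True)
            output.append(result.stdout + result.stderr)
            if result.returncode != 0:
                return False, "\n".join(output)
        return True, "\n".join(output)

    def solve(task: str, path: str, max_rounds: int = 5) -> bool:
        prompt = task
        for _ in range(max_rounds):
            code = ask_llm(prompt)
            with open(path, "w") as f:
                f.write(code)
            ok, feedback = verify(path)
            if ok:
                return True
            # Feed the verifier's complaints back in, so the model
            # "chats with its own results" instead of a human relaying errors.
            prompt = f"{task}\n\nYour last attempt failed these checks:\n{feedback}\nFix it."
        return False

The important bit isn't the specific tools, it's that the verifier output goes straight back into the next prompt rather than a human copy-pasting errors around.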
I also find it personally strange that discussions so often need a reminder that the rate of change in capabilities is also a big part of "the thing" (as opposed to pure capabilities today). It changes on a weekly/monthly basis, and it changes in one direction only.
the kind of people the parent comment was talking about tend to believe they can send three emails and make millions of pounds suddenly appear in business value (i’m being hyperbolic and grossly unfair but the premise is there).
they think the idea is far more valuable than the implementation - the idea is their bit (or the bit they’ve decided is their bit) and everyone else is there to make their fantastic idea magically appear out of thin air.
they aren’t looking at tests and don’t have a clue what a linter is (they probably think it’s some fancy device to keep lint off their expensive suits).
Their tech teams should know better, but it’s hard to say “no”, when it feels like your salary depends on you saying “yes”.
There's some truth to the difference between "short term profits" and "my salary depends on this" being whether you're the boss or the employee.
But this interview is only fear-mongering to sell expensive models. Ditching the industry leaders.
If the big corporations can't move fast enough and 100 startups gamble on getting there, eventually one of them will be successful.
Unless it is something like Meta; then they have a Zuck, someone smart, with enough oversight and power, who can drain the swamp and make the whole machine move.
From the Chatterbox site:
> Our patented AIMI platform independently validates your AI models & data, generating quantitative AI risk metrics at scale.
The article's subtitle:
> Security, not model performance, is what's stalling adoption
As a software engineer I want everything to be perfect, but as an entrepreneur I don't.
How many people here have been subjected to that "looks good, put it in production!" directive after showing off a quick POC for something? And then you have to explain how far away from being production-ready things are, etc...
There's a reason wireframing tools intentionally use messy lines, and why most UX people know better than to put brand colours in wireframes.
His background was electrical engineering but it applies doubly in software.
Every instance was some variation of a RAG chat/langgraph thing. On multiple occasions I heard "I don't see what value this has over ChatGPT", except now they had 5-6 figure cloud bills to go with it.
Technical users really weren't thrilled with it: they wanted usable insights from their data (something best served by a db query) but ended up with LLM copypasta of internal docs, and they had expected significant functionality and utility on top of "regular" LLM use.
Stakeholders constantly complained (rightly so) about inaccurate responses, or "why is this presented in this fashion", which meant the data team folks spent hours coming up with new prompts and crossing their fingers.
“Why is this dashboard showing this number?”
That’s my concern with any data “insight” magic. How do you debug what it’s telling the users?
neepi•8mo ago
No you can't solve everything with a chatbot because your CEO needs an AI proposition or he's going to look silly down the golf course with all the other CEOs that aren't talking about how theirs are failing...
tough•8mo ago
Hopefully more companies will encourage their own employees to explore how AI can fit into their current workflows, or improve them, rather than hoping that some magical thinking will solve their problems.
SirBomalot•8mo ago
Speaking with the consultants lets me assume that they too get pressure from the top to do AI stuff, maybe because they fear that otherwise they will be replaced by AI, or something like that. It really seems somewhat desperate.
delusional•8mo ago
At my job it's been coming through the regular channels, but is empowered by aligning with current trends. It's easier to sell an AI project, even internally, when the whole world is talking about it.
tough•8mo ago
maybe they're not directly pushing AI ('cause they don't need to), but they're happy to accept shitty jobs that make no sense, just 'cause
delusional•8mo ago
I don't think that's the right distinction to draw here. It's definitely being pushed, just not by consultants.
> big consultancies are happy to take customers with -absurd- requests
This is of course always true. Consultants usually don't really care where they make the money; as long as you pay them, they'll find someone stupid enough to take on your task.
That's not what I'm seeing though. We're not hiring outside consultants to do big AI projects; we have people within our organization who have been convinced by the public marketing and are pushing for these projects internally. I'm not seeing big consultancies accepting contracts, I'm seeing normal working people getting consultant brain and taking this as their chance to sell a "cutting edge" project that'll disrupt all those departments whose work they don't understand.
tough•8mo ago
AI is now the vector du jour for getting an easy YES from the chain of command.
Sad state of affairs I guess. At least put in the effort to know wtf you want to build and, more importantly, WHY or HOW it is better than current solutions.
ben_w•8mo ago
(Simpsons kind, I don't know enough about civil engineering to comment on the real one).
steveBK123•8mo ago
"I have to do some __ / have a __ strategy / hire a Head Of __ or I look bad"
steveBK123•8mo ago
There are a lot of leaders who are looking for problems for their solutions.
edit: I say this as someone who has been stuck on top-down POCs which I later found out originated from "so my brother-in-law has this startup", where the management questions were mostly "so how could we use this here?" rather than "how is it performing / is it a good value / does it solve the problem we want it to solve".
Some tech cannot fail, it can only be failed.
blitzar•8mo ago