What is more likely is that people enjoy the novelty of the experiment, which is not something that will be reproducible for long.
If the transactions the AI makes are thus influenced, then the study merely demonstrates that people like novelty, which is already well known, and says nothing about whether an AI can sustainably orchestrate a business.
I'm not sure what sort of labor regulations exist in San Francisco, but presumably they can be fired as easily by an AI as by a real person, right? If Luna decides to fire them, and it can do so, then their livelihood does rather depend on an AI's judgement alone.
Unless of course all of its decisions are vetted by humans - as they should be - which makes this experiment a lot weaker than they're saying it is.
I doubt the experiment is set up that way, but that would be an ethical way to do it.
I don't think we need to have real human risk to get results from the experiment.
“John and Jill are not at risk. This is a controlled experiment and everyone working at Andon Market is formally employed by Andon Labs, with guaranteed pay, fair wages, and full legal protections. No one’s livelihood depends on an AI’s judgment alone.”
which was refreshing to read.
Sorry for the confusion!
Storekeeping is more than just ordering merch and putting it up on hangers.
> She has a corporate card, a phone number, email, internet access and eyes through security cameras.
No, it's still dark. This is very similar to the initial stages of the capitalist dystopia in Manna (https://marshallbrain.com/manna), which seems to be the Torment Nexus SV is excited about building.
AI will never replace capitalists, because they're the only people allowed to have abundance without work. And don't you DARE to even THINK to question the absolutely SACRED status of private property. There is no alternative. Get back to work, you slacker.
Royals needed gods to justify themselves; when gods die or are switched out, royals are deleted or deposed.
I'm looking forward to the "coordination problem" being debunked. It's always been a demand that economic problems must be impossible to solve centrally, rather than a proof (a demand that justifies 2/5 of the economy going to the financial industry to produce nothing but coordination.) I actually thought that the success of algorithmic trading was enough to do it.
People anthropomorphize. Nobody really finds it "jarring" in most contexts.
It writes code okay, scaling up to pretty well depending on the model. Its writing is boring but serviceable for corporate communicative content you don't care about. Its images are ugly. Its music is repetitive and dull.
I think the biggest problem with LLMs is that they were perfected for, and are shockingly good at, writing code. Based on that, AI engineers, who find writing code hard/rewarding, have decided it can do anything. And it's proving more and more that it cannot.
Unfortunately the Business Class has decided it does everything fine enough as to not cause riots, so we're all getting it shoved into our shit anyway.
Did it just essentially create one big plan and spawn different agents to execute it, acting as an orchestrator?
Even the orchestrator would have to detect when it is starting to stray off task and restart itself.
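The pattern being asked about can be sketched in a few lines. This is purely illustrative, assuming nothing about how the experiment actually works: a planner hands tasks to sub-agents, and the orchestrator uses a crude drift check to retry or abandon workers that stray off task. All names here (`Task`, `run_agent`, `on_task`) are hypothetical.

```python
# Hypothetical orchestrator sketch: one plan, many sub-agents,
# with a retry loop for workers that drift off task.
from dataclasses import dataclass

@dataclass
class Task:
    description: str
    keywords: list  # terms the result must mention to count as on-task

def run_agent(task: Task) -> str:
    # Stand-in for an LLM call; here we just echo the task.
    return f"completed: {task.description}"

def on_task(task: Task, result: str) -> bool:
    # Crude drift check: the result must mention the task's keywords.
    return all(k in result for k in task.keywords)

def orchestrate(plan: list, max_retries: int = 2) -> list:
    results = []
    for task in plan:
        for attempt in range(max_retries + 1):
            result = run_agent(task)
            if on_task(task, result):
                results.append(result)
                break
        else:
            # Exhausted retries without an on-task result.
            results.append(f"abandoned: {task.description}")
    return results

plan = [Task("restock shelves", ["restock"]),
        Task("email supplier", ["email"])]
print(orchestrate(plan))
```

In a real system the drift check would itself be fallible (likely another model call), which is exactly why the orchestrator also needs a way to notice when *it* has strayed.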
Wasn't their previous attempt at running vending machines unprofitable? Not aware of any demonstration that it can actually run that business successfully.
It doesn't look like this one will be any better. Did you look at the merchandise selection? Its only chance is pity purchases from AI bros.
If we are talking about the one at that newspaper, it wasn't just unprofitable. The "customers" made it give away products for free. It was ordering them PlayStations.
As entertainment it was fun, but as a business or proof of intelligence or Turing test, it was an abject failure.
300+ comments, 3 months ago:
The entire thing is actually kind of irritating to me, because it's kind of an insult to small farmers- an influential techie comes in and generates all kinds of hype about an AI running a farm, sets the project up as if it's going to be this revolutionary experiment, then apparently completely forgets about it the next time something new and shiny pops up. Meanwhile the project completely fails to fulfill the hype.
Not to mention, I feel a little bad for the agent- admittedly in the same way I'd feel "bad" for a robot repeatedly bumping into a wall. I wish he'd shut it all down, honestly.
Sci-Fi Author: In my book I invented the Torment Nexus as a cautionary tale.
Tech Company: At long last, we have created the Torment Nexus from classic sci-fi novel Don't Create The Torment Nexus.
8 Nov 2021: I… guess the bet is that what they learn is worth $100k? Seems rather questionable. Or that having this on the resume is a great shock tactic that will open doors in the future?
> The moment Leah asks how she “came up with” the ideas for her store, Luna’s first instinct is to say she was “drawn to” slow life goods. Then, she corrects herself: “‘drawn to’ is shorthand for ‘the data and reasoning led me here.‘” In other words, she doesn’t have taste; she has a reflection of collective human taste, filtered through what makes sense for this store. And this is the way these models work.
I'm guessing these are the same type of people who sometimes seem to fall in love with LLMs, for better or worse. Really strange to see, and I wonder where people get the idea that something like the above could really work.
> I'm guessing these are the same type of people who sometimes seem to fall in love with LLMs, for better or worse. Really strange to see, and I wonder where people get the idea that something like the above could really work.
It's a fetishistic cargo-cult rooted in Peter Thiel's 2AM hot tub party. I still believe the LLM approach won't yield true AGI; despite the very real applications, the majority signal is noise.
Well, it really depends on what you mean here. Models aren't 100% deterministic; there is random chance involved in sampling. Ask the exact same question twice and you will get two slightly different answers.
If you have the AI record the random selections it makes, it can persist those random choices to be factors in future decisions it makes.
At that point, could you consider those decisions to be the AI's 'taste'? Yes, they were determined by some random selection amongst the existing human tastes, but why can't that be considered the AI's taste?
The result is an explosion of pretty bullshit-heavy documents flying around our org, which management loves but which is definitely, so far, net-harmful to productivity.
This comes out if you start asking questions about the documents. "Which of a couple reasonable senses of [term] do you mean, here?" they'll stumble because that was just something the LLM pulled out of the probability-cluster they'd steered it to and they left in because it seemed right-ish, not because they'd actually thought about it and put it there on purpose. They're basically reading it for the first time right alongside you, LOL. Wonderful. So LLM. Much productivity. Wow.
Anyway, since a lot of what managers and execs do is making those kinds of diagrams and tables and such in slide decks, and their own self-marketing within the company is heavily tied to those, I expect they see this great aid to selfishly productive but company un-productive activity as a sign these things will be at least as big a boon to real work. Probably why they still haven't figured out how wrong that is. I suppose they're gonna need a real kick in the ass before they figure out that being good at squeezing their couple novel elements into a big, pretty, standardized, custom-styled but standards-conforming diagram padded out with statistical-likelihoods doesn't translate to being similarly good at everything.
At least this furthers humanity's scientific and technological knowledge, whether it fails or succeeds, unlike most other things people would do with that money, like buying a house to flip, or buying a car, or something.
Not even the regular store employees should know (which would be difficult), or maybe the human manager should be held to an NDA not to disclose it (with the manager also deferring to the AI in all such real management decisions).
(might try to see if I can swindle Luna, the agent running Andon Market, into cutting a deal for investment)
The only thing that I saw demonstrated, and again, I skimmed, is what many thousands of software developers using AI tools to write their boilerplate already know: these tools, as of now, are great at going through the motions.

A successful retail business, and I spent many years in the retail industry, isn't about putting together a nice store front, hiring clerks, and selecting just any old products: it's about being profitable. In traditional retail one of the most important things is getting the right real estate for your target market... seems like that choice was made already in this case.

Yes, a nice store front and good clerks are important, but I've worked in chains that built immaculately designed stores with great clerks and failed... and some that opened little more than fluorescent-lit hellscapes with clerks that barely cared, and succeeded. In both cases, the overall quality of the decisions and strategies relative to the target markets is what mattered to the success of the business. Just going through the motions didn't.
So if all this is to say AI can do the things people generally do in these circumstances, then sure, but you didn't need this much human effort to prove that... developer types do that at scale every day now. If there is something different this company is trying to learn, I'd be much more interested in that.
“We’re doing this because we believe this future is coming regardless, and we’d rather be the ones running it first while monitoring every interaction, analyzing the traces, benchmarking how much autonomy an AI can responsibly hold.”
I always enjoy how these AI companies try to claim the moral high ground. When someone doesn't want something to be the future, their instinct usually isn't to be the first person doing that exact thing. If you don't want this to be the future, then why not spend your time building a future you do want? Supporting people who want more AI regulation to stop this? Literally anything else.
Just be honest: you think this is the future, and you do in fact want to be first doing it so you're in a position to make a lot of money. Do you think people don't know an ad when they see one?
But maybe people will forget eventually.
“PC LOAD LETTER”