I think you may just be noticing sloppy attention to detail, i.e., not proofreading and relying on AI that is not quite ready, similar to devs just committing AI slop without review.
I suspect someone is going to train a marketing-specialized AI at some point, focused on that specific type of promotional, manipulative language. But frankly, I don’t see it being long-lived either, because I see marketing being totally nullified by AI. You don’t need marketing when humans are no longer the ones making decisions/buying.
I guess they need more funding and grants. A human does not need to ingest the entire Internet in order to plagiarize what was read. A human does not need a prompt in order to take action. Two humans can have a conversation that does not collapse immediately.
These people apparently need coaching on the most basic activities. How to solve this in the future? Perhaps women should refuse to procreate with "AI" researchers, who prefer machines anyway.
No, clearly.
> This led the court to conclude that the “[a]uthors’ complaint is no different than it would be if they complained that training schoolchildren to write well would result in an explosion of competing works.”
What am I even reading haha
Edit: Okay after reading it a bit, this paper is actually pretty funny
And that’s with the huge "pre-training" data stored in our genetic code (comprising billions of years of evolution), alongside epigenetic inheritance.
"Next token" prediction is (primary) local, in the sense that the early layers are largely concerned with grammatical coherence, not semantics, and if you shifted the text input context window by a few paragraphs, it would adjust the output accordingly.
It's not _mathematically_ the same, but I do think the mechanics are similar.
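A minimal sketch of that locality point, assuming the Hugging Face transformers library with GPT-2 purely as an illustration (any causal LM would behave the same way): the model conditions only on the window it is shown, so sliding the window shifts the continuation.

    # Sketch: a causal LM predicts the next token from whatever window
    # it can see; shift the window and the continuation tracks it.
    # Assumes `pip install transformers torch`; GPT-2 is just an example.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    text = ("The committee met on Tuesday. They discussed the budget. "
            "After a long debate, the chair announced that")
    ids = tok(text, return_tensors="pt").input_ids

    # Full window vs. a window shifted past the opening tokens.
    for window in (ids, ids[:, 8:]):
        out = model.generate(window, max_new_tokens=8, do_sample=False,
                             pad_token_id=tok.eos_token_id)
        print(repr(tok.decode(out[0, window.shape[1]:])))

Greedy decoding keeps it deterministic, so any difference between the two printed continuations comes purely from what the window lets the model see.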
ticulatedspline•3h ago
Categorizing the difference with AI, it's much the same as with a person: context. If you ask a person what the capital of Florida is and they tell you "Pink Elephant, and the capitol building is a literal giant pink elephant with an escalator up its trunk", my, how creative, but it's a lie. But press them and it seems they genuinely believe it, swearing up and down they saw it in a book. Now it's a hallucination. Though is it creative if they believe they're just regurgitating the contents of a book? Technically yes, but the creativity is subconscious.
Now if you asked the same person to make up a fictitious capital for a fake state and got that answer, you'd say it was creative, and not a lie or a hallucination, since the context was fiction to begin with, even if the creative thought comes from the same place in both instances. If there's no objectively correct answer and it isn't a copy of an existing known one, then it's "creativity".
The biggest difference is that hallucinations are rare in humans; with the answer above we'd probably assume the person was being flippant, or didn't know and was a pathological liar (and not a very good one). We don't attribute those motives or that capacity to AI, though: the AI genuinely seems to believe it's right, that the response is coming honestly, so we categorize all of its factual errors as hallucinations.