If you travel to some random beach town and buy a random item from a random street merchant, they won't give you a refund. The main issue to bridge is ensuring the item is as expected, since you can't physically inspect it prior to purchase.
It'll be interesting to see how that's solved. I participate in Kickstarter, which de facto doesn't really offer refunds, so maybe it'll be the same.
Except where enshrined by law, e.g. in the EU.
Depends on how large the beach town is, and what country. Whenever I've needed to return or exchange things in those types of places (in South America, Southern Europe, and around East Asia), it's never been a problem even without a receipt, as usually the vendor recognizes you, or the person who sold it to you is around somewhere.
I can remember one clear time (out of probably dozens) when someone refused to take back an item that clearly didn't work the way it was sold to us.
I don’t think so—it makes the risk of purchase too high, and people will buy less. Which is not what the sellers want.
> it, along with free shipping were pretty rare before Amazon and Walmart.
Refunds were not rare before Walmart and Amazon.
For comparison, Ireland is amongst the richest places in Europe these days, and they never colonised anyone. They used to be colonised.
meanwhile my local high street is essentially dead
Caveat emptor
Did you return the one you bought locally?
More suburban strip malls, more fluorescent lighting, more people working mindless do-nothing retail jobs for minimum wage, higher prices due to zero economies of scale, inefficiency from every local store reinventing the wheel of staffing/recruiting/scheduling/warehousing/anti-theft/POS/advertising/etc.
If online shops have to raise prices to combat fraud, it doesn't suddenly turn Springfield, Ohio into the Zurich city center.
Ultimately, it would be a bit ironic if generative AI ends up kneecapping itself, either through regulation (because businesses and governments will be unlikely to tolerate hiring fraud, returns fraud, etc. beyond a threshold), or by pushing things back into meatspace through on-site interviews, reliance on physical stores, elimination of online courses and the like, which is less amenable to its application.
Homework isn't any more ineffective imo. The way we educate and grade is.
Imagine if people went to school to learn something rather than to "level up". And you earned a job based on what you know, or what you can do, rather than what degree you banked.
Then maybe you would want to do the homework.
If gen AI helps us flip the current system on its head, that would be a good thing.
As I see it, companies will want some sort of artifact that tells them that the person has some basic knowledge in their field of study, which would make us come back to the same system that many detest.
Otherwise we'll just end up with Samsung Korea's version of the entrance test[1], which is like the SAT but for getting a job. Only a handful of companies can realistically do that, and as such there is very little appetite for it in the "West".
Sure, but unfortunately a degree doesn't really do that today. When you interview someone, do you care at all about their degree or grades? Does it give you confidence they know something? I don't think so.
Which is my main point, that this isn't a new problem.
MOOCs were the hope for education, but they didn't take off either. Now any remote learning will need physical examinations, which makes certification pathways for everything more expensive.
Even if you want to study, our distractions are crafted by people who spend hours figuring out the right dopamine reward schedules to keep you distracted.
I don't think so. Proving you can pass a test is pretty useless imo. Especially when it typically boils down to a memorization test and the subject matter is largely irrelevant.
That wasn't possible prior to LLMs. Cheating on code or generating homework assignments was nowhere near this trivial either.
is this something anyone has actually seen happen, or is it part of the AI hype cycle?
No shit it's happening. Now, on what scale, and should we care?
There are enough fraudsters out there that someone will try it, and they're dumb enough that someone will get caught doing it in a hilariously obvious way. It would take a literal divine intervention to prevent that.
Now, is there enough AI-generated fraud for anyone to give a flying fuck about it? That's a better question to ask.
Good luck.
Well then, here's my refutation: some say this isn't happening at the scale this article claims it's happening.
That should convince you, by your own admission.
Besides, it's the article's responsibility to provide evidence for its points. Circular links leading to the same handful of stories are not "preponderant".
You might as well be asking for proof that humans use AI to generate porn.
Gigantic bot farms taking over social media
Non-consensual sexual imagery generation (including of children)
LLM-induced psychosis and violence
Job and college application plagiarism/fraud (??)
News publications churning out slop
Scams of the elderly
So don't worry: in a few months we can come back to this thread and return fraud will be recognized to have been supercharged by generative AI. But then we can have the same conversation about like insurance fraud or some other malicious use case that there's obvious latent demand for, and new capability for AI models to satisfy that latent demand at far lower complexity and cost than ever before.
Then we can question whether basic mechanics of supply and demand don't apply to malicious use cases of favored technology for some reason.
Are you adjusting your perception of the problem based on fear of a possible solution?
Anyway, our society has fuck tons of protections against "what ifs" that are extremely good, actually. We haven't needed a real large scale anthrax attack to understand that we should regulate anthrax as if it's capable of producing a large scale attack, correct?
You'll need a better model than just asserting your prior conclusions by classifying problems into "actual threats" and "what ifs."
Also, I guess you're perfectly fine with me developing self-replicating gray nanogoo. I mean, I've not actually created it and eaten the Earth, so we can't make laws about self-replicating nanogoo, I guess.
How would an LLM help with that? Paper prescriptions can be copied using Word and a pen.
Word and a pen is still effort, compared to just Image + prompt.
The cost of the fix is the issue.
Those are resources that need to be spent to combat a type of fraud that was impossible at scale 4 years ago.
Sometimes we asked for a refund on some stuff, and my boss told me: "Don't deliver them the original hardware, pick a cheap one from the stock and put it in the box."
This worked very often without any questions, so we could just keep the good stuff :-D
I'd say the fears and defenses we had in place for speech online are having their foundations ripped out from under them.
Most of the concern used to be about government control, and that more speech would be the way to democratize and expand our agency over our lives.
However, now, especially with generative AI and LLMs, the primary vector for controlling the marketplace of ideas is to overwhelm the market.
Reduce the cost to make content, sandblast our receptors, create too many things to spend our collective energy on verifying, and the outcomes are the same as controlling what is thought and discussed.
Amazon is pretty notorious for shipping almost all of the risk onto the seller. I suspect that's the norm, these days, for most platforms.
Either way, I'm not sure how big of a problem this is to begin with, since you'd leave quite the paper trail. It's not a stunt you can pull off repeatedly without getting caught.
Where I live we don't have the habit of just putting the delivery on the porch for a few reasons. First, it's ridiculous if you think about it - no one signed for it, so how could you mark it as delivered? I don't get the US in that regard. Secondly, most of the houses have fences, so the delivery person can't come to the house even if they wanted to. You're basically required to meet the delivery person.
That would massively slow down delivery times, especially if the packaging is non-trivial to open/inspect. Not to mention that not everyone works a comfy remote job where they're at the door the entire day.
But I agree that not everything is easy to inspect. Most things seem to be, though. Another issue is not wanting third parties to see what you've purchased.
If you do come up with a 100% fool-proof implementation of this, you'll be able to get a lot of money for it, so do give it a try! Many have tried before you, yet it always turns out to be all but impossible. But who knows, maybe there is a way...
> We absolutely need certified no-AI digital proofs.
You can mitigate this by having a pool of devices (e.g. 1000) share keys. AFAIK TPM chips and U2F/FIDO keys do this to provide some anonymity while limiting the blast radius if a key does get leaked.
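A minimal sketch of that blast-radius idea in Python (made-up numbers and names, not any real TPM or FIDO protocol): each attestation key is shared by a batch of devices, so a verifier can't single out an individual device, and revoking a leaked key only invalidates its batch.

```python
import random

# Toy illustration (hypothetical numbers): many devices share each
# attestation key, so a verifier only learns "some device holding key k",
# and revoking one leaked key only breaks the devices that shared it.

NUM_DEVICES = 100_000
DEVICES_PER_KEY = 1_000              # ~1000 devices share each key
NUM_KEYS = NUM_DEVICES // DEVICES_PER_KEY

# Assign each device a shared key at "manufacture" time.
device_key = {device_id: random.randrange(NUM_KEYS)
              for device_id in range(NUM_DEVICES)}

revoked_keys: set[int] = set()

def revoke(key_id: int) -> None:
    """Revoke a key that has been extracted or leaked."""
    revoked_keys.add(key_id)

def is_trusted(device_id: int) -> bool:
    """A device's attestations are trusted iff its shared key isn't revoked."""
    return device_key[device_id] not in revoked_keys

# Blast radius of one leak: roughly NUM_DEVICES / NUM_KEYS devices.
revoke(42)
affected = sum(1 for k in device_key.values() if k == 42)
print(f"One leaked key invalidates ~{affected} of {NUM_DEVICES:,} devices")
```

Note that the anonymity set and the blast radius are the same number, so the batch size is a direct trade-off between privacy and the damage one extracted key can do.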
Maybe it needs to be similar to SSL certificates where trusted authorities can verify and revoke verification for digital assets.
This would limit authenticity to images taken by official software.
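For what the SSL-certificate-style idea above could look like, here's a rough sketch assuming the Python `cryptography` package; the authority, capture app, and revocation list are hypothetical stand-ins, and this is not C2PA or any real standard. The authority endorses the capture app's signing key, the app signs a hash of the image at capture time, and a verifier checks both signatures plus the revocation list.

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519

# Toy "certificate authority" and capture-app signing keys.
authority_key = ed25519.Ed25519PrivateKey.generate()
app_key = ed25519.Ed25519PrivateKey.generate()

# The authority endorses the capture app's public key (a toy certificate).
app_pub = app_key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw)
app_cert_sig = authority_key.sign(app_pub)

revoked_app_keys: set[bytes] = set()   # the "revocation list"

def sign_image(image: bytes) -> bytes:
    """Capture app signs a hash of the image at capture time."""
    return app_key.sign(hashlib.sha256(image).digest())

def verify_image(image: bytes, image_sig: bytes) -> bool:
    """Check the app's certificate, the revocation list, and the image signature."""
    if app_pub in revoked_app_keys:
        return False
    try:
        authority_key.public_key().verify(app_cert_sig, app_pub)
        app_key.public_key().verify(image_sig, hashlib.sha256(image).digest())
        return True
    except InvalidSignature:
        return False

photo = b"raw sensor bytes..."
sig = sign_image(photo)
print(verify_image(photo, sig))   # True: endorsed app, valid signature
revoked_app_keys.add(app_pub)
print(verify_image(photo, sig))   # False: the app's key has been revoked
```

Which circles back to the caveat above: only images produced by blessed software (and, realistically, blessed hardware holding the key) would verify, and everything else gets treated as unauthenticated.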
But that will only matter for highly legal or important things. For everything else, following that kind of information hygiene will be too much of a bother.
It’s like we’ve introduced an information weed, whose only goal is to create content that matches our dopamine receptors. Maybe our instincts will shift to assuming any shiny or eye catching content is fruit of the weed? Designed to attract us?