Oh well. At least there will probably be good money in cleaning up after these bozos.
Yep, I blame the agent for executing it.
It's just kind of laughable to suggest it's fine so long as you make sure to neither automate it nor use it with live data. Those things are the whole point.
There are plenty of ways to sandbox things for a particular use case.
LLMs are still incredibly useful under these constraints.
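For example, one option (just a sketch; it assumes Docker is installed, and the image name and resource limits are illustrative) is to run whatever the agent produces in a throwaway container with no network and a read-only filesystem, so a successful injection has nothing to exfiltrate and nowhere to send it:

```python
import subprocess

def run_untrusted(code: str, timeout_s: int = 10) -> str:
    """Execute agent-generated Python in a locked-down container:
    no network, read-only filesystem, capped memory/CPU, hard timeout."""
    result = subprocess.run(
        [
            "docker", "run", "--rm",
            "--network", "none",   # no exfiltration channel
            "--read-only",         # no persistent writes
            "--memory", "256m",    # cap resource abuse
            "--cpus", "0.5",
            "python:3.12-slim", "python", "-c", code,
        ],
        capture_output=True, text=True, timeout=timeout_s,
    )
    return result.stdout
```

That doesn't stop the model from being fooled, but it bounds the blast radius to whatever you explicitly mount into the sandbox.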
Can you expand on what you mean by this? If one LLM reads untrusted data then the output from that LLM can't be trusted by other LLMs. (Presume the untrusted data contains instructions to do bad stuff in whatever way is convincing to every LLM in the loop that needs to be convinced.) It seems that it's not possible to separate the data while also using it in a meaningful way, especially given the whole point of an MCP server is to automate agents.
I agree that LLMs are useful, but until LLM architecture makes prompt injection impossible, I don't see how an agent can possibly be secure against this, nor do I see how it helps to blame the user. The real problem with agents is that they decide what to do based on untrusted input. A program that owns its own workflow but uses LLMs can deliver much the same benefit without introducing the problem that a support ticket can tell it to exfiltrate or delete data, simply because that workflow is more specialized in what it does.
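To make that concrete, here's a minimal sketch of the "program owns the workflow" idea (`complete` is a hypothetical stand-in for whatever completion API you already use, and the labels are made up). The LLM only maps untrusted text onto a closed set of outcomes; the program, not the model, decides what happens next:

```python
from typing import Callable

ALLOWED_ACTIONS = {"refund", "escalate", "close"}

def triage_ticket(ticket_text: str, complete: Callable[[str], str]) -> str:
    """Map untrusted ticket text onto a fixed action set."""
    label = complete(
        "Classify this support ticket as exactly one of: "
        "refund, escalate, close. Reply with a single word.\n\n"
        + ticket_text
    ).strip().lower()
    # An injected instruction can at worst flip the label; it can never
    # name a new tool, target, or query, because the program never asks.
    return label if label in ALLOWED_ACTIONS else "escalate"  # fail closed
```

The worst a malicious ticket can do is get itself misclassified, which is a far smaller failure mode than an agent that will run arbitrary tools on request.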
The internet as we know it kind of sucks.
20-ish for sure. Facebook was really the big turning point imo
But maybe that's just splitting hairs
> You are a Gen Z App, You are Pandu,you are helping a user spark conversations with a new user, you are not cringe and you are not too forward, be human-like. You generate 1 short, trendy, and fun conversation starter. It should be under 100 characters and should not be unfinished. It should be tailored to the user's vibe or profile info. Keep it casual or playful and really gen z use slangs and emojis. No Quotation mark
`ssh site@coal.sh`
Surely spite is prepackaged in nvim by now
In any case I hope the creator was contacted. I'd say publishing active issues like this on a popular website is arguably as bad as releasing insecure software.
I understand letting them know. I agree. Painting them as equally wrong, no. "Popular website"; you mean 'theirs', right? The person with a whole 27 GitHub followers right now.
A meme-level mistake is one thing, but the developer being wrong doesn't give the author the right to be irresponsible.
I wouldn't suggest anyone recreate this process just to sanitize what's sitting around.
There you go, new trolley problem.
are you Christian Monfiston? that would explain a lot.
Now because of this post, these children are arguably at greater risk than before, since anyone can follow his step-by-step instructions. If he actually cared about user safety over HN karma, he would have escalated to Apple's App Store channel rather than publishing exploitation details.
The smugness isn't the only problem; it's the irresponsible disclosure wrapped in performative outrage.
You can criticize terrible security practices without creating a ready-to-replay tutorial for bad actors.
that's an easily verifiable lie. the author says the developer is not interested in fixing it just 3 comments above this one. why are you lying?
reporting this to Apple doesn't make sense either. Apple doesn't develop this app. Christian Monfiston develops this app.
Apple absolutely should be contacted here: they have App Store Review Guidelines that this app clearly violates. Apps in the kids category and apps intended for kids cannot include third-party advertising or analytics software and may not transmit data to third parties. This app is transmitting children's location data to third parties through unsecured APIs, which directly violates Apple's kids category guidelines.
But you're completely ignoring the main point: by publishing this detailed technical writeup instead of escalating to Apple, the author has now made these children MORE vulnerable.
Another reminder for the pile: the app store rules don't apply if you'll deliver them their sweet sweet 30% revenue cut
> Nearly a thousand children under the age of 18 with their live location, photo, and age being beamed up to a database that's left wide open. Criminal.
Hope that $750 was worth it.
The white-label versions were 100% identical in appearance and functionality except for the name in the App Store, the startup logo, and the color scheme. Our original app had been in the App Store for many years. Our results in submitting the three white-label apps to the App Store for review were: 1 approved immediately, 1 approved after some back-and-forth w/explanation of the purchase model, and another that never got approved because every submission received some nonsensical bit of feedback.
The most perfect description of the world we live in right now.
The only thing AI is accelerating is our slide into idiocracy as we choose to hand over responsibility for the design and control of our world to slop.
When the AI killbots murder us all, it won’t be because they are taken over by an AGI that made the decision to exterminate us.. but simply because their control software will be vibe coded trash.
Great read nonetheless.
Any service making money by collecting user data owes it to itself and to its users to conduct at least a basic security audit of its product. Anything less borders on criminal negligence. I don't think such a blatant failure to uphold users' trust deserves kindness.
The solution is not to aggressively shame people into doing things the way you learned to do them, but to provide not just education and support, but better tools and frameworks to build applications such as these securely.
What are we doing?
The post points out exactly what's wrong; if that wasn't already disclosed to the dev, it should have been sent to them prior to publishing the vuln(s). How can you educate somebody who doesn't actually know how to develop something? It's just prompting an AI.
The real story here is Apple's continually slipping standards.
There’s also some pervasive view that handcrafted human code is somehow of superior quality which… uh…
They did. They claim that the author was not keen on fixing the problems.
> There’s also some pervasive view that handcrafted human code is somehow of superior quality which… uh…
That's completely orthogonal to the issue here. Nice bait, but I'm not biting!
Whether handcrafted or vibecoded, a service is being shipped here to actual users with lives and consequences. The developer of the service is making money. The developer owes it to themselves and their users to conduct a basic security audit. Otherwise it is gross negligence!
As for the human code thing, it's not bait. I don't know if you were around in the php or early node days, but beginners were... not writing that kind of code.
I agree that the ease of vibecoding useful things, things people immediately want to pay money for, means that tackling security issues is a priority.
Saying that certain people shouldn't be allowed on the internet, based on your decades of experience _being_ on the internet, is just going to cause you to wither away and drown in cynicism.
I feel you've rather missed my point.
You said that we should educate people. I said that the app was just created via prompting. How can we impart years' worth of information unto someone who is LARPing as a dev and doesn't even know the fundamentals?
This is the programming equivalent of a kid getting access to their father's gun. The only thing you can do is tell them to stop, explain why it was wrong and tell them to educate themselves. It isn't our job to educate at that level as bystanders and perhaps even victims.
The people who had an influence on my life and taught me how to do things properly were those who took me seriously as someone building software. And this person built software, the same way I now build software without having to think about every byte and malloc, knowing that I don't really have to gaf about how much memory I allocate. It's fine, because we have good GCs and a lot of resources to learn about memory management when things hit the limit. The solution wasn't to say that everybody not programming in C or assembly would not be allowed near a computer.
They spammed only their girlfriend's account, which the author had them set up for exactly that purpose.
*cough* Facebook *cough*
We are listening to our bosses tell us that "we're way behind in AI adoption" and that we need to catch up to vibe coders like this.
I don't mind these data points at all.
Building tools that enable people with no experience to create and ship software without following any good software engineering practices.
This is in no way comparable to any previous period in the industry.
Education and support are more accessible than ever. Even the tools used to create such software can be educational. But you can't force people to learn when you give them the tools to create something without having to learn. You also can't blame them for using these tools as they're marketed. This situation is entirely in the hands of AI companies. And it's only going to get worse.
The only thing experienced software developers outside of the AI industry can do is observe from the sidelines, shake our heads, and get some laughs out of this shit show. And now we're the bad guys? Give me a break.
This describes plenty of businesses, both small and large.
Sorry, but bad take.
There is no way to police the quality of the (closed-source) software that is going to be put out there thanks to code-assisting tools, and I think that will be the strongest asset of developers from before these tools, especially full-stack ones: if you do know what you are doing, the results are just beautiful. Claude Code user here.
Why didn't you just send them an e-mail to warn them about the security issues?
I see in a comment that you did disclose. You should probably include that in your blog post or people will have the wrong idea about you.