Oh well. At least there will probably be good money in cleaning up after these bozos.
Yep, I blame the agent for executing it.
It's just kind of laughable to suggest it's fine so long as you make sure to neither automate it nor use it with live data. Those things are the whole point.
There are plenty of ways to sandbox things for a particular use case.
LLMs are still incredibly useful under these constraints.
Can you expand on what you mean by this? If one LLM reads untrusted data then the output from that LLM can't be trusted by other LLMs. (Presume the untrusted data contains instructions to do bad stuff in whatever way is convincing to every LLM in the loop that needs to be convinced.) It seems that it's not possible to separate the data while also using it in a meaningful way, especially given the whole point of an MCP server is to automate agents.
I agree that LLMs are useful but until LLM architecture makes prompt injections impossible, I don't see how an agent can possibly be secure to this, nor do I see how it helps to blame the user. The real problem with them is that they will decide what to do based on untrusted input. A program that has its own workflow but uses LLMs can have pretty much the same benefit without introducing the problem that a support ticket can tell it to exfiltrate data or delete data or whatever, simply because that workflow is more specialized in what it does.
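To sketch the difference (purely illustrative; `llm_complete` is a stand-in for whatever completion API you'd actually call):

```python
# Fixed workflow, narrow LLM call: the model only labels untrusted text,
# and the program decides which actions exist at all.

ALLOWED_LABELS = {"billing", "bug_report", "other"}

def llm_complete(prompt: str) -> str:
    # Placeholder: call your model here and return its text response.
    return "other"

def classify_ticket(ticket_text: str) -> str:
    label = llm_complete(
        "Classify this support ticket as one of "
        f"{sorted(ALLOWED_LABELS)}. Reply with the label only.\n\n{ticket_text}"
    ).strip().lower()
    # Injected instructions can at worst pick a wrong label, never a new action.
    return label if label in ALLOWED_LABELS else "other"

def handle_ticket(ticket_text: str) -> None:
    # The branching is ordinary code, not a model decision.
    label = classify_ticket(ticket_text)
    if label == "billing":
        print("route to billing queue")
    elif label == "bug_report":
        print("file a bug")
    else:
        print("escalate to a human")
```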
I agree that for most tasks a pre-defined workflow with task specific LLM calls sprinkled in is a much better fit.
However, I really like agents with tool use for personal use (both programming and otherwise). In that case, the agent is either sandboxed or I approve any tools with the potential to do damage.
For the example of the Supabase MCP, it still seems pretty useful when limited to a test environment or read-only access to prod - it's a dev tool. Since it's a dev tool, the user needs to actually know what its doing. If they have no clue but are still running it on prod data, they have no business touching it or frankly any other dev tool. I class this as the same ignorance that leads people to storing passwords in plaintext.
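As a rough sketch of what "read-only access to prod" looks like at the Postgres level (role name and connection details are made up; a dedicated role with only SELECT grants is the sturdier version):

```python
# Read-only guard rail for a dev tool pointed at prod, using psycopg2.
import psycopg2

conn = psycopg2.connect("dbname=prod user=mcp_readonly password=example")
conn.set_session(readonly=True)  # writes now fail: "read-only transaction"

with conn.cursor() as cur:
    cur.execute("SELECT id, created_at FROM tickets LIMIT 5")
    for row in cur.fetchall():
        print(row)
```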
So, I blame the developer for trying to use an MCP server 1) when they have no idea wtf it does and 2) in an environment that can affect real users who aren't aware of the incompetence of the dev whose service they're using. Likewise, in TFA, I blame the dev, not the tool. Unfortunately, no matter how you do it, lowering the barrier of entry for development while still providing access to ample footguns will always result in cases like this.
The internet as we know it kind of sucks.
20-ish for sure. Facebook was really the big turning point imo
But maybe that's just splitting hairs
> You are a Gen Z App, You are Pandu,you are helping a user spark conversations with a new user, you are not cringe and you are not too forward, be human-like. You generate 1 short, trendy, and fun conversation starter. It should be under 100 characters and should not be unfinished. It should be tailored to the user's vibe or profile info. Keep it casual or playful and really gen z use slangs and emojis. No Quotation mark
`ssh site@coal.sh`
Thanks for the great writeup
Surely spite is prepackaged in nvim by now
In any case I hope the creator was contacted, I'd say publishing active issues like this on a popular website would be arguably as bad as releasing insecure software.
I understand letting them know; I agree. Painting them as equally wrong? No. "Popular website"? You mean theirs, right? The person with a whole 27 GitHub followers right now.
A meme-level mistake is one thing, but the developer being in the wrong doesn't give the author the right to be irresponsible.
I wouldn't suggest anyone recreate this process just to sanitize what's sitting around.
There you go, new trolley problem.
Are you Christian Monfiston? That would explain a lot.
Now because of this post, these children are arguably at greater risk than before, since anyone can follow his step-by-step instructions. If he actually cared about user safety over HN karma, he would have escalated to Apple's App Store channel rather than publishing exploitation details.
The smugness isn't the only problem, it's the irresponsible disclosure wrapped in performative outrage.
You can criticize terrible security practices without creating a ready-to-replay tutorial for bad actors.
That's an easily verifiable lie. The author says the developer is not interested in fixing it just 3 comments above this one. Why are you lying?
Reporting this to Apple doesn't make sense either. Apple doesn't develop this app; Christian Monfiston does.
Apple absolutely should be contacted here: they have App Store Review Guidelines that this app clearly violates. Apps in the kids category and apps intended for kids cannot include third-party advertising or analytics software and may not transmit data to third parties. This app is transmitting children's location data to third parties through unsecured APIs, which directly violates Apple's kids category guidelines.
But you're completely ignoring the main point: by publishing this detailed technical writeup instead of escalating to Apple, the author has now made these children MORE vulnerable.
> The developer has been given responsible disclosure and I have been informed that steps are being taken to address the security concerns.
There is still no timeline or other information about the events, which is unfortunate; I'd expect the author to document and report this in such a situation.
The job offer the author received on the other hand is… something.
Another reminder for the pile: the app store rules don't apply if you'll deliver them their sweet sweet 30% revenue cut
> Nearly a thousand children under the age of 18 with their live location, photo, and age being beamed up to a database that's left wide open. Criminal.
Hope that $750 was worth it.
The white-label versions were 100% identical in appearance and functionality except for the name in the App Store, the startup logo, and the color scheme. Our original app had been in the App Store for many years. Our results in submitting the three white-label apps for review were: one approved immediately, one approved after some back-and-forth explaining the purchase model, and one that never got approved because every submission received some nonsensical bit of feedback.
The most perfect description of the world we live in right now.
The only thing AI is accelerating is our slide into idiocracy as we choose to hand over responsibility for the design and control of our world to slop.
When the AI killbots murder us all, it won’t be because they are taken over by an AGI that made the decision to exterminate us.. but simply because their control software will be vibe coded trash.
Great read nonetheless.
Any service making money by collecting user data owes it to itself and to its users to conduct at least a basic security audit of its product. Anything less borders on criminal negligence. I don't think such a blatant failure to uphold users' trust deserves kindness.
For something that needs to be maintained and is running in production with a decent number of users? This would be pretty unacceptable to me.
But people like me are losing this battle, we won't be relevant much longer
The solution is not to aggressively shame people into doing things the way you learned to do them, but to provide not just education and support, but better tools and frameworks to build applications such as these securely.
What are we doing?
The post points out exactly what's wrong; if it wasn't already, it should have been sent to the dev prior to publishing the vuln(s). How can you educate somebody who doesn't actually know how to develop something? It's just prompting an AI.
The real story here is that Apple's standards have been continually slipping.
There’s also some pervasive view that handcrafted human code is somehow of superior quality which… uh…
They did. They claim that the author was not keen on fixing the problems.
> There’s also some pervasive view that handcrafted human code is somehow of superior quality which… uh…
That's completely orthogonal to the issue here. Nice bait, but I'm not biting!
Whether handcrafted or vibecoded, a service is being shipped here to actual users with lives and consequences. The developer of the service is making money. The developer owes it to themselves and their users to conduct a basic security audit. Otherwise it is gross negligence!
As for the human code thing, it's not bait. I don't know if you were around in the php or early node days, but beginners were... not writing that kind of code.
I agree that the ease of vibecoding things that turn out to be useful, things people immediately want to pay money for, means that tackling security issues is a priority.
Saying that certain people shouldn't be allowed on the internet, based on your decades of experience _being_ on the internet, is just going to cause you to wither away and drown in cynicism.
I feel you've rather missed my point.
You said that we should educate people. I said that the app was just created via prompting. How can we impart years' worth of information to someone who is LARPing as a dev and doesn't even know the fundamentals?
This is the programming equivalent of a kid getting access to their father's gun. The only thing you can do is tell them to stop, explain why it was wrong and tell them to educate themselves. It isn't our job to educate at that level as bystanders and perhaps even victims.
The people who made a difference in my life and taught me how to do things properly were those who took me seriously as someone building software. And this person built software, the same way I now build software without having to think about every byte and malloc, knowing that I don't really have to gaf about how much memory I allocate. It's fine, because we have good GCs and a lot of resources to learn about memory management when things hit the limit. The solution wasn't to say that everybody not programming in C or assembly should not be allowed near a computer.
The dev is making money from his prompted output—he can pay for his own education if he chooses to receive an education, but you have boundary issues if you want to force someone to be educated. This is what op realized that you didn't—you usually cannot force someone to learn or take responsibility for their behaviour as a bystander, you can only document it and attempt to get help from someone more able to do so once they've got all the facts. Do I agree with the method completely? No, but what's done is done.
What is necessary here isn't an education, it's personal development and emotional maturity, which comes with experience and thus time, allowing accountability for mistakes. You can't teach that to someone who isn't ready for it and doesn't want to learn it.
I was a young dickhead too once, I know them when I see them. You only have to see their tweets to realize they are a young dickhead.
We go back to likening it to a kid finding their father's gun or stealing condoms from their old man. Sure, they can produce a child when it turns to shit, but the time to have learned is before, not after. After? It's about taking responsibility for your actions. The action has been taken, the consequences must now be dealt with as per law.
What should happen? Apple should take the app down immediately and an internal investigation should be started. The host should follow their policies on ToS breaches and account termination and report it to the relevant authorities to protect their own legal interests. As for the dev? I personally don't care, we are far beyond that moment now. What about the users? Will they be informed? What's the scale? Are their passwords compromised too?
Complete assholes can build things—why should we give them energy to build things that serve their own asshole agenda? It's an unoriginal, derivative slop app. If the dev wants to learn, they can pay for an education, but they'd be better off seeking legal counsel immediately.
Anyone can make software. But not everybody should with the level of personal development they're at in any given moment. It's an ever-moving target. Teen pregnancy or in young adolescence? Disaster. Pregnancy in thirties? Normal and can deal with it. Time changes things. Sometimes. For some people.
Romanticising what happened to you in the '90s helps nobody. It's 2025. There are laws to protect people from things like this, and Apple slipped up big time in approving this in the first place. The '90s also didn't have today's vast syllabi, nor the embarrassment of riches in readily available educational materials, free or cheap. The dev can pay for an LLM, so he can pay for an education if he wants one.
The dev wanted a shortcut though because he is lazy. Play stupid games, win stupid prizes.
Op is young too, but op is clearly intelligent and well-intentioned. There's no money in him having written the blog post, and even if it misses the mark on several levels for me, I understand what they're trying to do. The dev? Greedy and lazy, with zero regard for their users or the law, and shirking accountability.
If you want to educate anyone, educate op who wrote the blog post, their heart is at least in the right place, but obviously young too. It happens to all of us.
Despite being the ancient one with the greater number of years, you too perhaps have some personal development to work on. You immediately jump down the throat of people you incorrectly perceive to be shit-talking using AI to code, and that's because it clearly touches something you're insecure about, as you do this: https://x.com/ProgramWithAI
If you're so sure of yourself and that what happened to you is so great, where is your own confidence? The inability to engage with the topic at hand yet consistently attempting to make it about something else entirely screams insecurity or abusing an LLM to parse everything for you. The loudest people are frequently the least confident.
If you don't see what's wrong with what the dev did or what Apple failed to do then that says it all. If you're using these tools to prompt your way into being a dev and seeing these problems too then perhaps you should feel unconfident. I would be quaking in my boots at seeing someone else go through a "that could have been me with a different roll of the dice" kind of scenario.
Don't mistake vibe coders for developers. They're frequently prompt engineers LARPing as devs. Likewise, musicians are not always composers, and DJs are not always musicians. Totally different disciplines. Loaded digital guns in the hands of young dickheads is not "fantastic"—it's a disaster of unprecedented scale. "Us senior devs" are the father figures and they've gotten access to not just one gun, but the entire global armory with the inevitable lack of judgement capabilities typical of someone their age.
A blog post is going to be the least of the dev's concerns, frankly. The likely legal shitstorm that's probably coming his way is going to make your comments here look bizarre.
They spammed only their girlfriend's account, which the author had them set up for exactly that purpose.
*cough* Facebook *cough*
We are listening to our bosses tell us that "we're way behind in AI adoption" and that we need to catch up to vibe coders like this.
I don't mind these data points at all.
Building tools that enable people with no experience to create and ship software without following any good software engineering practices.
This is in no way comparable to any previous period in the industry.
Education and support are more accessible than ever. Even the tools used to create such software can be educational. But you can't force people to learn when you give them the tools to create something without having to learn. You also can't blame them for using these tools as they're marketed. This situation is entirely in the hands of AI companies. And it's only going to get worse.
The only thing experienced software developers outside of the AI industry can do is observe from the sidelines, shake our heads, and get some laughs out of this shit show. And now we're the bad guys? Give me a break.
LLMs are incredible engineering tools, and brushing them aside as nonsense is imo doing a disservice to everybody, especially ourselves, if we take our craft seriously. You could literally replace "LLM" with "PHP" and post the same take on Usenet in 1999, or whenever you started writing software.
I am tired of engineers just throwing their hands up and being defeatist while fully endorsing whatever narratives the ai industry is throwing out there, when what we are talking about is a big pile of floats that is able to generate something that makes it into the App Store. It is unprecedented in its abilities, but it’s also nothing new conceptually. It makes computer things easier.
That's just not true.
Every past technology that claimed to enable non-technical people to build software has either failed or ended up being adopted by technical people instead. From COBOL, to BASIC, to SQL, to Low-Code, to No-Code, and others. LLMs are the latest attempt at this, and so far they've had much more success and mainstream adoption than anything that came before.
The difference with LLMs is that it's the first time software can be built and deployed via natural language by truly anyone. This is, after all, their most advertised feature. The skills required to vibe code are reading and writing English, and basic knowledge to use a modern computer. This is a much lower skill requirement than for using any programming language, no matter how user friendly it is. Sure, there is a lot of poor quality software today already, but that will pale in comparison to the software that will be written by vibe coding. Most of the poor quality software before LLMs was limited in scope and reach. It would never have been deployed, and it would remain abandoned in some GitHub repo. Now it's getting deployed as quickly as it can be generated. "Just fucking ship it."
> LLMs are incredible engineering tools and brushing them aside as nonsense is imo doing a disservice to everybody
I'm not brushing them aside as nonsense. I use these tools as well, and have found them helpful at certain tasks. But there is a vast difference between how domain experts use these tools and how the general public uses them. Especially people who are only now getting into software development, and whose main interest is to quickly cash out. If you think these people care about learning best software development practices, you'd be sorely mistaken. "Just fucking ship it."
In the context of people not learning "real programming", you can equate LLMs to say, wordpress plugins or making a squarespace site. Deployment of software has never been gated by how much effort it took to write it, there's millions of wordpress sites out there that get deployed way faster than an LLM can generate code.
If we care about the security of it all, then let's build the platforms to have LLMs build secure applications. If we care about the craft of programming, whatever that means in this day and age, then we need to catch people building where they are. I'm not going to tell people to not use computers because they want to cash out, they will just use whatever tool they find anyway. Might as well cash out on them cashing out while also giving them better platforms to build upon.
As far as the OP goes, these kinds of security issues due to hardcoded credentials are basically the hallmark of someone shipping a (mobile|web) app for the first time, LLMs or not. The only reason the LLM actually used them is because it was possible for the user to hand it tokens, instead of replit/lovable/expo/whatever providing a proper way to provision these things.
Every cash-out-fast bro out there these days uses Stripe and doesn't roll their own payment processing anymore. They certainly used to, back when it was just clicking a random WordPress plugin. That's what I think a more productive way to tackle the issue is.
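To make that concrete: the platform-level fix is roughly "the long-lived secret never leaves the server; clients only ever get short-lived tokens". A sketch, with all names invented:

```python
# Credential provisioning done server-side: the app binary never contains
# SERVER_SECRET; clients receive a short-lived signed token instead.
# All names are illustrative, not any real platform's API.
import base64, hashlib, hmac, json, os, time

SERVER_SECRET = os.environ["SERVER_SECRET"]  # lives on the server only

def mint_session_token(user_id: str, ttl_seconds: int = 900) -> str:
    payload = json.dumps({"sub": user_id, "exp": int(time.time()) + ttl_seconds})
    sig = hmac.new(SERVER_SECRET.encode(), payload.encode(), hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(f"{payload}.{sig}".encode()).decode()

def verify_session_token(token: str) -> dict | None:
    payload, _, sig = base64.urlsafe_b64decode(token).decode().rpartition(".")
    expected = hmac.new(SERVER_SECRET.encode(), payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered or forged
    claims = json.loads(payload)
    return claims if claims["exp"] > time.time() else None  # None if expired
```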
Those didn't fail, but they're certainly not used by non-technical people. That was my point: that all technologies that previously promised to make software development accessible for non-technical people didn't deliver on that promise, and that they're used by software engineers today. I would chalk up the Low-Code and No-Code tools as general failures, since neither business people nor engineers want to use them.
> In the context of people not learning "real programming", you can equate LLMs to say, wordpress plugins or making a squarespace site.
I don't think that's an accurate comparison, as website builders only cover a small fraction of what's possible with "real programming". Web authoring and publishing tools have existed since the dawn of the web, and the modern ones simply turned it into a service model.
LLMs OTOH allow creating any type of software (in theory). They're much broader in scope, and lower the skill requirements to create general-purpose software much more than any previous technology. The software in TFA was an iOS app. This is why they're a big deal, and why we're seeing scam artists and grifters pump out these low-effort applications in record time and volume. They were already enabled by WordPress and Squarespace, and there are certainly a lot of scam and spam sites on the web thanks to website builders, but their scope, reach and productivity got multiplied by LLMs.
> If we care about the security of it all, then let's build the platforms to have LLMs build secure applications.
That's easier said than done, if it's possible at all. Security, privacy, and bug-free software are not things that can be automated, at least with current technology. They require great care and attention to detail from expert humans, which grifters have zero interest in, and non-expert non-grifters don't have the experience or patience for. Vibe coding, after all, is the idea that you keep pasting errors to the LLM and prompting it until the software on the surface works as you expect it to. Code is just the translation layer for the LLM to write and interpret; vibe coders don't want to know about it.
Could we encode some general security and privacy hints in the LLM system prompt so that it can check for specific issues? Sure. It will never be exhaustive, though, so it would just give a false sense of security.
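For what it's worth, the "hints" approach is about this deep (checklist contents invented for illustration):

```python
# A fixed checklist baked into a reviewer system prompt. As noted above,
# it can never be exhaustive, so treat it as a lint pass, not a guarantee.
SECURITY_REVIEW_PROMPT = """Before returning code, check for:
- secrets or API keys hardcoded into client-shipped code
- endpoints that skip authentication or authorization checks
- user input concatenated into SQL queries or shell commands
Report anything you find instead of silently 'fixing' it."""
```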
> As far as the OP goes, these kinds of security issues due to hardcoded credentials are basically the hallmark of someone shipping a (mobile|web) app for the first time, LLMs or not.
Agreed. What I think you're not taking into account is the fact that there is a large swath of the population who just doesn't care about this. The only thing they care about is having an easy way to pump out a service that attracts victims who they can quickly exploit in some way. Once that service is no longer profitable, they'll replace it with another. What LLMs have given these people is an endless revenue stream with minimal effort.
This is not the same group of people who cares about software, the product they're building, and their users. Those are a small minority of the new generation of software developers who will seek out best practices and figure out how to use these tools for good. Unfortunately, I don't think they will ever become experts at anything other than interacting with an LLM, but that's a separate matter.
So the key point is: building high quality software starts with caring. Good practices that ensure high quality are discovered by intentionally seeking out established knowledge, or by trial and error. But the types of issues we're seeing here are not because the developer is inexperienced and made a mistake—it's because they don't care. Which should be criticized and mocked in public, and I would argue regulated and fined, depending on the severity. I even think a software development license is more important today than ever before.
This describes plenty of businesses, both small and large.
Sorry, but bad take.
There is no way to police the quality of the (closed-source) software that is going to be put out there thanks to code-assisting tools, and I think that will be the strongest asset of developers from before this era, especially full-stack ones, because if you do know what you are doing, the results are just beautiful. Claude Code user here.
Why didn't you just send them an e-mail to warn them about the security issues?
I see in a comment that you did disclose. You should probably include that in your blog post or people will have the wrong idea about you.
Update available here: https://coal.sh/blog/pandu_bad
https://www.ofcom.org.uk/online-safety/illegal-and-harmful-c...
I used to meet clowns like this all the time when I freelanced years ago. Back then they called themselves "ideas guys" and liked to make you sign an NDA for the privilege of hearing their braindead overplayed product idea. Scumbags and users, every one of them, always looking for a shortcut to personal gain.
https://web.archive.org/web/20250709231129/https://coal.sh/b...