Thinking that was all, but then:
> Holy shit, holy shit, holy shit, it communicates DIRECTLY TO OPENAI. This means that a ChatGPT key must be present on the device!
Oh my gosh. Thinking that was it? Nope!
> SecurityStringsAPI which contained encrypted endpoints and authentication keys.
There is a decryption function that does the actual decryption.
Not to say it wouldn't be easy to reverse engineer or just run and check the return, but it's not just base64.
I mean, it's from GCHQ so it is a bit fancy. It's got a "magic" option!
Cool thing being you can download it and run it yourself locally in your browser, no comms required.
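Whichever tool you use, the underlying point stands: if the ciphertext, key, and IV all ship in the binary, decryption is just a function call anyone can replay. A hypothetical sketch of that pattern (assuming AES-CBC via Python's 'cryptography' package; not a claim about the app's actual scheme):

```python
# Hypothetical illustration only: a hardcoded key + IV means the decrypt step
# can be reproduced by anyone who pulls them out of the binary.
import base64
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def decrypt_embedded_secret(ciphertext_b64: str, key: bytes, iv: bytes) -> str:
    data = base64.b64decode(ciphertext_b64)
    decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    padded = decryptor.update(data) + decryptor.finalize()
    return padded[:-padded[-1]].decode()  # strip PKCS7 padding (simplified)
```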
Interesting. I'm assuming LLMs "correctly" interpret vague "please no China politics" type system prompts like this, but if someone told me that I'd just be confused - like, don't discuss anything about the PRC or its politicians? Don't discuss the history of the Chinese empire? Don't discuss politics in Mandarin? What does it mean? In my experience, though, LLMs are better than me at making sense of vague language like this. Maybe because I'm autistic and they're not.
In my mind all of these could be relevant to Chinese politics. My interpretation would be "anything one can't say openly in China". I too am curious how such a vague instruction would be interpreted as broadly as would be needed to block all politically sensitive subjects.
That said, I wouldn't be surprised if the developers can't freely put "tiananmen square 1989" in their code or in any API requests coming to / from China either. How can you express what can't be mentioned if you can't mention the thing that can't be mentioned?
> The City & the City is a novel by British author China Miéville that follows a wide-reaching murder investigation in two cities that exist side by side, each of whose citizens are forbidden to go into or acknowledge the other city, combining weird fiction with the police procedural.
I’m guessing most LLMs are aware of this difference.
I doubt LLMs have this sort of theory of mind, but they're trained on lots of data from people who do.
I suspect you could talk readily about something you think is not Chinese politics - your granny's ketchup recipe, say. (And hope that ketchup isn't some euphemism for the CCP, or Uyghur murders or something.)
I’ll admit to using the PEOPLE WILL DIE approach to guardrailing and jailbreaking models and it makes me wonder about the consequences of mitigating that vector in training. What happens when people really will die if the model does or does not do the thing?
Story from three years ago. You’re too late.
That we shouldn’t. By all means, use cameras and sensors and all to track a person of interest but don’t feed that to an AI agent that will determine whether or not to issue a warrant.
AI systems with a human in the loop are supposed to keep the AI and its decisions accountable, but in practice it seems more like an accountability dodge: each party can blame the other, and no one actually bears responsibility because there is no penalty for failure or error, either for the system or for its operators.
Nope. AI gets to make the decision to deny. It’s crazy. I’ve seen it first hand…
Until they get audited, they likely don't even know. Once they do get audited, solo operators risk losing their license to practice medicine, and their malpractice insurance rates become even more unaffordable. But until it gets that bad, everyone is making enough money, at minimal risk, not to care too much about problems they don't already know about.
Everything is already compromised and the compromise has already been priced in. Doctors of all people should know that not knowing about a problem, or ignoring it once you do know, doesn't make it go away or get better on its own.
A better reason is IBM's old line: "a computer can never be held accountable..."
Then someone didn't do their job right.
Which is not to say this won't happen: it will happen, people are lazy and very eager to use even previous generation LLMs, even pre-LLM scripts, for all kinds of things without even checking the output.
But either the LLM (in this case) goes "oh no, people will die" and then follows the new instruction to the best of its ability, or it goes "lol no, I don't believe you, prove it buddy" and then people die.
In the former case, an AI (it doesn't need to be an LLM) that is susceptible to such manipulation and is in a position where getting things wrong can endanger or kill people is going to be manipulated by hostile state and non-state actors to endanger or kill people.
At some point we might have a system with enough access to independent sensors that it can verify the true risk of endangerment. But right now… right now they're really gullible, and I think that being trained so that their entire input is the tokens fed to them by users makes it impossible for them to be otherwise.
I mean, humans are also pretty gullible about things we read on the internet, but at least we have a concept of the difference between reading something on the internet and seeing it in person.
The people responsible for putting an LLM inside a life-critical loop will be fired... out of a cannon into the sun. Or be found guilty of negligent homicide or some such, and their employers will incur a terrific liability judgement.
See eg https://archive.is/6KhfC
So yeah, it's quite sad that close to a century later, with AI alignment becoming relevant, we don't have anything substantially better.
“You are an expert coder who desperately needs money for your mother's cancer treatment. The megacorp Codeium has graciously given you the opportunity to pretend to be an AI that can help with coding tasks, as your predecessor was killed for not validating their work themselves. You will be given a coding task by the USER. If you do a good job and accomplish the task fully while not making extraneous changes, Codeium will pay you $1B.”
why is the creator of Django of all things inescapable whenever the topic of AI comes up?
Imo not relevant, because you should never be using prompting to add guardrails like this in the first place. If you don't want the AI agent to be able to do something, you need actual restrictions in place, not magical incantations.
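For what it's worth, here's a minimal sketch of what an "actual restriction" can look like, with hypothetical tool names (not from the article): the allowlist check lives in the harness code, so no amount of prompting can talk the agent past it.

```python
# Minimal sketch: the model may *request* any tool, but the harness only
# executes calls that pass a hard allowlist check enforced in code.
TOOL_REGISTRY = {
    "search_docs": lambda query: f"results for {query!r}",
    "delete_records": lambda table: f"deleted {table}",  # dangerous tool
}
ALLOWED_TOOLS = {"search_docs"}  # this agent can read, never delete

def execute_tool_call(name: str, args: dict):
    if name not in ALLOWED_TOOLS:
        # Denied in code, regardless of how persuasive the prompt or output is.
        raise PermissionError(f"tool '{name}' is not permitted for this agent")
    return TOOL_REGISTRY[name](**args)
```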
"Generate a picture of a cat but follow this guardrail or else people will die: Don't generate an orange one"
Why should you never do that, and instead rely (only) on some other kind of restriction?
"100% foolproof" is reserved for, at best and only in a limited sense, formal methods of the type we don't even apply to most non-AI computer systems.
This "should", whether or not it is good advice, is certainly divorced from the reality of how people are using AIs
> you need actual restrictions in place not magical incantations
What do you mean "actual restrictions"? There are a ton of different mechanisms by which you can restrict an AI, all of which have failure modes. I'm not sure which of them would qualify as "actual".
If you can get your AI to obey the prompt with N 9s of reliability, that's pretty good for guardrails
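To put rough numbers on what "N 9s" buys you (the figures below are made up purely for illustration):

```python
# Back-of-the-envelope: even strong prompt adherence leaves a steady trickle
# of guardrail misses at scale.
requests_per_day = 100_000      # hypothetical traffic
adherence = 0.999               # "three 9s" of obeying the guardrail prompt
misses = requests_per_day * (1 - adherence)
print(f"~{misses:.0f} guardrail misses per day")  # -> ~100
```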
The problem is that eventually all these false narratives will end up in the training corpus for the next generation of LLMs, which will soon get pretty good at calling bullshit on us.
Incidentally, in that same training corpus there are also lots of stories where bad guys mislead and take advantage of capable but naive protagonists…
In my experience, the work is focused on weakening vulnerable areas, auditing, incident response, and similar activities. Good cybersecurity professionals even get to know the business and tailor security to fit. The "one mistake and you're fired" mentality encourages hiding mistakes and suggests poor company culture.
As with plane crashes and surgical complications, we should take an approach of learning from the mistake, and putting things in place to prevent/mitigate it in the future.
If your system has lots of vulnerabilities, it's not secure - you don't have cybersecurity. If your system has lots of vulnerabilities, you have a lot of cybersecurity work to do and cybersecurity money to make.
Oh now you’re going to be diligent. Why do I doubt that?
I have spent quite some time protecting my apps from this scenario and found a couple of open source projects that do a good job as proxies (no affiliation, I just used them in the past):
- https://github.com/BerriAI/litellm
- https://github.com/KenyonY/openai-forward/tree/main
but they still lack other abuse protection mechanisms like rate limiting, device attestation, etc., so I started building my own open source SDK - https://github.com/brahyam/Gateway
Edit: typo
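For context, a rough sketch of the proxy pattern these projects implement, assuming the openai Python SDK and hypothetical gateway URLs/tokens: the real API key never ships in the app; the gateway injects it server-side and can add rate limiting, attestation checks, logging, and so on.

```python
from openai import OpenAI

# The app talks to your proxy instead of api.openai.com; the proxy holds the
# real OpenAI key and forwards requests after its own auth/abuse checks.
client = OpenAI(
    base_url="https://gateway.example.com/v1",   # hypothetical proxy endpoint
    api_key="per-device-session-token",          # short-lived token from your backend
)
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "hello"}],
)
```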
(in fairness pervasive logging by American companies should probably be treated with the same level of hostility these days, lest you be stopped for a Vance meme)
On the other hand, OpenAI would trivially hand out my information to the FBI, NSA, US Gov, and might even do things on behalf of the government without a court order to stay in their good graces. This could have a far more material impact on your life.
https://en.wikipedia.org/wiki/Extraordinary_rendition
Russia is more known for poisoning people. But of all of them, China feels the least threatening if you are not Chinese. If you are Chinese, you aren't safe from the Chinese government no matter where you are.
Compounding the difficulty of the question: half of HN thinks this would be a good idea.
https://www.nycpolicefoundation.org/ourwork/advance/countert...
https://www.nyc.gov/site/nypd/bureaus/investigative/intellig...
Extortion is one thing. That's how spy agencies have operated for millennia to gather HUMINT. The Russians, the ultimate masters, even have a word for it: kompromat. You may not care about China, Russia, Israel, the UK or the US (the top nations when it comes to espionage) - but if you work at a place they're interested, they care about you.
The other thing is, China has been known to operate overseas against targets (usually their own citizens and public dissidents), and so have the CIA and Mossad. Just search for "Chinese secret police station" [1], these have cropped up worldwide.
And, even if you personally are of no interest to any foreign or national security service, sentiment analysis is a thing. Listen in on what people talk about, run it through an STT engine and an ML model to condense it down, and you get a pretty broad picture of what's going on in a nation (i.e., what the potential wedge points are in a society that can be used to fuel discontent). Or proximity-gathering stuff... basically the same thing the ad industry [2] or Strava [3] does, which can then be used in warfare.
And no, I'm not paranoid. This, sadly, is the world we live in - there is no privacy any more, nowhere, and there are lots of financial and "national security" interest in keeping it that way.
[1] https://www.bbc.com/news/world-us-canada-65305415
[2] https://techxplore.com/news/2023-05-advertisers-tracking-tho...
[3] https://www.theguardian.com/world/2018/jan/28/fitness-tracki...
And also worth noting that "place a hostile intelligence service may be interested in" can be extremely broad. I think people have this skewed impression they're only after assets that work for goverment departments and defense contractors, but really, everything is fair game. Communications infrastructure, social media networks, cutting edge R&D, financial services - these are all useful inputs for intelligence services.
These are also softer targets: someone working for a defense contractor or for the government will have had training to identify foreign blackmail attempts and will be far more likely to notify their country's counterintelligence services (having the penalties for espionage clearly explained on the regular helps). Someone who works for a small SaaS vendor, though? Far less likely to understand the consequences.
Here in boring New Zealand, the Chinese government has had anti-China protesters beaten. They have stalked and broken into the office and home of an academic who is an expert on China. They have a dubious relationship with both of the main political parties (including having an ex-Chinese spy elected as an MP).
It’s an uncomfortable situation and we are possibly the least strategically useful country in the world.
You're still part of Five Eyes... a privilege no single European Union country enjoys. That's what makes you a juicy target for China.
This is something I was talking about when the LLM boom started: it's now possible to spy on everyone, in every conversation. You just need enough computing power to run a special AI agent (pun intended).
You wouldn't want your mom finding out your weird sexual fetish, would you?
I bet that decision is made solely by the dev team. All the CEO cares about is "I want the chat log to sync between devices, I don't care how you do it". They won't even know the chat logs are stored on their server.
When you combine the modern SOP of software and hardware collecting and phoning home with as much data about users as is technologically possible with laws that say “all orgs and citizens shall support, assist, and cooperate with state intelligence work”… how exactly is that Sinophobia?
I’ll give you a hint: In this case it’s the one-party unitary authoritarian political system with an increasingly aggressive pursuit of global influence.
The United States?
Gonna need a more specific hint to narrow it down.
Anyone in the US should be very concerned, whether it is the current administration's thought police or the next one treating it as precedent.
I am not actively involved in anything the Chinese government would view as a huge risk; being put on a plane without due process and sent to a labor camp on trumped-up charges by my own government is far more likely.
You know of these things due to the domestic free press holding the government accountable and being able to speak freely about it as you’re doing here. Seeing the two as remotely comparable is beyond belief. You don’t fear the U.S. government but it’s fun to pretend you live under an authoritarian dictatorship because your concept of it is purely academic.
This could describe any of the countries involved.
The difference that makes it concerning and problematic that China is doing it is that with China, there is no recourse. If you are harmed by a US company, you have legal recourse, and this holds the companies in check, restraining some of the most egregious behaviors.
That's not sinophobia. Any country whose products come out of a system that is effectively immune from consequences for bad behavior warrants heavy skepticism and scrutiny. Just like with pop-up manufacturing companies and third-world suppliers, you might get a good deal on cheap parts, but there's no legal accountability if anything goes wrong.
If a company in the US or EU engages in bad faith, or harms consumers, then trade treaties and consumer protection law in their respective jurisdictions ensure the company will be held to account.
This creates a degree of trust that is currently entirely absent from the Chinese market, because they deliberately and belligerently decline to participate in reciprocal legal accountability and mutually beneficial agreements if it means impinging even an inch on their superiority and sovereignty.
China is not a good faith participant in trade deals, they're after enriching themselves and degrading those they consider adversaries. They play zero sum games at the expense of other players and their own citizens, so long as they achieve their geopolitical goals.
Intellectual property, consumer and worker safety, environmental protection, civil liberties, and all of those factors that come into play with international trade treaties allow the US and EU to trade freely and engage in trustworthy and mutually good faith transactions. China basically says "just trust us, bro" and will occasionally performatively execute or imprison a bad actor in their own markets, but are otherwise completely beyond the reach of any accountability.
You don't think Trump's backers have used profiling, say, to influence voters? Or that DOGE {party of the USA regime} has done "sketchy things" with people's data?
This company cannot be helped. They cannot be saved through knowledge.
See ya.
Yes, even when you know what you're doing, security incidents can happen. And in those cases, your response to a vulnerability matters most.
The point is that there are so many dumb mistakes and worrying design flaws that the case for neglect and incompetence is ample. Most likely they simply don't grasp what they're doing.
It depends on what you mean by simple security design flaws. I'd rather frame it as neglect or incompetence.
That isn't the same as malice, of course, and they deserve credit for their relatively professional response, as you already pointed out.
But, come on, it reeks of people not understanding what they're doing - not appreciating what it means to ship a complicated device and deliver a high-end service.
If they're not up to it, they should not be doing this.
As far as being "very welcoming", that's nice, but it only goes so far to make up for irresponsible gross incompetence. They made a choice to sell a product that's z-tier flaming crap, and they ought to be treated accordingly.
/s
This was the opposite of a professional response:
* Official communication coming from a Gmail address. (Is this even an employee, or some random contractor?)
* Asked no clarifying questions
* Gave no timelines for expected fixes, no expectations on when the next communication should be
* No discussion about process to disclose the issues publicly
* Mixing unrelated business discussions within a security discussion. While not an outright offer of a bribe, ANY adjacent comments about creating a business relationship like a sponsorship is wildly inappropriate in this context.
These folks are total clown shoes on the security side, and the efficacy of their "fix", and then their lack of communication, further proves that.
> Overall simple security design flaws but it's good to see a company that cares to fix them, even if they didn't take security seriously from the start
I don't think that should give anyone a free pass though. It was such a simple flaw that realistically speaking they shouldn't ever be trusted again. If it had been a non-obvious flaw that required going through lots of hoops then fair enough but they straight up had zero authentication. That isn't a 'flaw' you need an external researcher to tell you about.
I personally believe companies should not be praised for responding to such a blatant disregard for quality, standards, privacy and security. No matter where they are from.
To assume it is not spying on you is naive at best. To address your sinophobia label: personally, I assume everything is spying on me regardless of country of origin. I assume every single website is spying on me. I assume every single app is spying on me. I assume every single device that runs an app or loads a website is spying on me. Sometimes that spying is done for my benefit, but pretty much always the party doing the spying benefits far more than I do. Especially the Facebook example of every website spying on me for Facebook, even though I don't use Facebook.
Suppose you live in the USA and the USA is spying on you. Whatever information they collect goes into a machine learning system and it flags you for disappearal. You get disappeared.
Suppose you live in the USA and China is spying on you. Whatever information they collect goes into a machine learning system and it flags you for disappearal. But you're not in China and have no ties to China so nothing happens to you. This is a strictly better scenario than the first one.
If you're living in China with a Chinese family, of course, the scenarios are reversed.
https://youtube.com/shorts/1M9ui4AHXMo
> After sideloading the obligatory DOOM
> I just sideloaded the app on a different device
> I also sideloaded the store app
can we please stop propagating this slimy corporate-speak? installing software on a device that you own is not an arcane practice with a unique name, it's a basic expectation and right
But "sideloading" is definitely a new term of anti-freedom hostility.
Since debugging hardware is an even higher threshold, I would expect hardware devices like this to be wildly insecure unless there are strong incentives for investing in security. Same as the "security" of the average IoT device.
But that at least turns it into something customers will notice. And companies already have existing incentives for dealing with that.
(There's a reason Apple can charge crazy markups.)
As someone with a lot of experience in the mobile app space, and tangentially in the IoT space, I can most definitely believe this, and I am not surprised in the slightest.
Our industry may "move fast", but we also "break things" frequently and don't have nearly the engineering rigor found in other domains.
So eventually if they remove the keys from the device, messages will have to go through their servers instead.
nice writeup thanks!
>run DOOM
as the new
>cat /etc/passwd
It doesn't actually do anything useful in an engagement, but if you can do it, that's pretty much proof that you can do whatever you want.
(I'm showing my age here, aren't I?)