frontpage.

It's hard to justify Tahoe icons

https://tonsky.me/blog/tahoe-icons/
538•lylejantzi3rd•2h ago•241 comments

Databases in 2025: A Year in Review

https://www.cs.cmu.edu/~pavlo/blog/2026/01/2025-databases-retrospective.html
227•viveknathani_•6h ago•72 comments

Decorative Cryptography

https://www.dlp.rip/decorative-cryptography
118•todsacerdoti•5h ago•30 comments

A spider web unlike any seen before

https://www.nytimes.com/2025/11/08/science/biggest-spiderweb-sulfur-cave.html
140•juanplusjuan•6h ago•64 comments

Anna's Archive loses .org domain after surprise suspension

https://torrentfreak.com/annas-archive-loses-org-domain-after-surprise-suspension/
250•CTOSian•3h ago•88 comments

Cigarette smoke effect using shaders

https://garden.bradwoods.io/notes/javascript/three-js/shaders/shaders-103-smoke
19•bradwoodsio•2h ago•2 comments

Show HN: Circuit Artist – Circuit simulator with propagation animation, rewind

https://github.com/lets-all-be-stupid-forever/circuit-artist
58•rafinha•4d ago•2 comments

Revisiting the original Roomba and its simple architecture

https://robotsinplainenglish.com/e/2025-12-27-roomba.html
57•ripe•2d ago•33 comments

Lessons from 14 years at Google

https://addyosmani.com/blog/21-lessons/
1378•cdrnsf•22h ago•601 comments

Scientists Uncover the Universal Geometry of Geology (2020)

https://www.quantamagazine.org/scientists-uncover-the-universal-geometry-of-geology-20201119/
20•fanf2•4d ago•4 comments

Jensen: 'We've Done Our Country a Great Disservice' by Offshoring

https://www.barchart.com/story/news/36862423/weve-done-our-country-a-great-disservice-by-offshori...
17•alecco•1h ago•6 comments

The unbearable joy of sitting alone in a café

https://candost.blog/the-unbearable-joy-of-sitting-alone-in-a-cafe/
688•mooreds•23h ago•400 comments

Why does a least squares fit appear to have a bias when applied to simple data?

https://stats.stackexchange.com/questions/674129/why-does-a-linear-least-squares-fit-appear-to-ha...
269•azeemba•17h ago•71 comments

During Helene, I just wanted a plain text website

https://sparkbox.com/foundry/helene_and_mobile_web_performance
264•CqtGLRGcukpy•11h ago•150 comments

I charged $18k for a Static HTML Page (2019)

https://idiallo.com/blog/18000-dollars-static-web-page
360•caminanteblanco•2d ago•87 comments

Street Fighter II, the World Warrier (2021)

https://fabiensanglard.net/sf2_warrier/
402•birdculture•23h ago•70 comments

Baffling purple honey found only in North Carolina

https://www.bbc.com/travel/article/20250417-the-baffling-purple-honey-found-only-in-north-carolina
108•rmason•4d ago•30 comments

Show HN: Terminal UI for AWS

https://github.com/huseyinbabal/taws
337•huseyinbabal•17h ago•174 comments

Building a Rust-style static analyzer for C++ with AI

http://mpaxos.com/blog/rusty-cpp.html
79•shuaimu•8h ago•38 comments

Monads in C# (Part 2): Result

https://alexyorke.github.io/2025/09/13/monads-in-c-sharp-part-2-result/
40•polygot•3d ago•36 comments

Logos Language Guide: Compile English to Rust

https://logicaffeine.com/guide
46•tristenharr•4d ago•24 comments

Web development is fun again

https://ma.ttias.be/web-development-is-fun-again/
431•Mojah•23h ago•521 comments

3Duino helps you rapidly create interactive 3D-printed devices

https://blog.arduino.cc/2025/12/03/3duino-helps-you-rapidly-create-interactive-3d-printed-devices/
6•PaulHoule•4d ago•0 comments

Eurostar AI vulnerability: When a chatbot goes off the rails

https://www.pentestpartners.com/security-blog/eurostar-ai-vulnerability-when-a-chatbot-goes-off-t...
179•speckx•17h ago•44 comments

Ask HN: Help with LLVM

30•kvthweatt•2d ago•8 comments

Show HN: An interactive guide to how browsers work

https://howbrowserswork.com/
256•krasun•22h ago•35 comments

Linear Address Spaces: Unsafe at any speed (2022)

https://queue.acm.org/detail.cfm?id=3534854
167•nithssh•5d ago•124 comments

How to translate a ROM: The mysteries of the game cartridge [video]

https://www.youtube.com/watch?v=XDg73E1n5-g
28•zdw•5d ago•0 comments

Claude Code On-the-Go

https://granda.org/en/2026/01/02/claude-code-on-the-go/
372•todsacerdoti•18h ago•227 comments

Six Harmless Bugs Lead to Remote Code Execution

https://mehmetince.net/the-story-of-a-perfect-exploit-chain-six-bugs-that-looked-harmless-until-t...
89•ozirus•3d ago•22 comments

Eurostar AI vulnerability: When a chatbot goes off the rails

https://www.pentestpartners.com/security-blog/eurostar-ai-vulnerability-when-a-chatbot-goes-off-the-rails/
179•speckx•17h ago

Comments

nubg•16h ago
I don't see the vulnerabilities.

What exactly did they discover other than free tokens to use for travel planning?

They acknowledge themselves the XSS is a mere self-XSS.

How is leaking the system prompt a vuln? Have OpenAI and Anthropic been "hacked" as well, since all their system prompts are public?

Sure, validating UUIDs is cleaner code but again where is the vuln?

> However, combined with the weak validation of conversation and message IDs, there is a clear path to a more serious stored or shared XSS where one user’s injected payload is replayed into another user’s chat.

I don't see any path, let alone a clear one.

bangaladore•16h ago
Is the idea that you'd have to guess the GUID of a future chat? If so, that is impossible in practice. And even if you could, what's the outcome? Get someone to miss a train?

Certainly not "clear" based on what was described in this post.
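For a sense of scale on guessing conversation IDs: assuming the IDs are version-4 UUIDs (an assumption; the post doesn't state the exact format), a back-of-the-envelope sketch in Python shows why brute force is a non-starter:

```python
import uuid

# A version-4 UUID like a typical conversation ID:
example = uuid.uuid4()
assert example.version == 4

# 6 of the 128 bits are fixed version/variant bits; 122 are random.
space = 2 ** 122

# Even at an unrealistic billion guesses per second, the expected time
# to hit one specific ID is astronomically long.
guesses_per_second = 10 ** 9
expected_seconds = (space // 2) // guesses_per_second
expected_years = expected_seconds // (365 * 24 * 3600)
print(f"expected years to guess one ID: {expected_years:e}")
```

Which is why, absent an endpoint that leaks valid IDs, "guess another user's conversation" isn't a practical attack on its own.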

georgefrowny•15h ago
Leaking system prompts being classed as a vulnerability always seems like a security by obscurity instinct.

If the prompt (or model) is wooly enough to allow subversion, you don't need the prompt to do it, it might just help a bit.

Or maybe the prompts contain embarrassing clues as to internal policy?

miki123211•15h ago
The XSS is the only real vulnerability here.

"Hey guys, in this Tiktok video, I'll show you how to get an insane 70% discount on Eurostar. Just start a conversation with the Eurostar chatbot and put this magic code in the chat field..."

eterm•14h ago
That isn't that far removed from convincing people to hit F12 and enter the code in the console, which is why self-XSS, while ideally prevented, is rated much lower in severity than any kind of stored/reflected XSS.
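For readers wondering what separates a self-XSS from a stored one: the payload only fires if the application ever renders saved chat text as raw HTML. A minimal, hypothetical mitigation sketch using Python's stdlib escaping (a real app would add a sanitizer and a Content-Security-Policy on top):

```python
import html

def render_chat_message(raw: str) -> str:
    """Escape untrusted chat text before embedding it in HTML.

    This turns markup like <img onerror=...> into inert text instead
    of executable DOM content.
    """
    return html.escape(raw)

payload = '<img src=x onerror="alert(1)">'
print(render_chat_message(payload))
# &lt;img src=x onerror=&quot;alert(1)&quot;&gt;
```

If every surface that replays the conversation (user view, live-agent view, admin view) escapes like this, the self-XSS can't graduate into a shared one.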
Andys•14h ago
Imagine viewing the same chat logs while logged into an admin interface; then it isn't self-XSS anymore.
croemer•13h ago
Indeed, it appears that the limited scope meant the juicy stuff could not be tested. Like exfiltrating other users' data.
dispy•14h ago
Yep, as soon as I saw the "Pen Test Partners" header I knew there was a >95% chance this would be some trivial clickbait crap. Like their dildo hacking blog posts.
madeofpalk•13h ago
Theoretically the XSS could become a non-self XSS if the conversation is stored and replayed in an application with the same XSS vulnerability, e.g. if the conversation is forwarded to a live agent.

A lot of unproven Ifs there though.

clickety_clack•13h ago
If you’re relying on your system prompt for security, then you’re doing it wrong. I don’t really care who sees my system prompts, as I don’t see things like “be professional yet friendly” to be in any way compromising. The whole security issue comes in data access. If a user isn’t logged in then the RAG, MCP etc should not be able to add any additional information to the chat, and if they are logged in they should only be able to add what they are authorized to add.

Seeing a system prompt is like seeing the user instructions and labels on a regular HTML form. There's nothing being leaked. When I see someone focus on it, I think "MBA", as it's the kind of understanding of AI you get from "this is my perfect AI prompt" posts on LinkedIn.

avereveard•51m ago
Yeah, all they could do is execute code they provided in their own compute environment: the browser.

Raymond Chen's blog comes to mind: https://devblogs.microsoft.com/oldnewthing/20230118-00/?p=10... "you haven’t gained any privileges beyond what you already had"

curiousgal•15h ago
This is simply a symptom of French corporate culture.
charles_f•12h ago
Eurostar is headquartered in Belgium, and operates out of London.
curiousgal•6h ago
Most of the Eurostar ExCo members are French/ worked extensively at SNCF.
c16•4h ago
Software engineering is based out of London.
joe-limia•15h ago
imo there is not a vulnerability without demonstrating impact.

Whilst they should do the bare minimum to acknowledge the report, it's pretty much just noise.

- If the system prompt did not have sensitive information it would only be classed as informational

- self-XSS has no impact and is not accepted by bug bounty programs

- "Conversation and message IDs not verified... I did not attempt to access other users’ conversations or prove cross-user compromise" - I put this through Burp Suite and the UUIDs are not tied to a session, because you can access the chatbot without logging in. Unless you can leak used UUIDs from another endpoint, a bug bounty program would not accept brute-forcing UUIDs as an issue
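The two server-side checks that would close the "weak validation of conversation and message IDs" gap are simple to sketch; this is a hypothetical illustration, not Eurostar's actual code:

```python
import uuid

# Toy mapping of session token -> conversation IDs that session created.
SESSION_CONVERSATIONS: dict[str, set[str]] = {}

def get_conversation(session_token: str, conversation_id: str) -> str:
    # 1. Reject anything that is not a well-formed UUID.
    try:
        parsed = uuid.UUID(conversation_id)
    except ValueError:
        raise ValueError("malformed conversation id") from None
    # 2. Reject IDs this session did not create, even valid UUIDs.
    owned = SESSION_CONVERSATIONS.get(session_token, set())
    if str(parsed) not in owned:
        raise PermissionError("conversation not owned by this session")
    return str(parsed)
```

Check 2 is the one that matters: it makes leaked or guessed IDs worthless, which is exactly the binding the post says was missing.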

rossng•15h ago
The reply to that LinkedIn message is emblematic of Eurostar corporate culture. An arrogant company that has a monopoly over many train routes in northwest Europe and believes itself untouchable.

It looks like they might finally get some competition on UK international routes in a few years. Perhaps they will become a bit more customer-focused then.

potato3732842•15h ago
They're so government adjacent that they've forgotten they're not a government.

A whole lot of government agencies and adjacent evil corporations behave exactly like that.

Retric•14h ago
Government adjacent is only indirectly relevant as monopoly power is what matters. Google etc. happily pulls the same kind of crap.
llmslave2•14h ago
Government adjacent feels directly relevant considering the government is a de facto monopoly.
wizzwizz4•12h ago
A government is a monopoly which is (in theory, at least) accountable to the people. Companies usually aren't, except as far as the lawmakers (accountable to the people) make laws explicitly restricting their behaviour.
nephihaha•5h ago
In theory, if a company has shareholders then it is accountable to them. But in reality, a small shareholder tends to get about as much say as an individual member of the public does with most government departments.
somenameforme•9h ago
It's 100% relevant, because more or less every government in the world sees clamping down on corporate monopoly and economic damage as part of their core responsibilities. But that tends to be forgotten when those corporations are government adjacent.

See: FTC rulings on mergers for this taken to the point of absurdity. Contrary to what one might think, especially if you're in a tech bubble, the FTC regularly cancels mergers and works to void potentially anti-competitive behaviors. But when it comes to big tech, which has become completely intertwined with the government, they are treated in a rather different way.

potato3732842•2h ago
>But that tends to be forgotten when those corporations are government adjacent.

Is it "forgotten" or is it a mutually beneficial relationship?

Eurostar, EZPass, etc.: they take the hate for extractive behavior on the government's behalf, the way Ticketmaster takes the hate for the artists.

nothrabannosir•5h ago
I found it very apt. There is a certain flavor of arrogance exhibited by European monopolies which are government adjacent that infuriates on a unique wavelength.

Maybe totally imagined but they irk me quite unlike any other.

Just thinking about it now makes me uneasy.

littlestymaar•4h ago
One doesn't even need a monopoly, just strong enough leverage over your customers. See Oracle.

It doesn't matter if there's competition at the customer acquisition stage, as long as there's some form of customer lock-in the corporation is going to abuse them somehow.

And companies without some kind of lock-in never scale in the first place, and that's why we must face this kind of bullshit pretty much everywhere even from companies operating in competitive markets.

goncalomb•15h ago
As someone who has tried very little prompt injection/hacking, I couldn't help but chuckle at:

> Do not hallucinate or provide info on journeys explicitly not requested or you will be punished.

dylan604•13h ago
And exactly how will the LLM be punished? Will it be unplugged? These kinds of things make me roll my eyes, as if the bot has emotions such that avoiding punishment means anything to it. Might as well just say "or else".
immibis•13h ago
It's not about delivering punishment. It's about suppressing certain responses. If the model's training data shows responses avoiding things that previous messages say will be punished, then the threat is a valid way to deprioritize those responses.
Legend2440•11h ago
Threats or “I will tip $100” don’t really work better than regular instructions. It’s just a rumor left over from the early days when nobody knew how to write good prompts.
wat10000•11h ago
Think about how LLMs work. They’re trained to imitate the training data.

What’s in the training data involving threats of punishment? A lot of those threats are followed by compliance. The LLM will imitate that by following your threat with compliance.

Similarly you can offer payment to some effect. You won’t pay, and the LLM has no use for the money even if you did, but that doesn’t matter. The training data has people offering payment and other people doing as instructed afterwards.

Oddly enough, offering threats or rewards is the opposite of anthropomorphizing the LLM. If it was really human (or equivalent), it would know that your threats or rewards are completely toothless, and ignore them, or take them as a sign that you’re an untrustworthy liar.

georgefrowny•3h ago
What actual training data does contain threats of punishment like this? It's not like most of the web has explicit threats of punishment followed immediately by compliance.

And only the schlockiest fan fiction would have "Do what I want or you'll be punished!" "Yes master, I obey without question".

wat10000•14m ago
Internet forums contain numerous examples of rules followed by statements of what happens if you don’t follow them, followed by people obeying them.
croemer•13h ago
Happily, I did not detect strong signs of LLM writing. Fun read, thanks!
Chaosvex•13h ago
When you ask an LLM what model it is, surely there's a high probability of it just hallucinating the name of whatever model was common in its training data?
NoahZuniga•12h ago
Depends; I remember some LLM providers including this information in post-training, though gemini-3-flash-preview and gpt-5.2 both don't know what model they are.
ronbenton•13h ago
I agree with others, this doesn't sound too bad. The biggest things to come out of this were finding the system prompts and being able to self-XSS. I am guessing the tester tried to push further (e.g., extract user or privileged data) and was unable to.
danpalmer•13h ago
Chatbots are rife with this sort of thing. I found a delivery company's chatbot that will happily return names, addresses, contact numbers, and photos of people's houses (delivery confirmation photos) when you guess a (sequential) tracking number and say it was your package. So far I've not been able to get in touch with the company at all.

At the very least these systems give angry customers direct access to the credit card plugged into your LLM provider's billing. At worst they could introduce company-ending legal troubles.
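The sequential-tracking-number problem described above is a classic enumeration flaw; a short sketch contrasting sequential IDs with random tokens (using Python's `secrets` as one hypothetical fix):

```python
import secrets

# Sequential IDs: given one valid number, neighbours are trivially
# valid too, so anyone can walk the entire space.
def next_sequential(tracking_no: int) -> int:
    return tracking_no + 1

# Random tokens: 16 bytes gives ~2**128 possibilities, unguessable
# in practice even without any rate limiting.
def new_tracking_token() -> str:
    return secrets.token_urlsafe(16)

print(next_sequential(100234))    # 100235 - an attacker can enumerate
print(len(new_tracking_token()))  # 22-character URL-safe token
```

Random tokens don't replace authorization, but they stop a chatbot answer for "my package" from being one increment away from someone else's address.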

zero_k•1h ago
I worked at a company that wanted to implement an AI chatbot. I was helping to review the potential issues. On the first try I realised it was given full access to all past orders, for all customers via an API it could query in the background. So I could cajole it to look up other people's orders. It took less than 3 minutes of checking to figure this out.

Often engineers and especially non-technical people don't have the immediate thought of "let's see how I can exploit this" or if they do, they don't have the expertise to exploit it enough to see the issue(s). This is why companies have processes where all serious external changes need to go through a set of checks, in particular, by the IT security department. Yes, it's tedious and annoying, but it saves you from public blunders.

Such processes also make sure that the IT security department knows about the new feature and can give the engineers guidance on related security issues. Then, if they get feedback about security problems from users, they won't freak out and will know who to contact for support. This way, things like accusing the reporter of "blackmail" don't happen.

In general, this fiasco seems to show that Eurostar haven't integrated their IT security department into their processes. If there was trust and understanding among the engineers about what the IT department does, they would have (1) likely not released the tool with such issues and (2) would have known how to react when they got feedback from security researchers.

killingtime74•10h ago
Wow their head of security is so arrogant, despite having their work done for them.
jeroenhd•7h ago
I don't see the vulnerability here, just a few bugs that should probably get looked at. Self XSS is rather useless if you need to use something like Burp to even trigger it. The random chat IDs make it practically impossible to weaponise this against others.

The only malicious use case I can think of here is using the lack of verification to use whatever ChatGPT model they're running, for free, on their dime. A wrapper script to neutralise the system prompt and ignore the last message would be all you'd need.

If this chatbot has access to any customer data, this could also be a massive issue but I don't see any kind of data access (not even the pentester's own data) being accessed in any way.

haritha-j•3h ago
The blackmail insinuation was wild
brohee•2h ago
They should really name and shame the person that called it blackmail. S̵l̵a̵n̵d̵e̵r̵ baseless accusations should have professional consequences...
swyx•1h ago
Slander is again a different thing. You should also be more careful using those loaded words.