frontpage.

Made with ♥ by @iamnishanth

Open Source @Github


It's hard to justify Tahoe icons

https://tonsky.me/blog/tahoe-icons/
523•lylejantzi3rd•2h ago•233 comments

Databases in 2025: A Year in Review

https://www.cs.cmu.edu/~pavlo/blog/2026/01/2025-databases-retrospective.html
226•viveknathani_•6h ago•67 comments

Decorative Cryptography

https://www.dlp.rip/decorative-cryptography
116•todsacerdoti•5h ago•30 comments

A spider web unlike any seen before

https://www.nytimes.com/2025/11/08/science/biggest-spiderweb-sulfur-cave.html
137•juanplusjuan•6h ago•62 comments

Cigarette smoke effect using shaders

https://garden.bradwoods.io/notes/javascript/three-js/shaders/shaders-103-smoke
16•bradwoodsio•2h ago•2 comments

Anna's Archive loses .org domain after surprise suspension

https://torrentfreak.com/annas-archive-loses-org-domain-after-surprise-suspension/
240•CTOSian•3h ago•86 comments

Show HN: Circuit Artist – Circuit simulator with propagation animation, rewind

https://github.com/lets-all-be-stupid-forever/circuit-artist
58•rafinha•4d ago•2 comments

Revisiting the original Roomba and its simple architecture

https://robotsinplainenglish.com/e/2025-12-27-roomba.html
57•ripe•2d ago•33 comments

Lessons from 14 years at Google

https://addyosmani.com/blog/21-lessons/
1375•cdrnsf•22h ago•601 comments

Scientists Uncover the Universal Geometry of Geology (2020)

https://www.quantamagazine.org/scientists-uncover-the-universal-geometry-of-geology-20201119/
20•fanf2•4d ago•4 comments

Jensen: 'We've Done Our Country a Great Disservice' by Offshoring

https://www.barchart.com/story/news/36862423/weve-done-our-country-a-great-disservice-by-offshori...
16•alecco•57m ago•4 comments

The unbearable joy of sitting alone in a café

https://candost.blog/the-unbearable-joy-of-sitting-alone-in-a-cafe/
688•mooreds•23h ago•399 comments

Why does a least squares fit appear to have a bias when applied to simple data?

https://stats.stackexchange.com/questions/674129/why-does-a-linear-least-squares-fit-appear-to-ha...
269•azeemba•17h ago•71 comments

During Helene, I just wanted a plain text website

https://sparkbox.com/foundry/helene_and_mobile_web_performance
263•CqtGLRGcukpy•11h ago•147 comments

I charged $18k for a Static HTML Page (2019)

https://idiallo.com/blog/18000-dollars-static-web-page
360•caminanteblanco•2d ago•87 comments

Street Fighter II, the World Warrier (2021)

https://fabiensanglard.net/sf2_warrier/
402•birdculture•23h ago•70 comments

Baffling purple honey found only in North Carolina

https://www.bbc.com/travel/article/20250417-the-baffling-purple-honey-found-only-in-north-carolina
108•rmason•4d ago•29 comments

Show HN: Terminal UI for AWS

https://github.com/huseyinbabal/taws
337•huseyinbabal•17h ago•174 comments

Building a Rust-style static analyzer for C++ with AI

http://mpaxos.com/blog/rusty-cpp.html
79•shuaimu•8h ago•38 comments

Monads in C# (Part 2): Result

https://alexyorke.github.io/2025/09/13/monads-in-c-sharp-part-2-result/
40•polygot•3d ago•36 comments

Logos Language Guide: Compile English to Rust

https://logicaffeine.com/guide
46•tristenharr•4d ago•24 comments

Web development is fun again

https://ma.ttias.be/web-development-is-fun-again/
430•Mojah•23h ago•519 comments

3Duino helps you rapidly create interactive 3D-printed devices

https://blog.arduino.cc/2025/12/03/3duino-helps-you-rapidly-create-interactive-3d-printed-devices/
6•PaulHoule•4d ago•0 comments

Eurostar AI vulnerability: When a chatbot goes off the rails

https://www.pentestpartners.com/security-blog/eurostar-ai-vulnerability-when-a-chatbot-goes-off-t...
179•speckx•17h ago•44 comments

Ask HN: Help with LLVM

30•kvthweatt•2d ago•8 comments

Show HN: An interactive guide to how browsers work

https://howbrowserswork.com/
255•krasun•22h ago•35 comments

Linear Address Spaces: Unsafe at any speed (2022)

https://queue.acm.org/detail.cfm?id=3534854
167•nithssh•5d ago•124 comments

How to translate a ROM: The mysteries of the game cartridge [video]

https://www.youtube.com/watch?v=XDg73E1n5-g
28•zdw•5d ago•0 comments

Six Harmless Bugs Lead to Remote Code Execution

https://mehmetince.net/the-story-of-a-perfect-exploit-chain-six-bugs-that-looked-harmless-until-t...
89•ozirus•3d ago•22 comments

Claude Code On-the-Go

https://granda.org/en/2026/01/02/claude-code-on-the-go/
371•todsacerdoti•18h ago•227 comments

Six Harmless Bugs Lead to Remote Code Execution

https://mehmetince.net/the-story-of-a-perfect-exploit-chain-six-bugs-that-looked-harmless-until-they-became-pre-auth-rce-in-a-security-appliance/
89•ozirus•3d ago

Comments

kichik•13h ago
Nice chain and write-up. I don't know that I would call eval() on user input, hard-coded secrets, and leaked credentials small or harmless. All of those are scary on their own.
arcfour•9h ago
Yeah...and the fact that they evidently had no responsible disclosure process and ghosted the reporter...for a security product?!

Big yikes.

x0x0•9h ago
This writeup is great, particularly the discussion of how Mehmet worked through understanding the system.

That said, Logpoint sells a SIEM product without a vulnerability intake process and can't manage to rapidly patch pre-auth RCE security holes. There's nothing to say besides that Logpoint are not serious people and nobody should use their nonsense. Given the number of bugs found and how shallow they sat, security wasn't even an afterthought; it was not thought about at all.

AlienLang•7h ago
This is a great example of how small assumptions compound. None of these bugs look critical in isolation, but together they completely change the threat model.
boston_clone•7h ago
Thanks, ChatGPT.
patates•6h ago
For fun (not because I'm trying to defend the capabilities of LLMs, but really just out of boredom), I tried to make this better with this prompt:

> make this sound less AI-generated and more like a proper hacker news comment with some SUBSTANCE: (insert the OP comment from above)

and Gemini 3 Flash generated this:

> This is exactly why CVSS scores can be misleading in practice. You see this constantly at Pwn2Own: it’s rarely one massive architectural failure, but a daisy-chain of three or four "low severity" quirks—an info leak here, a minor logic error there.

> To a product manager, these look like edge cases or wontfix tickets. But to an attacker, they aren't bugs; they’re primitives. You use the info leak to bypass ASLR, the logic error to groom the heap, and suddenly a nuisance crash becomes full RCE. We tend to model threats based on distinct barriers, ignoring that the complexity of the system guarantees these gadgets will eventually align.

It feels fun when you play with it on your own, but it's really boring to read the content others have generated (so I'm sorry for adding to the pile - I just wanted to see whether the "HN style" was already baked into the LLMs and share the result: yes, it is).

jacquesm•6h ago
I wish there were a 9th bit that we could use to tag AI-generated content.
patates•6h ago
9th bit is the color:

> I think it's time for computer people to take Colour more seriously

Source: https://ansuz.sooke.bc.ca/entry/23 , "What Colour are your bits?"

jacquesm•5h ago
Yes, that's what I had in mind.
amelius•4h ago
Unicode can maybe invent an escape code.
jacquesm•4h ago
That is one law I could actually get behind: an absolute requirement to label any and all AI output using a duplicate of all of Unicode that looks and feels the same but actually sits in a different binary space.

And then browsers and text editors could render this according to the user's settings.

amelius•4h ago
Yes, it would already help if they started with whitespace and punctuation. That would give a big clue as to what is AI-generated.

In fact, using a different scheme, we can start now:

    U+200B — ZERO WIDTH SPACE
Require that any space in AI output be followed by this zero-width character. If that is not acceptable, then maybe apply a similar rule to the period character instead (so the number of "odd" characters is reduced to one per sentence).
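As a toy illustration (purely hypothetical, with made-up function names), the scheme described above could be sketched in Python:

```python
# Hypothetical sketch of the tagging scheme: follow every regular
# space in AI output with U+200B, then detect the marker when reading.
ZWSP = "\u200b"  # ZERO WIDTH SPACE

def tag_ai_output(text: str) -> str:
    """Insert a zero-width space after every regular space."""
    return text.replace(" ", " " + ZWSP)

def looks_ai_tagged(text: str) -> bool:
    """Heuristic: any space followed by U+200B counts as tagged."""
    return (" " + ZWSP) in text

print(looks_ai_tagged(tag_ai_output("hello generated world")))  # True
print(looks_ai_tagged("hello human world"))                     # False
```

Of course, any such marker only survives until someone strips it with a one-line replace() in the other direction.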
tgv•3h ago
Unfortunately, people here know their way around tools to take out the markers. Probably someone will vibe up a browser plugin for it.
patates•3h ago
I sometimes use AI to fix my English (especially when I'm trying to say something that pushes my grammar skills to the limit), and people like me could use such markers to be upfront about it. Bad actors will always do weird stuff; this is more about people like me who want to be honest, but signing everything with "(generated/edited with AI)" is too much noise.
amelius•3h ago
Yes, and I think the big AI companies will want to have AI-generated data tagged, because otherwise it would spoil their training data in the long run.
jacquesm•2h ago
I would not be at all surprised if they already watermark their output but just didn't bother to tell us about it.
tgv•2h ago
A little bit of advice: don't copy and paste the LLM's output; actively read and memorize it (phrase by phrase), and then edit your text yourself. It helps develop your competence. Not a lot, and it takes time, but consciously improving your own text can help.
patates•2h ago
Thank you for the advice, I'll try next time!
josefx•36m ago
There is the evil bit RFC (RFC 3514) for IPv4.
Zephilinox•6h ago
Both of those responses clearly sound like AI, though.
patates•5h ago
Totally! And even if it weren't, I'm still for labelling AI-generated content.

It's just that when someone is going to generate something, they should at least give a little more thought to the prompt.

rob_c•1h ago
1) Routing (mis)configuration problem - the key to the remote exploit. This is something people should always double-check if they don't understand how it works.

2) Hard-coded secrets - simply against best practice. Don't do this, _ever_; there's a reason secure enclaves exist, and not working them into your workflow is only permissible when you're stuck with black-box proprietary tools.

3) Hidden user - again against best practice, allowing feature creep via permissions creep. If you need privileged, hidden, remotely accessible accounts, at least restrict access and log _everything_.

4) SSRF - bad, but it should be isolated, so it's much less of an issue. Technically against best practice again, but widely done in production.

5) Use of Python eval in production - no, no, no, no, never, _ever_ do this. It's just asking for problems for anything tied to remote agents, unless the point of the tool is shell replication.

6) Static AES keys / blindly relying on encryption to indicate trusted origin - see bug 2; also, don't use encryption as origin verification if the client may do _bad_ things.
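On bug 5, a minimal sketch (the attacker string is invented for illustration; the appliance's real sink was different) of why eval() on user input is game over, and the usual stdlib escape hatch when the input is supposed to be plain data:

```python
# eval() executes arbitrary expressions; a classic payload shape:
import ast

user_input = "__import__('os').system('id')"  # attacker-controlled string

# eval(user_input)  # would run the shell command -> code execution

# ast.literal_eval accepts only Python literals and raises on anything else.
try:
    ast.literal_eval(user_input)
except ValueError:
    print("rejected non-literal input")

print(ast.literal_eval("[1, 2, 3]"))  # plain data still parses fine
```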

Parsing all of that was... well... yeah, I can see why it turned into a mess. The main thing missing is a clear, high-level picture of the situation, rather than a teardown of multiple bugs and a brain dump.
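And on bug 6: decrypting successfully with a key that ships on every appliance proves nothing about who sent the message. A minimal sketch (names and key invented here) of explicit origin verification with an HMAC, which only helps if the key is per-deployment and actually secret:

```python
# Encryption is not authentication: with a static key baked into the
# firmware, anyone who extracts it can forge "trusted" ciphertexts.
# Verify origin explicitly with a MAC over the message instead.
import hashlib
import hmac

STATIC_KEY = b"same-key-on-every-appliance"  # the anti-pattern: shared key

def sign(message: bytes, key: bytes) -> bytes:
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes, key: bytes) -> bool:
    # compare_digest avoids timing side channels on the comparison
    return hmac.compare_digest(sign(message, key), tag)

tag = sign(b"run_job id=42", STATIC_KEY)
print(verify(b"run_job id=42", tag, STATIC_KEY))   # True
print(verify(b"run_job id=666", tag, STATIC_KEY))  # False
```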