frontpage.

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
290•theblazehen•2d ago•97 comments

Software Engineering Is Back

https://blog.alaindichiappari.dev/p/software-engineering-is-back
22•alainrk•1h ago•12 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
35•AlexeyBrin•1h ago•5 comments

Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501
15•onurkanbkrc•1h ago•1 comment

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
717•klaussilveira•16h ago•218 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
978•xnx•21h ago•562 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
94•jesperordrup•6h ago•35 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
5•nar001•35m ago•2 comments

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
138•matheusalmeida•2d ago•36 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
74•videotopia•4d ago•11 comments

Ga68, a GNU Algol 68 Compiler

https://fosdem.org/2026/schedule/event/PEXRTN-ga68-intro/
17•matt_d•3d ago•4 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
46•helloplanets•4d ago•46 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
242•isitcontent•16h ago•27 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
242•dmpetrov•16h ago•128 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
344•vecti•18h ago•153 comments

Cross-Region MSK Replication: K2K vs. MirrorMaker2

https://medium.com/lensesio/cross-region-msk-replication-a-comprehensive-performance-comparison-o...
4•andmarios•4d ago•1 comment

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
510•todsacerdoti•1d ago•248 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
393•ostacke•22h ago•101 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
310•eljojo•19h ago•192 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
361•aktau•22h ago•187 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
437•lstoll•22h ago•286 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
33•1vuio0pswjnm7•2h ago•31 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
73•kmm•5d ago•11 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
26•bikenaga•3d ago•13 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
98•quibono•4d ago•22 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
278•i5heu•19h ago•227 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
43•gmays•11h ago•15 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1088•cdrnsf•1d ago•469 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
312•surprisetalk•3d ago•45 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
36•romes•4d ago•3 comments

CSRF protection without tokens or hidden form fields

https://blog.miguelgrinberg.com/post/csrf-protection-without-tokens-or-hidden-form-fields
303•adevilinyc•1mo ago

Comments

owenthejumper•1mo ago
Right now the problem is what the author already mentions: the use of Sec-Fetch-Site (FYI, HTTP headers are case insensitive :) is considered defense in depth by OWASP right now, not a primary protection.

Unfortunately OWASP rules the world. Not because it's the best way to protect your apps, but because the corporate overlords in infosec teams need to check the box with "Complies with OWASP Top 10".

miguelgrinberg•1mo ago
Hi, author here.

This was actually a mistake. If you look at the OWASP cheat sheet today you will see that Fetch Metadata is a top-level alternative to the traditional token-based protection.

I'm not sure I understand why, but the cheat sheet page was modified twice. First it entered the page with a top-level mention. Then someone slipped a revision that downgraded it to defense in depth without anyone noticing. It has now been reverted back to the original version.

Some details on what happened are in this other discussion from a couple of days ago: https://news.ycombinator.com/item?id=46347280.

nchmy•1mo ago
Can you share links to better guidance than OWASP?
tptacek•1mo ago
The OWASP Top 10 is a list of vulnerabilities, not a checklist of things you have to actually "do".
flomo•1mo ago
Completely agree. But FYI there is a bunch of dev training stuff around this, implying things like "don't do an OWASP or you're in trouble".
ozim•1mo ago
If you look at it from the perspective of vulnerability assessment, it kind of is.
scott_w•1mo ago
While you’re correct, corporate security teams demand suppliers “comply with OWASP,” despite this being a nonsensical statement to anyone who’d read the website.

Unfortunately, the customer purchasing your product doesn’t know this and (naturally) trusts their own internal experts over you. Especially given all their other suppliers are more than happy to state they’re certified!

tptacek•1mo ago
I'm, uh, pretty familiar with the routine. I stand by what I said: you do not need any particular CSRF defense in place; you need to not have CSRF vulnerabilities. There's no OWASP checkbox-alike that requires you to have CSRF tokens, and plenty of real line-of-business apps at gigantic companies don't.
scott_w•1mo ago
To be fair, though, you’re a lot more knowledgeable and experienced than some security “experts” I’ve had to deal with ;-)
8n4vidtmkvmk•1mo ago
Since when are they case sensitive? https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/... says otherwise.

It's possible for a server to treat them as case sensitive, but that seems like a bad idea.

thomascountz•1mo ago
+1

In HTTP/2, headers are not unique if they only differ by casing, but they must be encoded as lowercase.

   Just as in HTTP/1.x, header field names are strings of ASCII characters that are compared in a case-insensitive fashion. However, header field names MUST be converted to lowercase prior to their encoding in HTTP/2.  A request or response containing uppercase header field names MUST be treated as malformed (Section 8.1.2.6).[1]
In HTTP/1.x, headers are case-insensitive for purposes of comparison and encoding.

   Each header field consists of a name followed by a colon (":") and the field value. Field names are case-insensitive.[2]
So, if Sec-Fetch-Site is sensitive at all, it would be sec-fetch-site when sent via HTTP/2, and you're responsible for encoding/decoding.

[1]: https://datatracker.ietf.org/doc/html/rfc7540#section-8.1.2

[2]: https://datatracker.ietf.org/doc/html/rfc2616#section-4.2
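
For what it's worth, server-side header containers generally normalize this away; a tiny stdlib sketch (not from the article):

    from wsgiref.headers import Headers

    # The header arrives in HTTP/2 wire casing (lowercase)...
    h = Headers([("sec-fetch-site", "cross-site")])
    # ...but lookup is case-insensitive, so either spelling works:
    print(h["Sec-Fetch-Site"])  # -> cross-site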

thatwasunusual•1mo ago
>> FYI, HTTP headers are case insensitive

> Since when are they case sensitive?

[...]

thomascountz•1mo ago
Perhaps the OG comment was misread or confusion was caused by a typo and/or edit.

When I originally read it hours ago, I also read it as "...HTTP headers are case sensitive," (emphasis mine).

That said, there is one caveat regarding case sensitivity for headers encoded for HTTP/2.

jonway•1mo ago
My primitive instincts lead me to believe that sometimes they end up being Case-Sensitive and Sometimes NoT! (depending on implementation)
tmsbrg•1mo ago
I'm surprised there's no mention of the SameSite cookie attribute. I'd consider that to be the modern CSRF protection, and it's easy: just a cookie flag:

https://scotthelme.co.uk/csrf-is-dead/

But I didn't know about the Sec-Fetch-Site header, good to know.
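
For reference, the flag in question as a minimal sketch, assuming Flask (the cookie name and value are placeholders):

    from flask import Flask, make_response

    app = Flask(__name__)

    @app.post("/login")
    def login():
        resp = make_response("ok")
        # SameSite=Lax: the browser withholds this cookie from cross-site
        # subresource requests and form POSTs, which is what CSRF rides on.
        resp.set_cookie("session", "opaque-token", httponly=True,
                        secure=True, samesite="Lax")
        return resp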

miguelgrinberg•1mo ago
The OWASP CSRF prevention cheat sheet page does mention SameSite cookies, but they consider it defense in depth: https://cheatsheetseries.owasp.org/cheatsheets/Cross-Site_Re....
tptacek•1mo ago
Because of clientside Javascript CSRF, which is not a common condition.
nchmy•1mo ago
Client side js is not particularly relevant to csrf.
tptacek•1mo ago
I mostly agree, but that's the logic OWASP uses to argue you should still be doing explicit tokens even if you're using SameSite and Sec-Fetch.
nchmy•1mo ago
But that's not what OWASP argues. Fetch Metadata is recommended as a primary, standalone defense against CSRF (you can be forgiven for not knowing this - I worked on getting the doc updated and it landed a couple weeks ago, then was reverted erroneously, and fixed yesterday)
tmsbrg•1mo ago
What do you mean by clientside Javascript CSRF?
hn_throwaway_99•1mo ago
I don't understand the potential vulnerabilities listed at the linked section here: https://datatracker.ietf.org/doc/html/draft-ietf-httpbis-rfc...

They give 2 reasons why SameSite cookies are only considered defense in depth:

----

> Lax enforcement provides reasonable defense in depth against CSRF attacks that rely on unsafe HTTP methods (like "POST"), but does not offer a robust defense against CSRF as a general category of attack:

> 1. Attackers can still pop up new windows or trigger top-level navigations in order to create a "same-site" request (as described in section 2.1), which is only a speedbump along the road to exploitation.

> 2. Features like "<link rel='prerender'>" [prerendering] can be exploited to create "same-site" requests without the risk of user detection.

> When possible, developers should use a session management mechanism such as that described in Section 8.8.2 to mitigate the risk of CSRF more completely.

----

But that doesn't make any sense to me. I think "the robust solution" should be to just be sure that you're only performing potentially sensitive actions on POST or other mutating method requests, and always setting the SameSite attribute. If that is true, there is absolutely no vulnerability if the user is using a browser from the past seven years or so. The 2 points noted in the above section would only lead to a vulnerability if you're performing a sensitive state-changing action on a GET. So rather than tell developers to implement a complicated "session management mechanism", it seems like it would make a lot more sense to just say don't perform sensitive state changes on a GET.

Am I missing something here? Do I not understand the potential attack vectors laid out in the 2 bullet points?

tordrt•1mo ago
Yep, SameSite Lax, and just make sure you never perform any actions using GET requests, which you shouldn't anyway.
paulryanrogers•1mo ago
Unsubscribe links often need to be GET, or at least start as GET
eli•1mo ago
list-unsubscribe header sends a POST. Probably makes more sense to just use a token from an email anyway.
hn_throwaway_99•1mo ago
The way the list-unsubscribe header works, it essentially must use a token when one-click unsubscribe (i.e. when the List-Unsubscribe-Post: List-Unsubscribe=One-Click header is also passed) is used, and since Gmail has required one-click unsubscribe for nearly 2 years now, my guess is all bulk mail senders support this. Relevant section from the one-click unsubscribe RFC:

> The URI in the List-Unsubscribe header MUST contain enough information to identify the mail recipient and the list from which the recipient is to be removed, so that the unsubscription process can complete automatically. Since there is no provision for extra POST arguments, any information about the message or recipient is encoded in the URI. In particular, one-click has no way to ask the user what address or from what list the user wishes to unsubscribe.

> The POST request MUST NOT include cookies, HTTP authorization, or any other context information. The unsubscribe operation is logically unrelated to any previous web activity, and context information could inappropriately link the unsubscribe to previous activity.

> The URI SHOULD include an opaque identifier or another hard-to-forge component in addition to, or instead of, the plaintext names of the list and the subscriber. The server handling the unsubscription SHOULD verify that the opaque or hard-to-forge component is valid. This will deter attacks in which a malicious party sends spam with List-Unsubscribe links for a victim list, with the intention of causing list unsubscriptions from the victim list as a side effect of users reporting the spam, or where the attacker does POSTs directly to the mail sender's unsubscription server.

> The mail sender needs to provide the infrastructure to handle POST requests to the specified URI in the List-Unsubscribe header, and to handle the unsubscribe requests that its mail will provoke.
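
Putting those requirements together, the outgoing message headers might look something like this (a stdlib sketch; the URL and token are hypothetical):

    from email.message import EmailMessage

    msg = EmailMessage()
    # The URI carries an opaque, hard-to-forge identifier, per the RFC:
    msg["List-Unsubscribe"] = "<https://mail.example.com/unsub/opaque-token>"
    # Signals that a bare POST to that URI completes the unsubscribe:
    msg["List-Unsubscribe-Post"] = "List-Unsubscribe=One-Click"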

paulryanrogers•1mo ago
I was thinking more about the unsubscribe footer links still very common in emails.
eli•1mo ago
I don’t think CSRF has anything to do with those?
paulryanrogers•1mo ago
The endpoints serving those links can't be protected as well. Unless they serve a form that posts, which may not be legal if it requires extra clicks
nhumrich•1mo ago
This is "not allowing cross site at all" so, technically it's not "request forgery" protection. Yes, this is very semantic, but, CSRF is a vulnerability introduced by enabling CS and CORS. So, technically, same-site cookies are not "protection" against CSRF.
hn_throwaway_99•1mo ago
I don't understand your distinction at all. I may not quite grok your meaning here, but CORS is usually discussed in the context of allowing cross-origin AJAX calls.

But cross origin form posts are and have always been permitted, and are the main route by which CSRF vulnerabilities arise. Nothing on the client or server needs to be enabled to allow these form posts.

Furthermore, the approach detailed in the article simply has the server block requests if they are cross site/origin requests, so I'm not sure what the semantic difference is.

true_religion•1mo ago
Yeah, CORS is not a safety mechanism. It’s a procedure of loosening the default safety mechanism of not sharing any response data from a cross site request with client side JavaScript.
nchmy•1mo ago
Cs and cors have nothing to do with csrf... Though, yes, neither does same-site
nchmy•1mo ago
I don't know why I said same-site cookies have nothing to do with csrf. They can be helpful as defense in depth, but not primary defense.
hn_throwaway_99•1mo ago
I haven't seen any proposed attack vectors where they are insufficient primary defense when using SameSite Lax as long as you don't do any sensitive state change operations on non-mutative methods like GET.

I feel like people are just parroting the OWASP "they're just defense in depth!" line without understanding what the actual underlying vulnerabilities are, namely:

1. If you're performing a sensitive operation on a GET, you're in trouble. But I think that is a bigger problem and you shouldn't do that.

2. A user might be on a particularly old browser, but SameSite support has been out on all major browsers for nearly a decade now, so I think that point is moot.

The problem I have with the "it's just defense in depth" line is people don't really understand how it protects against any underlying vulnerabilities. In that case, CSRF tokens add complexity without actually making you any safer.

I'd be happy to learn why my thinking is incorrect, i.e. where there's a vulnerability lurking that I'm not thinking of if you use SameSite Lax and only perform state changes on mutable methods.

hatefulheart•1mo ago
I’m confused, how does this prevent a CSRF attack?

SameSite or not is inconsequential to the check a backend does for a CSRF token in the POST.

tptacek•1mo ago
No? The whole point of SameSite=(!none) is to prevent requests from unexpectedly carrying cookies, which is how CSRF attacks work.
hatefulheart•1mo ago
What does this even mean?

I’m not being rude, what does it mean to unexpectedly carry cookies? That’s not what I understand the risk of CSRF is.

My understanding is that we want to ensure a POST came from our website and we do so with a double signed HMAC token that is present in the form AND the cookie, which is also tied to the session.

What on earth is unexpectedly carrying cookies?
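
(For reference, the token scheme described above, reduced to a sketch: an HMAC tied to the session, embedded in the form and checked server-side. Stdlib only; the secret handling is illustrative.)

    import hashlib
    import hmac

    SECRET = b"server-side-secret"  # illustrative; load from secure config

    def csrf_token(session_id: str) -> str:
        # The token embedded in the form: an HMAC bound to the session.
        return hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()

    def csrf_valid(form_token: str, session_id: str) -> bool:
        # Constant-time comparison against the expected token.
        return hmac.compare_digest(form_token, csrf_token(session_id))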

demurgos•1mo ago
The "unexpected" part is that the browser automatically fills some headers on behalf of the user, that the (malicious) origin server does not have access to. For most headers it's not a problem, but cookies are more sensitive.

The core idea behind the token-based defense is to prove that the origin server had access to the value in the first place such that it could have sent it if the browser didn't add it automatically.

I tend to agree that the inclusion of cookies in cross-site requests is the wrong default. Using same-site fixes the problem at the root.

The general recommendation I saw is to have two cookies. One without same-site for read operations, this allows to gracefully handle users navigating to your site. And a second same-site cookie for state-changing operations.
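
That two-cookie setup might look like this, assuming Flask (cookie names and values are made up):

    from flask import Flask, make_response

    app = Flask(__name__)

    @app.post("/login")
    def login():
        resp = make_response("ok")
        # Sent on cross-site requests too, so pages can render logged-in
        # when users navigate in from elsewhere (read-only use):
        resp.set_cookie("session_read", "token-a", httponly=True,
                        secure=True, samesite="None")
        # Same-site only; state-changing endpoints require this one:
        resp.set_cookie("session_write", "token-b", httponly=True,
                        secure=True, samesite="Strict")
        return resp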

hn_throwaway_99•1mo ago
The only reason CSRF is even possible is because the browser sends (or, well, used to send) cookies for a particular request even if that request originated from a different site. If the browser never did that (and most people would argue that's a design flaw from the get-go) CSRF attacks wouldn't even be possible. The SameSite attribute makes it so that cookies will only be sent if the request comes from the same site as the one that originally wrote the cookie.
hatefulheart•1mo ago
I think I understand now: the cookie just is not present in the POST if a user clicked on, for example, a maliciously crafted post from a different origin?
kassner•1mo ago
Exactly.
zenmac•1mo ago
Never needed CSRF tokens and assumed that cookies were always SameSite, but I can see that it was introduced in 2016. I've just had the sitename put into the value of the cookie since, and never really needed to think about that.

It just feels like all these HTTP specs are super duct-taped together. I guess that is the only way to ensure mass adoption for new devs and now vibe coders.

alserio•1mo ago
I'm not sure I'm understanding your solution
hn_throwaway_99•1mo ago
Given what was written, I'm not quite sure the author does either.
zenmac•1mo ago
If the domain name is in the cookie value then that can't be used when submitting another request from another domain. Yes, you can configure the DNS to bypass that, but at that point it is also pointless for CSRF.
hn_throwaway_99•1mo ago
Not to be rude, but from your comments you don't appear to understand what the CSRF vulnerability actually is, nor how attackers make use of it.

Cookies can still only be sent to the site that originally wrote them, and they can only be read by the originating site, and this was always the case. The problem, though, is that a Bad Guy site could submit a form post to Vulnerable Site, and originally the browser would still send any cookies of Vulnerable Site with the request. Your comment about "if the domain name is in the cookie value" doesn't change this and the problem still exists. "Yes you can configure the dns to bypass that" also doesn't make any sense in this context. The issue is that if a user is logged into Vulnerable Site, and can be somehow convinced to visit Bad Guy site, then Bad Guy site can then take an action as the logged user of Vulnerable Site, without the user's consent.

ludwik•1mo ago
> Just had the sitename put into the value of the cookie since, and never really needed to think about that.

How would that help? This doesn't seem like a solution to the CSRF problem

FiloSottile•1mo ago
SameSite doesn’t protect against same-site cross-origin requests, so you are staking your app’s security on the security of the marketing blog.
tmsbrg•1mo ago
What do you mean by same-site cross-origin requests?
FiloSottile•1mo ago
See the same-site section of https://words.filippo.io/csrf/
tmsbrg•1mo ago
Oh, thanks. I learned something new. Never knew that different subdomains are considered the same "site", but MDN confirms this[0]. This shows just how complex these matters are, imo; it's not surprising people make mistakes in configuring CSRF protection.

It's a pretty cool attack chain: if there's an XSS on marketing.example.com it can be used to execute a CSRF on app.example.com! It could also be used with a dangling subdomain takeover or if there's open subdomain registration.

[0] https://developer.mozilla.org/en-US/docs/Glossary/Site

FiloSottile•1mo ago
It's why I like Sec-Fetch-Site: the #1 risk is for the developer to make a mistake trying to configure something more complex. Sec-Fetch-Site delegates the complexity to the browser.
hxtk•1mo ago
It’s a real problem for defense sites because .mil is a public suffix so all navy.mil sites are the “same site” and all af.mil sites etc.
hn_throwaway_99•1mo ago
Thanks very much for your comment. I posted elsewhere that I felt like SameSite: Lax should be considered a primary defense, not just "Defense in depth" as OWASP calls it, but your rationale makes sense to me, while OWASP's does not.

That is, if you are using SameSite Lax and not performing state changes on GETs, there is no real attack vector, but like you say it means you need to be able to trust the security of all of your subdomains equally, which is rarely if ever the case.

I'm surprised browser vendors haven't thought of this. Like even SameSite: Strict will still send cookies when the request comes from a subdomain. Has there been any talk of adding something like a SameSite: SameOrigin or something like that? It seems weird to me that the Sec-Fetch-Site header has clear delineations between site and origin, but the SameSite cookie attribute does not.

FiloSottile•1mo ago
Browser vendors have absolutely thought about this, at length.

The web platform is intricate, legacy, and critical. Websites by and large can’t and don’t break with browser updates, which makes all of these things like operating on the engine in flight.

For example, click through some of the multiple iterations of the Schemeful Same Site proposal linked from my blog.

Thing is, SameSite’s primary goal was not CSRF prevention, it was privacy. CSRF is what Fetch metadata is for.

hn_throwaway_99•1mo ago
> Thing is, SameSite’s primary goal was not CSRF prevention, it was privacy.

That doesn't make any sense to me, can you explain? Cookies were only ever readable or writable by the site that created them, even before SameSite existed. Even with a CSRF vulnerability, the attacker could never read the response from the forged request. So it seems to me that SameSite fundamentally is more about preventing CSRF vulnerabilities - it actually doesn't do much (beyond that) in terms of privacy, unless I'm missing something.

shermantanktop•1mo ago
Am I missing something? The suggested protection helps with XSS flavors of CSRF but not crafted payloads that come from scripts which have freedom to fake all headers. At that point you also need an oauth/jwt type cookie passed over a private channel (TLS) to trust the input. Which is true for any sane web app, but still…
varenc•1mo ago
If an attacker has a user's private authentication token, usually stored in a __Host-prefixed cookie, then it's game over anyway. CSRF protection is about preventing other sites from forcing a user to make a request to a site they're authenticated to, when the malicious site doesn't actually have the cookie/token.

CSRF is when you don't have the authentication token, but can force a user to make a request of your choosing that includes it. In this context you're using HTML/JS and are limited by the browser in terms of what headers you can control.

The classic CSRF attack is just a <form> on a random site that posts to "victim.com/some_action". If we were to re-write browser standards today, cross-domain POST requests probably just wouldn't be permitted.

naasking•1mo ago
> If we were to re-write browser standards today, cross-domain POST requests probably just wouldn't be permitted.

That would be a terrible idea IMO. The insecurity was fundamentally introduced by cookies, which were always a hack. Those should be omitted, and then authorization methods should be designed to learn the lessons from the 70s and 80s, as CSRF is just the latest incarnation of the Confused Deputy:

https://en.wikipedia.org/wiki/Confused_deputy_problem

varenc•1mo ago
Ah, so true. That's what I mean! Cross-domain requests that pass along the target domain's cookies. As in, probably every cookie would default to current __Host-* behavior. (And then some other way to allow a cookie if you want. Also some way of expressing desired cookie behavior without a silly prefix on its name...)
ImJamal•1mo ago
How would you make SSO work without cross domain posts?
ctidd•1mo ago
CSRF exists as a consequence of insecure-by-default browser handling of cookies, whereby the browser sends the host’s cookies on requests initiated by a third-party script to the vulnerable host. If a script can fake all headers, it’s not running in a browser, and so was never exposed to the insecure browser cookie handling to be able to leverage it as a vector. If no prerequisite vector, then no vulnerability to mitigate.
t-writescode•1mo ago
As I understand it, the moment you’re dealing with custom scripts, you’ve left the realm of a csrf attack. They’re dependent upon session tokens in cookies
nchmy•1mo ago
Csrf is not dependent on js. It happens via normal links on external sites.
t-writescode•1mo ago
That's what I said, yes.
nchmy•1mo ago
Sorry, I misread your comment
rvnx•1mo ago
If you want, “SameSite=Strict” may also be helpful and is supported on “all” browsers so it is reasonable to use it (but like you did, adding server validation is always a +).

https://caniuse.com/mdn-http_headers_set-cookie_samesite_str...

This checks the scheme and registrable domain (the "site") to decide whether the cookie should be sent with the request or not.

simonw•1mo ago
I find that cookie setting really confusing. It means that cookies will only be sent on requests that originated on the site that set them... and that restriction applies even when you click a link from one site to another.

So if you follow a link (e.g. from a Google search) to a site that uses SameSite=Strict cookies you will be treated as logged out on the first page that you see! You won't see your logged-in state until you refresh that page.

I guess maybe it's for sites that are so SPA-pilled that even the login state isn't displayed until a fetch() request has fired somewhere?

ctidd•1mo ago
You want Lax for the intuitive behavior on navigation requests from other origins. Because there's no assumption that navigation GET requests are safe, Strict is available as the assumption-free secure option.
macNchz•1mo ago
SameSite=Strict is belt-and-suspenders protection in the case where you could have GET requests that have some kind of impact on state, and the extra safety is worth the UX impact (like with an online banking portal).

Discussions about this often wind up with a lot of people saying "GET requests aren't supposed to change state!!!", which is true, but just because they're not supposed to doesn't mean there aren't some floating around in large applications, or that there aren't clever ways to abuse seemingly innocuous side effects from otherwise-stateless GET requests (maybe just visiting /posts/1337/?shared_by_user=12345 exposes some tiny detail about your account to user #12345, who can then use that as part of a multi-step attack). Setting the strict flag just closes the door on all of those possibilities in one go.

Macha•1mo ago
Note SameSite=Strict also counts against referrals, which means your first request will appear unauthenticated. If this request just loads your SPA skeleton, that might be fine, but if you're doing SSR of any sort, that might not be what you want.
rocqua•1mo ago
That's why someone suggested a non samesite cookie for reads and a samesite cookie for requests with side effects.

CSRF is mostly about causing side effects, not about access to information. And presumably just displaying your landing page should not have side effects, even when doing authenticated server side rendering. At least no side effects other than creating logs.

altmind•1mo ago
Are there any approaches to CSRF tokens that don't require storing issued tokens on the server side?
t-writescode•1mo ago
Most of them. You can send in a cookie and a field and compare.

CSRF is about arbitrary clicks in emails and such that automagic your logged-in-session cookies to the server. If you require an extra field and compare it, you’re fine

maxbond•1mo ago
The alternative to storing tokens is to use an AEAD encryption scheme like AES-GCM to protect tokens from forgery or tampering. You will still have to worry about reuse, so you will probably want to restrict use of this token to the user it was generated for and to a lifetime (say, 24 hours). That is a very high level description, there are details (like nonce generation) that must be done correctly for the system to be secure.
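
A rough sketch of that idea, assuming the cryptography package (the claims, lifetime, and key handling are illustrative only):

    import json
    import os
    import time

    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    KEY = AESGCM.generate_key(bit_length=128)  # in practice, a persistent secret

    def issue_token(user_id: str) -> bytes:
        # Bind the token to a user and a lifetime; nothing is stored server-side.
        claims = json.dumps({"u": user_id, "exp": time.time() + 86400}).encode()
        nonce = os.urandom(12)  # AES-GCM nonces must never repeat under a key
        return nonce + AESGCM(KEY).encrypt(nonce, claims, b"csrf-v1")

    def token_valid(token: bytes, user_id: str) -> bool:
        try:
            # decrypt() also authenticates; any tampering raises InvalidTag.
            plaintext = AESGCM(KEY).decrypt(token[:12], token[12:], b"csrf-v1")
            claims = json.loads(plaintext)
        except Exception:
            return False
        return claims["u"] == user_id and claims["exp"] > time.time()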
est•1mo ago
reminds me of something similar

https://news.ycombinator.com/item?id=46321651

e.g. serve .svg only when the "Sec-Fetch-Dest: image" header is present. This will stop scripts.
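
Something like this, say, assuming Flask (the paths are made up):

    from flask import Flask, abort, request, send_from_directory

    app = Flask(__name__)

    @app.route("/images/<path:name>")
    def image(name):
        # Browsers send Sec-Fetch-Dest: image for <img> loads; direct
        # navigation or fetch() carries "document"/"empty" instead.
        if name.endswith(".svg") and request.headers.get("Sec-Fetch-Dest") != "image":
            abort(403)
        return send_from_directory("images", name)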

amluto•1mo ago
Or sending Content-Security-Policy: script-src 'none' for everything that isn’t intended to be a document. Or both.

IMO it’s too bad that suborigins never landed. It would be nice if Discord’s mintlify route could set something like Suborigin: mintlify, thus limiting the blast radius to the mintlify section.

est•1mo ago
maybe adding a dedicated cookie for that specific path?
amluto•1mo ago
HTTP-only cookies ought to work fine for this.

I imagine there’s a fair amount of complexity that would need to be worked out, mostly because the browser doesn’t know the suborigin at the time it makes a request. So Sec-Fetch-Site and all the usual CORS logic would not be able to respect suborigins unless there was a pre-flight check for the browser to learn the suborigin. But this doesn’t seem insurmountable: a server using suborigins would know that request headers are sent as if the request were aimed at the primary origin, and there could be some CORS extensions to handle the case where the originating document has a suborigin.

louiskottmann•1mo ago
This is a massive change for caching in webapp templates, as it makes their rendering more stable and thus more cacheable.

A key component here is that we are trusting the user's browser, as it is the browser that sets the Sec-Fetch-Site header and guarantees it has not been tampered with.

I wonder if that's a new thing? Do we already rely on browsers being correct in their implementation for something equally fundamental?

tptacek•1mo ago
The entire web security model assumes we can trust browsers to implement web security policies!
louiskottmann•1mo ago
I appreciate that, but in the case of TLS or CSRF tokens the server is not blindly trusting the browser the way Sec-Fetch-Site requires.
tptacek•1mo ago
Sure it is. The same-origin rule that holds the whole web security model together is entirely a property of browser behavior.
louiskottmann•1mo ago
That's indeed a good example of prior full trusting of the browser by the server.
nchmy•1mo ago
It's a shame you talked about browser tampering, since better caching is indeed a benefit of fetch metadata headers.
vasco•1mo ago
> One option is to reject all requests that do not have the Sec-Fetch-Site header. This keeps everyone secure, but of course, there's going to be some unhappy users of old devices that will not be able to use your application. Plus, this would also reject HTTP clients that are not browsers. If this is not a problem for your use case, then great, but it isn't a good solution overall.

If my client is not a browser surely I can set whatever headers I want? Including setting it to same-origin?

nchmy•1mo ago
Sec fetch has 98% browser coverage now. You can fall back to origin, which has 100% coverage.

Non-browser clients can be either blocked or even just given a pass, since CSRF is about tricking someone into clicking a link that then sends their Auth cookie along with the request. Either the non-browser request includes a valid cookie in the request and is allowed to mutate state, or it doesn't and nothing happens as the request doesn't get authenticated.
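
Put together, the policy being described might look like this, as a sketch assuming Flask (it reflects the choices discussed in this thread, not the article's exact code):

    from urllib.parse import urlparse

    from flask import Flask, abort, request

    app = Flask(__name__)

    @app.before_request
    def csrf_protect():
        if request.method in ("GET", "HEAD", "OPTIONS"):
            return  # safe methods shouldn't change state anyway

        site = request.headers.get("Sec-Fetch-Site")
        if site is not None:
            # "none" means user-initiated, e.g. a bookmark or the address bar.
            if site not in ("same-origin", "none"):
                abort(403)
            return

        # Fallback for older browsers: compare Origin against our own host.
        origin = request.headers.get("Origin")
        if origin is not None and urlparse(origin).netloc != request.host:
            abort(403)
        # Neither header present: a non-browser client, which isn't riding on
        # a victim's browser cookies, so give it a pass.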

rfmoz•1mo ago
Adding more security headers every year feels like strapping seatbelts onto a collapsing roller coaster. It would be better to stop this "sec headers stack" in favour of simpler, secure-by-default browser primitives with explicit opt-out. Taking an example from https://securityheaders.com, the list nowadays is as follows:

- Strict-Transport-Security
- Content-Security-Policy
- X-Frame-Options
- X-Content-Type-Options
- Referrer-Policy
- Permissions-Policy
- Cross-Origin-Embedder-Policy
- Cross-Origin-Opener-Policy
- Cross-Origin-Resource-Policy

thaumasiotes•1mo ago
Yeah, redoing the defaults would probably be good.

On the other hand, I tried doing a Google search with javascript disabled today, and I learned that Google doesn't even allow this. (I also thought "maybe that's just something they try to pawn off on mobile browsers", but no, it's not allowed on desktop either.)

So the state of things for "how should web browsers work?" seems to be getting worse, not better.

PhilipRoman•1mo ago
Wow, I used to be able to search google even from terminal browsers like 'elinks'
rhdunn•1mo ago
I used elinks once to find a solution to an issue where the login screen was broken after an upgrade. I was able to switch to a virtual console, find out what the issue was, identify the commands to fix it, and resolve it.
paffdragon•1mo ago
I think it still works if you set your user agent to something like lynx. I had a custom UA set for Google search in Firefox just for this purpose and to disable AI overviews.
c17r•1mo ago
I just tried with the "links" browser and I get a "Update your browser. Your browser isn't supported anymore. To continue your search, upgrade to a recent version"
rfmoz•1mo ago
robots.txt offers a good reference for a way to define specific behavior for a whole domain. Something like that for security could be enough for a large number of websites.

Also, a new header like "sec-policy: foo-url" may be a clean way to move those definitions away from the app+web+proxy+CDN mesh to a single clear point.

zwnow•1mo ago
These files are just ignored by everything. We don't need .txt files, we need good defaults.
rfmoz•1mo ago
I reply to myself because I've found that idea was already proposed:

"Origin policy was a proposal for a web platform mechanism that allows origins to set their origin-wide configuration in a central location, instead of using per-response HTTP headers." - https://github.com/WICG/origin-policy

But its status has been "[On hold for now]" for at least three years.

_heimdall•1mo ago
This is an extremely common approach across industries. Look into diesel engine emission control systems sometime if you aren't familiar. The last few decades have been about bolting on one new system every few years, because the ones already added continue to cause unintended reliability problems.
NorwegianDude•1mo ago
The simplest way to prevent CSRF is to use the Referer header, and that has been used since forever. If the header is missing, you no-op the post. Origin is similar, and can be used with referer as fallback, but it's not needed for most sites.
nchmy•1mo ago
Fetch Metadata headers, as discussed in this post, are just as simple and much more effective. There are lots of issues with Referer, and even some with Origin.
talkin•1mo ago
NO. Please don’t spread wrong solutions.

Your approach is similar to the idea behind checking Sec-Fetch-Site, and implementing that header check is the same amount of work. But that header is meant exactly for this purpose, while Referer is haunted with problems.

So for officially intended protections, implementing this header and SameSite cookies gets you a very long way without any complexity, assumptions, or tricks of old lore.

NorwegianDude•1mo ago
It's not a wrong solution. It's been commonly used since forever, decades before the Sec-Fetch-Site header existed, and it stops CSRF. Sec-Fetch-Site is not supported in old browsers, so relying on that is unsafe without any fallbacks.
justarandomname•1mo ago
I worked on a legacy application that did this as a stop-gap while CSRF tokens were being implemented, and it just kept both approaches.
magmostafa•1mo ago
This approach using Sec-Fetch-* headers is elegant, but it's worth noting the browser support considerations. According to caniuse, Sec-Fetch-Site has ~95% global coverage (missing Safari < 15.4 and older browsers).

For production systems, a layered defense works best: use Sec-Fetch-Site as primary protection for modern browsers, with SameSite cookies as fallback, and traditional CSRF tokens for legacy clients. This way you get the UX benefits of tokenless CSRF for most users while maintaining security across the board.

The OWASP CSRF cheat sheet now recommends this defense-in-depth approach. It's especially valuable for APIs where token management adds significant complexity to client implementations.

mxey•1mo ago
Without those headers, you can as a fallback compare the Origin header to the Host header.

See https://words.filippo.io/csrf/

yread•1mo ago
> UX benefits of tokenless CSRF

What are those?

nchmy•1mo ago
98% coverage if you exclude browsers that caniuse doesn't track (which is surely appropriate, since even things like checkbox elements have only 96% coverage if you include untracked browsers).

And you can fall back to the Origin header, which has universal coverage. Then block anything else.

Also, OWASP doesn't recommend it as defense in depth. It is a primary, standalone defense against CSRF.

https://cheatsheetseries.owasp.org/cheatsheets/Cross-Site_Re...

6510•1mo ago
If I open a link with a new target, say "foo", and then post a form to the same "foo" target, what would be the origin?
dorianmariecom•1mo ago
Rails does this in 8.2
nchmy•1mo ago
*will do

I just went looking for docs and it seems that 8.2 is not out yet

https://github.com/rails/rails/pull/56350/

foobarkey•1mo ago
I put the session cookie as http_only, same_site=strict and turned off CSRF. Then pentesters came and quoted OWASP in the report, while not being able to demonstrate an attack. Some drone added CSRF back, and everyone congratulated themselves on making things more secure :)
deepsun•1mo ago
> this would also reject HTTP clients that are not browsers

Why? I can send any headers from a client I make.