frontpage.

How I bypassed Amazon's Kindle web DRM

https://blog.pixelmelt.dev/kindle-web-drm/
544•pixelmelt•6h ago•170 comments

America’s semiconductor boom

https://www.youtube.com/watch?v=T-jt3qBzJ4A
77•zdw•3h ago•23 comments

Claude Skills

https://www.anthropic.com/news/skills
480•meetpateltech•10h ago•278 comments

Cloudflare Sandbox SDK

https://sandbox.cloudflare.com/
125•bentaber•5h ago•41 comments

Gemini 3.0 spotted in the wild through A/B testing

https://ricklamers.io/posts/gemini-3-spotted-in-the-wild/
283•ricklamers•9h ago•174 comments

Accelerating Authoritarian Dynamics: Assessment of Democratic Decline

https://steadystate1.substack.com/p/accelerating-authoritarian-dynamics
3•andsoitis•6m ago•0 comments

Lead Limited Brain and Language Development in Neanderthals and Other Hominids?

https://today.ucsd.edu/story/did-lead-limit-brain-and-language-development-in-neanderthals-and-ot...
34•gmays•3h ago•5 comments

A 4k-Room Text Adventure Written by One Human in QBasic, No AI

https://the-ventureweaver.itch.io/tlote4111
54•ATiredGoat•4d ago•33 comments

Your data model is your destiny

https://notes.mtb.xyz/p/your-data-model-is-your-destiny
165•hunglee2•2d ago•23 comments

DoorDash and Waymo launch autonomous delivery service in Phoenix

https://about.doordash.com/en-us/news/waymo
216•ChrisArchitect•12h ago•495 comments

Codex Is Live in Zed

https://zed.dev/blog/codex-is-live-in-zed
176•meetpateltech•10h ago•27 comments

Talent

https://www.felixstocker.com/blog/talent
120•BinaryIgor•8h ago•48 comments

Hyperflask – Full stack Flask and Htmx framework

https://hyperflask.dev/
285•emixam•13h ago•92 comments

Understanding Spec-Driven-Development: Kiro, Spec-Kit, and Tessl

https://martinfowler.com/articles/exploring-gen-ai/sdd-3-tools.html
36•janpio•4h ago•2 comments

Microwave technique allows energy-efficient chemical reactions

https://phys.org/news/2025-10-microwave-technique-energy-efficient-chemical.html
29•rolph•6d ago•1 comment

Post office in France rolls out croissant-scented stamp

https://www.ctvnews.ca/world/article/french-post-office-rolls-out-croissant-scented-stamp/
92•ohjeez•1w ago•34 comments

Benjie's Humanoid Olympic Games

https://generalrobots.substack.com/p/benjies-humanoid-olympic-games
99•robobenjie•6h ago•75 comments

A conspiracy to kill IE6 (2019)

https://blog.chriszacharias.com/a-conspiracy-to-kill-ie6
156•romanhn•8h ago•88 comments

A liver transplant from start to finish

https://press.asimov.com/articles/liver
3•mailyk•4d ago•0 comments

Syntax highlighting is a waste of an information channel (2020)

https://buttondown.com/hillelwayne/archive/syntax-highlighting-is-a-waste-of-an-information/
208•swyx•4d ago•81 comments

Elixir 1.19

https://elixir-lang.org/blog/2025/10/16/elixir-v1-19-0-released/
198•theanirudh•18h ago•40 comments

Electricity can heal wounds three times as fast (2023)

https://www.chalmers.se/en/current/news/mc2-how-electricity-can-heal-wounds-three-times-as-fast/
130•mgh2•13h ago•83 comments

How to tame a user interface using a spreadsheet

https://blog.gingerbeardman.com/2025/10/11/how-to-tame-a-user-interface-using-a-spreadsheet/
86•msephton•6d ago•18 comments

Lace: A New Kind of Cellular Automata Where Links Matter

https://www.novaspivack.com/science/introducing-lace-a-new-kind-of-cellular-automata
118•airesearcher•12h ago•47 comments

Hacker News – The Good Parts

https://smartmic.bearblog.dev/why-hacker-news/
95•smartmic•5h ago•111 comments

Show HN: Inkeep (YC W23) – Agent Builder to create agents in code or visually

https://github.com/inkeep/agents
62•engomez•13h ago•46 comments

A stateful browser agent using self-healing DOM maps

https://100x.bot/a/a-stateful-browser-agent-using-self-healing-dom-maps
107•shardullavekar•14h ago•54 comments

Eon – An Effects-Based OCaml Nameserver

https://ryan.freumh.org/eon.html
46•Bogdanp•5d ago•2 comments

VOC injection into a house reveals large surface reservoir sizes

https://www.pnas.org/doi/10.1073/pnas.2503399122
86•PaulHoule•5d ago•75 comments

Nvidia DGX Spark and Apple Mac Studio = 4x Faster LLM Inference with EXO 1.0

https://blog.exolabs.net/nvidia-dgx-spark/
27•edelsohn•2h ago•9 comments

Improving the Trustworthiness of JavaScript on the Web

https://blog.cloudflare.com/improving-the-trustworthiness-of-javascript-on-the-web/
53•doomrobo•11h ago

Comments

some_furry•11h ago
This is really cool, and I'm excited to hear that it's making progress.

Binary transparency lets you reason about the auditability of the JavaScript delivered to your web browser. This is the first significant step toward addressing the concerns raised in the "JavaScript Cryptography Considered Harmful" blog post.

The remaining missing pieces here are, in my view, code signing and the corresponding notion of public key transparency.

zb3•10h ago
Ok (let's pretend I didn't see the word "blockchain" there), but none of this should interfere with browser extensions that need to modify the application code.
some_furry•9h ago
EDIT: Disregard this comment. I think there was a technical issue on my computer. Keeping the original comment below.

-----

> let's pretend I didn't see the word "blockchain" there

There's nothing blockchain about this blog post.

I think this might be a rectangles vs squares thing. While it's true that all blockchains use chains of hashes (e.g., via Merkle trees), it's not true that all uses of append-only data structures are cryptocurrency.

See also: Certificate transparency.

JimDabell•9h ago
They specifically suggest using a blockchain for Tor:

> A paranoid Tor user may not trust existing transparency services or witnesses, and there might not be any other trusted party with the resources to self-host these functionalities. For this use case, it may be reasonable to put the prefix tree on a blockchain somewhere. This makes the usual domain validation impossible (there’s no validator server to speak of), but this is fine for onion services. Since an onion address is just a public key, a signature is sufficient to prove ownership of the domain.

some_furry•9h ago
Oh, weird. I didn't see that (and a subsequent Ctrl+F showed 0 results) but now it's showing up for me?
AndrewStephens•9h ago
As a site owner, the best thing you can do for your users is to serve all your resources from a server you control. Serving JavaScript (or any resource) from a CDN was never a great idea, and it's pointless these days with browser domain isolation; you might as well copy any third-party .js into your build.

I wrote a coincidentally related rant post last week that didn't set the front page of HN on fire, so I won't bother linking to it, but the TL;DR is that a whole range of supply-chain attacks simply goes away if you host the files yourself. Every third party you force your users to request from is an attack vector you don't control.

I get what this proposal is trying to achieve, but it seems overly complex. I would hate to have to integrate this into my build process.
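
For what it's worth, the vendoring step from the first paragraph is small. A minimal Node/TypeScript sketch; the URL and pinned digest are illustrative placeholders, not a real library:

    // Build-time vendoring: fetch a pinned third-party script once,
    // verify it against the hash recorded when it was vetted, and
    // write it into the build so it is served first-party.
    import { createHash } from "node:crypto";
    import { writeFileSync } from "node:fs";

    const SRC = "https://cdn.example.com/lib/1.2.3/lib.min.js"; // hypothetical
    const PINNED_SHA256 = "0f3a..."; // hex digest recorded at review time

    const res = await fetch(SRC);
    if (!res.ok) throw new Error(`fetch failed: ${res.status}`);
    const body = new Uint8Array(await res.arrayBuffer());

    // Refuse to vendor a file that no longer matches what was reviewed.
    const digest = createHash("sha256").update(body).digest("hex");
    if (digest !== PINNED_SHA256) throw new Error(`hash mismatch: ${digest}`);

    writeFileSync("public/vendor/lib.min.js", body); // users never touch the CDN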

doomrobo•8h ago
You're right that when your own server is trustworthy, fully self-hosting removes the need for SRI and integrity manifests. But if your server is compromised, you lose all guarantees.

Transparency adds a mechanism to detect when your server has been compromised. Basically, you run a monitor on your own device occasionally (or use a third-party service if you like), and you get an email notification whenever the site's manifest changes.

I agree it's far more work than just not doing transparency. But the guarantees are real, and as far as I can tell they're not something you get from any existing technology.
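
A minimal sketch of such a monitor in TypeScript. The manifest path is a hypothetical placeholder; the real WAICT endpoint and format may differ:

    import { createHash } from "node:crypto";

    const MANIFEST_URL =
      "https://example.com/.well-known/integrity-manifest.json"; // hypothetical
    let lastDigest: string | null = null;

    async function checkOnce(): Promise<void> {
      const res = await fetch(MANIFEST_URL);
      if (!res.ok) throw new Error(`fetch failed: ${res.status}`);
      const body = new Uint8Array(await res.arrayBuffer());
      const digest = createHash("sha256").update(body).digest("hex");
      if (lastDigest !== null && digest !== lastDigest) {
        // In a real monitor, send an email or webhook here.
        console.warn(`manifest changed: ${lastDigest} -> ${digest}`);
      }
      lastDigest = digest;
    }

    // Poll hourly; a real deployment would persist lastDigest across restarts.
    setInterval(() => checkOnce().catch(console.error), 60 * 60 * 1000);
    checkOnce().catch(console.error);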

EGreg•8h ago
If they want to make a proposal, they should have httpc://sha-256;... URLs, which are essentially immutable, content-addressed ones: the same idea as SRI, but for top-level documents.

Then we could really have security on the web! Audit companies (even anonymous ones with a good reputation) could vet certain hashes as secure, and people and organizations could see a little padlock when M of N auditors approved a new version.

As it is, we need an extension for that, because SRI is only for subresource integrity. It doesn't even work on HTML in iframes, which is a shame!
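
For contrast, this is all SRI gives you today: a per-resource hash embedded in the page, computed like so (Node sketch; the file name is illustrative). There is no equivalent attribute for the top-level HTML document itself:

    import { createHash } from "node:crypto";
    import { readFileSync } from "node:fs";

    // Produce the value for <script src="app.js" integrity="sha384-...">.
    function sriHash(path: string): string {
      const digest = createHash("sha384").update(readFileSync(path)).digest("base64");
      return `sha384-${digest}`;
    }

    console.log(sriHash("app.js")); // e.g. sha384-oqVuAfXR...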

ameliaquining•7h ago
The linked proposal is basically a user-friendlier version of that, unless you have some other security property in mind that I've failed to properly understand.
jmull•8h ago
It would be helpful if they included a problem statement of some sort.

I don't know what problem this solves.

While I could read all of this and deduce what it's for, I probably won't... (The stated premise, "It is as true today as it was in 2011 that JavaScript cryptography is Considered Harmful", is not true.)

miloignis•8h ago
For me, the key problem being solved here is to have reasonably trustworthy web implementations of end-to-end-encrypted (E2EE) messaging.

The classic problem with E2EE messaging on the web is that the point of E2EE is that you don't have to trust the server not to read your messages; but if you're using a web client, you have to trust the server to serve you JS that won't just send the plaintext of your messages to the admin.

The properties of the web really exacerbate this problem, since you can serve every visitor to your site a different version of the app based on their IP, geolocation, tracking cookies, whatever. (Whereas with a mobile app, everyone gets the same version you submitted to the app store.)

With this proposed system, we could actually have really trustworthy E2EE messaging apps on the web, which would be huge.

(BTW, I do think E2EE web apps still have their place today, if you trust the server not to be malicious (say, you or a trusted friend runs it) and you're protecting against accidental disclosure.)

jmull•7h ago
It doesn't seem like there's much difference in the trust model between E2EE web apps and app-store apps. Either way, the publisher controls the code, and you essentially decide whether or not to trust the publisher.

Perhaps there's something here that changes that dynamic, but I don't know what it is. It would help this effort to point out what that is.

fabrice_d•7h ago
On the web, if your server is compromised, it's game over, even if the publisher is not malicious. With app stores, you have some guarantee that the code that ends up on your device is what the publisher intended to ship (basically, signed packages). On the web, it's currently impossible to bootstrap that integrity verification with SRI alone.

This proposal aims to provide the same guarantees for web apps without resorting to signed packages on the web (i.e., not the mechanism that FirefoxOS or ChromeOS apps used). It's competing with the IWA proposal from Google, which is a good thing.

knowitnone3•6h ago
everyone gets the same version that sends your secure messages to another server? I'm impressed.
CharlesW•7h ago
> I don't know what problem this solves.

This allows you to validate that "what you sent is what they got", meaning that the code and assets the user's browser executes are exactly what you intended to publish.

So, this gives web apps and PWAs some of the same guarantees as native app stores, making them more trustworthy for security-sensitive use cases.

everdrive•7h ago
I improve the trustworthiness of js by blocking it by default.
thadt•6h ago
Starts reading: "fantastic, this is what we've been needing! But... where is code signing?"

> One problem that WAICT doesn’t solve is that of provenance: where did the code the user is running come from, precisely?

> ...

> The folks at the Freedom of Press Foundation (FPF) have built a solution to this, called WEBCAT. ... Users with the WEBCAT plugin can...

A plugin. Sigh.

Fancy, deep transparency logs that track every asset bundle deployed are good. I like logging; this is very cool. But it is not the first thing we need.

The first thing we need is to be able to host a public signing key somewhere that browsers can fetch, so they can automatically verify the signature on the root hash served in that integrity manifest. Then point a tiny, boring transparency log at _that_. That's the thing I really, really care about for non-equivocation. That's the piece that lets me host my site on Cloudflare Pages (or Vercel, or Fly.io, or Joe's Quick and Dirty Hosting) while ensuring that the software run in my clients' browsers is the software I signed.

This is the pivotal thing. It needs to live in the browser. We can't leave this to a plugin.
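
A sketch of that verification step using WebCrypto's Ed25519 support (available in recent browsers and Node 20+). Where the key, signature, and root hash come from is exactly the part the proposal would need to pin down; all three inputs here are hypothetical:

    // Verify a signature over the integrity manifest's root hash.
    // The pubkey might come from a preload list or DNS; the signature
    // might be served in a header or alongside the manifest.
    async function verifyRootHash(
      publicKeyRaw: Uint8Array, // 32-byte Ed25519 public key
      signature: Uint8Array,    // 64-byte signature over the root hash
      rootHash: Uint8Array,     // root hash from the integrity manifest
    ): Promise<boolean> {
      const key = await crypto.subtle.importKey(
        "raw", publicKeyRaw, { name: "Ed25519" }, false, ["verify"],
      );
      return crypto.subtle.verify({ name: "Ed25519" }, key, signature, rootHash);
    }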

doomrobo•5h ago
I'll actually argue the opposite. Transparency is _the_ pivotal thing, and code signing needs to be built on top of it (it should definitely be built into the browser too, but I'm just arguing the order of operations right now).

TL;DR: you'll either reinvent transparency or end up with huge security holes.

Suppose you have code signing and no transparency. Your site has some way of signaling to the browser to check code signatures under a certain pubkey (or OIDC identity, if you're using Sigstore). Suppose now that your site is compromised. What prevents an attacker from changing the pubkey and re-signing under the new one? Or just removing the pubkey entirely and signaling no code signing at all?

There are three answers off the top of my head. Let me know if there's one I missed:

1. Websites enroll into a code signing preload list that the browser periodically pulls. Sites in the list are expected to serve valid signatures with respect to the pubkeys in the preload list.

Problem: how do sites unenroll? They can ask to be removed from the preload list, but in the meantime their site is unusable. So there needs to be a tombstone value recorded somewhere to show that the site has been unenrolled, and the place it's recorded needs to be publicly auditable; otherwise an attacker will just create a tombstone value and then remove it.

So we've reinvented transparency.

2. User browsers remember which sites have code signing after first access.

Problem: this TOFU method offers no guarantees to first-time users. It also has the same unenrollment problem as above, so you'd still have to reinvent transparency.

3. Users visually inspect the public key every time they visit the site to make sure it is the one they expect.

Problem: this is famously a usability issue in E2EE apps like Signal and WhatsApp. Users have a noticeable error rate when comparing just one line of a safety number [1, Table 5]. To make any security claim, you'd have to argue that users would be motivated to do this check, and get it right, for every security-sensitive site they access, over a long period of time. That just doesn't seem plausible.

[1] https://arxiv.org/abs/2306.04574

thadt•5h ago
I'll actually argue that you're arguing exactly what I'm arguing :)

My point near the end is that we absolutely need transparency; it's just that what most needs tracking is not all the code ever run under a URL, but that one signing key. All your points are right: users aren't going to check it. It needs to be automatic, and it needs to be distributed in a way that lets browsers and site owners be confident that the code being run is the code the site owner intended to run.

doomrobo•4h ago
Gotcha, yeah, I agree. FWIW, in the imagined code-signing setup, the pubkey would be committed to in the transparency log without any extra work. The purpose of the plugin is to give the browser the ability to parse (really: fetch, then parse) those extension values into a meaningful policy. Anyway, I agree it would be best if this part were built into the browser too.
saurik•5h ago
1) It seems strange that this spec isn't an extension of the earlier cache-manifest mechanism, which was very similar and served a very similar purpose: it listed all of the URLs of your web app so they could be pre-downloaded. It just didn't include hashes, and those could easily have been added.

2) Making the hash the primary key and the path the value makes no sense, as it means a file can have exactly one path. I have often ended up with the same file mapped to two places on my website for various reasons, such as collisions of purpose over time (where the URL is a primary key) or degenerate objects. Yes, I can navigate around that, but why should I have to? The only thing this buys is that the same path can have more than one hash, and even if we really want that, it would make far more sense to make the value an array of hashes, as that makes the file a million times more auditable: "what hashes can this path have?" is clearer than "I searched the file and realized we had a typo mapping the same path in two places". No one -- including the browser implementing this -- is trying to do the inverse lookup (map a hash to a path). (See the sketch at the end of this comment for the two shapes.)

3) That this signs only the content of the file, and not the HTTP status or any of the headers, seems like an inexcusable omission and is going to result in some kind of security flaw some day. (This isn't an issue for subresource integrity, since those cases don't have headers the app might want, and SRI only comes into play for a successful status.) We even have another specification in play for how and what to sign, one which includes the ability to lock in only a subset of the headers: HTTP Message Signatures. That should be consulted and reused somehow.

4) Since they want to allow the site to be hosted in more than one place anyway, they should bite the bullet and make the identity of the site a key, not a hostname; the origin of the site would then become its public key. This would let the same site hosted in multiple places share the same local browser storage and act as exactly the same site, and it would immediately fix all the problems of "what if someone hacks into my server and just unenrolls me from the thing": the attacker wouldn't have the signing key (which you can keep very, very offline), and when a user hits reload, the new site they see would be considered unrelated to the one they were previously on. You also get provenance for free and no longer have to worry about how to deal with unenrollment: the site just stops serving a manifest, is immediately back to being a normal, boring website, and can't access any of the content the user gave to the trusted-key origin.
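
To make point 2 concrete, here are the two manifest shapes at issue, sketched as TypeScript types. The field layout is illustrative, not from the WAICT draft:

    // Hash-keyed (roughly the current proposal): each hash maps to at
    // most one path; the empty string means "may occur at any path".
    type HashKeyedManifest = Record<string, string>;   // hash -> path

    // Path-keyed (the suggestion above): each path lists the hashes it
    // may serve, so one file can legitimately live at several paths.
    type PathKeyedManifest = Record<string, string[]>; // path -> hashes

    const hashKeyed: HashKeyedManifest = {
      "3f5ac9...": "/app.js", // truncated illustrative digests
      "9c21b0...": "",        // allowed anywhere
    };

    const pathKeyed: PathKeyedManifest = {
      "/app.js": ["3f5ac9..."],
      "/legacy/app.js": ["3f5ac9..."], // same file at a second path
    };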

doomrobo•4h ago
1. I actually didn't know about this [1]! It looks like it's been unsupported for a few years now. The format looks pretty bare-bones, and we'd still need hashes, like you said, as well as "wildcard" entries. I reckon the JSON solution is still the better choice, but this is good to have as a reference.

2. I agree, and this is something we have gone back and forth on. The nice thing about hashes as primary keys is that you can easily represent a single path having many possible values, and you can represent "occurs anywhere" hashes by giving them the empty string as their path. But the downside, as you mention, is that a hash cannot occur at multiple paths, which is far from ideal. I'll open an issue on GitHub about this, because I don't think it's anywhere near settled.

3. I had read the spec [2] but never made this connection! You're right that it's not hard to imagine malleability sneaking in via headers and status codes. I'll make an issue for this.

4. I wanted to avoid requiring sites to hold yet more cryptographic material than they already do. Yes, you can keep signing keys "very very offline", but that requires a level of operational discipline that I'm not sure most sites would achieve, and you run into key-rotation annoyances as well. The current route to something like what you describe is to have every site get its own transparency-log entry (though sites can share manifests and even asset hosts) and use code signing to link their instance to the source of truth.

[1] https://en.wikipedia.org/wiki/Cache_manifest_in_HTML5

[2] https://www.rfc-editor.org/rfc/rfc9421.html

saurik•3h ago
<3 FWIW, I know Hacker News discussions often go stale after a bit and later responses might never be seen, and I don't really have time until later tonight to work on this (I used up my minutes earlier frantically reading the article so I could leave that other comment). So I thought I'd leave a quick comment saying that I'll post a further explanation here later tonight of what I find so interesting about cache manifests, along with some further thoughts vis-à-vis shared origins. (And if you happen to think any of my commentary is useful, I'd love to have a call with you at some point to talk about your vision for this work: I have a number of use cases for this kind of verification, and I think I was one of the more serious users of cache manifests a decade ago.)
vader1•4h ago
> This is because app stores do a lot of heavy lifting to provide security for the app ecosystem. Specifically, they provide integrity, ensuring that apps being delivered are not tampered with, consistency, ensuring all users get the same app, and transparency, ensuring that the record of versions of an app is truthful and publicly visible.

The Google Play Store does none of this, lol. All apps created since 2021 have to use Google Play App Signing, which means Google holds the keys used to sign the app. Google leverages this to inject things like Play Integrity into the builds that are served. And the Android App Bundle format means that completely different versions of the app are delivered depending on device type, locale, etc. There is zero transparency about any of this for the end user.