
OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
528•klaussilveira•9h ago•146 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
859•xnx•15h ago•517 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
72•matheusalmeida•1d ago•13 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
180•isitcontent•9h ago•21 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
182•dmpetrov•10h ago•78 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
294•vecti•11h ago•130 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
68•quibono•4d ago•12 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
343•aktau•16h ago•168 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
338•ostacke•15h ago•90 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
433•todsacerdoti•17h ago•226 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
237•eljojo•12h ago•147 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
13•romes•4d ago•1 comment

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
372•lstoll•16h ago•252 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
6•videotopia•3d ago•0 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
41•kmm•4d ago•3 comments

Show HN: ARM64 Android Dev Kit

https://github.com/denuoweb/ARM64-ADK
14•denuoweb•1d ago•2 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
219•i5heu•12h ago•162 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
90•SerCe•5h ago•75 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
61•phreda4•9h ago•11 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
162•limoce•3d ago•82 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
38•gfortaine•7h ago•10 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
127•vmatsiiako•14h ago•53 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
18•gmays•4h ago•2 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
261•surprisetalk•3d ago•35 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1029•cdrnsf•19h ago•428 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
54•rescrv•17h ago•18 comments

WebView performance significantly slower than PWA

https://issues.chromium.org/issues/40817676
18•denysonique•6h ago•2 comments

Zlob.h 100% POSIX and glibc compatible globbing lib that is faster and better

https://github.com/dmtrKovalenko/zlob
5•neogoose•2h ago•1 comment

I'm going to cure my girlfriend's brain tumor

https://andrewjrod.substack.com/p/im-going-to-cure-my-girlfriends-brain
109•ray__•6h ago•54 comments

Evaluating and mitigating the growing risk of LLM-discovered 0-days

https://red.anthropic.com/2026/zero-days/
45•lebovic•1d ago•14 comments

The Chrome Speculation Rules API allows the browser to preload and prerender

https://www.docuseal.com/blog/make-any-website-load-faster-with-6-lines-html
104•amadeuspagel•6mo ago
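
(For context: the "six lines" the article describes are a speculation rules block, roughly of this shape. This is a minimal sketch, not the article's exact markup; the "/*" pattern and eagerness value are illustrative.)

<!-- prerender same-origin links when the browser judges a click likely (e.g. on hover) -->
<script type="speculationrules">
{
  "prerender": [
    { "where": { "href_matches": "/*" }, "eagerness": "moderate" }
  ]
}
</script>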

Comments

game_the0ry•6mo ago
Wonder if this will replace how nextjs and nuxt do optimistic pre-fetching when users hover on links.

Also brings up the questions:

- should browser do this by default?

- if yes, would that result in too many unnecessary requests (more $$)?

Either way, good to know.

babanooey21•6mo ago
It probably won't replace Next.js/Nuxt's pre-fetching, as such websites function as SPAs, using internal JavaScript pushState navigation, which has become standard for those frameworks.

However, Next.js pre-fetching can't perform pre-rendering on hover, which can cause a noticeable lag during navigation. The native Chrome API allows not only pre-fetching, but also pre-rendering, enabling instant page navigation.

exasperaited•6mo ago
Link prefetching is generally something you would want a website to retain control over, because it can distort stats, cause resource starvation, and even (when web developers are idiots) cause things like deletions (when a clickable link has a destructive outcome).

I am reminded of the infamous time when DHH had to have it explained to him that GET requests shouldn’t have side effects, after the Rails scaffolding generated CRUD deletes on GET requests.

https://dhh.dk/arc/000454.html

Google were not doing anything wrong here, and DHH was merely trying to deflect blame for the incompetence of the Rails design.

But the fact remains, alas, that this kind of pattern of mistakes is so common that prefetching by default has risks.

radicaldreamer•6mo ago
Imagine mousing over a delete account button and having your browser render that page and execute JS in the background.
qingcharles•6mo ago
You'd hope it would have a confirmation, but "log out" usually doesn't.
dlcarrier•6mo ago
Is it typical to count a single bracket as a line?
technojunkie•6mo ago
This article is a good reminder of another one from late last year: https://csswizardry.com/2024/12/a-layered-approach-to-specul...

In the above article, Harry gives a more nuanced and specific method using data attributes to target specific anchors in the document, one reason being you don't need to prerender login or logout pages.

<a href data-prefetch>Prefetched Link</a>

<a href data-prefetch=prerender>Prerendered Link</a>

<a href data-prefetch=false>Untouched Link</a>
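
A speculation rules block keyed off those attributes could look roughly like this (a sketch assuming the attribute names above; Harry's article may wire it up differently):

<!-- data-prefetch -> prefetch, data-prefetch=prerender -> prerender, data-prefetch=false -> untouched -->
<script type="speculationrules">
{
  "prefetch": [
    { "where": { "selector_matches": "a[data-prefetch]:not([data-prefetch=false]):not([data-prefetch=prerender])" }, "eagerness": "moderate" }
  ],
  "prerender": [
    { "where": { "selector_matches": "a[data-prefetch=prerender]" }, "eagerness": "moderate" }
  ]
}
</script>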

meindnoch•6mo ago
Ah, another web standard™ from Chrome. Just what we needed!
rafram•6mo ago
Like it or not, the current web standardization process requires implementations to be shipped in multiple browsers before something can enter the spec.
epolanski•6mo ago
In canary or experimental flags.
bornfreddy•6mo ago
Multiple browser engines or just multiple browsers?
leptons•6mo ago
Internet Explorer first implemented XMLHttpRequest, and then it became a standard. Without browser makers innovating we'd be waiting a long, long time for the W3C to make any progress, if any at all.
giantrobot•6mo ago
A majority of Chrome's "standard" pushes have the alternate use of better browser fingerprinting. Ones of end users might use WebMIDI to work with musical devices but every scammy AdTech company will use it to fingerprint users. Same with most of their other "standards". It's almost like Google is an AdTech firm that happens to make a web browser.
leptons•6mo ago
I honestly don't care about fingerprinting. It's too late to worry about that, because there will always be a way to do it, to some extent. And me using a browser with WebMIDI only means I'm one of the 3.4 billion people using Chrome. There are better ways to "fingerprint" people online. Detecting browser APIs is not a particularly good one.
giantrobot•6mo ago
It's not the presence of WebMIDI that's the problem. It's the devices it can enumerate. Same with their other Web* APIs that want to enumerate devices outside the browser's normal sandboxing.
leptons•6mo ago
So what? You need to give access to each website that wants to interact with WebMIDI. It doesn't just let any website poll the devices. It's no different than the location API or any other API - the user has to explicitly grant access to each website that wants to use the API. If you don't trust the site, don't give it access. It's really as simple as that.

Sorry chicken little, the sky is not falling.

HaZeust•6mo ago
Off-topic (semi) but I'm a big fan of Docuseal - I use them for my client-contractor agreements without any issue. Pricing is unbeatable as well, other contract-signing services have completely lost the plot.
t1234s•6mo ago
Don't features like this waste bandwidth and battery on mobile devices?
babanooey21•6mo ago
It preloads pages on mouse hover over the a href link. On mobile there are no mouse hover events. The page can be preloaded on the "touchstart" event, which almost definitely results in a page visit.
radicaldreamer•6mo ago
Not to mention laptops! Loads of people use those on battery power
youngtaff•6mo ago
Putting aside the lack of hover on mobile (there are other ways to trigger it)

It’s not clear it will waste battery on mobile… not sure if it’s still the case, but mobile radios go to sleep and waking them used a non-trivial amount of energy, so preloading a page was more efficient than letting the radio go to sleep and then waking it

Need someone who’s more informed than I am to review whether that’s still the case

MrJohz•6mo ago
The point of this proposal is to bring this feature under the control of the browser, which is probably better placed to decide when preloading/prerendering should happen. It's already possible to do something very similar using JS, but it's difficult to control globally when that happens across different sites. Whereas when this feature is built into the browser, then the browser can automatically disable prerendering when the user (for example) has low battery, or is on a metered internet connection, or if the user simply doesn't want prerendering to happen at all.

So in theory, this should actually reduce bandwidth/battery wastage, by giving more control to the browser and user, rather than individual websites.

ozgrakkurt•6mo ago
Or just put in some effort to make things actually more efficient and don’t waste resources on the user’s machine.
ashwinsundar•6mo ago
Those aren’t mutually exclusive goals. You can serve efficient pages AND enable pre-fetch/pre-render. Let’s strive for sub-50ms load times
tonyhart7•6mo ago
Yeah, but it's a "fake" sub-50ms load when you load it up front, before it's shown
dlivingston•6mo ago
I guess you could call it fake or cheating, but ahead-of-time preparation of resources and state is used all the time. Speculative execution [0], DNS prefetching [1], shader pre-compilation, ... .

[0]: https://en.wikipedia.org/wiki/Speculative_execution

[1]: https://www.chromium.org/developers/design-documents/dns-pre...

tonyhart7•6mo ago
Also, DNS isn't changing every second, unlike the content your website needs.

Yeah, but then you only "count" it when it's shown, though?

I'm not saying it's not "valid", but when you only count it when you show it, aren't we missing part of why we need a "cache" in the first place?

dietr1ch•6mo ago
Every layer underneath tries really hard to cheat and keep things usable/fast.

This includes libraries, kernels, CPUs, devices, drivers and controllers. The higher level at which you cheat, the higher the benefits.

jmull•6mo ago
Not mutually exclusive, but they compete for resources.

Prefetch/prerender use server resources, which costs money. Moderate eagerness isn’t bad, but also has a small window of effect (e.g. very slow pages will still be almost as slow, unless all your users languidly hover their mouse over each link for a while).

Creating efficient pages takes time from a competent developer, which costs money upfront, but saves server resources over time.

I don’t have anything against prefetch/render, but it’s a small thing compared to efficient pages (at which point you usually don’t need it).

ashwinsundar•6mo ago
> Creating efficient pages takes time from a competent developer, which costs money upfront, but saves server resources over time.

Not trying to be a contrarian just for the sake of it, but I don't think this has to be true. Choice of technology or framework also influences how easy it is to create an efficient page, and that's a free choice one can make*

* Unless you are being forced to make framework/language/tech decisions by someone else, in which case carry on with this claim. But please don't suggest it's a universal claim

dietr1ch•6mo ago
Idk, if you are starting from prerender/prefetch `where href_matches "/*"` maybe you are wasting resources like you are swinging at a piñata in a different room.

This approach will just force the pre-loader/renderer/fetcher to be cautious and just prepare a couple of items (in document order unless you randomise or figure out a ranking metric) and have low hit ratios.

I think existing preloading/rendering on hover works really well on desktop, but I'm not aware of an equivalent for mobile. Maybe you can just preload visible links as there's fewer of them? But tradeoffs on mobile are beyond just latency, so it might not be worth it.

jameslk•6mo ago
“Just” is doing a lot of heavy lifting there. There’s a lot that has to go on between the backend and frontend to make modern websites with all their dynamic moving pieces, tons of video/imagery, and heavy marketing/analytics scripts run on a single thread (yes I’m aware things can load/run on other threads but the main thread is the bottleneck). Browsers are just guessing how it will all come together on every page load using heuristics to orchestrate downloading and running all the resources. Often those heuristics are wrong, but they’re the best you can get when you have such an open ended thing as the web and all the legacy it carries with it

There’s an entire field called web performance engineering, with web performance engineers as a title at many tech companies, because shaving milliseconds here and there is both very difficult but easily pays those salaries at scale

giantrobot•6mo ago
> There’s a lot that has to go on between the backend and frontend to make modern websites with all their dynamic moving pieces, tons of video/imagery, and heavy marketing/analytics scripts run on a single thread

So there's a lot going on...with absolutely terrible sites that do everything they can to be user-hostile? The poor dears! We may need to break out the electron microscope to image the teeny tiny violin I will play for them.

All of that crap is not only unnecessary it's pretty much all under the category of anti-features. It's hard to read shit moving around a page or having a video playing underneath. Giant images and autoplaying video are a waste of my resources on the client side. They drain my battery and eat into my data caps.

The easiest web performance engineering anyone can do is fire any and all marketers, managers, or executives that ask for autoplaying videos and bouncing fade-in star wipe animations as a user scrolls a page.

jameslk•6mo ago
> The easiest web performance engineering anyone can do is fire any and all marketers, managers, or executives

Your solution to web performance issues is to fire people?

giantrobot•6mo ago
When they're the cause of the web performance problems it isn't the worst idea. The individual IC trying to get a site to load in a reasonable amount of time isn't pushing for yet another tracking framework loaded from yet another CDN or just a few more auto-playing videos in a pop-over window that can only be dismissed with a sub-pixel button.
cs02rm0•6mo ago
There's only so much efficiency you can squeeze out though if, say, you're using AWS Lambda. I can see this helping mitigate those cold start times.
pradn•6mo ago
You can't avoid sending the user photos in a news article, for example. So the best you can do is start fetching/rendering the page 200ms early.
madduci•6mo ago
The lines are more than six for Firefox, since the API is not supported there
rafram•6mo ago
Hasn't been implemented yet, but Mozilla supports this proposal and plans to implement it:

https://bugzilla.mozilla.org/show_bug.cgi?id=1969396

jameslk•6mo ago
If server resources and scalability are a concern, speculative fetching will add more load to those resources, which may or may not be used. Same deal on the end user’s device. That’s the trade off. Also, this is basically a Blink-only feature so far

The article provides a script that tries to replicate, for Safari and Firefox, the pre-rendering that speculation rules provide, but it only does pre-fetching. It doesn’t do the full pre-render. Rendering is often half the battle when it comes to web performance

Another limitation is that if the page is at all dynamic, such as a shopping cart, speculation rules will have the same struggles as caching does: you may serve a stale response

duxup•6mo ago
Definitely a balancing act where you consider how much work you might trigger.

But I can think of a few places I would use this for quality-of-life type enhancements that are for specific clients, etc.

pyman•6mo ago
I've seen some sites, like Amazon, calculate the probability of a user clicking a link and preload the page. This is called predictive preloading (similar to speculative fetching). It means they load or prepare certain pages or assets before you actually click, based on what you're most likely to do next.

What I like about this is that it's not a guess like the browser makes, it's based on probability and real user behaviour. The downside is the implementation cost.

Just wondering if this is something you do too.

jameslk•6mo ago
For a while, there was a library built to do this: https://github.com/guess-js/guess

You can do this with speculation rules too. Your speculation rules are just prescriptive of what you think the user will navigate to next based on your own analytics data (or other heuristics)

Ultimately the pros/cons are similar. You just end up with potentially better (or worse) predictions. I suspect it isn’t much better than simple heuristics such as whether a cursor is hovering over a link or a link is in a viewport. You’d probably have to have a lot of data to keep your guesses accurate

Keep in mind that this will just help with the network load piece, not so much for the rendering piece. Often rendering is actually what is slowing down most heavy frontends. Especially when the largest above-the-fold content you want to display is an image or video
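
For what it's worth, speculation rules also take an explicit URL list, so analytics-driven predictions can be fed in directly. A sketch (the URLs are placeholders; as I understand it, list rules are acted on immediately rather than waiting for hover):

<!-- prerender the pages your analytics say are most likely next -->
<script type="speculationrules">
{
  "prerender": [
    { "urls": ["/pricing", "/checkout"] }
  ]
}
</script>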

rafram•6mo ago
> This includes fetching all sub-resources like CSS, JavaScript, and images, and executing the JavaScript.

So not necessarily any website, because that could cause issues if one of the prerendered pages runs side-effectful JavaScript.

echoangle•6mo ago
Then it’s a badly designed website, GET requests (and arguably the JS delivered with a GET-requested HTML page) should be side-effect free. Side effects should come from explicit user interaction.
rafram•6mo ago
> and arguably the JS delivered with a GET-requested HTMl page

That's pretty hard to achieve.

echoangle•6mo ago
Why? What side effects does loading and executing the JS of a normal website have? Except for analytics, I don’t see any.
cs02rm0•6mo ago
Annoyingly, Brave has disabled support for this.

https://github.com/brave/brave-browser/issues/41164

slig•6mo ago
Does this basically replace the need for `instant.page`?
babanooey21•6mo ago
It does, but it's currently supported only in Chromium-based browsers. Also, with pre-rendering on hover, pages are displayed instantly, unlike with instant.page, where rendering happens on link click and might take a few hundred ms before the page is displayed.

Update: Actually instant.page also uses Speculation Rules API where it's supported

petters•6mo ago
Try navigating this site to get an idea of how it feels: https://instant.page/
pyman•6mo ago
Fast on desktop, decent on mid-tier mobile, and slow on low-end devices.
tbeseda•6mo ago
I mean that's generally the curve of performance across those devices where "fast", "decent", and "slow" are relative.
pyman•6mo ago
That's right. I was just pointing out that speed can vary depending on a bunch of things:

https://www.gsma.com/r/wp-content/uploads/2023/10/The-State-...

accrual•6mo ago
This is cool but man, it feels like we're pushing more and more complexity into the browser to build webpages that work like desktop apps.

Just reading "Chrome Speculation Rules API" makes my skin crawl a bit. We already have speculative CPU instructions, now we need to speculate which pages to preload in order to help mitigate the performance issues of loading megabytes of app in the browser?

I understand the benefits and maybe this is just me yelling at clouds, but it feels crazy coming from what the web used to be.

theZilber•6mo ago
It is less about the performance issues of loading megabytes in the browser (which is also an issue). It is about those cases where a fetch request may take a noticeable amount of time just because of server distance, or because the server needs to perform some work (SSR) to create the page (sometimes from data fetched from an external API).

If you have a desktop app it will also have to do the same work by fetching all the data it needs from the server, and it might sometimes cache some of the data locally (like a user profile etc.). This allows developers to load the data on user intent (hover, and some other configurable logic) instead of when the application is loaded (slow preload), or when the user clicks (slow response).

Even if the target page is 1 byte, the network latency alone makes things feel sluggish. This allows a low-effort, fast UI with a good opinionated API.

One of the reasons I can identify svelte sites within 5 seconds of visiting a page, is because they preload on hover, and navigating between pages feels instant. This is great and fighting against it seems unreasonable.

But I agree that in other cases, where megabytes of data need to be fetched upon navigating, using these features will probably cause more harm than good, unless applied with additional intelligent logic (if these features allow such extension).

Edit: I addressed preloading; prerendering is a whole new set of issues which I am less experienced with. Making web apps became easier, but unfortunately their slow rendering times and other issues... well, that is a case of unmitigated tech debt that comes from making web application building more accessible.

bobro•6mo ago
Can anyone give me a sense of how much load time this will really save? How much friction is too much?
pyman•6mo ago
It depends on how heavy the assets are and the user's connection.
youngtaff•6mo ago
On the right site it can make navigation feel instant
CyberDildonics•6mo ago
Instead of this clickbait title it should have just been about using preloading instead of making your page load fast in the first place.

When you go to an old page with a modern computer and internet connection, it loads instantly.

_1tem•6mo ago
The main issue I had with TurboLinks and similar hacks was that it broke scripts like the Stripe SDK which expected to be loaded exactly once. If you preloaded a page with the Stripe SDK and then navigated away, your browser console would become polluted with errors. I'm assuming this doesn't happen with a browser-native preloader because execution contexts are fully isolated (I would hope).
pelagicAustral•6mo ago
Man, the amount of headaches Turbo gives me... I have ended up with apps polluted with "data-turbo=false" for this exact same reason... But I also admit that when it works, it's a really nice thing to have
nickromano•6mo ago
TurboLinks only replaces the <body> so you can put any scripts you'd like loaded exactly once into the <head> tag. You can use <script async> to keep it from blocking.
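
For instance, something like this (a sketch; the Stripe script URL is shown purely as an illustration of a load-once SDK):

<head>
  <!-- loaded once in <head>, survives TurboLinks body swaps; async so it doesn't block -->
  <script async src="https://js.stripe.com/v3/"></script>
</head>
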
_1tem•6mo ago
Yeah, but I needed it loaded exactly once and only on certain pages, not on others.
bberenberg•6mo ago
As someone who uses Docuseal, please don’t focus on this; instead, add UX improvements for end users. For example, filters for who has signed things.
jgalt212•6mo ago
How does this not mess up your Apache logs? They just show what Chrome is guessing, not what content your users are consuming.
zersiax•6mo ago
From the article I'd assume this wouldn't work in any way for mobile given no hover, not for screen reader users because a website often has no idea where a screen reader's cursor is, and potentially not for keyboard users (haven't checked if keyboard focus triggers this prefetch/prerender or literally just mouse hover), so ... limited applicability, I'd say.
Imustaskforhelp•6mo ago
Maybe it's the fact that it's really easy to add something like this, and this (I think), or something which basically accomplishes the same thing (but in a better way?), is used by some major meta frameworks like nextJS etc.

I guess it has limited applicability, but maybe it's the small little gains that add up to victories. I really may be going off on a tangent, but I always used to think that hardware is boring / there aren't too many optimizations, it's all transistors with and, or, not, but then I read about all the crazy stuff like L1 cache and the marvel of machinery that is known as computers. It blew my mind into shredders. Compilers are some madman's work too; the amount of optimization is just bonkers, just for tiny performance gains, but those tiny boosts across the whole stack make everything run so fast. It's so cool.

MrJohz•6mo ago
Part of the design of the feature is that the website doesn't have to specify "on hover" or "on focus", but instead they can just write "eagerness: moderate" (or "conservative", or "eager") and let the browser decide what the exact triggers are. If it turns out that keyboard focus is a useful predictor of whether a link is likely to be clicked, then browsers can add that as a trigger at one of the indicator levels, and websites will automatically get the new behaviour "for free".

Currently, "eagerness: conservative" activates on mouse/pointer down (as opposed to mouse/pointer up, which is when navigation normally happens), which will work for mobile devices as well. And "eagerness: moderate" includes the more conservative triggers, which means that even on devices with no hover functionality, prerendering can still kick in slightly before the navigation occurs.

Imustaskforhelp•6mo ago
So I was watching this youtube short which said that this is sorta how instagram's approach to a similar problem was

So the Instagram founders worked at Google, and they found that if you had written your username, you had an 80% or some similarly high chance of creating an account, since (I think) the barrier of friction has been crossed and it's all easier from then on: why do all that effort and leave now, I am invested in this now and I will use it now.

So the Insta founders basically made it so that whenever you upload a photo, it silently uploads in the background. Then you usually spend some time writing a caption for the image, and in that time the picture gets loaded into the database. That's how it was so fast compared to its peers while using the same technology.

If someone scraps the picture/story and doesn't post it, they just delete it from the system.

I will link to the youtube short since that explained it more clearly than me, but it was really nice how things are so connected that what I watched on youtube is helping on HN.

drabbiticus•6mo ago
Can someone explain how this works with links that cause changes? (i.e. changing the amount of an item in a cart, or removing an item from a cart)

I assume you would have to tailor the prefetch/prerender targets to avoid these types of links? In other words, take some care with these specific wildcard targets in the link depending on your site?

qingcharles•6mo ago
Yeah, you have to be careful. Don't prefetch a logout or unsubscribe link that has no confirmation.
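
One way to express that exclusion with document rules (a sketch; the paths are made up for illustration):

<!-- match everything same-origin except destructive endpoints -->
<script type="speculationrules">
{
  "prefetch": [
    {
      "where": {
        "and": [
          { "href_matches": "/*" },
          { "not": { "href_matches": "/logout" } },
          { "not": { "href_matches": "/unsubscribe/*" } }
        ]
      },
      "eagerness": "moderate"
    }
  ]
}
</script>
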
deanebarker•6mo ago
How does this affect analytics on the hosting site? Will they get phantom requests for pages that might not ever be viewed?
mpyne•6mo ago
Yes, but it has in concept always been true that they could get page views that wouldn't be viewed, whether due to bot scraping or even sometimes when a human clicks and then just... doesn't read it.
youngtaff•6mo ago
Analytics can detect prerendered pages and choose how they report them - GA ignores them I believe
autoexec•6mo ago
We train users to hover over links to see where they would send them before they click, because some websites will link to malicious content/domains. Now I guess some of those users will end up silently browsing to and executing code in the background for those sites every time they do that.

Seems like a great way to track users too. Will hovering over ads count as a click through? Should users have to worry about where their mouse rests on a page or what it passes over?

MrJohz•6mo ago
In practice, this is almost entirely going to be used for internal links within a domain - you are not going to want to prerender domains you don't control, because you can't be sure they'll be prerender-safe. And I suspect most internal navigation will be obvious to the user - it's typically clear when I'm clicking links in a nav menu, or different product pages on a shopping site. So I suspect your first issue will not come up in practice - users will typically not need to check the sorts of links that will be prerendered.

Tracking is a legitimate concern, but quite frankly that's already happening, and at a much finer, more granular level than anything this feature can provide. Theoretically, this gives the possibility to add slightly more tracking for users that disable JS, but given the small proportion of such users, and the technical hoops you'd need to jump through to get useful tracking out of this, it's almost certainly not worth it.

G_o_D•6mo ago
https://github.com/GoogleChromeLabs/quicklink

https://criticalcss.com/

Been using this for years
corentin88•6mo ago
Too much JavaScript for me. Why not offer something as simple as `<a href="…" rel="prefetch">`?
Jotalea•6mo ago
sniff sniff yup, this smells like worse performance overall